At work we’re downgrading from InfluxDB 2.0 to 1.8 and are getting the following error while pushing our existing data into a 1.8 database:
ts=2021-11-25T20:27:17.744523Z lvl=info msg="Error adding new TSM files from snapshot. Removing temp files." log_id=0Y1sAhVW000 engine=tsm1 trace_id=0Y2CLGJ0000 op_name=tsm1_cache_snapshot error="rename /mnt/InfluxData/1.8/data/logger/autogen/7/000001781-000000001.tsm /mnt/InfluxData/1.8/data/logger/autogen/7/000001781-000000001.tsm.tmp: resource temporarily unavailable"
I’ve seen a lot of people having a similar problem, and it has to do with the maximum open file limit having been reached. I already raised the open file limit from 1024 to 1048576 and I’m still getting this error. It looks like the configured limit has not actually been reached:
$ sudo ls /proc/72219/fd -l | wc -l
5325
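For reference, this is how I’ve been checking which limit actually applies to the running process (a sketch; `self` is a stand-in for the influxd PID, e.g. 72219 above) — systemd services don’t always pick up limits set in /etc/security/limits.conf, so checking the live process seems safer than checking the shell:

```shell
#!/bin/sh
# Check the open-file limit that actually applies to a process by reading
# /proc/<pid>/limits. "self" is a stand-in here; substitute the influxd PID.
grep 'Max open files' /proc/self/limits

# Count the descriptors the process currently holds; plain ls avoids the
# extra "total" line that ls -l prepends to the count.
ls /proc/self/fd | wc -l
```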
So maybe there is something we can do to allow more open files on our NAS. At the same time, it seems that the number of open files grows as the database grows. Is there an end to this, or is having an arbitrarily large database simply not possible because you’ll approach having an infinite number of files open on the file system? Is there a way to limit this problem?
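One knob that might bound the growth (an assumption on my part, not something I’ve verified): TSM files accumulate per shard, so fewer, larger shards should mean fewer files on disk. The shard group duration can be raised on the retention policy, e.g.:

```shell
# Sketch, untested: enlarge shard groups on the "logger" database's autogen
# retention policy so fewer shards (and hence fewer TSM files) are created.
# 52w is an arbitrary example value.
influx -execute 'ALTER RETENTION POLICY "autogen" ON "logger" SHARD DURATION 52w'
```

As far as I understand, this only affects newly created shard groups, not shards that already exist.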
Regards,
Tom