Resource temporarily unavailable

At work we’re downgrading from InfluxDB 2.0 to 1.8 and are getting the following error while pushing our existing data into a 1.8 database:
ts=2021-11-25T20:27:17.744523Z lvl=info msg="Error adding new TSM files from snapshot. Removing temp files." log_id=0Y1sAhVW000 engine=tsm1 trace_id=0Y2CLGJ0000 op_name=tsm1_cache_snapshot error="rename /mnt/InfluxData/1.8/data/logger/autogen/7/000001781-000000001.tsm /mnt/InfluxData/1.8/data/logger/autogen/7/000001781-000000001.tsm.tmp: resource temporarily unavailable"

I’ve seen a lot of people having this same problem, and it appears to be related to hitting the maximum open file limit. I already raised the open file limit from 1024 to 1048576 and I’m still getting this error. It looks like the configured file limit has not actually been reached:

$ sudo ls -l /proc/72219/fd | wc -l
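One thing worth double-checking is the limit the running process itself has been granted, which can differ from the shell’s ulimit. A quick sketch (the PID lookup via `pidof influxd` is an assumption; substitute your own process ID, such as the 72219 above):

```shell
# Find the influxd PID (falls back to the current shell if influxd isn't running)
PID=$(pidof influxd || echo self)
# The "Max open files" row shows the soft and hard limits the kernel enforces
# for that process, which is what the rename/snapshot code actually runs under.
grep "Max open files" /proc/$PID/limits
```

If the soft limit shown here is still 1024, the raised limit never reached the service.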

So maybe there is something we can do to allow our NAS to handle more open files. At the same time, the number of open files seems to grow as the database grows. Is there an end to this, or is an arbitrarily large database simply not possible because you’d approach an infinite number of open files on the file system? Is there a way to limit this problem?



I’ve seen this a few times with the Kapacitor service and found that increasing the limits in the recommended files doesn’t always work.

I found that if I edit the systemd unit for the server and increase the open files limit for the service itself, I’ve had better luck. I would assume (I’m not 100% sure) that the limit you set for the systemd unit must be equal to or less than the limits you set in the server configuration.
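As a sketch, assuming the service is named influxdb (check with `systemctl list-units`), you can add a drop-in override via `sudo systemctl edit influxdb` containing:

```
[Service]
LimitNOFILE=1048576
```

then run `sudo systemctl daemon-reload` and restart the service. `LimitNOFILE` is systemd’s per-service open file limit, and it applies regardless of what the login-session limits in /etc/security/limits.conf say.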


Definitely google that to be sure, though. Of course, you might have tried that already, in which case I’m not sure what else to suggest.