Too many open files

Hi guys,

I hope you can help me. I am having serious problems with my InfluxDB.

My hardware:

  • Raspberry Pi 3 with a 16 GB SD card
  • openHABian OS

My software:

  • openHAB 2
  • Homegear
  • InfluxDB
  • Grafana

I want to store a few temperature values from my openHAB installation in the database. For the first three or four days I had no problems. But now, when I try to start InfluxDB with the following command, I get the error below:

/usr/bin/influxd -config /etc/influxdb/influxdb.conf

Failed to open shard: 4: [shard 4] error opening file /influxdb/data/openhab_db/autogen/4/000000134-000000001.tsm: open /influxdb/data/openhab_db/autogen/4/000000134-000000001.tsm: too many open files service = store

The only unusual thing about my setup is that the data, meta and wal folders are stored on a network drive.

What can I do to recover my temperature readings? What am I doing wrong?
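In the meantime, one rough diagnostic (a sketch, assuming the data path /influxdb/data from the error message) is to count how many TSM files the shards hold. A large number of small TSM files, e.g. because compactions fall behind on a slow network drive, drives up the number of files influxd must keep open:

```shell
# Data path taken from the error message above; adjust if yours differs.
datadir=/influxdb/data
# Count TSM files across all shards (each is an open file for influxd).
tsm_count=$(find "$datadir" -name '*.tsm' 2>/dev/null | wc -l)
echo "$tsm_count TSM files under $datadir"
```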

 ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7695
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 7695
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Is there nobody here who can help me?

Hi, here are some Linux kernel configurations; maybe they will help.
In your /etc/sysctl.conf:
vm.max_map_count = 8245240
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv4.tcp_rmem = 4096 25165824 25165824
net.ipv4.tcp_wmem = 4096 65536 25165824
net.core.optmem_max = 25165824
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.tcp_congestion_control = htcp
net.core.default_qdisc = fq
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_max_syn_backlog = 4096
kernel.randomize_va_space = 2
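Note that none of the sysctls above change the per-process open-file limit that this error is hitting; the closest kernel-side knob is the system-wide ceiling, which you can inspect without changing any settings:

```shell
# fs.file-max is the system-wide ceiling on open files, distinct from
# the per-process nofile limit that "too many open files" usually hits.
filemax=$(cat /proc/sys/fs/file-max)
echo "system-wide open-file ceiling: $filemax"
# /proc/sys/fs/file-nr shows: allocated, free, maximum
cat /proc/sys/fs/file-nr
```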

and in your /etc/security/limits.conf, add these lines at the end (the leading * is the wildcard matching all users):

* soft nofile 500000
* hard nofile 500000

and reboot
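One caveat: limits.conf is applied by PAM to login sessions, so if influxd is started as a systemd service (as on openHABian) those lines may never reach it. An alternative is a systemd drop-in; the unit name influxdb.service is an assumption here, verify it with `systemctl status influxdb`:

```ini
# /etc/systemd/system/influxdb.service.d/limits.conf
# (hypothetical drop-in; unit name assumed to be influxdb.service)
[Service]
LimitNOFILE=65536
```

After creating the file, run `sudo systemctl daemon-reload` and restart the service so the new limit takes effect.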

let me know

The service still does not start despite the changes. It takes a bit longer before the error occurs, but unfortunately it still fails. What could be the problem? Is the Raspberry Pi too weak for InfluxDB?

I run an InfluxDB VM with Kapacitor, Grafana and other services on Ubuntu 16 with 2 CPUs and 4 GB RAM (a very old server),
so InfluxDB should run without any issues…

Can you post some logs so we can help you? Thanks.

It may be that the system allows enough open files, but the user that InfluxDB runs as is more restricted.

grep files /proc/$(pidof influxd)/limits
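To compare that limit against the daemon's actual usage in one go, here is a sketch; it falls back to the current shell if influxd is not running, so the commands can be tried anywhere:

```shell
# PID of influxd, or the current shell as a safe fallback for testing.
pid=$(pidof influxd 2>/dev/null || echo $$)
# Soft limit: 4th field of the "Max open files" line in /proc/<pid>/limits.
soft=$(awk '/Max open files/ {print $4}' /proc/"$pid"/limits)
# How many descriptors the process currently holds open.
used=$(ls /proc/"$pid"/fd | wc -l)
echo "PID $pid: $used of $soft file descriptors in use"
```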