I would like to bring down the memory usage of influxdb-1.5.2-1.x86_64 (Out of memory: Kill process)

influxdb

#1

I would like to configure influxdb to use 512MB of memory. Is that even possible? How can I do this? What can I do to bring this memory usage down?
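For what it's worth, the knobs that matter most for memory in the 1.x line live in the `[data]` section of `/etc/influxdb/influxdb.conf`. A minimal sketch (the values below are illustrative for a small host, not tuned defaults; on 1.5 the size options accept suffixes like "256m", while some older releases expect raw byte counts):

```toml
# /etc/influxdb/influxdb.conf -- illustrative values for a low-memory host
[data]
  # Use the disk-based TSI index instead of the default in-memory index
  # (available since 1.3; existing shards need influx_inspect buildtsi)
  index-version = "tsi1"
  # Cap the TSM write cache (default 1g); lower it on small hosts
  cache-max-memory-size = "256m"
  # Snapshot the cache to disk earlier so it stays small
  cache-snapshot-memory-size = "16m"
  # Compactions are memory-hungry; run them one at a time
  max-concurrent-compactions = 1
```

This won't give a hard 512MB ceiling (Go's heap and the query engine still allocate freely), but it reduces the steady-state footprint.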

I am a bit worried that influxdb seems to be very inefficient in a small environment (at least with the default configuration); it would be a ‘mess’ on a larger-scale deployment.
There are lots of unhandled OOM issues here:
url Out Of memory when SELECT * FROM LIMIT 1 on 128GB host (Influxdb 1.2.4)
url Memory usage on Low-end hardware
url InfluxDB v1.3.1: how to reduce memory usage?
url Influxdb-1.3.6 fatal error: out of memory

I guess I don’t have that many ops. I have 4 nodes that send data every 10s, and one Grafana dashboard refreshing every 5–10s. So why this huge memory usage?

url: docs.influxdata.com/influxdb/v1.5/guides/hardware_sizing/

Jun 12 22:07:04 db1 kernel: [10681]   994 10681  5473561   358426    3621   365088             0 influxd
Jun 12 22:07:04 db1 kernel: Out of memory: Kill process 10681 (influxd) score 731 or sacrifice child
Jun 12 22:07:04 db1 kernel: Killed process 10681 (influxd) total-vm:21894244kB, anon-rss:1433704kB, file-rss:0kB, shmem-rss:0kB
mysql     1080  0.0  0.0 113264     4 ?        Ss   Apr20   0:00 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
postgres  1191  0.0  0.0 193196    64 ?        Ss   Apr20   0:00 postgres: logger process
postgres 19352  0.0  0.0 252248   612 ?        Ss   Jun13   0:25 postgres: x x x.x.x.x(53824) idle
postgres  1066  0.0  0.0 236428   624 ?        S    Apr20  24:55 /usr/pgsql-9.5/bin/postgres -D /var/lib/pgsql/9.5/data
postgres  1453  0.0  0.0 236428   728 ?        Ss   Apr20   1:17 postgres: writer process
postgres  1454  0.0  0.0 236428  1168 ?        Ss   Apr20   0:51 postgres: wal writer process
postgres  1455  0.0  0.0 237112  1532 ?        Ss   Apr20  30:10 postgres: autovacuum launcher process
postgres  1452  0.0  0.1 236428  2084 ?        Ss   Apr20   0:35 postgres: checkpointer process
postgres  1456  0.0  0.1 198296  2164 ?        Ss   Apr20  55:46 postgres: stats collector process
root     18063  0.0  0.1 234240  2660 pts/0    S+   12:18   0:00 mysql -p
mysql     1533  1.7 10.5 1209888 197444 ?      Sl   Apr20 1374:06 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/mariadb.log --open-files-limit=3072 --pid-file=/var/run/mariadb/mariadb.pid --socket=/var/lib/mysql/mysql.sock
influxdb 15804 63.2 67.6 20140780 1271600 ?    Ssl  11:39  49:00 /usr/bin/influxd -config /etc/influxdb/influxdb.conf

Is there an advisory for a small monitoring setup with collectd/influxdb/grafana?
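If a hard cap is the goal, it can also be enforced from outside the process with a systemd drop-in, so the kernel kills only influxd's cgroup instead of picking victims host-wide. A sketch, assuming a systemd-managed `influxdb.service` on a cgroup-v1 distro (CentOS 7-era systemd uses `MemoryLimit=`; newer systemd prefers `MemoryMax=`); note that 512M may well be too tight for InfluxDB 1.x in practice:

```ini
# /etc/systemd/system/influxdb.service.d/memory.conf
[Service]
# Hard memory ceiling for the influxd cgroup
MemoryLimit=512M
```

Apply it with `systemctl daemon-reload && systemctl restart influxdb`.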
#2

Hello, what about an alternative time-series database server that can do this? Or one that has better support?


#3

Hi @f1outsourcing,
Any updates on resolving this OOM issue?
I am stuck with a similar issue, as 90% of the 64GB RAM is held.
Any kind of help will be greatly appreciated.

Thanks in advance
-Avin