InfluxDB memory consumption optimization

Hello Community,
we are using InfluxDB in our project and we are experiencing OOM kills quite frequently.
We are shipping data series to InfluxDB every 10 minutes. We are running InfluxDB in Docker containers with 256 MB allocated.
On some instances, we have noticed that the OOM kills occur at the same hour.

Another interesting point: even when we start the InfluxDB container without shipping any time series to it, an OOM kill still occurs after some time.

My question is whether there are any parameters or memory settings we need to tune, or processes we need to stop, to reduce the OOM kill problem.
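
For reference, the container is started roughly like this (the image tag and data path below are only illustrative, the 256 MB limit is our real value):

docker run -d \
  --name influxdb \
  --memory=256m \
  -p 8086:8086 \
  -v /data/influxdb:/var/lib/influxdb \
  influxdb:1.8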

Hi,

try using the following options in your docker run command:

-e INFLUXDB_DATA_CACHE_MAX_MEMORY_SIZE=32G -e INFLUXDB_REPORTING_DISABLED=true

This limits the in-memory cache to 32 GB and disables usage reporting to the Influx creators. Of course you could use less memory, but remember that too little memory isn't a good decision either. Right now we have a memory footprint of roughly 14 GB for an InfluxDB instance with 11 days of uptime, input from several hundred servers every 30 seconds, and input from infrastructure every second. Our database currently has a size of 150 GB.
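
Put together, a docker run along these lines would set both options (the image tag, port, and volume path are just placeholders, and the cache size should of course fit into whatever memory you give the container):

docker run -d \
  --name influxdb \
  -p 8086:8086 \
  -v /srv/influxdb:/var/lib/influxdb \
  -e INFLUXDB_DATA_CACHE_MAX_MEMORY_SIZE=32G \
  -e INFLUXDB_REPORTING_DISABLED=true \
  influxdb:1.8

These INFLUXDB_* environment variables are the InfluxDB 1.x way of overriding influxdb.conf settings, here [data] cache-max-memory-size and the top-level reporting-disabled.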

And we have never seen an OOM kill of the container. Of course, we haven't limited the available memory for this specific container yet, but that shouldn't be a problem if the limit is set properly.

Best Regards,

Stephan