I am running InfluxDB version 1.7 inside a Docker container, and recently I saw that memory usage went up to 95%, i.e. about 30GB of RAM out of 32GB. I have not been able to root-cause what is driving such high memory usage so that I can work on it. Is there any way to diagnose the problem, or any CLI commands to check what is causing this? I did check the cardinality, which is estimated at 118666, but I am not sure whether cardinality is the issue or something else is. Please help.
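For reference, this is roughly how I estimated the cardinality with the 1.x CLI tools (the database name `mydb` and the data path are placeholders for my actual setup):

```bash
# Estimated series cardinality for one database:
influx -execute 'SHOW SERIES CARDINALITY ON "mydb"'

# Per-measurement breakdown, useful for spotting a measurement with runaway tags:
influx -execute 'SHOW MEASUREMENT CARDINALITY ON "mydb"'

# Shard-level report from the bundled inspector, run against the data directory:
influx_inspect report -detailed /var/lib/influxdb/data/mydb
```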
@mrAbhishek - How much data do you have (in GB)? If more than 30GB, then this memory profile is not unexpected: InfluxDB memory-maps the storage files for better performance, and it will attempt to use as much memory as the OS (or Docker container) will give it.
If you’re not experiencing any problems, I would not dwell on the memory usage taking up most of what is available.
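To check how much data you have on disk, something like this should work (the paths are the 1.x defaults and the container name is a placeholder; map them to your Docker volume):

```bash
# Size of the TSM data and write-ahead log:
du -sh /var/lib/influxdb/data /var/lib/influxdb/wal

# From the host, if InfluxDB runs inside a container:
docker exec influxdb du -sh /var/lib/influxdb/data
```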
Hey @philjb, thanks for replying. Sorry, I mistakenly wrote that InfluxDB is using 30GB of RAM out of 32GB (that was the total RAM usage); it is actually using around 18-20GB, and InfluxDB occupies 6.3GB of data on disk. So:

1. This does not look like expected behavior - how can I debug what is causing it?
2. Is there any setting in the InfluxDB configuration to limit its memory usage?
Using the new numbers, and generally speaking, your memory usage is not unexpected once you account for Go's heap, the series file index, and the memory-mapped data store.
You can decrease the memory available to the container, which will set a cap on the amount InfluxDB will use. You can also reduce your data footprint by decreasing the retention period.
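For example, a memory cap on the container could look like this (the container name, image tag, and limit are illustrative):

```bash
# Hard-limit the container to 16GB; the kernel's page cache accounting for
# mmap'd files is charged against this cgroup limit.
docker run -d --name influxdb --memory=16g influxdb:1.7
```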
There are no adverse problems, but we are still looking into how to limit the RAM usage by InfluxDB. Do you think cache-snapshot-memory-size might help, or is there any other configuration setting? We don't want to cap it from the container side.
There's no setting specifically for controlling memory usage within InfluxDB 1.x. You can use OS-level settings to change how much memory is available for mmap. Indirectly, you can reduce the data stored in InfluxDB through a shorter retention period and/or downsampling. If you do constrain memory, I would expect decreased performance as the OS pages mmap'd data in and out more frequently.
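As a sketch of the retention/downsampling route (the database, retention policy, and continuous query names are placeholders):

```bash
# Keep raw data for 30 days instead of forever:
influx -execute 'ALTER RETENTION POLICY "autogen" ON "mydb" DURATION 30d SHARD DURATION 1d DEFAULT'

# Keep hourly averages for a year in a separate retention policy:
influx -execute 'CREATE RETENTION POLICY "one_year" ON "mydb" DURATION 52w REPLICATION 1'
influx -execute 'CREATE CONTINUOUS QUERY "cq_hourly" ON "mydb" BEGIN SELECT mean(*) INTO "mydb"."one_year".:MEASUREMENT FROM /.*/ GROUP BY time(1h), * END'
```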
One more question: if we don't configure anything, will InfluxDB take up all the memory until an OOM kill happens, or will it bring memory usage down automatically when it's nearing 100%?
Most of InfluxDB's memory usage is memory-mapped files, which the OS controls. The OS should release those pages as it needs memory for other uses, at the expense of more disk I/O.
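One way to see this split is to compare the influxd process's resident memory with the host's page cache, e.g.:

```bash
# mmap'd TSM files mostly show up under "buff/cache", not in the process RSS:
free -h

# RSS of the influxd process itself, usually far smaller than total usage:
ps -o pid,rss,vsz,comm -p "$(pgrep -x influxd)"
```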
Hi, I'm using InfluxDB 1.8 inside a Docker container on an Ubuntu 20 virtual machine with 32GB RAM and 8GB swap (as reported by htop).
The data on disk is more than 110GB, and the kernel keeps killing the process due to Out Of Memory errors (found using dmesg).
What should I do? Increasing the available RAM won't be an option - we may add another 16GB, but that would be a hard limit.
The data has been collected over roughly a year of usage. I think I could tighten the settings you already mentioned (retention, etc.), but I suspect that would only push the OOM further out in time rather than avoid it entirely - am I right?