InfluxDB v2: high RAM usage leading to OOM and constant restarts

Hello everyone,

We are currently using InfluxDB 2.7.1 for collecting time-series data in our application. We recently had an issue where InfluxDB would not start for several weeks, until some retention policies kicked in and deleted data. On startup, InfluxDB would consume increasing amounts of RAM until crashing with an OOM exception.

For context:

The current version of InfluxDB is deployed using docker-compose on a server with shared resources, so we need to limit InfluxDB's RAM consumption to maintain server stability. The server has 32 GB of RAM, but InfluxDB is limited to 10 GB using Docker. During runtime we see spikes in RAM and CPU usage correlating with queries made by other applications, but no issues from that.
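For reference, a minimal sketch of how such a limit can be set in a classic Compose v2-format file (service name, image tag, and paths are examples; Swarm-style v3 files use `deploy.resources.limits` instead):

```yaml
version: "2.4"
services:
  influxdb:
    image: influxdb:2.7.1
    mem_limit: 10g          # hard cap; the kernel OOM-kills the container beyond this
    volumes:
      - ./influxdb-data:/var/lib/influxdb2
    ports:
      - "8086:8086"
```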

The issue:

After a scheduled restart at the end of November, the DB would use the full allocated 10 GB of RAM as soon as one of the other applications connected to it, crashing with an OOM exception after ~30 seconds. No usage of the DB was possible. Waiting longer between starting the DB and connecting to it did not change the behavior; just booting the DB did not produce the problem. Now, in January, the configured data retention has removed most of the data, and starting and using the DB works again. Currently we have 3 different buckets with 30d, 90d, and autogen as their retention policies; most of the data is in the 30d bucket.

So the current hypothesis is that the large amount of data on disk led to the crash. As the data will now grow again, we are afraid we will have the same problem at the end of the month.
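One way to test this hypothesis is to track the storage engine's on-disk size over time and correlate it with startup behavior; a simple shell sketch (the host path is an assumption based on the volume layout in the Compose example above):

```sh
# Append a daily size reading of the TSM data and WAL directories
# (adjust the path to wherever /var/lib/influxdb2 is mounted on the host).
date >> influx-disk-usage.log
du -sh ./influxdb-data/engine/data ./influxdb-data/engine/wal >> influx-disk-usage.log
```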

Our setup:

• InfluxDB OSS v2.7.1 in Docker
• OS: Red Hat Enterprise Linux Server release 7.9 (Maipo)
• Linux kernel: 3.10.0-1160.105.1.el7.x86_64
• Server RAM: 32 GB
• InfluxDB RAM limit via Docker: 10 GB
• Buckets and retention policies: 3 relevant buckets with retention policies 30d, 90d, and autogen
• Storage size on disk: around 20 GB (current InfluxDB data storage)
• Total disk size (shared with the application): 500 GB
• Available disk space: 165 GB
Any help is highly appreciated.
Thank you 🙂 Ping @Anaisdg


From my experience, high RAM consumption can come from too high a cardinality in the data; take a look at that.
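If it helps, one way to check a bucket's series cardinality is Flux's `influxdb.cardinality()` function, run here through the `influx` CLI; a sketch (the bucket name and time range are placeholders):

```sh
influx query '
import "influxdata/influxdb"

// Series cardinality of the bucket over the last 30 days
influxdb.cardinality(bucket: "my-bucket", start: -30d)
'
```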

@thomasbromkapsdata Thank you for your response, and really sorry for the delay. Yes, I observed that one of the buckets had a very high cardinality (~6 million series), and as we reduced the cardinality (by deleting older data) the RAM usage went down. But with the same data, i.e. ~6 million series cardinality, I started InfluxDB locally on my system: the database started within ~10 seconds, did not consume more than 1 GB of RAM, and is currently running smoothly. My local setup is as follows:

  • docker-compose version 1.25.0

  • InfluxDB version 2.7

I am unable to understand what exactly is different between these two scenarios. Any help is appreciated.

ping @Anaisdg @grant1 @scott

Hi, I have the same problem.

My InfluxDB version is 2.7.1, and right now I have 1.8 GB of data stored, yet constant RAM usage by the container of 13 GB…

Did you find any solution?

We have the same issue. We increased the available RAM, and it just takes it all.

Do you know how to limit Influx to using only a certain amount of RAM? Our instance is killed by the system, not by some internal error.

@Belly_Musketier did it start happening for you after upgrading to 2.7?

Do you know how Influx uses RAM? What is stored there? We also have a new server with limited data and huge RAM usage.

ping @Anaisdg @grant1 @scott
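One approach to the RAM-limit question above is a container-level hard cap combined with the Go runtime's soft memory limit. A sketch, assuming your influxd is built with Go 1.19 or later (which honors the GOMEMLIMIT environment variable); the values are examples:

```yaml
version: "2.4"
services:
  influxdb:
    image: influxdb:2.7.1
    mem_limit: 10g              # hard cap enforced by the kernel
    environment:
      - GOMEMLIMIT=8GiB         # soft limit: triggers more aggressive GC before the cap
```

Keeping the soft limit comfortably below the hard cap gives the garbage collector room to react before the OOM killer does.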

Any follow-up here? I also notice that influxd (v1.8.10) is using the most memory of any process on my server! This is insane: 2 GB of RAM…

Besides the dependency on cardinality, are there other ways of reducing the RAM usage?
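A few influxd (v2.x) storage and query settings can also bound memory use. A hedged sketch using the INFLUXD_ environment-variable form of the documented config options; the values are examples, and the right numbers depend on your workload:

```yaml
services:
  influxdb:
    image: influxdb:2.7.1
    environment:
      # Cap the in-memory TSM cache before it is snapshotted to disk
      - INFLUXD_STORAGE_CACHE_MAX_MEMORY_SIZE=536870912    # 512 MiB
      # Cap the memory a single Flux query may allocate (unlimited by default)
      - INFLUXD_QUERY_MEMORY_BYTES=104857600               # 100 MiB
      # Fewer concurrent compactions, which are memory-hungry on large shards
      - INFLUXD_STORAGE_MAX_CONCURRENT_COMPACTIONS=1
```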