High memory usage problem

Hi, I’ve been testing InfluxDB on hardware close to the recommended high-end spec: 32 GB of memory and 24 CPU cores. I have written a lot of data with 2.5 million series. I didn’t expect a problem with this spec, because the documentation says that 32+ GB and 8+ CPU cores should handle between 1 million and 10 million series. But memory usage hit 95% while writing data, the client got ‘read timed out’ errors, and InfluxDB eventually seems to have died by itself. How could I resolve this problem? Which configuration settings should I edit? (So far I have only increased the write timeout…)
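
For reference, the only change I have made so far is the write timeout, which in my influxdb.conf lives under [coordinator] (the exact value I used isn’t important, and the section name may differ by version):

    [coordinator]
      write-timeout = "60s"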

I’ve found that even with no writes, memory usage increased to 97% and the process died. It might be because of the high series cardinality. Should I scale up the hardware, or is there something I can change in the configuration?

We are experiencing similar problems with high memory usage associated with HTTP requests to write data and with large queries from Grafana. We are running on an AWS c4.2xlarge (15.0 GB RAM, 8 vCPUs). The influx process then crashes and has to be restarted. This happens approximately every 48 hours, when memory usage goes from a normal 70% to over 98%. This is problematic because it is our production server and numerous Kapacitor alerts depend on it.

We are waiting for a response from Influx support regarding configuration settings in the [coordinator] and [http] blocks in version 1.3.0 to see if we can alleviate the issue. We will also try increasing the AWS instance to a c4.4xlarge (30.0 GB RAM, 16 vCPUs).
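
For reference, these are the kinds of influxdb.conf settings we expect to be looking at (the values below are only placeholders we are considering, not recommendations from support):

    [coordinator]
      write-timeout = "10s"
      max-concurrent-queries = 20
      query-timeout = "60s"
      max-select-point = 50000000
      max-select-series = 1000000

    [http]
      max-connection-limit = 250
      max-row-limit = 10000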

FWIW, I too have high memory usage (influxdb-1.2.0-1.x86_64 on CentOS 7). Every day the memory usage grows, our monitoring alerts, then Influx does some sort of clear-out and it’s fine for a while again. Queries are impacted by the high memory usage and the clean-up, though.

I looked into it and checked all the suggested things like the number of fields, tags and measurements. I couldn’t ever figure out how I could realistically change any of the data to be more ‘influx-friendly’ - not that I imagine our data is perfect, but as far as I can see, we’re just pushing it too hard (which is a shame, because we’ve got loads more possible uses for Influx that we can’t start to implement).

IMHO, I’d like to see some way to limit the various memory pools that Influx has (as you can with most RDBMSes) so that I could tune it to fit in the available RAM. If I limit something, I can accept that more data will be kept on disk rather than in RAM, with the consequent reduction in performance. At the moment, though, it feels very much like I don’t have proper control over the server.

Since you are testing this setup, could you test it with version 1.3 (nightly builds)? It has an option to move the in-memory index to disk. Using a disk with fast IOPS will also help; I assume you are not using anything slower than SSDs.
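
If I remember correctly, the on-disk (TSI) index is enabled in the [data] section of influxdb.conf; double-check the 1.3 docs for the exact key name:

    [data]
      index-version = "tsi1"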

The best solution I could find is to downsample the data using continuous queries and to query the data smartly. The memory consumption is directly proportional to the number of records your query has to go over (so downsampling the data does help significantly). The memory cleaning is part of the garbage collector in Go; after the query completes, the memory cleanup is performed automatically by the process.
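
As a rough illustration (the database, measurement, and field names here are made up), a downsampling continuous query looks something like this:

    CREATE CONTINUOUS QUERY "cq_cpu_10m" ON "mydb"
    BEGIN
      SELECT mean("value") AS "mean_value"
      INTO "mydb"."autogen"."cpu_10m"
      FROM "cpu"
      GROUP BY time(10m), *
    END

Pointing dashboards at the downsampled measurement for long time ranges keeps the number of scanned points, and therefore the memory used per query, much lower.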

Version 1.3 will help with the indexes, but queries will still use memory to perform the various functions on the data.

Yeah, I updated InfluxDB to version 1.3, and this problem is solved. Thanks, guys! :slight_smile:

I am running 1.3.6 and still having the problem. Just rebooted, no queries, no writes, and memory usage went through the roof: VIRT=20.7G, RES=9.69G, SHR=3.9G.