Why does my InfluxDB memory RSS graph look like this?

Hi All,

I have InfluxDB 1.2.2 running inside a Docker container on CentOS 7.
I have a single additional database, 'mydb', for which I configured the retention policy below:
client.create_retention_policy(name='short', duration='6h', replication='1', database='mydb', default=True)
The data is written in 1-second batches. Everything else is default.
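For context, a minimal sketch of what such a 1-second batch writer might look like, using InfluxDB line protocol. The measurement name, tag, and values here are hypothetical examples, not from the original post; a real writer would POST each batch to the `/write?db=mydb` endpoint:

```python
def make_batch(measurement, values, timestamp_ns):
    """Build a line-protocol batch: one line per point.
    Measurement and tag names are illustrative only."""
    lines = []
    for i, value in enumerate(values):
        # line protocol: <measurement>,<tag>=<v> <field>=<v> <timestamp_ns>
        lines.append(f"{measurement},sensor={i} value={value} {timestamp_ns}")
    return "\n".join(lines)

# One batch per second (sketch); send the resulting string
# as the body of a POST to http://localhost:8086/write?db=mydb
batch = make_batch("cpu_load", [0.1, 0.2, 0.3], 1_000_000_000)
```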

Now, I see the following memory consumption graph:

Is there any explanation for this memory consumption behavior?


RSS isn’t a good way to observe memory usage of a program written in Go, primarily because Go is a garbage-collected language. Here’s one blog post that offers some detail.

To observe the memory that Go is actively using, use the SHOW STATS query or the /debug/vars endpoint, either of which expose the Go runtime MemStats struct.
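As a rough sketch, you can fetch `/debug/vars` and pull a few MemStats fields out of the JSON it returns. The port and host are assumptions (8086 on localhost is the default); the sample payload below is made up so the snippet is self-contained, but `HeapInuse`, `HeapReleased`, and `Sys` are real Go runtime MemStats fields:

```python
import json
import urllib.request

def heap_stats(vars_json):
    """Extract a few Go runtime MemStats fields (all in bytes)
    from the expvar JSON served at /debug/vars."""
    m = vars_json["memstats"]
    return {
        "heap_inuse": m["HeapInuse"],        # bytes in in-use heap spans
        "heap_released": m["HeapReleased"],  # heap bytes returned to the OS
        "sys": m["Sys"],                     # total bytes obtained from the OS
    }

# Against a live instance you would do something like:
#   with urllib.request.urlopen("http://localhost:8086/debug/vars") as r:
#       print(heap_stats(json.load(r)))
# Sample payload (fabricated numbers) so this runs standalone:
sample = {"memstats": {"HeapInuse": 52_428_800,
                       "HeapReleased": 10_485_760,
                       "Sys": 104_857_600}}
print(heap_stats(sample))
```

Comparing `heap_inuse` against RSS over time makes it clearer how much of the resident set is live heap versus memory the runtime is holding but not actively using.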

That being said, my guess would be that you're running some queries that generate a fair amount of garbage, causing the climb and plateau; the garbage collector then releases some of that memory back to the operating system, causing the drop-offs.
