Hi,
I’m testing InfluxDB v1.5: I’m writing 20M points, distributed across 1M series, in batches of 20,000. I’m using the on-disk index (index-version = "tsi1") and the autogen RP.
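For reference, the index is enabled with the usual setting in the [data] section of influxdb.conf (roughly like this):

```toml
[data]
  # use the disk-based TSI index instead of the default in-memory index
  index-version = "tsi1"
```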
My hardware sizing is 8 GB of RAM and 4 CPU cores. I know this is below the recommended hardware sizing, but I don’t think that matters here, and I will upgrade soon.
With Grafana, I’m monitoring both the VM InfluxDB runs on and InfluxDB itself through the _internal database.
Here is how the mem.ramUsed and mem.ramTotal metrics evolve while I load the 20M points:
I’m trying to understand how InfluxDB works under the hood, and I can’t figure out why RAM usage drops about 5 to 10 minutes after the insertion is done.
I checked some metrics and tested some configuration changes:
- No compaction (of any type) is happening at that moment, and the disk.used metric does not change.
- Changing the cache-snapshot-write-cold-duration config value does not affect how long after the load the RAM starts to decrease (see the snippet after this list).
- The cached RAM (mem.Cached metric) does not change when mem.ramUsed decreases.
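For the second point, this is the kind of change I tried in the [data] section (the value below is just an example, not the exact one I used; the default is 10m if I remember correctly):

```toml
[data]
  index-version = "tsi1"

  # how long the cache must be write-cold before it is snapshotted to a TSM file;
  # example value only, the 1.x default is "10m"
  cache-snapshot-write-cold-duration = "1m"
```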
Before changing my hardware sizing, I would like to understand why the RAM behaves this way, so that I don’t run into undesirable memory behaviour later.
Thank you for your help,
Bastien