Hi, my prototyping and testing with InfluxDB has been on hold for a few months (but I will start again in a few days on a real system). I ran into some memory-management problems too, and I never managed to obtain deterministic information about them.
(Check my post: Memory usage forever growing with INF RP? )
I can tell you to check two things: your DB structure and your Retention Policies.
In my understanding, memory usage always grows slowly because indexes are kept partially in RAM (to make queries faster), but the “big steps” in memory needs come from:
- new series (keep in mind that not only is a new Measurement a new series, but so is a new value used for the first time in a Tag; so if you have Tags that take many distinct values, they should probably be turned into Fields).
- creation of new data shards driven by Retention Policies (each shard carries its own index overhead, so the same “samples” stored under a retention policy of 2 days can consume more memory than if stored under a retention policy of 2 weeks). So: use a long retention period, or use a short one but move old data elsewhere as soon as possible, and delete it from the machine that is collecting new data.
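To make the first point concrete, series cardinality is roughly the product of the number of distinct values each tag takes, per measurement. A small sketch (the measurement and tag names here are made up for illustration, not from my schema):

```python
# Hypothetical illustration of series-cardinality math: the series count of a
# measurement is roughly the product of the distinct values of each of its tags.
from math import prod

def estimated_series(measurements: dict[str, dict[str, int]]) -> int:
    """measurements maps measurement name -> {tag name: number of distinct values}."""
    return sum(prod(tags.values(), start=1) for tags in measurements.values())

# 10 hosts x 5 regions -> 50 series: harmless.
low = estimated_series({"cpu": {"host": 10, "region": 5}})

# Putting a high-cardinality value into a tag explodes the series count:
# 10 hosts x 100_000 distinct "reading_id" tag values -> 1_000_000 series.
high = estimated_series({"cpu": {"host": 10, "reading_id": 100_000}})

print(low, high)  # 50 1000000
```

That second case is exactly the situation where the value belongs in a field instead of a tag: fields are not indexed, so they do not multiply the series count (at the cost of slower filtering on that value).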
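On the second point, the link between retention policy length and shard count comes from the default shard-group duration. As I understand the InfluxDB 1.x docs, the default is chosen from the RP duration roughly like this (a sketch of the documented thresholds, not tested against every version):

```python
from datetime import timedelta

def default_shard_group_duration(rp: timedelta) -> timedelta:
    """Default shard-group duration InfluxDB 1.x picks for a retention policy,
    per the 1.x docs (an infinite RP falls into the > 6 months bucket)."""
    if rp < timedelta(days=2):
        return timedelta(hours=1)   # very short RPs: one shard group per hour
    if rp <= timedelta(days=180):   # ~6 months
        return timedelta(days=1)    # one shard group per day
    return timedelta(days=7)        # long/infinite RPs: one per week

print(default_shard_group_duration(timedelta(days=2)))  # 1 day, 0:00:00
```

So for the same two days of samples, an RP shorter than 2 days splits them into 48 one-hour shards, while a 2-week RP keeps them in 2 one-day shards, and each shard adds its own in-RAM index overhead. (You can also set the shard duration explicitly with `CREATE RETENTION POLICY ... SHARD DURATION ...` if the default does not suit you.)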