Memory usage forever growing with INF RP?

We are using InfluxDB on a test machine, and we are trying to figure out how much memory influxd will use. I already know that there are no parameters to set a hard memory limit, that usage depends heavily on the number of series (not measurements), and that it is lower if TSI indexing is used.
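
Since usage scales with series cardinality, the current series count can be checked directly; a minimal sketch, assuming the influxdb Python client and a database named "mydb" (neither is from this thread):

```python
# Sketch: report the series cardinality that influxd is carrying,
# since that is the main driver of RAM usage.
# Host, port and database name are assumptions.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="mydb")

# SHOW SERIES CARDINALITY is available from InfluxDB 1.4 onwards.
for point in client.query("SHOW SERIES CARDINALITY").get_points():
    print(point)  # e.g. {'count': 5000}
```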

I guess that memory usage will keep growing slowly forever, or at least until the retention policy deletes some data (and indexes), even if I define a fixed number of measurements and a fixed set of tag values (which leads to a fixed number of series).
But I haven't yet found a way to make any forecast…

In my last test, using InfluxDB 1.5.1 with the default RP (infinite retention, default shard duration of 7 days),
influxd was fed simulation data: it started out using 700 MB, then used more memory as the days passed (peaks of about 900 MB), and when the second shard was created usage rose noticeably, up to peaks of more than 1 GB.
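
The memory steps can be correlated with shard creation by listing the shard boundaries; a quick sketch, again assuming the influxdb Python client:

```python
# Sketch: list shards and their time boundaries, to match memory
# peaks against shard creation. Host/port are assumptions.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086)
for shard in client.query("SHOW SHARDS").get_points():
    print(shard)  # includes start_time, end_time and expiry_time
```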

None of my series will ever go "cold": any of our measurements can change value at any moment, none of them will be retired for months, and no new measurements/series will be added, so the simulation is not far from reality.

Does anybody know of a calculation, tool, or rule of thumb that can help forecast the upper limit of RAM usage, without running tests that last for months?

Thanks

Well, shards are created based on the retention policy, so couldn't you insert many months of data, and then periodically insert/update data throughout that multi-month time frame, to keep any shard from going cold?

As a way to test memory usage in that scenario?
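
Something like this could drive that test; a rough sketch, where the influxdb Python client, the database name and the schema are all my assumptions:

```python
# Sketch: backfill ~4 months of data with a fixed series count, then
# keep rewriting random old timestamps so that no shard ever goes cold.
# Database name, measurement and tag layout are assumptions.
import random
import time
from datetime import datetime, timedelta, timezone

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086)
client.create_database("memtest")
client.switch_database("memtest")

N_SENSORS = 100                  # fixed tag values -> fixed series count
HOURS = 120 * 24                 # ~4 months, one point per hour
start = datetime.now(timezone.utc) - timedelta(hours=HOURS)

def point(ts, sensor_id):
    return {
        "measurement": "sensor",
        "tags": {"id": str(sensor_id)},
        "time": ts.isoformat(),
        "fields": {"value": random.random()},
    }

# 1) Backfill the whole multi-month window (spread over many shards).
for h in range(HOURS):
    ts = start + timedelta(hours=h)
    client.write_points([point(ts, s) for s in range(N_SENSORS)])

# 2) Periodically rewrite a random old timestamp to keep every shard warm.
while True:
    ts = start + timedelta(hours=random.randrange(HOURS))
    client.write_points([point(ts, random.randrange(N_SENSORS))])
    time.sleep(1)
```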

Thank you. Yes, that is possible; I will try something similar to what you suggested.

But, to be clearer: in the production environment none of my measurements/series will go cold, yet I will only be appending new values to the most recent shard; I will never insert or update data in the older shards, only run the occasional SELECT query involving old data. So I'm not sure how RAM usage is affected by the number of shards.
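
For reference, that production workload would look roughly like this; a sketch using the same assumed schema as above:

```python
# Sketch: append only at "now" (newest shard), while occasional
# SELECTs span months of history and thus touch the old shards.
# Database name and schema are assumptions.
import random
import time
from datetime import datetime, timezone

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="memtest")

while True:
    # Append a fresh value: only the most recent shard is written.
    client.write_points([{
        "measurement": "sensor",
        "tags": {"id": str(random.randrange(100))},
        "time": datetime.now(timezone.utc).isoformat(),
        "fields": {"value": random.random()},
    }])

    # Now and then, read months of history: old shards get opened.
    if random.random() < 0.1:
        client.query(
            "SELECT MEAN(value) FROM sensor "
            "WHERE time > now() - 90d GROUP BY time(1d)"
        )
    time.sleep(1)
```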

In your experience, is my hypothesis right (that RAM usage will keep growing until the retention policy deletes some data)?

Does anybody (including the InfluxData folks…) know how to forecast an upper limit on the RAM used by influxd in this "fixed number of series" scenario?

Below you can see, in black, the Working Set Private of influxd over about seven days.
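
A curve like that can be sampled with, for example, psutil; this is only a sketch, not necessarily how the chart above was produced:

```python
# Sketch: log the resident memory of the influxd process once a
# minute to a CSV file, to build a chart like the one above.
# (On Windows, rss corresponds to the working set.)
import time

import psutil

def find_influxd():
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].startswith("influxd"):
            return proc
    raise RuntimeError("influxd process not found")

proc = find_influxd()
with open("influxd_mem.csv", "a") as f:
    while True:
        f.write("%d,%d\n" % (time.time(), proc.memory_info().rss))
        f.flush()
        time.sleep(60)
```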

So an out-of-memory condition is unlikely, but never "impossible"?
Is there any protection mechanism possible through the InfluxDB settings?