We are using InfluxDB on a test machine,
and we are trying to figure out how much memory influxd will use.
I already know that there is no parameter to set a hard memory limit,
that usage depends heavily on the number of Series rather than on the number of Measurements,
and that it is lower if TSI indexing is used.
I guess that memory usage will slowly grow forever, or at least until the Retention Policy deletes some data (and indexes), even if I define a fixed number of Measurements and a fixed set of tag values (which leads to a fixed number of Series).
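For context, the cardinality I'm planning for is fixed and can be computed up front. Here is a minimal sketch of that arithmetic (the measurement names and tag-value counts below are made-up placeholders, not our real schema):

```python
# Rough series-cardinality estimate: for each measurement, the number of
# series is the product of the distinct values of each of its tag keys.
# The schema below is a placeholder, not our real one.
measurements = {
    "cpu":  {"host": 50, "region": 3},   # tag key -> distinct values
    "mem":  {"host": 50},
    "disk": {"host": 50, "device": 4},
}

total_series = 0
for name, tags in measurements.items():
    series = 1
    for distinct_values in tags.values():
        series *= distinct_values
    print(f"{name}: {series} series")
    total_series += series

print(f"total cardinality: {total_series}")
```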
But I haven't found a way to make any forecast yet…
In my last test, using InfluxDB 1.5.1 with the default RP (infinite retention, default shard duration of 7 days),
influxd was fed simulation data: it started out using 700 MB, then used more memory as the days passed (peaks of about 900 MB), and then grew clearly once the second shard was created, up to peaks of more than 1 GB.
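The best I can do from that single run is a crude linear extrapolation, assuming (perhaps wrongly) that each new 7-day shard adds a roughly constant amount of resident memory; the figures below are just read off my test, so treat them as placeholders:

```python
# Crude forecast, assuming each new 7-day shard adds a roughly constant
# amount of resident memory. base/per-shard figures come from one test run.
base_mb = 700          # usage observed right after startup (first shard)
per_shard_mb = 150     # rough growth observed when the 2nd shard appeared
shard_days = 7

def forecast_mb(days: int) -> float:
    shards = max(1, -(-days // shard_days))  # ceiling division
    return base_mb + (shards - 1) * per_shard_mb

for days in (7, 30, 90, 365):
    print(f"after {days:3d} days: ~{forecast_mb(days):.0f} MB")
```

With an infinite RP this model never flattens out, which is exactly why I'd like a sounder argument for an upper limit.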
None of my Series will ever become "cold": any of our measurements can change value at any moment, none of them will be dismissed for months, and no new measurements/series will be added, so the simulation is not far from reality.
Does anybody know of any calculation/tool/argument that can help forecast the upper limit of RAM usage,
without running tests that last for months?
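To shorten the test in the meantime, I'm considering sampling influxd's own Go runtime stats (InfluxDB 1.x exposes the standard Go expvar output at /debug/vars) and fitting a trend to the samples. A sketch of what I have in mind; the host, port, and sampling interval are assumptions for my setup:

```python
# Periodically sample influxd's Go memory stats from the /debug/vars
# expvar endpoint and print them, so a growth trend can be fitted
# without waiting for months of wall-clock time.
import json
import time
import urllib.request

URL = "http://localhost:8086/debug/vars"  # adjust host/port as needed

while True:
    with urllib.request.urlopen(URL) as resp:
        stats = json.load(resp)
    mem = stats["memstats"]
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    print(f"{stamp} HeapAlloc={mem['HeapAlloc'] / 1e6:.0f}MB "
          f"Sys={mem['Sys'] / 1e6:.0f}MB")
    time.sleep(300)  # one sample every 5 minutes
```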
Thanks