According to Influx’s documentation,
“The increase in RAM needs relative to series cardinality is exponential […]”
Is the exponential increase per:
- Measurement
- Database
- Or DB Engine?
---
At “worst”, it would have to be per DB Engine. That is, if I ran 10 DB Engines on one server, evenly distributed so that each holds 1/10 of the series, each engine would only ever see 1/10 of the total cardinality. And if the growth really is exponential in cardinality, that should cut each engine’s RAM requirement by far more than a flat factor of 10 (while of course adding tons of operational overhead).
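As a sanity check on that arithmetic, here is a toy model. The growth base and series count below are invented for illustration and are not from Influx’s docs; the point is only that splitting an exponential cost across 10 engines saves far more than 10x:

```python
# Hypothetical model: RAM grows as BASE ** cardinality.
# Both constants are made up, purely to illustrate the scaling argument.
BASE = 1.0001          # assumed per-series growth base
TOTAL_SERIES = 100_000 # assumed total series cardinality

# One engine holding everything:
single_engine_ram = BASE ** TOTAL_SERIES

# Ten engines, each holding 1/10 of the series:
ten_engines_ram = 10 * (BASE ** (TOTAL_SERIES / 10))

# Under an exponential model, the ratio is far larger than 10.
print(single_engine_ram / ten_engines_ram)
```

Under this (assumed) model the combined RAM of the ten engines is hundreds of times smaller, not merely 10x smaller, which is what makes sharding attractive despite the overhead.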
---
If the “exponential” growth is per Database total, I have a few high-cardinality Measurements that I could move into separate DBs (in the most extreme scenario, ending up with one DB per Measurement).
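To make that per-Database scenario concrete, here is a sketch with invented per-Measurement series counts (the real numbers depend entirely on the schema):

```python
# Hypothetical series counts per measurement, for illustration only.
series_per_measurement = {
    "attempts": 500_000,  # assumed high-cardinality measurement
    "errors":   200_000,
    "uptime":       100,
}

# Everything in one database: that database sees the summed cardinality.
one_db_cardinality = sum(series_per_measurement.values())

# Extreme scenario (one database per measurement): the worst-off
# database only carries the largest single measurement's cardinality.
worst_db_cardinality = max(series_per_measurement.values())

print(one_db_cardinality, worst_db_cardinality)  # 700100 500000
```

If the RAM cost is per database, the split caps each database at its own measurement’s cardinality instead of the sum, which is exactly the lever the question is asking about.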
---
If the “exponential” growth is per Measurement, then I guess there’s nothing in the schema / architecture I can change to prevent it, except maybe splitting some data into separate Measurements (e.g. Measurement_Successful_Attempts / Measurement_Failed_Attempts), which lowers each Measurement’s cardinality even though the total stays the same.
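A rough sketch of what that split buys, assuming series cardinality is approximately the product of distinct tag values (the tag names and counts below are invented):

```python
from math import prod

# Hypothetical tags on a single measurement; "status" has two values
# (successful / failed attempts).
tag_values = {"host": 100, "endpoint": 50, "status": 2}

# All data in one measurement:
one_measurement = prod(tag_values.values())  # 100 * 50 * 2 series

# Split by status into two measurements: the "status" tag effectively
# moves into the measurement name, so it drops out of each product.
per_split = prod(v for k, v in tag_values.items() if k != "status")
total_after_split = 2 * per_split  # same overall series count

print(one_measurement, per_split, total_after_split)  # 10000 5000 10000
```

So the split halves each Measurement’s cardinality without reducing the total; whether that helps depends precisely on whether the RAM growth is per Measurement or per something larger.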