RAM grows exponentially with Series Cardinality, but per what?

According to Influx’s documentation,

"The increase in RAM needs relative to series cardinality is exponential […]"

Is the exponential increase per:

  1. Measurement
  2. Database
  3. DB Engine?

  1. At “worst” it has to be per DB Engine. That is, if I ran 10 DB Engines on one server (evenly distributed, each holding 1/10 of the series), I’d be breaking the “exponential” RAM growth into ten much smaller curves (while of course adding plenty of overhead, but still avoiding one big exponential).

  2. If the “exponential” growth is per Database, I could move my few high-cardinality Measurements into separate DBs (in the most extreme scenario, ending up with one DB per Measurement).

  3. If the “exponential” growth is per Measurement, then there’s nothing in the schema / architecture I can change to prevent it, except perhaps splitting some data into separate Measurements (e.g. Measurement_Successful_Attempts / Measurement_Failed_Attempts).
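For reference, series cardinality is driven by the number of unique measurement-plus-tag-set combinations, so the worst case is the product of the distinct values per tag key. A rough sketch (the tag keys and values below are made-up examples, not anything from my schema):

```python
def worst_case_cardinality(tag_values):
    """Worst-case series count for one measurement.

    tag_values: dict mapping tag key -> set of possible values.
    Assumes every combination of tag values actually occurs.
    """
    total = 1
    for values in tag_values.values():
        total *= len(values)
    return total

# Example: a measurement tagged by host and region.
tags = {"host": {"h1", "h2", "h3"}, "region": {"us", "eu"}}
print(worst_case_cardinality(tags))  # 3 hosts * 2 regions = 6 series
```

This is why splitting one measurement into two (scenario 3) only helps if the split also reduces the tag-value combinations each measurement carries.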

@paulo That documentation is old and inaccurate. RAM usage grows linearly with the number of series per instance. Also, we have a major change coming to the series index that will keep in memory only the series that were recently written to.
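Given linear growth, a back-of-the-envelope capacity estimate is just series count times a per-series cost. The per-series figure below is a hypothetical placeholder for illustration, not an official InfluxDB number:

```python
# Hypothetical per-series index overhead, for planning only.
BYTES_PER_SERIES = 8 * 1024

def estimated_ram_bytes(series_count, bytes_per_series=BYTES_PER_SERIES):
    """Linear model: RAM scales proportionally with series count."""
    return series_count * bytes_per_series

# Linear, not exponential: doubling the series doubles the estimate.
assert estimated_ram_bytes(2_000_000) == 2 * estimated_ram_bytes(1_000_000)
```

Under a linear model, splitting series across engines or databases changes where the RAM is used, not how much is used in total.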