InfluxDB users, what is your series cardinality?

We have 1 million unique series at the moment, running on a dedicated server:
128GB RAM,
E5-1650 v3 @ 3.50GHz, 12 threads

Soon we plan to have 4 million series and will upgrade to 256GB RAM with the same CPU. Hopefully it will work without any problems.

Share your setup here, I am curious what others are doing.


@jimaek My biggest influx instance has ~1.75M series. It's running on 48GB RAM with 16 cores on DO. I recently had to upgrade it from 16GB with 8 cores. It has fairly low write throughput (200-300 points/second), however, so I'm able to get away with a bit less RAM.


For those that want to check, see:

FWIW, my meagre numbers are 363 total. I had to set the time range to now() - 20s to get anything.
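For anyone else looking, here are two ways to check (assuming InfluxDB 1.x; SHOW SERIES CARDINALITY needs 1.4+, and the second query needs the _internal monitor database enabled — stats are only written every 10s, hence the 20s window; "mydb" is a placeholder):

```sql
-- Estimated series cardinality for a database (InfluxDB >= 1.4):
SHOW SERIES CARDINALITY ON mydb

-- Older versions: run this against the _internal database
SELECT max(numSeries) FROM "database"
WHERE time > now() - 20s GROUP BY "database"
```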

InfluxDB 1.2.0 with ~20M series on a 64GB, 4-CPU VM with SAN storage on HDDs.
~15K points/sec input rate. SELECT queries from Grafana are not fast (1-20s) but acceptable in my case.

Series cardinality actually has two factors you should keep in mind:

  1. the overall number of series (we try to keep ours around 10M)
  2. the percentage of all series taken up by individual measurements

I would argue that the second bullet is actually the most important given the current index implementation (and from our experience). We were stuck in a situation where one measurement consumed >90% of all series! Queries to any measurement outside of that high-consumer were peppy, but any queries to the problematic measurement were painfully slow.

To work around this we split the data into multiple measurements with distinct names + tag/field sets. Our current metrics are now hovering around 4M total series with no more than 20% consumed in a given measurement.
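As a sketch of what that kind of split can look like in line protocol (measurement and tag names here are made up, not our real schema) — the total series count doesn't drop, but no single measurement dominates the index:

```
# before: everything under one measurement, one tag carries the cardinality
app_metrics,service=api,endpoint=/users value=1 1577836800000000000
app_metrics,service=worker,endpoint=/jobs value=1 1577836800000000000

# after: distinct measurement names, each with a smaller tag set
app_metrics_api,endpoint=/users value=1 1577836800000000000
app_metrics_worker,endpoint=/jobs value=1 1577836800000000000
```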

Two other things to note: keep your shard duration down (we set ours to 24h), and run influx_inspect report -detailed /path/to/shard/num periodically to check/confirm your assumptions.

[telegraf@hostname ~]# influx_inspect report -detailed /data/influxdb-data/metrics/tg_udp/autogen/80
File                    Series  Load Time
000002685-000000009.tsm 86800   14.48951ms
000002685-000000010.tsm 92842   12.960935ms
000002685-000000011.tsm 122727  13.021455ms
000002685-000000012.tsm 215579  29.913666ms
000002685-000000013.tsm 205107  25.127459ms
000002685-000000014.tsm 175803  20.049604ms
000002685-000000015.tsm 173341  20.876741ms
000002685-000000016.tsm 217583  31.234417ms
000002685-000000017.tsm 65644   19.076629ms
000002685-000000018.tsm 86856   20.725886ms
000002685-000000019.tsm 51993   12.721017ms
000002685-000000020.tsm 16989   2.95557ms
000002685-000000021.tsm 24415   3.361626ms
000002685-000000022.tsm 76576   16.115752ms
000002685-000000023.tsm 112803  25.944044ms
000002685-000000024.tsm 110493  29.59524ms
000002685-000000025.tsm 145284  28.684915ms
000002685-000000026.tsm 223899  24.122768ms
000002685-000000027.tsm 267428  24.175316ms
000002685-000000028.tsm 215458  21.993494ms
000002685-000000029.tsm 175496  19.747682ms
000002685-000000030.tsm 107031  12.916261ms
000002685-000000031.tsm 115859  14.703002ms
000002685-000000032.tsm 54781   7.669004ms
000002685-000000033.tsm 232092  33.904967ms
000002685-000000034.tsm 294668  43.143068ms
000002685-000000035.tsm 292889  55.489898ms
000002685-000000036.tsm 242793  35.731078ms
000002685-000000037.tsm 28417   4.196708ms

    Total (est): 4260868
  Measurements (est):
    timesync: 10638 (0%)
    custom: 12 (0%)
    network: 898003 (21%)
    system: 26341 (0%)
    iostat: 244745 (5%)
    netif: 141381 (3%)
    cgroup: 62150 (1%)
    cpu: 37340 (0%)
    custom_two: 61340 (1%)
    custom_three: 82996 (1%)
    hw_dell: 55105 (1%)
    diskio: 318962 (7%)
    custom_four: 99059 (2%)
    custom_five: 16379 (0%)
    custom_six: 23258 (0%)
    nstat: 794522 (18%)
    custom_seven: 112259 (2%)
    custom_eight: 99427 (2%)
    mem: 37110 (0%)
    rdmsr: 3297 (0%)
    custom_nine: 16445 (0%)
    custon_ten: 16496 (0%)
    custon_eleven: 72 (0%)
    mysql_variables: 4722 (0%)
    custom_twelve: 69 (0%)
    swap: 22340 (0%)
    disk: 857866 (20%)
    custom_thirteen: 81832 (1%)
    hardware: 7369 (0%)
    kernel: 14967 (0%)
    mysql: 5186 (0%)
    nvidia: 71828 (1%)

I am wondering if someone has pushed the new 1.5 release with TSI to more than 100 million series.

We want to add a new tag to our data that would push our unique series past 100 million, and I am afraid that Influx won't be able to handle that.
And even if it can handle it, how long would a medium-complexity query take on top of millions of data points grouped by the new tag?
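It's worth sketching why one new tag can do this: a measurement's worst-case series count is roughly the product of the distinct values across its tag keys, so adding a single high-cardinality tag multiplies the total rather than adding to it. A minimal sketch with hypothetical numbers (these are not from the thread):

```python
from math import prod

def series_cardinality(tag_value_counts):
    """Upper-bound series count for one measurement: the product of
    the number of distinct values of each tag key."""
    return prod(tag_value_counts.values())

# Hypothetical measurement: 500 hosts x 20 metric names -> 10,000 series
before = series_cardinality({"host": 500, "metric": 20})

# Add one tag with 10,000 distinct values (e.g. a per-customer id)
# and the count is multiplied, not incremented:
after = series_cardinality({"host": 500, "metric": 20, "customer_id": 10_000})

print(before)  # 10000
print(after)   # 100000000 -- 100M series from a single extra tag
```

This is why the usual advice is to keep unbounded identifiers (request ids, customer ids) out of tags unless you actually need to GROUP BY them.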

I have one DB with 62,965,446 series (I think that's the highest).

2 nodes, 200GB each, 4 cores. It fell over on a daily basis.