Capacity planning for InfluxDB storage



I have started using InfluxDB to collect system and HTCondor job-scheduler metrics in our cluster, which will grow to a few hundred compute nodes (plus about six servers that we also want to monitor). Monitoring is done with Grafana and collection with Telegraf. I am currently collecting the basic network and system metrics with Telegraf, plus roughly 6 to 12 different metrics from HTCondor.

The work is being done in the lab for now, but I would like to understand how to determine the storage space needed for the InfluxDB database so I can plan services before we go into production. I am not yet sure how long we must retain the data; it could be 3 months, 6 months, or 1 year. Any guidance would be greatly appreciated!


This is from the “How Much Storage Do I Need?” section of the Hardware Sizing Guide.

Database names, measurements, tag keys, field keys, and tag values are stored only once and always as strings. Only field values and timestamps are stored per-point.

Non-string values require approximately three bytes each. String values require a variable amount of space, determined by string compression.

The amount of storage used depends on the number of measurements you have, the frequency of those measurements, the amount of data in each write, how compressible that data is, and how long you retain the data, as well as how these factors vary over time with changing business inputs.
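Those factors can be combined into a rough back-of-envelope estimate. This is only a sketch: the ~3 bytes per non-string value comes from the sizing guide quoted above, while the node count, field count, and collection interval below are hypothetical placeholders you would replace with your own numbers. It ignores string fields, index, and WAL overhead, so treat the result as a floor, not a prediction.

```python
def estimate_storage_gb(nodes, fields_per_node, interval_s, retention_days,
                        bytes_per_value=3):
    """Rough on-disk size in GB for non-string field values.

    Assumes ~3 bytes per compressed non-string value (per the
    Hardware Sizing Guide); ignores strings and index overhead.
    """
    points_per_day = nodes * fields_per_node * (86_400 / interval_s)
    total_bytes = points_per_day * retention_days * bytes_per_value
    return total_bytes / 1e9


# Hypothetical example: 300 nodes, 50 fields each, 10 s collection
# interval, 90-day retention.
print(round(estimate_storage_gb(300, 50, 10, 90), 1))  # ~35.0 GB
```

Running the same function with 180- and 365-day retention gives you the other two scenarios you mentioned, which is usually enough to size a disk budget range.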

As a result, the best way to do this kind of capacity planning is empirically; write data for a period of time and observe how much storage it uses and how that usage changes over time. When adding new services, perform these same tests to understand the additional data a service will generate, and continuously monitor your usage for unexpected changes.
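A minimal sketch of that empirical approach is to periodically record the on-disk size of the InfluxDB data directory (the path below is an assumption for a default Linux install; adjust for yours) and trend the growth, for example from a daily cron job:

```python
import os


def dir_size_bytes(path):
    """Total size of all regular files under path, recursively."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total


# Example usage: log the size once per day to build a growth history.
# "/var/lib/influxdb/data" is the default data dir on many Linux
# installs; it is an assumption here, not universal.
# print(dir_size_bytes("/var/lib/influxdb/data"))
```

Comparing a few weeks of these readings against the back-of-envelope estimate tells you how much real-world overhead (indexes, strings, cardinality growth) to budget for.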


Thank you, Noah! I will examine the sizing guide.