What are storage and downsampling best practices?

We’ve recently implemented an InfluxDB/Telegraf/Grafana solution. We use Telegraf plugins to gather data from SNMP, Apache, HAProxy, RabbitMQ, and the default system metrics (CPU, load, disk, etc.). This results in a variety of different time series measurements. Some, like the Spring v2 Actuator SNMP metrics, result in a single “value” field per measurement; however, most measurements have multiple fields. Our setup results in 90+ measurements.

We would like to roll the data up into three additional retention policies where the field names are the same as they are in the default policy (as opposed to mean_field_name, for example). By using the same names, we can point the same Grafana graphs at any retention policy.
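For concreteness, a minimal sketch of the kind of CQ this implies, assuming a hypothetical "one_year" retention policy on a "telegraf" database and made-up haproxy field names; the explicit AS aliases are what keep the downsampled field names identical to the default policy:

CREATE RETENTION POLICY "one_year" ON "telegraf" DURATION 52w REPLICATION 1

-- One CQ per measurement per retention policy; aliasing mean("field") AS "field"
-- preserves the original field names instead of the default mean_field names
CREATE CONTINUOUS QUERY "cq_haproxy_one_year" ON "telegraf"
BEGIN
  SELECT mean("qcur") AS "qcur", mean("scur") AS "scur"
  INTO "telegraf"."one_year"."haproxy"
  FROM "telegraf"."autogen"."haproxy"
  GROUP BY time(10m), *
END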

Currently, that means three continuous queries for each measurement, which can ultimately result in hundreds of CQs. I’m curious what the best practices are for CQs and metrics. For instance:

– Should we have configured our collectors to deliver a single value per measurement? The generic CQ example seems to assume a single field per measurement (see: [feature request] Be able to rename Measurements created with CQ wild card · Issue #4775 · influxdata/influxdb · GitHub); a rough sketch of that generic CQ is included after this list. We could use a single CQ per retention policy if we only had one “value” field per measurement, but then we would have thousands of measurements in the database, which seems unmanageable (SHOW MEASUREMENTS, for instance, would be unwieldy).

– On the other hand, are we better off with measurements that have many fields, accepting the many CQs needed to roll them up (as described above)?

– Or is there some other strategy that I’m missing altogether?
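For comparison, the generic single-CQ-per-retention-policy approach from the linked issue would look roughly like this (names are again illustrative); because the wildcard cannot be aliased, mean(*) writes fields named mean_<field>, which is why it only lines up with the original graphs when each measurement has a single “value” field:

-- One wildcard CQ covers every measurement, but field names come out as mean_*
CREATE CONTINUOUS QUERY "cq_all_one_year" ON "telegraf"
BEGIN
  SELECT mean(*)
  INTO "telegraf"."one_year".:MEASUREMENT
  FROM /.*/
  GROUP BY time(10m), *
END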

I appreciate the help.
Mike

Hello @mdaigle,

When you start to have a large amount of CQs, we recommend offloading that workload to Kapacitor. Please take a look at this blog post on using Kapacitor as a Continuous Query engine to get started.
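To give a rough idea of what that offload looks like, here is a minimal TICKscript batch task sketch along the lines of that blog post; the database, retention policy, measurement, and field names are illustrative assumptions, not something from your setup:

// Hypothetical batch task that mirrors one of the per-measurement CQs above
batch
    |query('SELECT mean("scur") AS "scur" FROM "telegraf"."autogen"."haproxy"')
        .period(10m)
        .every(10m)
        .groupBy(time(10m), *)
    |influxDBOut()
        .database('telegraf')
        .retentionPolicy('one_year')
        .measurement('haproxy')
        .precision('s')

The task would then be registered with kapacitor define (as a batch task against telegraf.autogen) and turned on with kapacitor enable.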

Thank you. I will check it out.
