Best practices / examples for downsampling in Flux

Are there any examples for down-sampling more than one field?

I need to down-sample more than 30 fields. Right now I am doing it with a single continuous query, but it's not clear to me how it should be done in Flux.

SELECT mean("field1") AS field1, min("field1") AS field1_min, max("field1") AS field1_max ..... field35
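For reference, the full continuous query wrapping that SELECT is roughly this shape (database, retention-policy, and measurement names are simplified here):

CREATE CONTINUOUS QUERY "downsample_1h" ON "mydb"
BEGIN
  SELECT mean("field1") AS field1, min("field1") AS field1_min, max("field1") AS field1_max, ..., max("field35") AS field35_max
  INTO "mydb"."rp_1h"."my_measurement_1h"
  FROM "my_measurement"
  GROUP BY time(1h), *
END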

It appears I'll have to write a separate query for each field in Flux. Or is there a way to do aggregation across fields like the query above?

Also, should I put them all in one task, or should each one go in a separate task?

Hello @tintin,

Yes, I think you’re right. In 1.x you’ll have to create multiple CQs.

However, if you’re willing to upgrade to 2.x, you can use tasks to downsample multiple fields in one task. Your downsampling task would look something like:

option task = {name: "Downsampling CPU", every: 1m}

// Query the raw data once and reuse it for each field
data = from(bucket: "my-bucket")
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "my_measurement")

data
    |> filter(fn: (r) => r._field == "f1")
    |> mean()
    // mean() drops _time, but to() requires it, so copy the window stop time back in
    |> duplicate(column: "_stop", as: "_time")
    |> set(key: "field_type", value: "f1_mean")
    |> to(bucket: "downsampled", org: "my-org", tagColumns: ["field_type"])

data
    |> filter(fn: (r) => r._field == "f2")
    |> mean()
    |> duplicate(column: "_stop", as: "_time")
    |> set(key: "field_type", value: "f2_mean")
    |> to(bucket: "downsampled", org: "my-org", tagColumns: ["field_type"])
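If you don’t want to repeat that block 30+ times, one option is to skip the per-field filters and aggregate every field in a single pipeline with aggregateWindow(). A minimal sketch, assuming the same placeholder bucket, measurement, and org names as above, and computing only the mean:

option task = {name: "Downsample all fields", every: 1h}

from(bucket: "my-bucket")
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "my_measurement")
    // Each field arrives in its own table, so a single aggregateWindow() call
    // produces one mean per field, and it keeps a _time column that to() can use
    |> aggregateWindow(every: task.every, fn: mean)
    |> to(bucket: "downsampled", org: "my-org")

If you also need min and max, you would repeat the aggregateWindow()/to() pair with fn: min and fn: max, and rename the field first (for example with the set()/tagColumns approach above, or by rewriting _field in a map()) so the different aggregates don’t overwrite one another in the destination bucket.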