Hi all, I'm trying to improve the performance of my queries. I have quite a lot of them, and some are complex and time-consuming to calculate, so a graph showing the last 3 months' average takes minutes to render.
Below is an example of a simple query. I'm thinking of using Flux to run it every hour and write the results back to InfluxDB. That way, graphs covering more than 1 hour can use a cheap query that just averages the pre-computed values, and only the most recent hour needs to be calculated from raw data.
Does anybody know how I can run a query on a schedule, e.g. every hour, and write the results back to InfluxDB under a new field, i.e. r_value_avg_quality? What are the options here?
I expect that using Python, for example, may be one possible way. If so, is there an example for such a case?
Thanks
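One built-in option (no Python needed) is an InfluxDB task: a Flux script with a `task` option that the server runs on a schedule and that writes its output back with `to()`. A minimal sketch, assuming a destination bucket named "test_rollup" and an org name "my-org" (both placeholders), and using a plain hourly mean just to show the shape:

```flux
// Hourly task sketch — task name, bucket and org are assumptions.
option task = {name: "quality_hourly", every: 1h}

from(bucket: "test")
    |> range(start: -task.every)  // only re-process the last hour on each run
    |> filter(fn: (r) =>
        r._measurement == "fautest" and
        r._field == "countfld")
    |> aggregateWindow(every: 1h, fn: mean)
    |> set(key: "_field", value: "r_value_avg_quality")  // store under the new field name
    |> to(bucket: "test_rollup", org: "my-org")          // write the rollup back to InfluxDB
```

The dashboard query for ranges longer than 1 hour can then read `r_value_avg_quality` from the rollup bucket instead of recomputing from raw data.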
cycle = from(bucket: "test")
    |> range($range)
    |> filter(fn: (r) =>
        r._measurement == "fautest" and
        r.trigger =~ /cycle|interval/ and
        r._field == "countfld")
    |> group(columns: ["_measurement", "_field"])
    |> sum(column: "_value")
    |> map(fn: (r) => ({
        r with
        _value_zero: if r._value == 0 then "reduce" else "keep"
    }))
    |> filter(fn: (r) => r._value_zero == "keep")

reject = from(bucket: "test")
    |> range($range)
    |> filter(fn: (r) =>
        r._measurement == "fautest" and
        r.trigger =~ /reject|interval/ and
        r._field == "countfld")
    |> group(columns: ["_measurement", "_field"])
    |> sum(column: "_value")

quality = join(tables: {reject: reject, cycle: cycle}, on: ["_start"])
    |> map(fn: (r) => ({
        _value: ((float(v: r._value_cycle) - float(v: r._value_reject)) / float(v: r._value_cycle)) * 100.0
    }))

quality |> yield()
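If this exact calculation should be the hourly rollup, the end of the pipeline can write back instead of only yielding. A sketch of that tail end, assuming a destination bucket "test_rollup" and org "my-org" (placeholders), and noting that `to()` needs `_time`, `_measurement` and `_field` columns, which the final `map()` above does not carry through:

```flux
// Sketch: persist the computed quality so dashboards can read it cheaply.
// Bucket and org names are assumptions; adjust to your setup.
quality
    |> map(fn: (r) => ({r with _time: now()}))           // stamp the rollup with the run time
    |> set(key: "_measurement", value: "fautest")
    |> set(key: "_field", value: "r_value_avg_quality")  // the new field name from the question
    |> to(bucket: "test_rollup", org: "my-org")
```

Run inside a task with `option task = {every: 1h}` and `range(start: -task.every)`, this keeps the expensive join limited to one hour of raw data per run.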