Performance of tasks

Hi,

I am trying a simple task on InfluxDB 2.0 and its execution time is much slower than I would expect (execution time ~0.8s).
The query used by the task returns 350 rows. Any ideas / thoughts?

import "math"

option v = {bucket: "", timeRangeStart: -1h, timeRangeStop: now()}
option task = {name: "testing", every: 1m}

from(bucket: "customerA_bucket")
    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
    |> filter(fn: (r) => r["_measurement"] == "iot_data")
    |> filter(fn: (r) => r["_field"] == "00020102" or r["_field"] == "00020101")
    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
    |> map(fn: (r) => ({r with "_value": math.abs(x: r["00020102"] - r["00020101"]), "_field": "10"}))
    |> to(bucket: "test", org: "example")

Hello @Geo_Gkez,
Welcome!
Your query looks good to me, but it might be worth trying this approach. Based on your query, it looks like your r["_field"] == "00020102" and r["_field"] == "00020101" values occur at the same timestamp. You could group by "_time" and then apply the difference() function to each table, which presumably has two rows in it (one from "00020102" and one from "00020101"). Then you could use conditional mapping to multiply the value by -1 if it is negative.
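A rough, untested sketch of that approach, reusing the bucket, measurement, and field names from the original query (the conditional map replaces math.abs(); depending on your Flux version you may need to regroup before to()):

```
// Sketch: group by "_time" so each table holds the two field rows at one
// timestamp, take their difference, then flip negative results positive.
from(bucket: "customerA_bucket")
    |> range(start: -1h)
    |> filter(fn: (r) => r["_measurement"] == "iot_data")
    |> filter(fn: (r) => r["_field"] == "00020102" or r["_field"] == "00020101")
    |> group(columns: ["_time"])
    |> difference(columns: ["_value"])
    |> map(fn: (r) => ({r with
        _value: if r._value < 0.0 then r._value * -1.0 else r._value,
        _field: "10",
    }))
    |> to(bucket: "test", org: "example")
```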

This could work better because filter + group is a pushdown pattern. To learn more, please see the documentation on optimizing Flux queries.

I'll also share this question with the Flux team. Thanks for the good question.

Hi Anais, thanks for the reply,

Your suggestion works fine when I don't have a lot of data, but when I have a lot of data it is more efficient to use the query above.