I'm trying to migrate all my metrics from InfluxDB 1.8 to 2.x, and I have one query that seems to take forever.
I'm tracking the % used on 16 HDDs in my NAS over a 30-day range. In 1.8 it ran fine and never timed out; now it takes almost 30 seconds just to run.
Here's the actual Flux query:
from(bucket: "armin")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "disk")
|> filter(fn: (r) => r["_field"] == "used_percent")
|> filter(fn: (r) => r["device"] == "mergerfs" or r["device"] == "sda" or r["device"] == "sdb" or r["device"] == "sdc" or r["device"] == "sdd" or r["device"] == "sde" or r["device"] == "sdf" or r["device"] == "sdg" or r["device"] == "sdi" or r["device"] == "sdh" or r["device"] == "sdj" or r["device"] == "sdk" or r["device"] == "sdl" or r["device"] == "sdm" or r["device"] == "sdn" or r["device"] == "sdo")
|> group(columns: ["device"])
|> aggregateWindow(every: v.windowPeriod, fn: last, createEmpty: false)
|> yield(name: "last")
It works and returns the data I'm looking for, but it takes almost 25 seconds in the Influx Data Explorer, and even longer when run through Grafana.
I'm sure there's a simple tweak that would make it a lot more efficient; I'm just not sure what, since I'm learning Flux as I go.
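One thing I've been wondering, in case it's relevant: would collapsing the 16 equality checks on "device" into a single regex filter change anything? Something like this (same devices, mergerfs plus sda through sdo, just written as a pattern):

|> filter(fn: (r) => r["device"] =~ /^(mergerfs|sd[a-o])$/)

I don't know whether regex filters get pushed down to the storage layer the same way simple equality filters do, so this is just a guess on my part, not something I've confirmed helps.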