I have data returning a single series, so I have _value and _time. Now I need to aggregate by window. I am trying to find all outliers by aggregating with both min and max (so I want the min and the max value for each aggregation window). This will let me spot spikes and drops when looking at graphs over time. I do not want two columns (one for min and one for max); I want a single column (so one line in Grafana), making sure that in each aggregation window I pick one min and one max and never miss either value.
Thank you. I got to this, which works (for whoever might be looking for/at this):
data = () => from(bucket: "aerolab")
    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
    |> filter(fn: (r) =>
        r._measurement == "nsDevice" and
        r._field == "ReadBytesPerSecond" and
        r.nodeIdent =~ /${nodeIdent:regex}/
    )

min = data()
    |> aggregateWindow(every: v.windowPeriod, fn: min)

max = data()
    |> aggregateWindow(every: v.windowPeriod, fn: max)

union(tables: [min, max])
The nodeIdent filter is for a multi-select dropdown variable in Grafana. Now my problem is that, while this works, it produces twice the number of datapoints it should (since I am picking two datapoints per window instead of one).
So a follow-up question, which would solve this for me: is there a way to make aggregateWindow use a window of 2x the v.windowPeriod variable?
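One way to double the window, as a sketch: Flux's int() converts a duration to its nanosecond count, and duration() converts an integer back to a duration, so v.windowPeriod can be scaled before it is passed to aggregateWindow. The every2x name here is just illustrative.

```flux
// Double the Grafana-supplied window period.
// int(v: <duration>) yields nanoseconds; duration(v: <int>) converts back.
every2x = duration(v: 2 * int(v: v.windowPeriod))

min = data()
    |> aggregateWindow(every: every2x, fn: min)

max = data()
    |> aggregateWindow(every: every2x, fn: max)

union(tables: [min, max])
```

Since each doubled window then contributes exactly one min and one max, the union should come back to roughly the original number of datapoints.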
Side note for others who might find this thread: the reason data = is defined as a function rather than a plain result is that the query runs much faster this way. As I understand it, each call to data() re-executes the source query, which lets the engine push the range and filter down to storage for each branch, instead of materialising the stream once and sharing it — something to do with pipeline/pushdown optimisation.
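For comparison, the slower variant would look like this sketch — the same pipeline, but with data bound to a stream instead of a function, so both branches share one materialised result (the Grafana regex filter is dropped here to keep the sketch self-contained):

```flux
// Slower variant: data is a stream, not a function, so the
// filtered result is materialised once and reused by both branches.
data = from(bucket: "aerolab")
    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
    |> filter(fn: (r) =>
        r._measurement == "nsDevice" and
        r._field == "ReadBytesPerSecond"
    )

min = data |> aggregateWindow(every: v.windowPeriod, fn: min)
max = data |> aggregateWindow(every: v.windowPeriod, fn: max)

union(tables: [min, max])
```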