@Marion_Akagi I simplified your query a little bit and it should be much more performant. You don't need a join, and I condensed the multiple `map()` calls down to a single `map()` call. In that call I explicitly map only the columns you want returned, so everything else is dropped. It also duplicates the `discharge_cycles` column as `discharge_cycles_total`. I then run `cumulativeSum()` on the `discharge_cycles_total` column:
from(bucket: "rdata")
    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
    |> filter(fn: (r) => r["_measurement"] == "site")
    |> filter(fn: (r) => r["point_name"] == "kW")
    // Sum the kW readings into one value per day
    |> aggregateWindow(every: 1d, fn: sum)
    // Single map(): compute the derived values, then return only the columns to keep
    |> map(
        fn: (r) => {
            dischargemwhs = if r._value > 0.0 then r._value / 3600000.0 else 0.0
            chargemwh = if r._value < 0.0 then r._value / (-3600000.0) else 0.0
            discharge_cycles = dischargemwhs / 15.0

            return {
                time: r._time,
                dischargemwhs: dischargemwhs,
                chargemwh: chargemwh,
                discharge_cycles: discharge_cycles,
                discharge_cycles_total: discharge_cycles,
            }
        },
    )
    // Running total of the daily discharge cycles
    |> cumulativeSum(columns: ["discharge_cycles_total"])
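One note on the explicit record above: because `map()` returns a brand-new record, every column not listed (including tags like `point_name`) is dropped, which is what you asked for. If you ever do want to keep the original columns, a minimal variant is to extend the input record with `r with` instead of replacing it (sketch below, reusing the same thresholds and the 15.0 divisor from your query):

```flux
    |> map(
        fn: (r) => {
            // Same conversion as above: daily discharge MWh divided by 15.0
            discharge_cycles = (if r._value > 0.0 then r._value / 3600000.0 else 0.0) / 15.0

            // "r with" keeps every existing column/tag and adds the new ones
            return {r with discharge_cycles: discharge_cycles, discharge_cycles_total: discharge_cycles}
        },
    )
    |> cumulativeSum(columns: ["discharge_cycles_total"])
```

Nothing else about the query changes; only the record construction inside `map()` is different.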
Let me know if this solves your problem.