Sum of points in 6 series

Hello,

I used to work with InfluxQL, but now I'm trying Flux, and there's a lot I don't understand…
With this query, I get 6 graphs:

from(bucket: "BddNew")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "mqtt_consumer")
  |> filter(fn: (r) => r["topic"] == "/Solaire/ECU/408000047303/out" or r["topic"] == "/Solaire/ECU/408000047265/out" or r["topic"] == "/Solaire/ECU/408000047182/out") 
  |> filter(fn: (r) => r["_field"] == "p0" or r["_field"] == "p1")
  |> aggregateWindow(every:10s, fn: mean, createEmpty: false)

I have 3 topics with 2 series each. I aggregate with a 10s window because the data points don't arrive at exactly the same time.

I want the sum of all 6 series in one graph. I have tried a lot of queries and read the docs, but with no success; the Flux language is difficult for me…
If someone can help me…

Hello @YellowSub,
Yes, Flux is quite challenging at first, but once you understand it, it opens up a lot more possibilities and offers more power.

There are two main ways I can think of to solve this problem; the example below demonstrates both on some sample CSV data:

import "csv"
csvData = "#group,false,false,true,true,false,false,true,true,true
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,double,string,string,string
#default,mean,,,,,,,,
,result,table,_start,_stop,_time,_value,_field,_measurement,location
,,0,2021-11-28T10:35:11.432769193Z,2021-12-28T10:35:11.432769193Z,2021-11-28T11:00:00Z,83.75,degrees,average_temperature,coyote_creek
,,0,2021-11-28T10:35:11.432769193Z,2021-12-28T10:35:11.432769193Z,2021-11-28T12:00:00Z,81.5,degrees,average_temperature,coyote_creek
,,0,2021-11-28T10:35:11.432769193Z,2021-12-28T10:35:11.432769193Z,2021-11-28T13:00:00Z,79.5,degrees,average_temperature,coyote_creek
,,1,2021-11-28T10:35:11.432769193Z,2021-12-28T10:35:11.432769193Z,2021-11-28T11:00:00Z,0.9810000000000001,water_level,h2o_feet,coyote_creek
,,1,2021-11-28T10:35:11.432769193Z,2021-12-28T10:35:11.432769193Z,2021-11-28T12:00:00Z,0.6603000000000001,water_level,h2o_feet,coyote_creek
,,1,2021-11-28T10:35:11.432769193Z,2021-12-28T10:35:11.432769193Z,2021-11-28T13:00:00Z,1.2026999999999999,water_level,h2o_feet,coyote_creek
,,2,2021-11-28T10:35:11.432769193Z,2021-12-28T10:35:11.432769193Z,2021-11-28T11:00:00Z,3.87275,water_level,h2o_feet,santa_monica
,,2,2021-11-28T10:35:11.432769193Z,2021-12-28T10:35:11.432769193Z,2021-11-28T12:00:00Z,3.5991,water_level,h2o_feet,santa_monica
,,2,2021-11-28T10:35:11.432769193Z,2021-12-28T10:35:11.432769193Z,2021-11-28T13:00:00Z,3.1909,water_level,h2o_feet,santa_monica
,,3,2021-11-28T10:35:11.432769193Z,2021-12-28T10:35:11.432769193Z,2021-11-28T11:00:00Z,73.75,degrees,average_temperature,santa_monica
,,3,2021-11-28T10:35:11.432769193Z,2021-12-28T10:35:11.432769193Z,2021-11-28T12:00:00Z,81.9,degrees,average_temperature,santa_monica
,,3,2021-11-28T10:35:11.432769193Z,2021-12-28T10:35:11.432769193Z,2021-11-28T13:00:00Z,79.2,degrees,average_temperature,santa_monica
"
data = csv.from(csv: csvData)
  |> range(start: -30d, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "h2o_feet" or r["_measurement"] == "average_temperature")
  |> filter(fn: (r) => r["_field"] == "degrees" or r["_field"] == "water_level")
  |> filter(fn: (r) => r["location"] == "coyote_creek" or r["location"] == "santa_monica")
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
  |> limit(n:3)

data 
  |> yield(name: "4 lines")

data
  |> group() 
  |> pivot(rowKey:["_time"], columnKey: ["_field", "location"], valueColumn: "_value")
  |> map(fn: (r) => ({ r with _value: r.degrees_coyote_creek + r.degrees_santa_monica + r.water_level_coyote_creek + r.water_level_santa_monica }))
  |> yield(name: "solution 1")

data  
  |> group(columns: ["_time"], mode:"by")
  |> sum(column: "_value")
  |> group() 
  |> yield(name: "solution 2")

Solution 1 involves pivoting the data around the timestamp column, essentially transforming the data into the kind of table you'd expect to see in a SQL database: the columns are field1_tag1, field1_tag2, field2_tag1, etc., and the values in those columns are the values of those fields with those tags. Then you can use the map() function to sum across the columns on each row.
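Applied to your original query, solution 1 might look something like this (an untested sketch using the same bucket, topics, and fields from your post; note that the pivoted column names contain slashes, so they have to be accessed with bracket notation, and if any of the six series is missing a window, the sum for that row will be null):

```flux
from(bucket: "BddNew")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "mqtt_consumer")
  |> filter(fn: (r) => r["topic"] == "/Solaire/ECU/408000047303/out" or r["topic"] == "/Solaire/ECU/408000047265/out" or r["topic"] == "/Solaire/ECU/408000047182/out")
  |> filter(fn: (r) => r["_field"] == "p0" or r["_field"] == "p1")
  |> aggregateWindow(every: 10s, fn: mean, createEmpty: false)
  // drop the grouping so all 6 series land in one table before pivoting
  |> group()
  // one column per field/topic combination, one row per timestamp
  |> pivot(rowKey: ["_time"], columnKey: ["_field", "topic"], valueColumn: "_value")
  // pivot joins the columnKey values with "_"; the names contain "/", so use bracket notation
  |> map(fn: (r) => ({ _time: r._time, _value:
        r["p0_/Solaire/ECU/408000047303/out"] + r["p1_/Solaire/ECU/408000047303/out"]
      + r["p0_/Solaire/ECU/408000047265/out"] + r["p1_/Solaire/ECU/408000047265/out"]
      + r["p0_/Solaire/ECU/408000047182/out"] + r["p1_/Solaire/ECU/408000047182/out"] }))
  |> yield(name: "sum of 6 series")
```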

Solution 2 involves grouping on time so that each of the resulting smaller tables contains only the values that share one timestamp; you can then add them up with the sum() function, and finally ungroup with an empty group() call so the result comes back as a single series.
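For your data, solution 2 is probably the simpler one to adapt, since it doesn't require spelling out the column names. A rough, untested sketch based on your original query (the aggregateWindow() step matters here: it is what lines the six series up on shared 10s timestamps so that grouping by _time works):

```flux
from(bucket: "BddNew")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "mqtt_consumer")
  |> filter(fn: (r) => r["topic"] == "/Solaire/ECU/408000047303/out" or r["topic"] == "/Solaire/ECU/408000047265/out" or r["topic"] == "/Solaire/ECU/408000047182/out")
  |> filter(fn: (r) => r["_field"] == "p0" or r["_field"] == "p1")
  |> aggregateWindow(every: 10s, fn: mean, createEmpty: false)
  // each table now holds the (up to 6) values that share one 10s window
  |> group(columns: ["_time"], mode: "by")
  |> sum(column: "_value")
  // ungroup so the result is rendered as a single series
  |> group()
  |> yield(name: "sum of 6 series")
```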

Anaïs, thank you very much for your answer.
I don’t understand everything, but I will try…

Happy new year to all!