Combine Rows at same time point

In the picture below you can see that I have 4 rows all occurring at the same time. I would like to create a query that combines these 4 points into one row for that time, grouped on the ‘deviceId’ column. The resulting value of that row should be 1 if all the rows have a value of 1, or 0 if any of the rows contains a 0 at that time.

Any tips on how I might go about this are greatly appreciated. Thank you!

Hi @dapuli,
This should be possible. Here is my example and we can modify it to fit yours:

from(bucket: "Jetson")
 |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
 |> filter(fn: (r) => r["_measurement"] == "exec_jetson_stats")
 |> filter(fn: (r) => r["_field"] == "jetson_CPU1" or r["_field"] == "jetson_CPU3" or r["_field"] == "jetson_CPU2")
 |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
 |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
 |> map(fn: (r) => ({r with _value: if r.jetson_CPU1 == r.jetson_CPU2 and r.jetson_CPU2 == r.jetson_CPU3 then 1 else 0}))
 |> yield(name: "compare")

In my case, I have 3 fields, so I pivot based on the field column. In your case it's 4 tag values, so you will pivot on the tag column:

 |> pivot(rowKey:["_time"], columnKey: ["moduleId"], valueColumn: "_value")

Now that each moduleId has its own column (device, edgehub, etc.), we can compare these to one another. In my case I compared 3, but the logic is the same. Let me know if something like this works for you. Here is the result:
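Adapted to the tag case, a minimal sketch (the column names device and edgehub are just placeholders for whatever values your moduleId tag actually has):

 |> pivot(rowKey: ["_time"], columnKey: ["moduleId"], valueColumn: "_value")
 |> map(fn: (r) => ({r with _value: if r.device == 1.0 and r.edgehub == 1.0 then 1 else 0}))

After the pivot, each former tag value is a column on the row record, so the map() can compare them all in one expression.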

Thank you for the help! I was able to use the example to achieve what I needed. I do have a follow-up question though.

Say, for example, I did not know all the names in the moduleId column (if this query were running in another app). In that case I could not hard-code the names in the map() function.

Is there a way I could drop the other columns and loop through the remaining ones, checking that they all equal 1 and returning 1 as the value in that case?

Below is the query I used to solve my original question.

from(bucket: "Values")
  |> range(start: -90d, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "deviceHealth" and r["deviceId"] == "Bengough-WTP-Controller")
  |> aggregateWindow(every: 30m, fn: last)
  |> fill(usePrevious: true)
  |> fill(value: 0.0)
  |> pivot(rowKey: ["_time"], columnKey: ["moduleId"], valueColumn: "_value")
  |> drop(columns: ["_measurement", "_start", "_stop", "deviceId", "hubName", "_field"])
  |> map(fn: (r) => ({r with _value: if r.Device == 1.0 and r.Modbus == 1.0 and r.ModbusAdapter == 1.0 and r["$edgeHub"] == 1.0 then 1 else 0}))

@Anaisdg, could @dapuli's second query be achieved using a regular expression to check all columns?

Hello @dapuli,
Flux isn’t Turing complete and there aren’t looping capabilities yet, but that’s on the roadmap. You could definitely try using a regex to keep just the columns you want:

import "influxdata/influxdb/schema"

matchingTags = schema.tagKeys(bucket: "system")
  |> filter(fn: (r) => r._value =~ /_s.*/)
  |> findColumn(fn: (key) => true, column: "_value")

from(bucket: "system")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "cpu")
  |> filter(fn: (r) => r.cpu == "cpu-total" or r.cpu == "cpu-0")
  |> limit(n: 3)
  |> keep(columns: matchingTags)
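Another option that sidesteps column names entirely (a sketch based on the query earlier in this thread, not tested against your data): skip the pivot and aggregate across the modules instead. Because the values are only 0 or 1, the minimum across all moduleId series in a window is 1 exactly when every module reported 1:

from(bucket: "Values")
  |> range(start: -90d, stop: now())
  |> filter(fn: (r) => r["_measurement"] == "deviceHealth" and r["deviceId"] == "Bengough-WTP-Controller")
  |> aggregateWindow(every: 30m, fn: last)
  |> fill(usePrevious: true)
  |> fill(value: 0.0)
  |> group(columns: ["deviceId"])
  |> aggregateWindow(every: 30m, fn: min, createEmpty: false)

The group() call removes moduleId from the group key so all module series merge into one table per device, and the final aggregateWindow() takes the minimum per 30-minute window, which plays the role of the hard-coded and-chain in the map().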