Flux: Joining tables with different row counts

My challenge is fairly straightforward. I’m building a table for visualization in Grafana. This table consists of a number of current values for a set of Stations (host).

What I do so far is query last() per relevant variable and join() the resulting tables on “host”, creating something like this:

              Value 1         Value 2         Value 3
 Station 1    X               1               4
 Station 2    X               1               2
 Station 3    Y               1               3

This is easy enough and works well, with one exception: there is one sensor variable that is not present at all stations. Consequently, when I try to join that particular result table to the table I’m building, the entire row for that station is removed.
Instead, I would simply like to show a value of 0 for the missing sensor.

What I want:

              Value 1         Value 2         Value 3      Value 4
 Station 1    X               1               4            2
 Station 2    X               1               2            2
 Station 3    Y               1               3            0 <--

What I get:

              Value 1         Value 2         Value 3      Value 4
 Station 1    X               1               4            2
 Station 2    X               1               2            2

I’ve been trying to think of a decent workaround, but so far I’m coming up empty.
Anyone have a clever suggestion?

Hello @christiaan,
Have you tried mapping values based on whether they exist?
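For example, something along these lines (just a rough sketch; data stands in for whatever stream you are building, and the sensor2 column name is a placeholder for wherever the value ends up after a join or pivot):

data
  |> map(fn: (r) => ({ r with sensor2: if exists r.sensor2 then r.sensor2 else 0.0 }))

exists evaluates to false when the column is missing from the record, so the map can fill in a default.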

Hi @Anaisdg,

I’ve tried variations of map() and set(), but my problem is that, since the given sensor does not exist for one station, the query does not return that station at all in the result table.
So there is no row where r._value can exist or not exist.
Maybe I misunderstand your suggestion, in which case: could you be more explicit?

The relevant part of the query looks as follows:

sensor1 = from(bucket: "StationData-Test")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) =>
    r._measurement == "plc" and
    r._field == "sensor1"
    )
  |> group(columns: ["host"] )
  |> sort(columns: ["_time"])
  |> keep(columns: ["host", "_time", "_value"])
  |> last()

sensor2 = from(bucket: "StationData-Test")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) =>
    r._measurement == "plc" and
    r._field == "sensor2" 
    )
  |> group(columns: ["host"] )
  |> sort(columns: ["_time"])
  |> keep(columns: ["host", "_time", "_value"])
  |> last()

join(
  tables: {sensor1: sensor1, sensor2: sensor2},
  on: ["host"]
)

Which does:

Get sensor1:
Returns 5 stations, with values

Get sensor2:
Returns 4 stations, with values

Join tables sensor1 and sensor2:
Returns 4 stations, with both values

It feels like I basically want to do an outer join, which is unfortunately not possible.
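One direction that might come close to an outer join (untested sketch, reusing the bucket/measurement/field names from above): query both fields in a single stream, pivot _field into columns, and default the missing column with exists:

from(bucket: "StationData-Test")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) =>
    r._measurement == "plc" and
    (r._field == "sensor1" or r._field == "sensor2")
  )
  |> group(columns: ["host", "_field"])
  |> sort(columns: ["_time"])
  |> last()
  |> keep(columns: ["host", "_field", "_value"])
  |> group(columns: ["host"])
  |> pivot(rowKey: ["host"], columnKey: ["_field"], valueColumn: "_value")
  // exists is false when a host has no sensor2 column after the pivot;
  // adjust 0.0 to match the actual value type of the sensor
  |> map(fn: (r) => ({ r with sensor2: if exists r.sensor2 then r.sensor2 else 0.0 }))

Since there is no join at all, a station that only reports sensor1 keeps its row and simply gets a 0 in the sensor2 column.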

** EDIT **

I’ve implemented a hack-around that works, but it’s not pretty.
I query on Sensor2 OR Sensor in my second query (on the station with only a single sensor, the field is not numbered and is just called Sensor). If Sensor2 is not present, the second part of the OR statement comes into effect and Sensor is taken instead. I then do a mapping where, if _field is Sensor, I set the _value to -1 (indicating the sensor is not present).

Sensor2 = from(bucket: "StationData-Test")
  |> range(start: -20m)
  |> filter(fn: (r) =>
    r._measurement == "plc" and
    (r._field == "Sensor2" or
    r._field == "Sensor")
  )
  |> group(columns: ["host"] )
  |> sort(columns: ["_time"])
  |> last()
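  // when the fallback field "Sensor" matched, flag the value as -1 (sensor not present)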
  |> map(fn: (r) => ({
      host: r.host,
      _value: if r._field == "Sensor" then -1 else r._value
    }))

Since I’m building up a table of only about 30 stations, this trick should probably hold, but it feels… icky.
Still open to better/cleaner solutions.

Same situation here. Did you find a prettier solution for this?