I am storing positions received from a GPS in the DB as lat and lon float fields (plus s2_cell_id). Now I am trying to retrieve a downsampled track: a series of timestamped latitude/longitude pairs. Nothing fancy, just pick one sample per period.
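For context, each stored point looks roughly like this in line protocol (layout and values made up for illustration):

navigation.position,context=XXXX,s2_cell_id=89c259c4 lat=60.1699,lon=24.9384 1686468552305000000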
This is what I came up with:
lat = from(bucket: "thebucket")
  |> range(start: 2023-06-11T07:29:12.305Z, stop: 2023-06-11T12:07:27.957Z)
  |> filter(fn: (r) => r.context == "XXXX" and r._measurement == "navigation.position" and r._field == "lat")
  |> drop(columns: ["s2_cell_id"])
  |> aggregateWindow(every: 15000ms, fn: first)
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")

lon = from(bucket: "thebucket")
  |> range(start: 2023-06-11T07:29:12.305Z, stop: 2023-06-11T12:07:27.957Z)
  |> filter(fn: (r) => r.context == "XXXX" and r._measurement == "navigation.position" and r._field == "lon")
  |> drop(columns: ["s2_cell_id"])
  |> aggregateWindow(every: 15000ms, fn: first)
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
join(tables: {lat: lat, lon: lon}, on: ["_time"])
  |> keep(columns: ["_time", "lat", "lon"])
but I feel that I am doing this wrong by retrieving the fields separately and then joining. It really doesn't perform well and seems overly complex.
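Would something like this be the better pattern? Filtering both fields in one query and pivoting once, with no join at all (untested sketch; I added createEmpty: false to drop windows with no samples):

from(bucket: "thebucket")
  |> range(start: 2023-06-11T07:29:12.305Z, stop: 2023-06-11T12:07:27.957Z)
  |> filter(fn: (r) => r.context == "XXXX" and r._measurement == "navigation.position" and (r._field == "lat" or r._field == "lon"))
  |> drop(columns: ["s2_cell_id"])
  |> aggregateWindow(every: 15000ms, fn: first, createEmpty: false)
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
  |> keep(columns: ["_time", "lat", "lon"])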
Any pointers to where I should educate myself, or to a more efficient solution?