Combined min, max, mean and sum query - best way?

I have a query I run in my OSS 1.7 influxdb that looks like this:

SELECT mean("temperature") AS MeanTemp, max("temperature") AS MaxTemp, min("temperature") AS MinTemp, mean("flow") AS MeanFlow, max("flow") AS MaxFlow, min("flow") AS MinFlow, sum("credit") AS TotalCredits FROM MyMeasurement WHERE time >= startTime AND time <= endTime

The result is a single row with the mean, min, and max values for temperature and flow, plus the total credits, for the defined time window. I’d like to replicate this in 2.0 Cloud via the API and came up with the following. It looks kind of messy and I’m sure there must be a more concise way to do this:

credits = from(bucket: "mybucket")
|> range(start: startTime, stop: stopTime)
|> filter(fn: (r) => r._measurement == "MyMeasurement")
|> filter(fn: (r) => r._field == "credit")

temperature = from(bucket: "mybucket")
|> range(start: startTime, stop: stopTime)
|> filter(fn: (r) => r._measurement == "MyMeasurement")
|> filter(fn: (r) => r._field == "temperature")

flow = from(bucket: "mybucket")
|> range(start: startTime, stop: stopTime)
|> filter(fn: (r) => r._measurement == "MyMeasurement")
|> filter(fn: (r) => r._field == "flow")

TotalCredits = credits
|> set(key: "_field", value:"TotalCredits")
|> drop(columns: ["_start", "_stop", "_time"])
|> sum()

MaxTemp = temperature
|> set(key:"_field", value:"MaxTemp")
|> drop(columns: ["_start", "_stop", "_time"])
|> max()

MinTemp = temperature
|> set(key:"_field", value:"MinTemp")
|> drop(columns: ["_start", "_stop", "_time"])
|> min()

MeanTemp = temperature
|> set(key:"_field", value:"MeanTemp")
|> drop(columns: ["_start", "_stop", "_time"])
|> mean()

MaxFlow = flow
|> set(key:"_field", value:"MaxFlow")
|> drop(columns: ["_start", "_stop", "_time"])
|> max()

MinFlow = flow
|> set(key:"_field", value:"MinFlow")
|> drop(columns: ["_start", "_stop", "_time"])
|> min()

MeanFlow = flow
|> set(key:"_field", value:"MeanFlow")
|> drop(columns: ["_start", "_stop", "_time"])
|> mean()

union (tables: [MeanTemp, MaxTemp, MinTemp, MeanFlow, MaxFlow, MinFlow, TotalCredits])
|> pivot(rowKey: ["_measurement"], columnKey: ["_field"], valueColumn: "_value")
|> yield()

What’s the best way to perform a query like this?

@roba You could use reduce() to create a custom function that returns the min, max, and mean of tables in the input stream, then use that to calculate the min, max, and mean of flow and temperature. The credit count would just be a straight sum, with some column duplication to give the data something to join on. You would then join the three streams of tables together. This saves a few lines and avoids reading the data multiple times in a single query:

minMaxMean = (tables=<-) =>
  tables
    |> reduce(
      identity: {count: 0.0, sum: 0.0, min: 0.0, max: 0.0, mean: 0.0},
      fn: (r, accumulator) => ({
        count: accumulator.count + 1.0,
        sum: r._value + accumulator.sum,
        min: if accumulator.count == 0.0 then r._value else if r._value < accumulator.min then r._value else accumulator.min,
        max: if accumulator.count == 0.0 then r._value else if r._value > accumulator.max then r._value else accumulator.max,
        mean: (r._value + accumulator.sum) / (accumulator.count + 1.0)
      })
    )
    |> drop(columns: ["count", "sum"])
    |> duplicate(column: "_stop", as: "_time")

baseData = from(bucket: "mybucket")
  |> range(start: startTime, stop: stopTime)
  |> filter(fn: (r) => r._measurement == "MyMeasurement")

temperature = baseData
  |> filter(fn: (r) => r._field == "temperature")
  |> minMaxMean()

flow = baseData
  |> filter(fn: (r) => r._field == "flow")
  |> minMaxMean()

credits = baseData
  |> filter(fn: (r) => r._field == "credit")
  |> set(key: "_field", value: "total_credits")
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
  |> sum()
  |> duplicate(column: "_stop", as: "_time")

join1 = join(tables: {temperature: temperature, flow: flow}, on: ["_time"])

join(tables: {join1: join1, credits: credits}, on: ["_time"])

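For anyone trying to follow what the reduce() call is doing, here is a quick Python sketch of the same single-pass fold. The function name min_max_mean and the sample values are hypothetical, but the accumulator record mirrors the Flux one field for field, including seeding min/max from the first row when count is still zero:

```python
def min_max_mean(values):
    # Same shape as the Flux identity record: all stats start at 0.0.
    acc = {"count": 0.0, "sum": 0.0, "min": 0.0, "max": 0.0, "mean": 0.0}
    for v in values:
        acc = {
            "count": acc["count"] + 1.0,
            "sum": acc["sum"] + v,
            # On the first row (count == 0), seed min/max with the value
            # itself, just like the nested if/else in the Flux reduce().
            "min": v if acc["count"] == 0.0 else min(v, acc["min"]),
            "max": v if acc["count"] == 0.0 else max(v, acc["max"]),
            # Running mean from the updated sum and count.
            "mean": (acc["sum"] + v) / (acc["count"] + 1.0),
        }
    # The Flux version drops count and sum before joining.
    return {k: acc[k] for k in ("min", "max", "mean")}

result = min_max_mean([21.0, 23.5, 19.0])
```

Because the dict literal is built entirely from the previous acc, every field sees the pre-update values, which is exactly how the Flux accumulator behaves.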
Thanks Scott! I’ll give it a try.