Grafana heatmaps with histogram()

I spent the best part of a day figuring this out and thought it worth sharing.

I really like Grafana heatmaps (Grafana is an organisational decision) for looking at response times: they show the general trend and the outliers in one place. It's easy to do with Prometheus, and I wanted to figure out how to do it with InfluxDB.

This is what I came up with.

Add some test data (line protocol, with second-precision timestamps):

test duration=1 1620407874
test duration=1 1620407875
test duration=1 1620407876
test duration=10 1620407877

Create a task to generate the histogram; the query below replicates what that task would do. In real life you would run it every 5 minutes, say, looking back over the last 5 minutes. Run it a few times to generate the “downsample” data:

from(bucket: "test-bucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "test")
  |> filter(fn: (r) => r["_field"] == "duration")
  |> histogram(bins: [0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0])
  // make the histogram non-cumulative; Grafana doesn't offer that option
  // for the InfluxDB datasource like it does for Prometheus
  |> difference()
  // add back a time and convert le to a string, else to() fails
  |> map(fn: (r) => ({_time: now(), _field: r._field, _value: r._value, le: string(v: r.le)}))
  |> set(key: "_measurement", value: "test-downsample")
  |> to(bucket: "test-bucket", tagColumns: ["le"])
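For reference, a sketch of what the query above might look like as an actual task that runs every 5 minutes and only looks back over its own interval (the task name and schedule here are assumptions, not from the original post):

```flux
// Hypothetical task wrapper: the schedule and task name are illustrative.
option task = {name: "test-downsample", every: 5m}

from(bucket: "test-bucket")
    // only process data written since the last run
    |> range(start: -task.every)
    |> filter(fn: (r) => r["_measurement"] == "test")
    |> filter(fn: (r) => r["_field"] == "duration")
    |> histogram(bins: [0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0])
    |> difference()
    |> map(fn: (r) => ({_time: now(), _field: r._field, _value: r._value, le: string(v: r.le)}))
    |> set(key: "_measurement", value: "test-downsample")
    |> to(bucket: "test-bucket", tagColumns: ["le"])
```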

Now in Grafana, query:

from(bucket: "test-bucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "test-downsample")
  |> filter(fn: (r) => r["_field"] == "duration")
  |> map(fn: (r) => ({_time: r._time, _field: r.le, _value: r._value}))
  |> sort(columns: ["_time"])

And now we can see a moving histogram: the le “1” bucket has a count of 3 and the le “10” bucket has a count of 1. The dots appear at the times I ran the downsampling query.
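To see where those counts come from, here is a minimal sketch that feeds the four test values straight into histogram() using the array package (no bucket needed), then applies difference():

```flux
import "array"

// The four test points: durations 1, 1, 1 and 10
array.from(rows: [{_value: 1.0}, {_value: 1.0}, {_value: 1.0}, {_value: 10.0}])
    // cumulative counts per upper bound:
    // le 0 -> 0, 0.5 -> 0, 1 -> 3, 2 -> 3, 5 -> 3, 10 -> 4, 30 -> 4
    |> histogram(bins: [0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0])
    // subtract each row from the next, dropping the first:
    // le 1 -> 3, le 10 -> 1, all other buckets 0
    |> difference()
```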

If I missed an easier way to do this, please let me know!

Hello @thopewell,
Thanks so much for sharing!!
Solution above.

Thanks a lot, this was really helpful for me. I built a similar task (snippet below), but I also had pre-bucketed data from my application database, and this post helped me understand how to format and write it to InfluxDB.

To contribute, here is a Flux task that can also backfill historical data when it runs for the first time.

import "influxdata/influxdb/tasks"
option task = { 
  name: "Matches heatmap",
  every: 1h,
}

from(bucket: "rawdata")
    // on the first run, backfill from the given time; afterwards, resume
    // from the last successful run
    |> range(start: tasks.lastSuccess(orTime: 2021-11-01T00:00:00Z))
    |> filter(fn: (r) => r["_measurement"] == "measurement")
    |> filter(fn: (r) => r["_field"] == "value")
    |> group()
    // my values were integers, so they must be converted to floats for histogram()
    |> toFloat()
    |> window(every: task.every)
    // linearBins() is a helper function that generates the list of bin edges
    |> histogram(bins: linearBins(start: 0.0, width: 100.0, count: 62))
    |> difference()
    // _start is the start of the window period
    |> map(fn: (r) => ({_time: r._start, _field: "value", _value: r._value, le: string(v: r.le)}))
    |> set(key: "_measurement", value: "measurement_name")
    |> to(bucket: "hourly_heatmap", tagColumns: ["le"])
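Assuming the same Grafana setup as the original post, the panel query for this task's output would be the analogous transform over the downsampled bucket (bucket and measurement names taken from the snippet above):

```flux
from(bucket: "hourly_heatmap")
    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
    |> filter(fn: (r) => r["_measurement"] == "measurement_name")
    |> filter(fn: (r) => r["_field"] == "value")
    // use the le tag as the field name so Grafana treats each bucket as a series
    |> map(fn: (r) => ({_time: r._time, _field: r.le, _value: r._value}))
    |> sort(columns: ["_time"])
```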