Hi, I’m trying to create a single-stat Grafana panel that shows a percentage; the backend is InfluxDB 2.1 and I’m using Flux. This percentage can be computed by dividing these two queries’ results:
from(bucket: "metrics")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "energy_meters" and r["_field"] == "site_instant_power")
  |> integral(unit: 1h)
  |> group(columns: ["_field"])
and
from(bucket: "metrics")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "energy_meters" and r["_field"] == "load_instant_power")
  |> integral(unit: 1h)
  |> group(columns: ["_field"])
and then computing (1 - quotient) * 100 to get the percentage.
My question is how can I weave these together into a single Flux query for this panel? Thanks.
Also, for reference, this is the InfluxQL query I used to use on v1.8:
SELECT (1-integral("site_instant_power", 1h)/integral("load_instant_power", 1h))*100 FROM "energy_meters" WHERE $timeFilter fill(null)
Make each call separately and assign the resulting tables to variables. Join the tables, then map (if necessary) over the joined result. Pseudocode …
import "experimental"

site_instant_power = from(bucket: "metrics")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> ...

load_instant_power = from(bucket: "metrics")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> ...

experimental.join(left: site_instant_power, right: load_instant_power, fn: (left, right) => ({ <whatever calculations here> }))
  |> map(fn: (r) => ({ <whatever calculations here> }))
  |> yield()
This content may be helpful: Querying and Data Transformations | Time to Awesome
Thanks! Following your suggestions, I produced this solution, which works in my panel:
energy = from(bucket: "metrics")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "energy_meters")

p_site = energy
  |> filter(fn: (r) => r["_field"] == "site_instant_power")
  |> integral(unit: 1h)

p_load = energy
  |> filter(fn: (r) => r["_field"] == "load_instant_power")
  |> integral(unit: 1h)

join(tables: {p_load: p_load, p_site: p_site}, on: ["_measurement"])
  |> map(fn: (r) => ({ r with self_powered: (1.0 - r._value_p_site / r._value_p_load) * 100.0 }))
  |> keep(columns: ["self_powered"])
  |> yield(name: "self_powered")
wfjm
January 6, 2022, 12:57pm
Is usage of join() really the proper solution for arithmetic with fields of a single measurement? My understanding is that pivot() is the best-practice method, see this earlier topic:
(linked topic: making a time-series graph start at 0 by subtracting the initial value from every point, on InfluxDB/Chronograf v1.8.10)
With schema.fieldsAsCols(), doing field arithmetic becomes really easy and compact, as explained by @Anaisdg.
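Applied to the original question in this thread, an untested sketch of that approach could look like the following. One wrinkle: integral() drops the _time column, and schema.fieldsAsCols() pivots on _time, so the sketch restores a _time column with duplicate() first; the site_instant_power and load_instant_power column names come from the _field values.

import "influxdata/influxdb/schema"

from(bucket: "metrics")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "energy_meters")
  |> filter(fn: (r) => r["_field"] == "site_instant_power" or r["_field"] == "load_instant_power")
  |> integral(unit: 1h)
  // integral() returns one row per series without a _time column;
  // fieldsAsCols() pivots on _time, so copy _stop into a _time column
  |> duplicate(column: "_stop", as: "_time")
  |> schema.fieldsAsCols()
  |> map(fn: (r) => ({ r with self_powered: (1.0 - r.site_instant_power / r.load_instant_power) * 100.0 }))
  |> keep(columns: ["self_powered"])
  |> yield(name: "self_powered")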
Beyond that, join() seems to have a performance problem (see “Poor performance for join(): cpu and memory grows quadratically with row count”), and experimental.join() seems to have a stability problem (see “Tried experimental.join(): good performance, but also "panic: unknown type invalid"”).
In my experience, if you have multiple series with the same timestamps (typically from writing multiple fields in the same line of line protocol), then pivoting is indeed much easier. But if the fields are written separately with different timestamps, then you need to normalize the times and join.
@wfjm perhaps you could provide a code sample to demonstrate the pivoting approach?
wfjm
January 6, 2022, 3:35pm
True, some form of aggregation is almost always required, either for the reason you mention, or simply to reduce the amount of data to what can be shown in the visualization (usually the number of pixels in the panel display).
A simple pattern I use is
from(bucket: "dca")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "TfcMonitor")
  |> filter(fn: (r) => r["_field"] == "dtq_min" or r["_field"] == "dtq_50")
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: true)
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
  |> map(fn: (r) => ({ r with _value: r.dtq_50 - r.dtq_min }))
  |> keep(columns: ["_time", "_value", "host", "oid"])
  |> yield()
That script:
- selects the two fields dtq_min and dtq_50,
- aggregates with mean,
- calculates the difference of the two fields,
- and keeps the results organized by the two tag keys host and oid.
Here mean is used as the aggregator, but any other should work too.
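For instance (an illustrative swap, not from the post above), using max instead only changes the aggregateWindow() line:

  |> aggregateWindow(every: v.windowPeriod, fn: max, createEmpty: true)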
The Flux above is equivalent to this InfluxQL (using Grafana macros):
SELECT mean("dtq_50")-mean("dtq_min")
FROM "TfcMonitor"
WHERE $timeFilter
GROUP BY time($__interval), "host", "oid" fill(null)
So even with InfluxQL that was easily possible.
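For completeness, the same script can also be written with schema.fieldsAsCols() standing in for the explicit pivot() call; fieldsAsCols() is shorthand for pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value"), so the result should be identical (sketch, untested):

import "influxdata/influxdb/schema"

from(bucket: "dca")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "TfcMonitor")
  |> filter(fn: (r) => r["_field"] == "dtq_min" or r["_field"] == "dtq_50")
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: true)
  // shorthand for the pivot() call in the script above
  |> schema.fieldsAsCols()
  |> map(fn: (r) => ({ r with _value: r.dtq_50 - r.dtq_min }))
  |> keep(columns: ["_time", "_value", "host", "oid"])
  |> yield()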