Pls code review

What am I doing wrong?
What here makes me literally a bad person?
Pls give feedback.

import "strings"
import "math"

// Convert instantaneous power readings (W) into energy per interval (Wh):
// |value| * elapsed_ms / 3,600,000 ms-per-hour.
bepis = from(bucket: "mqtt_raw")
    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
    |> filter(fn: (r) => r.topic == "emon/ct1" or r.topic == "emon/ct2" or r.topic == "emon/ct3" or r.topic == "emon/ct4")
    |> keep(columns: ["_time", "_value", "topic"])
    |> elapsed(unit: 1ms)  // adds an "elapsed" column; the first row of each table is dropped
    |> map(fn: (r) => ({r with _value: math.abs(x: float(v: r._value)) * float(v: r.elapsed) / 3600000.0, topic: strings.trimPrefix(v: r.topic, prefix: "emon/")}))
    |> aggregateWindow(every: duration(v: v.myWindowPeriod), fn: sum)
    |> keep(columns: ["_time", "_value", "topic"])

total = bepis
    |> group(columns: ["_time"], mode: "by")  // one table per timestamp
    |> sum(column: "_value")                  // sum across all four CTs
    |> group(columns: [], mode: "by")         // merge back into a single table
    |> map(fn: (r) => ({r with topic: "total"}))

union(tables: [bepis, total])

I’m also particularly curious:

  1. Is there a meaningful difference between returning one or many tables for graphing?
  2. Is there a way to create a Variable of type Duration? It doesn’t seem so.
  3. Is it possible to create a Key-Value Variable with Query constructor?
    It seems to only be possible to have separate keys and values in Map mode.
    Or is there some record key like _key that would allow renaming them in the dropdown?
  4. Which of these operations should be reordered or regrouped for better performance?
    For example, I know I shouldn’t put some kind of a substring predicate or contains(value: r.topic, set: ["ct1", "ct2"]) in filter instead of concrete values, as that slows it down to a crawl. Or that leaving in the first keep actually improves runtime. But perhaps I am missing some specific blessed string-prefix-match operator that has a fast path for filtering tags?
  5. Is my group|>sum|>group([]) actually the fastest/most idiomatic way of adding everything together? I would not be surprised if I am doing something very silly due to my poor understanding of the result/stream/tables model despite reading Flux data model | Flux 0.x Documentation and InfluxDB data elements | InfluxDB OSS 2.1 Documentation :horse:

@qm3ster I’m going to put it down to cranky Flux rather than you being a bad person :wink:. What error are you experiencing?

Absolutely none! It works, and reasonably fast for the size of my data and the shoebox machine it’s on!

Ah, perfect! I was going to say that this looks like solid Flux scripting to me. I shall take a look at the other points you just added and get back to you.

@qm3ster ,
Sorry for the late reply. Here are my answers:

  1. Is there a meaningful difference between returning one or many tables for graphing?
    This really depends on your desired goal for the data. It is perfectly reasonable to have more than one table map to a graph.

  2. Is there a way to create a Variable of type Duration ? It doesn’t seem so.
    You could create a dashboard variable, but based on your next question I am not quite sure this is what you want.

  3. Is it possible to create a Key-Value Variable with Query constructor? It seems to only be possible to have separate keys and values in Map mode. Or is there some record key like _key that would allow renaming them in the dropdown?
    Sadly this is not possible at the moment with regards to dashboard variables. Once they are set, they are set, unless you are using a Flux function to define them. Then you could be a bit cheeky and modify the data, but this would require saving the data back to your bucket. I would support dynamic variables across the board, and it’s something I recommend putting in a feature request. If I have missed the point here, let me know.

  4. Which of these operations should be reordered or regrouped for better performance?
    For example, I know I shouldn’t put some kind of a substring predicate or contains(value: r.topic, set: ["ct1", "ct2"]) in filter instead of concrete values, as that slows it down to a crawl. Or that leaving in the first keep actually improves runtime. But perhaps I am missing some specific blessed string-prefix-match operator that has a fast path for filtering tags?
    Have you tried using regex instead? It would be better than the contains function: Use regular expressions in Flux | InfluxDB Cloud Documentation
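    For instance, the four equality checks could collapse into one anchored regex. This is only a sketch against the same `emon/ctN` topics from your query; I haven’t benchmarked it against the `or` chain:

```flux
from(bucket: "mqtt_raw")
    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
    // Anchored pattern matching exactly emon/ct1 through emon/ct4.
    |> filter(fn: (r) => r.topic =~ /^emon\/ct[1-4]$/)
```

    Anchoring with `^` and `$` keeps the match cheap and avoids accidental hits on longer topic names.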

  5. Is my group|>sum|>group([]) actually the fastest/most idiomatic way of adding everything together? I would not be surprised if I am doing something very silly due to my poor understanding of the result/stream/tables model despite reading Flux data model | Flux 0.x Documentation and InfluxDB data elements | InfluxDB OSS 2.1 Documentation :horse:
    Correct, this will group all values into a single table.
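    If the per-window sums in `bepis` are already aligned, one alternative sketch is to merge everything into a single table first and let `aggregateWindow` do the cross-series sum. This is untested and not necessarily faster than your `group |> sum |> group`:

```flux
total = bepis
    |> group()  // no arguments: merge all series into one table
    |> aggregateWindow(every: duration(v: v.myWindowPeriod), fn: sum)
    |> map(fn: (r) => ({r with topic: "total"}))
```

    Note that `group()` with no arguments is equivalent to your `group(columns: [], mode: "by")`.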

Thank you for your reply!

2, 3. It seems like only the Map type of Variable can have named variants:

Name1,"value 1"
Name2,"value 2"
Name3,"value 3"

The rest (CSV and Query) allow defining only the values, and the name is equal to the value.
For example, this is as far as I got:

import "array"
rows = [
    {_value: string(v: 1d)},
    {_value: string(v: 1h)},
    {_value: string(v: 5m)},
    {_value: string(v: 30s)},
]
array.from(rows: rows)

Furthermore, the produced strings got sorted lexicographically, resulting in a
1d, 1h, 30s, 5m selection order.
While my ideal would be something like

import "array"
rows = [
    {_key: "daily", _value: 1d},
    {_key: "hourly", _value: 1h},
    {_key: "5 min", _value: 5m},
    {_key: "½ min", _value: 30s},
]
array.from(rows: rows)

resulting in typed v.myVariable of type duration, presented in the dropdown in the table order, not jumbled.
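In the meantime, one partial workaround sketch for the ordering problem is to carry an explicit index column and sort on it before dropping it. The `idx` column name is my own invention, and the dashboard may still re-sort the values in the dropdown:

```flux
import "array"

// Emit durations in a deliberate order via an explicit index column,
// then drop the index so only _value feeds the dropdown.
array.from(rows: [
    {idx: 1, _value: "1d"},
    {idx: 2, _value: "1h"},
    {idx: 3, _value: "5m"},
    {idx: 4, _value: "30s"},
])
    |> sort(columns: ["idx"])
    |> keep(columns: ["_value"])
```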

  4. Thank you, I will evaluate the performance of regexes next time I want something of the sort, since you do recommend them for high-performance coarse initial filtering.
    Do you happen to know if there is any RegexSet in regex - Rust - like optimization for clauses like
|> filter(fn: (r) => r._measurement =~ /^foo/ or r._measurement =~ /^bar/ or r._measurement =~ /^baz/)

implemented/planned?
Or am I limited to the likes of

|> filter(fn: (r) => r._measurement =~ /^(?:foo|bar|baz)/)

Again, thank you for all the help!