Having names in dashboard variables that map to ids for the Flux query

Doing my first queries with InfluxDB and Flux, but somehow got lost on this one.

I have a PostgreSQL table that describes a factory with its machines. It contains, for example:

SQL table Machine:
id  name
1   CNC machine TRB65
2   Drilling machine XQZ…

In InfluxDB I write and query the machines’ sensor data. The sensors write to InfluxDB tagged with just the machine’s id, not the machine name, since machines and sensors might be changed or renamed later on.

In InfluxDB the user should have a dashboard with a variable “MachineNames” offering human-friendly selection names. After the selection I want the chosen name translated into a machineId, which can then be used in the dependent query.

How do I do that?

This is my starting point, but I’m not able to join this information:

import "sql"
import "influxdata/influxdb/secrets"

connectionString = secrets.get(key: "POSTGRES_CONNECTION")

machines = sql.from(
  driverName: "postgres",
  dataSourceName: "${connectionString}",
  query: "SELECT name,id as \"machineId\" FROM \"public\".\"Machine\" WHERE \"orgId\"=1 ORDER BY \"orgId\",\"id\" LIMIT 300 OFFSET 0;"
)
 |> rename(columns: {name: "machineName"})


data = from(bucket: "factory")
  |> range(start: -1d)
  |> filter(fn: (r) => (r._measurement == "power" and r._field == "auto" ))
  |> toInt()
  |> group(columns: ["machineId"], mode: "by")
  |> sort(columns: ["_time"] , desc: false)
  |> elapsed()
  |> filter(fn: (r) => (r._value == 1))

join(tables: {m: machines, d: data}, on: ["machineId"], method: "inner")

...here I want to filter by machineId...

Hello @Tobias_Klemmer,
What errors are you getting?

Thank you Anais.
No Error. Just: “No Result”

To confirm, how many rows does machines contain? Same for data.

If they both have the rows you expect, can you copy the last few rows from each (pipe into tail()) and paste them in your reply in another code block? I’m thinking something like the data type, cAsE, or space padding is different between them for the machineId column, but let’s see.
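For example, assuming the machines and data variables from the script above, something like:

```flux
// Inspect the last few rows of each stream side by side
machines |> tail(n: 5) |> yield(name: "machines_tail")
data |> tail(n: 5) |> yield(name: "data_tail")
```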

At the moment there are only 4 records in machines and a few hundred records of sensor data for machine id 1. The Machine table is more or less just master data, and I expect at most around 100 records per organization.

Result: _result
Table: keys: []
    machineName:string               machineId:int  
----------------------  --------------------------  
               Traub 6                           1  
               Traub 7                           2  
            Maschine 3                           3  
            Maschine 4                           4
Result: _result
Table: keys: [machineId]
      machineId:string           _field:string            orgId:string                      _stop:time                      _time:time                  _value:int          groupId:string     _measurement:string                     _start:time          plantId:string           sensor:string             work:string          shiftId:string           teamId:string                 elapsed:int  
----------------------  ----------------------  ----------------------  ------------------------------  ------------------------------  --------------------------  ----------------------  ----------------------  ------------------------------  ----------------------  ----------------------  ----------------------  ----------------------  ----------------------  --------------------------  
                     1                    auto                       1  2021-07-23T07:07:40.470990755Z  2021-07-23T05:12:24.000000000Z                           1                       1                   power  2021-07-22T07:07:40.470990755Z                       1            sensor000001                    true                       1                       2                         336  
                     1                    auto                       1  2021-07-23T07:07:40.470990755Z  2021-07-23T05:39:05.000000000Z                           1                       1                   power  2021-07-22T07:07:40.470990755Z                       1            sensor000001                    true                       1                       2                        1007  
                     1                    auto                       1  2021-07-23T07:07:40.470990755Z  2021-07-23T05:53:09.000000000Z                           1                       1                   power  2021-07-22T07:07:40.470990755Z                       1            sensor000001                    true                       1                       2                         165  
                     1                    auto                       1  2021-07-23T07:07:40.470990755Z  2021-07-23T06:37:37.000000000Z                           1                       1                   power  2021-07-22T07:07:40.470990755Z                       1            sensor000001                    true                       1                       2                         334  
                     1                    auto                       1  2021-07-23T07:07:40.470990755Z  2021-07-23T06:52:40.000000000Z                           1                       1                   power  2021-07-22T07:07:40.470990755Z                       1            sensor000001                    true                       1                       2                         197  

Thank you, you asked the right question:
In PostgreSQL “id” is the “machineId”. The problem seems to be the data type: int4 in PostgreSQL versus the string tag type in InfluxDB. I wasn’t aware that tags are treated as strings. So I need to transform the machineIds into strings first.
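One way to do that conversion is map() with the string() conversion function, which targets the column explicitly (a sketch; it assumes the machines stream above with an integer machineId column):

```flux
// Convert the integer machineId column to a string
// so it matches the InfluxDB tag type before joining.
machines
    |> map(fn: (r) => ({r with machineId: string(v: r.machineId)}))
```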

The following code delivers results now:

import "sql"
import "influxdata/influxdb/secrets"

connectionString = secrets.get(key: "POSTGRES_CONNECTION")

machines = sql.from(
  driverName: "postgres",
  dataSourceName: "${connectionString}",
  query: "SELECT name as \"machineName\",id as \"_value\" FROM \"public\".\"Machine\" WHERE \"orgId\"=1 ORDER BY \"orgId\",\"id\" LIMIT 300 OFFSET 0;"
)
 |> toString()
 |> rename(columns: {_value: "machineId"})
 |> filter(fn: (r) => (r.machineName  == v.MachineName))


data = from(bucket: "bucket")
  |> range(start: -1d)
  |> filter(fn: (r) => (r._measurement == "power" and r._field == "auto" ))
  |> toInt()
  |> group(columns: ["machineId"], mode: "by")
  |> sort(columns: ["_time"] , desc: false)
  |> elapsed()
  |> filter(fn: (r) => (r._value == 1))

join(tables: {m: machines, d: data}, on: ["machineId"], method: "inner")
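For completeness, the MachineNames dashboard variable that v.MachineName refers to could be defined with a query along these lines (a sketch; query variables in InfluxDB populate their options from the _value column, and the secret name is taken from the code above):

```flux
import "sql"
import "influxdata/influxdb/secrets"

connectionString = secrets.get(key: "POSTGRES_CONNECTION")

// Alias name to _value so the variable dropdown shows machine names.
sql.from(
    driverName: "postgres",
    dataSourceName: connectionString,
    query: "SELECT name AS \"_value\" FROM \"public\".\"Machine\" WHERE \"orgId\"=1;",
)
```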

May I ask, what is the best practice in InfluxDB? Would you rather create the data points with the machineName instead of the machineId? That could easily be filtered with a variable in InfluxDB. But the machineName might be changed by users in PostgreSQL, and then I lose the meaning of already existing data points. I’d like to avoid that, as I don’t want to do mass updates on tags in InfluxDB, if that is possible at all. What would you recommend?

Glad I could help - and pat yourself on the back for outlining your problem so clearly too. :+1:

My understanding is that this is a speed optimization, since tags are used as an index in InfluxDB. My guess is that making all tags strings simplifies the code involved and maximizes query performance.

To your challenge of data types: one alternative is that you might be able to get away without making the machine id a tag, and just leave it as an ordinary field, which can be any data type.
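If machineId were stored as a field rather than a tag, it would keep its integer type, but filtering on it then needs a pivot() first, since fields are not part of the series key (a hedged sketch; bucket, measurement, and column names are taken from this thread):

```flux
// With machineId as a field, pivot fields into columns,
// then filter on the now-regular machineId column.
// Note: fields are not indexed, so this scans more data than a tag filter.
from(bucket: "factory")
    |> range(start: -1d)
    |> filter(fn: (r) => r._measurement == "power")
    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
    |> filter(fn: (r) => r.machineId == 1)
```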

Yes, I agree: losing the identity of historical data would be very undesirable, and using some other internal key like you’ve already planned is a good way to prevent that. While designing this, you might want to consider how you would re-associate orphaned data, unless you can prove the risk is completely eliminated. Something like an intermediate aliases table, perhaps. Food for thought anyway.

These 3 links on this page help cover some of what you’re after.

If you have further questions hopefully @Anaisdg or one of the others can help advise further.