Flux query to create table from field content (fields vs tags)

Consider some simple Golang code to set a measurement like this:

p := influxdb2.NewPointWithMeasurement("test_data_tags").
	AddTag("id", "1234").
	AddTag("name", "joe").
	AddTag("age", "25").
	AddTag("weight", "175").
	AddField("val", 1).
	SetTime(time.Now())
writeAPI.WritePoint(context.Background(), p)

In Grafana, I can add a “Table” with the following query:

from(bucket: "TestBucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "test_data_tags")
  |> group()
  |> keep(columns: ["id", "name", "age", "weight"])
  |> unique(column: "id")

That gives me a table with the above four columns. Something like this:

[screenshot of the resulting table]

The column headers are the tags. The rows are each individual ID. Each cell contains the tag content for that id. You can use Grafana transformations to make the table look really nice.

But I want to use fields and not tags. I want the Golang code to be something like this:

p := influxdb2.NewPointWithMeasurement("test_data_fields").
	AddTag("id", "1234").
	AddField("name", "joe").
	AddField("age", 25).
	AddField("weight", 175).
	SetTime(time.Now())
writeAPI.WritePoint(context.Background(), p)

I’m having difficulty coming up with the Flux query so that the table looks the same as the “test_data_tags” measurement. Any suggestions? Thank you.

Hi @btuser,
Welcome to the community! Have you considered using the pivot() function? It lets you store your parameters as fields and then transform them into columns while running the Flux query:

pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
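For example, a full query for your fields-based measurement might look something like this (a sketch, assuming the "TestBucket" bucket and "test_data_fields" measurement names from your post). pivot() aligns all fields that share a timestamp into one row, after which the rest of your original query should work unchanged:

```flux
from(bucket: "TestBucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "test_data_fields")
  // Turn each field into its own column, one row per timestamp
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
  |> group()
  |> keep(columns: ["id", "name", "age", "weight"])
  |> unique(column: "id")
```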

Hi @btuser,
Sorry for the late reply. Hmm, that does seem odd. Does this happen at all time ranges (e.g., 7 days, 30 days)? Do you also see the issue when you create the table in a dashboard and refresh the dashboard? Sometimes UI nodes can generate unusual results when deleting specific columns within the Data Explorer; normally a redeploy and refresh sorts them out. The other possibility is that you have historic malformed data, which is why I am asking whether you see this issue at different time ranges.

Interestingly enough, I can no longer reproduce the issue. If I see it again, and if I can come up with repro steps, I’ll post back here. Thank you again for the excellent support.
