How do I select multiple field columns with Flux?

Equivalent InfluxQL:

SELECT a, b WHERE a = 'hello' AND b > 10;

@manoj_m, you’d do it with filter():

// ...
  |> filter(fn: (r) => r.a == "hello" and r.b > 10)

@scott Is this correct for value fields (i.e., fields rather than tags)? I am trying to execute a simple select via Chronograf, but it does not seem to work the way you explained.

Assume I have inserted data with:
insert test a=10,b=40
insert test a=20,b=30
insert test a=20,b=40

Now I want to retrieve BOTH the a and b values of the second data point.
This is very simple in InfluxQL:
select a, b from test where a > 10 AND b = 30

But I have absolutely no idea how to accomplish this in Flux.
At least via Chronograf, it appears that every value is returned on its own and can be addressed via _field for the field name and _value for the actual value.
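
For example, the second point seems to come back as two separate field records, roughly like this:

_measurement=test  _field=a  _value=20
_measurement=test  _field=b  _value=30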

This gets me to:

from(bucket: "test/autogen")
  |> range(start: MyStartTime, stop: MyEndTime)
  |> filter(fn: (r) => r._measurement == "test")
  |> filter(fn: (r) => r._field == "a")
  |> filter(fn: (r) => r._value > 10)

But I have no idea how to combine this with the value of field b. Your solution suggests using r.a and r.b, but this does not work when a and b are regular fields.

Any help would be greatly appreciated!

Sorry, I’m not sure why, but in my initial answer I assumed a and b were tags, not fields. You can use pivot() to pivot a and b fields into the same rows:

from(bucket: "test/autogen")
  |> range(start: MyStartTime, stop: MyEndTime)
  |> filter(fn: (r) => r._measurement == "test" and r._field =~ /a|b/ )
  |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
  |> filter(fn: (r) => r.a > 10 and r.b == 30)
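
With your three example points, the pivoted rows look roughly like this (timestamps omitted), and only the second one passes the final filter:

_time  a   b
t1     10  40
t2     20  30   <-- a > 10 and b == 30
t3     20  40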

@scott Thanks a lot for your speedy reply! It works perfectly now!

@Aga-Schwarz Happy to help!

Hi Scott, I don't want to ask the same question again, but I basically have the same issue.
I have been reading the question and the reply carefully, but I have been struggling with this for days.
I have a bucket with 10 columns and I would like to get two or three of them.

This is my query:

query = ' from(bucket:"b_box")' \
    '|> range(start: 2021-03-05T18:55:00Z, stop: 2021-03-05T18:56:00Z) ' \
    '|> filter(fn: (r) => r._field =~ /"st_millis"|"st_level"/)' \
    '|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")'

I have data in the indicated range, but the result is always empty. I can query one column with no issues, apply a filter, and so on, but getting two columns back… no way. Could you please give me some tips? Thanks!

@Scav It likely means that your timestamps don't align perfectly, so there are no common timestamps to pivot on. One thing you can try is truncateTimeColumn() to reduce the precision of your timestamps. I don't know how precise your data is, but the following example truncates timestamps to the second:

query = from(bucket: "b_box")
  |> range(start: 2021-03-05T18:55:00Z, stop: 2021-03-05T18:56:00Z)
  |> filter(fn: (r) => r._field =~ /st_millis|st_level/)
  |> truncateTimeColumn(unit: 1s)
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
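
For example, with purely hypothetical timestamps, two field values written a few hundred milliseconds apart:

st_millis  2021-03-05T18:55:00.123Z
st_level   2021-03-05T18:55:00.456Z

both truncate to 2021-03-05T18:55:00Z, which gives pivot() a common _time to group them on.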

Hi Scott,

This is the Python code that I'm using to insert the data into Influx. I'm receiving a JSON message, unpacking it, and feeding the DB, so the timestamp is exactly the same.

    st_price = str(js_msg["st_price"])
    st_millis = str(js_msg["st_millis"])
    st_level = str(js_msg["st_level"])

    # Line protocol: measurement,tag_set field_set timestamp (ns)
    data = "mem,_box=a " \
           "st_millis=" + st_millis + "," \
           "st_level=" + st_level + " " + \
           st_millis + "000000"

    write_api.write(b_box, org, data)

st_millis contains the milliseconds. The hacky thing I'm doing is appending "000000" to turn it into nanoseconds, but I'm 100% sure the timestamp is the same, because it comes from the same message and the fields are all written together.

I tried your suggestion, but still nothing, unfortunately.

query = ' from(bucket:"b_box")' \
    '|> range(start: 2021-03-05T18:55:00Z, stop: 2021-03-05T18:56:00Z) '\
    '|> filter(fn: (r) => r._field =~ /"st_millis"|"st_level"/ )' \
    '|> truncateTimeColumn(unit: 1s)'\
    '|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")' 

What else could I check?
Many thanks!

@Scav Remove the quotes from your Flux regex. It's trying to match those as literal characters in your field names.

# ...
  '|> filter(fn: (r) => r._field =~ /st_millis|st_level/ )' \
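
As a side note on the write code: instead of appending "000000" to turn milliseconds into nanoseconds yourself, you can tell the client what precision your timestamp is in. Here's a rough, untested sketch assuming the influxdb-client Python library; the url, token, org, and sample js_msg values are just placeholders:

from influxdb_client import InfluxDBClient, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details and sample message (adjust to your setup).
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)
js_msg = {"st_price": 1.5, "st_millis": 1614970500123, "st_level": 42}

st_millis = str(js_msg["st_millis"])
st_level = str(js_msg["st_level"])

# Line protocol with a millisecond timestamp, no "000000" padding needed.
data = ("mem,_box=a "
        "st_millis=" + st_millis + ","
        "st_level=" + st_level + " " +
        st_millis)

# write_precision tells the server how to interpret the trailing timestamp.
write_api.write(bucket="b_box", org="my-org", record=data,
                write_precision=WritePrecision.MS)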

Nope, it doesn't like it.

return self.values.__getitem__(key)
KeyError: '_field'
Process finished with exit code 1

What can I check in the UI that might help?

Thanks

That looks like a Python error, not a Flux error, so it's something in the Python code. That regex in Flux won't match anything in InfluxDB unless your field keys actually contain quotes.

Does changing your string formatting change anything?

query = ''' from(bucket: "b_box")
    |> range(start: 2021-03-05T18:55:00Z, stop: 2021-03-05T18:56:00Z)
    |> filter(fn: (r) => r._field =~ /st_millis|st_level/ )
    |> truncateTimeColumn(unit: 1s)
    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
'''
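
And in case it helps, here's a rough, untested sketch of running that query and reading the results with the Python client (the url, token, and org are placeholders). One thing to keep in mind: after pivot() there is no _field or _value column anymore, only st_millis and st_level columns, so calls like record.get_field() will raise a KeyError; read the pivoted columns from record.values instead:

from influxdb_client import InfluxDBClient

# Placeholder connection details (adjust to your setup).
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
query_api = client.query_api()

query = ''' from(bucket: "b_box")
    |> range(start: 2021-03-05T18:55:00Z, stop: 2021-03-05T18:56:00Z)
    |> filter(fn: (r) => r._field =~ /st_millis|st_level/ )
    |> truncateTimeColumn(unit: 1s)
    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
'''

tables = query_api.query(query)

for table in tables:
    for record in table.records:
        # After pivot(), read the field columns by name from record.values.
        print(record.get_time(), record.values.get("st_millis"), record.values.get("st_level"))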

This works fine:

query = ' from(bucket:"b_box")' \
    '|> range(start: 2021-03-05T18:55:00Z, stop: 2021-03-05T18:56:00Z) '\
    '|> filter(fn: (r) => r._field == "st_level")'

And I’m using "st_level"

Apologies, I got a bit lost. What did you mean by "changing my string formatting"?

The Python string formatting. Right now you're building the query from single-line strings (''). Triple quotes (''') let you write multi-line strings in Python.

If I write it like this:

query = ''' from(bucket: "b_box")
    |> range(start: 2021-03-05T18:55:00Z, stop: 2021-03-05T18:56:00Z)
    |> filter(fn: (r) => r._field =~ /"st_millis"|"st_level"/ )
    |> truncateTimeColumn(unit: 1s)
    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
'''

Same as before, nothing comes back. If I write it as you suggested, I get an error:

/usr/bin/python3.6 /InfluxQuery_1.py
Traceback (most recent call last):
    results.append((record.get_field(), record.get_value()))
    return self["_field"]
    return self.values.__getitem__(key)
KeyError: '_field'

Process finished with exit code 1

Thanks for the help!

Thanks @scott,

Your reply helped me with a problem I was facing for a while. I appreciate your help.
