equivalent influxQL:
SELECT a, b where a = 'hello' and b > 10;
@manoj_m, you'd do it with filter():
// ...
|> filter(fn: (r) => r.a == "hello" and r.b > 10)
@scott Is this correct for value fields (i.e., fields, not tags)? I am trying to execute a simple select via Chronograf, but it does not seem to work the way you explained.
Assume I have inserted data with:
insert test a=10,b=40
insert test a=20,b=30
insert test a=20,b=40
Now I want to retrieve BOTH a and b variables of the second datapoint.
This is very simple in influx:
select a,b from test where a>10 AND b=30
But I have absolutely no idea how to accomplish this in Flux.
At least via Chronograf it appears that every value is returned as stand-alone and can be addressed via _field for field name and _value for the actual value.
This gets me to:
from(bucket: "test/autogen")
|> range(start: MyStartTime, stop: MyEndTime)
|> filter(fn: (r) => r._measurement == "test" )
|> filter(fn: (r) => r._field == "a" )
|> filter(fn: (r) => r._value > 10)
But I have no idea how to combine this with the value of field b. Your solution suggests to use
r.a and r.b, but this does not work when a and b are regular fields.
Any help would be greatly appreciated!
Sorry, I'm not sure why, but in my initial answer I assumed a and b were tags, not fields. You can use pivot() to pivot the a and b fields into the same rows:
from(bucket: "test/autogen")
|> range(start: MyStartTime, stop: MyEndTime)
|> filter(fn: (r) => r._measurement == "test" and r._field =~ /a|b/ )
|> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
|> filter(fn: (r) => r.a > 10 and r.b == 30)
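To make what pivot() does here concrete, a minimal plain-Python sketch (not the Flux implementation; the rows and times are hypothetical):

```python
# Sketch: pivot long-format rows (_time, _field, _value) into wide rows
# with one column per field, keyed on _time -- what pivot() does above.
from collections import defaultdict

long_rows = [
    {"_time": "t1", "_field": "a", "_value": 10},
    {"_time": "t1", "_field": "b", "_value": 40},
    {"_time": "t2", "_field": "a", "_value": 20},
    {"_time": "t2", "_field": "b", "_value": 30},
]

wide = defaultdict(dict)
for row in long_rows:
    # rowKey = _time, columnKey = _field, valueColumn = _value
    wide[row["_time"]][row["_field"]] = row["_value"]

# Once both fields share a row, one predicate can reference both,
# mirroring: filter(fn: (r) => r.a > 10 and r.b == 30)
matches = [t for t, r in wide.items() if r["a"] > 10 and r["b"] == 30]
print(matches)  # ['t2']
```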
@scott Thanks a lot for your speedy reply! It works perfectly now!
@Aga-Schwarz Happy to help!
Hi Scott, I don't want to ask the same question again, but I basically have the same issue.
I have read the question and the reply carefully, but I've been struggling with this for days.
I have a bucket with 10 columns and I would like to get two or three of them.
This is my query:
query = ' from(bucket:"b_box")' \
'|> range(start: 2021-03-05T18:55:00Z, stop: 2021-03-05T18:56:00Z) '\
'|> filter(fn: (r) => r._field =~ /"st_millis"|"st_level"/)' \
'|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")'
I have data in the indicated range, but the result is always empty. I can query a single column with no issues, apply a filter, and so on, but getting two columns... no way. Could you please give me some tips? Thanks!
@Scav It likely means that your timestamps don't align perfectly, so there are no common timestamps to pivot on. One thing you can try is using truncateTimeColumn()
to reduce the precision of your timestamps. I don't know how precise your data is, but the following example rounds timestamps off to seconds:
from(bucket: "b_box")
|> range(start: 2021-03-05T18:55:00Z, stop: 2021-03-05T18:56:00Z)
|> filter(fn: (r) => r._field =~ /st_millis|st_level/)
|> truncateTimeColumn(unit: 1s)
|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
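As a plain-Python sketch of what truncateTimeColumn(unit: 1s) does (the millisecond timestamps below are hypothetical, chosen inside the queried range):

```python
# Sketch: truncateTimeColumn(unit: 1s) floors timestamps to the second,
# so rows that differ only in sub-second precision share a row key
# and pivot together.
from datetime import datetime, timezone

def truncate_to_second(ts_ms: int) -> datetime:
    # Floor a Unix-millisecond timestamp to whole-second precision.
    return datetime.fromtimestamp(ts_ms // 1000, tz=timezone.utc)

# Two hypothetical writes a few hundred milliseconds apart:
t_millis = 1614970500123  # st_millis row
t_level  = 1614970500456  # st_level row
print(truncate_to_second(t_millis) == truncate_to_second(t_level))  # True
```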
Hi Scott,
This is the Python code that I'm using to insert the data into Influx. I'm receiving a JSON message, opening it up, and feeding the db, so the timestamp is exactly the same.
st_price = str(js_msg["st_price"])
st_millis = str(js_msg["st_millis"])
st_level = str(js_msg["st_level"])
data = "mem,_box=a " \
       "st_millis=" + st_millis + "," \
       "st_level=" + st_level + " " + \
       st_millis + "000000"
write_api.write(b_box, org, data)
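As a sketch, the same line-protocol string could be built with an f-string, which avoids concatenation slips; the example payload values are assumptions, and the ms-to-ns multiplication replaces the appended "000000":

```python
# Hedged sketch: build the same line-protocol write with an f-string so
# the field list and timestamp separators can't be mis-concatenated.
js_msg = {"st_millis": 1614970500123, "st_level": 7}  # example payload

st_millis = int(js_msg["st_millis"])
st_level = int(js_msg["st_level"])
timestamp_ns = st_millis * 1_000_000  # ms -> ns (InfluxDB default precision)

# Shape: measurement,tag field1=...,field2=... timestamp
data = f"mem,_box=a st_millis={st_millis},st_level={st_level} {timestamp_ns}"
print(data)
# write_api.write(b_box, org, data)  # as in the original snippet
```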
st_millis contains the millis; the hacky thing I'm doing is appending the "000000", but I'm 100% sure the timestamp is the same, because it comes from the same message and they are all added together.
I tried your suggestion, but still nothing, unfortunately.
query = ' from(bucket:"b_box")' \
'|> range(start: 2021-03-05T18:55:00Z, stop: 2021-03-05T18:56:00Z) '\
'|> filter(fn: (r) => r._field =~ /"st_millis"|"st_level"/ )' \
'|> truncateTimeColumn(unit: 1s)'\
'|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")'
What else could I check?
Many thanks!
@Scav Remove the quotes from your Flux regex. It's trying to match them as literal characters in your field names.
# ...
'|> filter(fn: (r) => r._field =~ /st_millis|st_level/ )' \
Nope, it doesn't like it.
return self.values.__getitem__(key)
KeyError: '_field'
Process finished with exit code 1
What can i check on the UI, that maybe can help?
Thanks
That looks like a Python error, not a Flux error, so it's something in the Python code. That regex in Flux won't match against anything in InfluxDB unless your field keys actually have quotes in them.
Does changing your string formatting change anything?
query = ''' from(bucket: "b_box")
|> range(start: 2021-03-05T18:55:00Z, stop: 2021-03-05T18:56:00Z)
|> filter(fn: (r) => r._field =~ /st_millis|st_level/ )
|> truncateTimeColumn(unit: 1s)
|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
'''
This works fine:
query = ' from(bucket:"b_box")' \
'|> range(start: 2021-03-05T18:55:00Z, stop: 2021-03-05T18:56:00Z) '\
'|> filter(fn: (r) => r._field == "st_level")'
And I'm using "st_level".
Apologies, I got a bit lost. What did you mean by "changing my string formatting"?
The Python string formatting. Right now you're using single-quoted, single-line strings ('...'). Using triple quotes (''') allows you to write multi-line strings in Python.
If I write it like this:
query = ''' from(bucket: "b_box")
|> range(start: 2021-03-05T18:55:00Z, stop: 2021-03-05T18:56:00Z)
|> filter(fn: (r) => r._field =~ /"st_millis"|"st_level"/ )
|> truncateTimeColumn(unit: 1s)
|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
'''
Same as before, nothing back. If I write it as you suggested, I get an error:
/usr/bin/python3.6 /InfluxQuery_1.py
Traceback (most recent call last):
results.append((record.get_field(), record.get_value()))
return self["_field"]
return self.values.__getitem__(key)
KeyError: '_field'
Process finished with exit code 1
Thanks for the help!
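One plausible cause of that KeyError, offered as an assumption the thread never confirms: after pivot(), the result tables carry one column per field name instead of the _field/_value pair, so client code that reads record["_field"] (as get_field() does) fails. A plain-dict sketch:

```python
# Assumption (not confirmed in the thread): a record from a pivoted table
# has columns named after the fields, and no "_field"/"_value" columns,
# so looking up "_field" raises KeyError. Hypothetical record below.
pivoted_record = {"_time": "2021-03-05T18:55:00Z",
                  "st_millis": 1614970500123, "st_level": 7}

try:
    field = pivoted_record["_field"]  # what record.get_field() reads
except KeyError as e:
    print("KeyError:", e)  # KeyError: '_field'

# Reading the pivoted columns directly works instead:
print(pivoted_record["st_millis"], pivoted_record["st_level"])
```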
Thanks @scott,
Your reply helped me with a problem I had been facing for a while. I appreciate your help.