Flux store data into new measurement using HTTP API

Hello,

How can I store the output of the query below into a new measurement, say "Total_Processor_Time"?
I am looking for a capability like Kapacitor's InfluxDBOut node.

curl -XPOST localhost:8086/api/v2/query -sS \
  -H 'Accept: application/csv' \
  -H 'Content-type: application/vnd.flux' \
  -d 'from(bucket: "test")
        |> range(start: -15m)
        |> filter(fn: (r) =>
            r._measurement == "Processor" and
            r._field == "Percent_Processor_Time" and
            r.instance == "_Total")
        |> group(columns: ["host", "instance"])'

I am not sure if this can be used for that, or how to implement it.

Thanks

@Anaisdg is this possible?

I also tried pandas with the InfluxDB Python client, but I am getting an error that '_write_client' is not defined:


NameError                                 Traceback (most recent call last)
<ipython-input> in <module>
      4 # system_stats.set_index("_time")
      5
----> 6 _write_client.write(bucket.name, record=df_after_timeindex, data_frame_measurement_name='Processor_1',
      7                     data_frame_tag_columns=['Processor_1'])

NameError: name '_write_client' is not defined


Whoops, you need to instantiate your write client first, like so:

_write_client = client.write_api(write_options=WriteOptions(batch_size=1000,
                                                             flush_interval=10_000,
                                                             jitter_interval=2_000,
                                                             retry_interval=5_000))
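
For context, here is a minimal end-to-end sketch of writing a DataFrame with the client. The URL, token, org, and bucket name are placeholders, and the "host" tag column is just an assumption about your DataFrame; adjust them to your setup. df_after_timeindex is the time-indexed DataFrame from your snippet above.

from influxdb_client import InfluxDBClient, WriteOptions

# Placeholder connection details; substitute your own URL, token, and org.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
_write_client = client.write_api(write_options=WriteOptions(batch_size=1000))

# "host" is assumed to be a tag column present in df_after_timeindex.
_write_client.write(bucket="test", record=df_after_timeindex,
                    data_frame_measurement_name="Processor_1",
                    data_frame_tag_columns=["host"])

_write_client.close()
client.close()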

@Ashish_Sikarwar,
You can use the to() Flux function to write the data to a new measurement, like so:

|> to(bucket: "test", measurementColumn: "Total_Processor_Time")

assuming that you already have a measurement with that name in that bucket. Otherwise you could just write the data to a new bucket.
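
For reference, here is a sketch of a full pipeline that writes the result under the new name (it assumes the same bucket and filters as your query above, so adjust as needed). Note that measurementColumn tells to() which column to read the measurement name from, so one way to rename on the fly is to override _measurement with set() before calling to():

from(bucket: "test")
    |> range(start: -15m)
    |> filter(fn: (r) =>
        r._measurement == "Processor" and
        r._field == "Percent_Processor_Time" and
        r.instance == "_Total")
    |> group(columns: ["host", "instance"])
    // overwrite the measurement name before writing
    |> set(key: "_measurement", value: "Total_Processor_Time")
    |> to(bucket: "test")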


Awesome, I will try that.
Thanks a lot @Anaisdg.

@Anaisdg, one more thing.
I need to query the last 30 days of data, but I only want the mean of the data points that fall in a certain time-of-day window, at five-minute intervals (not all of the data from the last 30 days), e.g. 11:00 am to 11:05 am, 11:05 am to 11:10 am, and so on.
There is a function that helps to get hourly data, but I need hours + minutes:
|> hourSelection(start: date.hour(t: now()), stop: date.hour(t: now()))
Will the "date" package be of any help, and how can I use it?
https://docs.influxdata.com/flux/v0.65/stdlib/date/

For example, take this:

query= '''
import "date"
from(bucket: "yourdb")
|> range(start:-30d)
|> filter(fn: (r) => r._measurement == "Processor")
|> filter(fn: (r) => r._field == "Percent_Processor_Time")
|> filter(fn: (r) => r.instance == "_Total")
|> filter(fn: (r) => exists r._value)
|> aggregateWindow(every: 10m, fn: mean)
|> group(columns: ["instance", "host"])
'''

@Anaisdg We got this far, but we do not see the _time timestamps being stored, and we are not sure why:

import "date" 

min_duration = date.minute(t: now()) - 15
min_duration_end = date.minute(t: now()) 
hour_duration_end = date.hour(t: now()) 

hour_duration = if min_duration < 0 then date.hour(t: now())-1 else date.hour(t: now()) 
hour_duration_start = if hour_duration == -1 then 23 else hour_duration

min_duration_start = if min_duration < 0 then 60 + min_duration else min_duration  // wrap into the previous hour when the 15-minute window starts before minute 0

from(bucket:"yourdb/autogen") 
    |> range(start:-30d) 
    |> filter(fn: (r) =>
        r._measurement == "Processor" and
        r._field == "Percent_Processor_Time" and
        r.instance == "_Total")
    |> filter(fn: (r) => 
        date.minute(t: r._time) > min_duration_start and 
        date.hour(t: r._time) == hour_duration_start)
    |> filter(fn: (r) => 
        date.minute(t: r._time) <= min_duration_end and 
        date.hour(t: r._time) == hour_duration_end)
    |> filter(fn: (r) => exists r._value)
    //|> hourSelection(start: date.hour(t: now()), stop: date.hour(t: now()))
    //|> keep(columns: ["_time","_value"]) 
    |> aggregateWindow(every: 5m, fn: mean)
    //|> mean()

@Ashish_Sikarwar Is there not a _time column when you run the query as-is (with the mean() call commented out)? mean() itself is an aggregate function, which does remove the _time column, but if you're running the query exactly as you pasted it, it should still have a _time column.
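
If you do want a single mean per series (using mean() instead of aggregateWindow()), a common pattern is to copy the range's _stop time back into _time after the aggregate. A sketch only, assuming the same bucket and filters as your query above:

from(bucket: "yourdb/autogen")
    |> range(start: -30d)
    |> filter(fn: (r) =>
        r._measurement == "Processor" and
        r._field == "Percent_Processor_Time" and
        r.instance == "_Total")
    |> mean()
    // mean() drops _time, so copy the range's _stop back in as the timestamp
    |> duplicate(column: "_stop", as: "_time")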

