Improving a query that hits "Timeout exceeded" when I increase the time range

Hello,

I am inserting PLC data into InfluxDB every second. In this data I have a boolean variable that shows whether the machine is running or not. I am counting the number of stops (the state changes from running=True to running=False). The query I am using works fine for small time ranges, but when I want to extract the number of stops over 48 hours, for example, it takes more than a minute and I receive the following error message:

Post "http://localhost:8086/api/v2/query?org=1": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

This is the query that I am using:

    import "influxdata/influxdb/monitor"

    from(bucket: v.bucket)
        |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
        |> filter(fn: (r) => r["_measurement"] == "PLC_DATA")
        |> filter(fn: (r) => r["_field"] == "running")
        |> map(fn: (r) => ({ r with _level: if r._value then "ok" else "warn" }))
        |> monitor.stateChangesOnly()
        |> group()
        |> sort(columns: ["_time"])
        |> filter(fn: (r) => r["_level"] == "warn")
        |> count()

Is there any way to improve this query so that I get the result faster?

Thanks for your help!

@Anaisdg Is there any way I can improve this query?

Hello @mmmmgggg,
Do you know roughly how many points you have in that time range?
What version of InfluxDB are you using?
It looks like the sort() is redundant, so you could remove it.

Additionally, are you trying to count only the changes to the warn level? If so, you could do:

    |> monitor.stateChanges(toLevel: "warn")

In that case the sort() and the last filter() become redundant as well; a sketch of the resulting query is below.
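Putting those changes together, the full query might look roughly like this (just a sketch that reuses the bucket, measurement, and field names from your original query):

    import "influxdata/influxdb/monitor"

    from(bucket: v.bucket)
        |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
        |> filter(fn: (r) => r["_measurement"] == "PLC_DATA")
        |> filter(fn: (r) => r["_field"] == "running")
        // derive a monitoring level from the boolean: running -> "ok", stopped -> "warn"
        |> map(fn: (r) => ({ r with _level: if r._value then "ok" else "warn" }))
        // keep only the points where the level changes to "warn" (running -> stopped)
        |> monitor.stateChanges(toLevel: "warn")
        |> group()
        |> count()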

Finally, you should consider performing the map as part of a task to address these issues (the issues in this blog could pertain to you as well):

Summary of the blog above:
Leverage the task system in InfluxDB to get the query performance I expect by creating a continuous compute solution so that InfluxDB reduces my expensive queries into component parts.
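For example, a task roughly like the one below could precompute the stop events into their own measurement on a schedule, so the dashboard query only has to count points that have already been reduced. This is only a sketch: the bucket names ("plc" and "plc_downsampled"), the measurement name "PLC_STOPS", and the schedule are placeholders to adapt to your setup.

    import "influxdata/influxdb/monitor"

    option task = {name: "precompute_stops", every: 1h}

    from(bucket: "plc") // placeholder: your source bucket
        |> range(start: -task.every)
        |> filter(fn: (r) => r["_measurement"] == "PLC_DATA")
        |> filter(fn: (r) => r["_field"] == "running")
        |> map(fn: (r) => ({ r with _level: if r._value then "ok" else "warn" }))
        |> monitor.stateChanges(toLevel: "warn")
        // write the stop events to their own measurement in a separate bucket
        |> set(key: "_measurement", value: "PLC_STOPS")
        |> to(bucket: "plc_downsampled") // placeholder: your destination bucket

A dashboard query over PLC_STOPS then only has to count a handful of precomputed points per time range instead of re-scanning every one-second sample.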

Hi @Anaisdg
Thank you for your support. I understand the limitation of running queries over a large number of points, so I have decided to do something similar to the tasks approach. I will check for variable changes in the Python code where I read the PLC data and upload it to InfluxDB, and whenever there is a state change I will write it to a separate measurement.
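With that in place, the 48-hour dashboard query becomes cheap because it only counts the already-extracted state changes. A sketch, assuming a hypothetical measurement name "machine_stops" for the state-change points:

    from(bucket: v.bucket)
        |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
        |> filter(fn: (r) => r["_measurement"] == "machine_stops")
        // keep only the transitions to running=False, i.e. the stops
        |> filter(fn: (r) => r["_field"] == "running" and r["_value"] == false)
        |> group()
        |> count()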
Thank you!


@mmmmgggg
Of course, that's what we're here for 🙂