Hello @Azimuth,
Hmm, I'm not sure; I haven't run into this issue much myself.
I do see other users encountering it, though. Have you investigated any of the solutions mentioned here:
(GitHub issue opened 13 Nov 2022, closed 14 May 2023)
I have a Python script which makes 66 read queries into a local InfluxDB database using 32 clients. It measures the response time of each query and the total time. With version 2.0.4 the total time for these queries to complete was 50 seconds.
These read queries differ only in the measurement filter:
```
from(bucket:"A")
|> range(start: 1664578800, stop: 1670617080)
|> filter(fn: (r) => r._measurement == "M" and (r._field == "F1" or r._field == "F2" or r._field == "F3" or r._field == "F4" or r._field == "F5" or r._field == "F6"))
|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
|> keep(columns: ["_time","F1","F2","F3","F4","F5","F6"])
|> map(fn: (r) => ({r with _time: uint(v: r._time)}))
```
and they return tables with 1 `uint` and 6 `float` columns and up to ~100,000 rows.
I upgraded from version 2.0.4 to 2.5.1. Now only 34 out of 66 queries complete while the rest get stuck forever. When that happens `influxd` writes the following error message (~20 times a minute) into the system log:
```
msg="internal error not returned to client" log_id=0e7eiZSl000 handler=error_logger error="context canceled"
```
I downgraded through earlier versions until I found one that completes all the queries - version 2.0.7. With version 2.0.8 half of the queries never complete.
With version 2.0.4 the total time for these queries was 50 seconds. With version 2.0.5 the total query time doubled.
__Expected behavior:__
66 read queries complete.
__Actual behavior:__
34 out of 66 read queries complete while the rest get stuck forever.
__Environment info:__
* System info: `Linux 5.15.0-52-lowlatency x86_64`, Ubuntu 20.04 LTS.
* InfluxDB version: `InfluxDB v2.5.1 (git: 5b6fdbf05d) build_date: 2022-11-02T18:06:28Z`
* Other relevant environment details: `influxdb-client 1.34.0`.
__Config:__
```
query-concurrency = 32
query-queue-size = 32
```
__Logs:__
```
Nov 13 01:50:56 influxd-systemd-start.sh[3874]: ts=2022-11-13T01:50:56.555889Z lvl=warn msg="internal error not returned to client" log_id=0e7eiZSl000 handler=error_logger error="context canceled"
Nov 13 01:51:00 influxd-systemd-start.sh[3874]: ts=2022-11-13T01:51:00.051461Z lvl=warn msg="internal error not returned to client" log_id=0e7eiZSl000 handler=error_logger error="context canceled"
Nov 13 01:51:02 influxd-systemd-start.sh[3874]: ts=2022-11-13T01:51:02.330052Z lvl=warn msg="internal error not returned to client" log_id=0e7eiZSl000 handler=error_logger error="context canceled"
Nov 13 01:51:03 influxd-systemd-start.sh[3874]: ts=2022-11-13T01:51:03.608479Z lvl=warn msg="internal error not returned to client" log_id=0e7eiZSl000 handler=error_logger error="context canceled"
```
__Performance:__
[iostat.txt](https://github.com/influxdata/influxdb/files/9996356/iostat.txt)
[profiles.tar.gz](https://github.com/influxdata/influxdb/files/9996357/profiles.tar.gz)
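For reference, a benchmark like the one described in that issue (66 read queries fanned out over 32 client threads, each timed individually) might be structured roughly as follows with the Python `influxdb-client`. This is only a sketch: the URL, token, org, relative time range and measurement names are placeholders, not values from the report.
```python
import time
from concurrent.futures import ThreadPoolExecutor

from influxdb_client import InfluxDBClient

# Placeholder connection details and measurement names -- substitute your own.
URL, TOKEN, ORG = "http://localhost:8086", "my-token", "my-org"
MEASUREMENTS = [f"M{i}" for i in range(1, 67)]  # 66 queries, one per measurement

FLUX = """
from(bucket: "A")
  |> range(start: -30d)
  |> filter(fn: (r) => r._measurement == "{m}" and
      (r._field == "F1" or r._field == "F2" or r._field == "F3" or
       r._field == "F4" or r._field == "F5" or r._field == "F6"))
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
"""

def timed_query(client: InfluxDBClient, measurement: str) -> float:
    """Run one read query and return its wall-clock duration in seconds."""
    started = time.monotonic()
    client.query_api().query(FLUX.format(m=measurement), org=ORG)
    return time.monotonic() - started

with InfluxDBClient(url=URL, token=TOKEN, org=ORG, timeout=300_000) as client:
    overall = time.monotonic()
    # 32 worker threads, matching the server's query-concurrency setting.
    with ThreadPoolExecutor(max_workers=32) as pool:
        durations = list(pool.map(lambda m: timed_query(client, m), MEASUREMENTS))
    print(f"slowest query: {max(durations):.1f}s, "
          f"total: {time.monotonic() - overall:.1f}s")
```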
Does this happen when you’re running queries? What queries are you running?
Are you using Grafana?
After starting InfluxDB it works for a while, then completely stops serving queries or data point ingest, and the HTTP endpoint times out. The InfluxDB console reports messages like this:
ts=2023-07-11T12:01:27.527233Z lvl=warn msg="internal error not returned to client" log_id=0iy3ZWxG000 handler=error_logger error="context canceled"
It seems to stall during ingest requests performed from the Go client with 10 threads doing synchronous writes, which doesn’t seem like much load.
The only solution is to …
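If it helps to compare notes: assuming the Go client in question is `influxdb-client-go/v2`, a minimal sketch of that kind of workload (10 goroutines, each doing blocking/synchronous writes) could look like the following. The URL, token, org, bucket and point layout here are assumptions for illustration, not taken from the report above.
```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"

	influxdb2 "github.com/influxdata/influxdb-client-go/v2"
)

func main() {
	// Placeholder URL, token, org and bucket -- substitute your own.
	client := influxdb2.NewClient("http://localhost:8086", "my-token")
	defer client.Close()
	writeAPI := client.WriteAPIBlocking("my-org", "my-bucket")

	var wg sync.WaitGroup
	for w := 0; w < 10; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				p := influxdb2.NewPoint(
					"demo",
					map[string]string{"worker": fmt.Sprint(id)},
					map[string]interface{}{"value": float64(i)},
					time.Now(),
				)
				// Each write blocks until the server acknowledges it, so a worker
				// has at most one request in flight at any time.
				ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
				if err := writeAPI.WritePoint(ctx, p); err != nil {
					fmt.Printf("worker %d: write failed: %v\n", id, err)
				}
				cancel()
			}
		}(w)
	}
	wg.Wait()
}
```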