Is InfluxDB fast enough for this?

Hi all, I'm new to InfluxDB.

I recently installed InfluxDB on a local server (Ubuntu 24.04, Core i5-8500, 16 GB RAM) to collect robot telemetry and build a dashboard from it, and it feels like I'm hitting a speed ceiling. I'm writing ~450 data points per second (3 integer fields per point), and even though the dashboard refreshes once per second, new data doesn't seem to show up at all. The graph just ends, despite the fact that my Python script is continuously writing new data. I also have a gauge for battery %, and the number on it is badly out of date.

Looking at the server load in btop, CPU usage is near zero, disk I/O sits at 2-4 MB/s, and network I/O is similarly near idle. So what is going on here? Why does InfluxDB seem to be discarding most of my data?

For my query, I’m doing this:

from(bucket: "RobotTelemetry")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "power")
  |> filter(fn: (r) => r["_field"] == "current")
  |> filter(fn: (r) => r["robot_name"] == "Albert")

I deliberately omitted the averaging/aggregation since it was hiding short peaks (current spikes can be as short as 0.5 s, so a 10 s window just gets in the way).

Are my expectations of InfluxDB unreasonable? Or am I doing something wrong?

@Yonatan This doesn’t seem like a super heavy write workload. How are you writing the data? Just through a Python script? Telegraf? Is there anything in the logs to suggest that writes are being rejected?

I'm writing with the influxdb-client Python library: I create a Point with the data and publish it with WriteApi.write(...).
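
Simplified, the write path looks roughly like this (field names other than current are stand-ins for my real schema, and the token/org are redacted):

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import ASYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="<YOUR_API_TOKEN>", org="<YOUR_ORG>")
write_api = client.write_api(write_options=ASYNCHRONOUS)

# Called ~450 times per second from the telemetry loop
def publish(current: int, voltage: int, battery_pct: int):
    point = (
        Point("power")
        .tag("robot_name", "Albert")
        .field("current", current)
        .field("voltage", voltage)
        .field("battery_pct", battery_pct)
    )
    write_api.write(bucket="RobotTelemetry", record=point)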

I checked the logs with docker logs --follow influxdb, and nothing gets printed while the writes are running.

@Yonatan So are the writes never actually persisted, or just delayed?

@scott I wrote a burst of data at high speed for around a minute. When the writes started, about 4 seconds of data showed up, and nothing more since. It has been ~20 minutes and no new data has arrived.

Interesting. And nothing in the logs around the time that writes stop landing?

I'm curious, is it bucket-specific? If you write to a different bucket, do the writes work? And what about a straight API call rather than the Python client? For example, if you create a new bucket and try to write data with a cURL request:

curl --request POST \
"http://localhost:8086/api/v2/write?org=<YOUR_ORG>&bucket=<YOUR_BUCKET>&precision=ns" \
  --header "Authorization: Token <YOUR_API_TOKEN>" \
  --header "Content-Type: text/plain; charset=utf-8" \
  --header "Accept: application/json" \
  --data-binary '
    airSensors,sensor_id=TLM0201 temperature=73.97038159354763,humidity=35.23103248356096,co=0.48445310567793615 1721826812564960000
    airSensors,sensor_id=TLM0202 temperature=75.30007505999716,humidity=35.651929918691714,co=0.5141876544505826 1721826815811478000
    '

Does the write succeed? Do you get any errors back?

I tried running your command against a test bucket 6 times while the robot was running. All 6 writes went through, while the data the robot sent showed the same behavior.

This makes me think something might be failing client-side. Anything in your Python logs?
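
If nothing shows up with default logging, I believe the Python client can log every HTTP request and response it makes if you construct it with debug=True — that would at least show whether the writes are leaving the process at all:

from influxdb_client import InfluxDBClient

client = InfluxDBClient(
    url="http://localhost:8086",
    token="<YOUR_API_TOKEN>",
    org="<YOUR_ORG>",
    debug=True,  # log every HTTP request/response made by the client
)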

I'm not seeing anything in the Python output. Note that I'm using write_options=ASYNCHRONOUS; could async writes be dropped silently if there are too many for the client to handle?

I’m not sure. I’ve never personally seen this behavior. You might consider creating an issue on the Python client repository.
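
One more thing worth trying before you file the issue: in ASYNCHRONOUS mode, write() returns a future-like result, and an exception raised in the background only surfaces when you ask for it. A rough sketch (client and point as in your snippet above):

from influxdb_client.client.write_api import ASYNCHRONOUS, SYNCHRONOUS

write_api = client.write_api(write_options=ASYNCHRONOUS)
async_result = write_api.write(bucket="RobotTelemetry", record=point)
async_result.get()  # blocks and re-raises any exception from the background write

# Or, while debugging, trade throughput for visibility:
sync_api = client.write_api(write_options=SYNCHRONOUS)
sync_api.write(bucket="RobotTelemetry", record=point)  # raises immediately on failure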