Suggestions to improve poor performance when writing data via HTTP requests

Dear Community,

I am writing a program for status monitoring and I am using InfluxDB to store my status information. The devices are spread across a network and collect status information every second. The time series data is sent to the database via HTTP.
In total I have about 40,000 values per second, and the database ingests them easily. BUT only if I batch the data into a few HTTP requests. Since there are about 200 devices and each sends its own data, that makes 200 HTTP requests per second. And here is where it gets nasty.
I see a stable ingest time of about 0.3 seconds per HTTP request, regardless of its size (a few hundred values up to a few thousand). BUT only if I keep it to at most 30 HTTP requests per second. Between 30 and 40 requests per second the time needed goes absolutely nuts, and once I hit 41 requests per second the database returns a timeout.
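One workaround I am considering is throttling the senders so the database never sees more than a safe number of requests per second. This is just a minimal sketch of such a limiter, not part of my actual program; the class name and the chosen rate are my own assumptions:

```python
import threading
import time

class RateLimiter:
    """Allow at most `rate` acquisitions per second (minimal sketch).

    Each device thread calls acquire() before its HTTP POST, so the
    combined request rate stays below the observed ~30/s ceiling.
    """
    def __init__(self, rate):
        self.interval = 1.0 / rate       # minimum spacing between requests
        self.lock = threading.Lock()
        self.next_slot = time.monotonic()

    def acquire(self):
        with self.lock:
            now = time.monotonic()
            wait = self.next_slot - now
            # reserve the next send slot for the caller after us
            self.next_slot = max(self.next_slot, now) + self.interval
        if wait > 0:
            time.sleep(wait)

# stay well below the ceiling where timings went bad (assumed value)
limiter = RateLimiter(rate=20)
# each sender thread would then do:
#   limiter.acquire()
#   requests.post(write_url, data=payload)   # hypothetical send
```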
The program works as intended if I batch the data in 10-second intervals and send 20 requests each second. But this still feels very slow.
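For reference, this is roughly how I batch the per-device points into a few large writes instead of one request per device. It is only a sketch of the idea (InfluxDB 1.x line protocol over the `/write` endpoint); the measurement name, tag names, and URL below are assumptions, not my exact setup:

```python
def to_line(measurement, tags, fields, ts_ns):
    """Render one point in InfluxDB line protocol."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

def batch_lines(points, batch_size=5000):
    """Group rendered lines into request-sized payloads."""
    lines = [to_line(*p) for p in points]
    return ["\n".join(lines[i:i + batch_size])
            for i in range(0, len(lines), batch_size)]

# 200 devices x 200 values = 40,000 points, but only 8 requests:
points = [("status", {"device": f"dev{d}"}, {"value": v}, 1_000_000_000)
          for d in range(200) for v in range(200)]
payloads = batch_lines(points)

# Each payload would then be POSTed once, e.g. with requests:
# requests.post("http://localhost:8086/write?db=status", data=payload)
```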
Since I am still quite new to InfluxDB and would like to improve, I want to understand the reason for this. Did I miss something in the config? Is it just the way it is? Hardware? A wrong way to measure? Something not Influx-related at all?

I am using this in Grafana to get the write duration:

X-Influxdb-Version: 1.0.2

Any insights are much appreciated.

Update: I have been testing on some other computers, and it seems to be related to the machine it is running on. I got measurements that were over twice as fast as in my original tests.
At this point my best bet is that it is related to the hardware.

Any tips / insights are very welcome.