Is the performance of InfluxDB OSS sufficient for this task? (~17,300,000 records/d)

Hello all,

I have the following setting:
I expect data from a combined sensor at a rate of 200 records/s. Each record consists of acceleration data (a.X, a.Y) and GPS data (Lat, Lon). In addition, there is data from two force sensors, each providing two force components (Fle, Fri).
I set the timestamp of the records while writing to InfluxDB, because the latency of the data is not guaranteed to be constant. For testing purposes I set the start time 24 hours in the past and increment the timestamp by 5 milliseconds with every record. This means I write 17,280,000 timestamps (three points each, so about 52 million points) to the freshly created bucket. This takes a lot of time, but is faster than real time (i.e., more than 200 records/s). The Python code is shown below:

import math
import os
import random
import time
from datetime import datetime

from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

token = os.environ.get("INFLUXDB_TOKEN")
org = "Institute"
url = "http://localhost:8086"
bucket = "Institute-Bucket"

write_client = InfluxDBClient(url=url, token=token, org=org)
write_api = write_client.write_api(write_options=SYNCHRONOUS)

# Start time: 24 h in the past, in ms
timestamp = int(datetime.now().timestamp() - 86400) * 1000

# (60*60*24) seconds * 200/s => 17,280,000 records
for value in range(17280000):
    # synthetic force values
    Fle1 = int(350 + 700 * math.sin(math.radians(value)))
    Fri1 = int(350 + 700 * math.cos(math.radians(value)))
    Fle2 = int(300 + 700 * math.sin(math.radians(value)))
    Fri2 = int(300 + 700 * math.cos(math.radians(value)))
    ts = timestamp + 5 * value  # advance 5 ms per record
    points = [
        Point("measurement5").tag("SensorTyp", "Vehicle").tag("SensorID", "1000")
            .field("a.X", random.randint(-9, 9)).field("a.Y", random.randint(-9, 9))
            .field("Lat", 52.465827).field("Lon", 13.504945)
            .time(ts, WritePrecision.MS),
        Point("measurement5").tag("SensorTyp", "Force").tag("SensorID", "2010")
            .field("Fle", Fle1).field("Fri", Fri1).time(ts, WritePrecision.MS),
        Point("measurement5").tag("SensorTyp", "Force").tag("SensorID", "2011")
            .field("Fle", Fle2).field("Fri", Fri2).time(ts, WritePrecision.MS),
    ]
    write_api.write(bucket=bucket, org=org, record=points)
    if value % 100 == 0:
        time.sleep(0.01)  # brief pause every 100 iterations
        print("value " + str(value))

To test the query performance, I used the following code:

query_api = write_client.query_api()

query = """from(bucket: "Institute-Bucket")
 |> range(start: 2024-11-27T23:30:00Z, stop: 2024-11-27T23:33:00Z)
 |> filter(fn: (r) => r._measurement == "measurement5")
 |> filter(fn: (r) => r.SensorID == "2011")"""

vorher = datetime.now()           # vorher = before
tables = query_api.query(query, org=org)
nachher = datetime.now()          # nachher = after
print(nachher - vorher)

for table in tables:
    for record in table.records:
        print(record.get_time(), record.get_field(), record.get_value())

With this I am querying 3 minutes from the middle of the bucket's time range, filtering for a certain tag.
But InfluxDB takes 27 s for this query:

>>> print (nachher - vorher)
0:00:27.654377
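I am not yet sure how much of the 27 s is spent in the server versus in the Python client building the result tables. A sketch of how I would try to separate the two, assuming query_raw returns the raw annotated CSV without constructing FluxTable objects:

# Time the raw response (server + transfer) separately from the full
# query() call, which additionally parses the CSV into FluxTable objects.
vorher = datetime.now()
raw = query_api.query_raw(query, org=org)   # raw annotated CSV
nachher = datetime.now()
print("raw response:", nachher - vorher)

vorher = datetime.now()
tables = query_api.query(query, org=org)    # parsed into tables
nachher = datetime.now()
print("with parsing:", nachher - vorher)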

The machine is a Windows 10 computer with an Intel(R) Xeon(R) CPU E5-1620 0 @ 3.60 GHz and 8 GB RAM.

I didn't expect this to be so slow, so I assume my rather naive approach doesn't use the full capabilities of InfluxDB.
I would like the query time to be under a second. How can I achieve this?
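One idea: the 3-minute window at 200 records/s is still 36,000 rows per field, so maybe I should downsample in the query itself. Would something like this (untested) be the right direction?

query = """from(bucket: "Institute-Bucket")
 |> range(start: 2024-11-27T23:30:00Z, stop: 2024-11-27T23:33:00Z)
 |> filter(fn: (r) => r._measurement == "measurement5")
 |> filter(fn: (r) => r.SensorID == "2011")
 |> aggregateWindow(every: 1s, fn: mean, createEmpty: false)"""
tables = query_api.query(query, org=org)  # one averaged row per second per field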