Writing large lp files to InfluxDB 3 Core runs out of memory

I have installed InfluxDB 3 Core for the first time, using the Docker container to run it on my Intel Mac. After loading and querying a small lp file, I tried a much larger one, about 25 GB.
This file loads fine on InfluxDB 2, also running in a container on the same laptop.
In InfluxDB 3 I quickly use up all 24 GB allocated to Docker. Influx then emits a "killed" message and the 'write' command exits.
I imagine that Influx is loading the entire file into memory, or creating the entire database in memory, or both.
I am running with --object-store file, and --data-dir points to a Docker volume.
The only .wal file I can find is tiny:
-rw-r--r-- 1 influxdb3 influxdb3 1747 May 9 17:43 00000000001.wal

Is there something I can do differently so this will work?
A memory or swap setting, perhaps?
I would rather not split my lp files, since each one represents a unit of data from my source, but I can if there is no better option.

My docker run command looks like this:
docker run -it \
  -v /var/lib/data/influxdb3:/home/influxdb3/.influxdb3 \
  influxdb3:latest influxdb3 serve \
  --node-id localhost \
  --object-store file \
  --data-dir /home/influxdb3/.influxdb3

My write command (run from inside the container, reading the lp file from the mounted drive) looks like this:
influxdb3 write --database my_db --precision ns --accept-partial --file ${FULL_PATH_TO_FILE}

Hello @charles59,
How many points are you trying to write?
You might want to try a ulimit:

docker run -it \
  --ulimit nofile=65536:65536 \
  -v /var/lib/data/influxdb3:/home/influxdb3/.influxdb3 \
  influxdb:3-core influxdb3 serve \
  --node-id localhost \
  --object-store file \
  --data-dir /home/influxdb3/.influxdb3

Let me share internally as well, thanks!

Can you please also email support@influxdata.com? Thank you!

About 50 million.
This is a typical row (anonymized):
datapoint,sourcefile=my_source_file.csv,file_id=275,event=event_name,device=00-01052,source=source_1,uid=ae093b,validated=true id=37720432i,v1=1234.0000000000000000,v2=-1234.3140494285714286,v3=1234.3667664285714,v4=2i,shortstring="abcdef" 1663192099000

Hello @charles59,
Yes, you might have to do this write in batches or chunks (the client libraries offer this functionality). I'll ask around to see if there are any other best practices.
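
If you'd rather chunk at the file level than go through a client library, here is a minimal sketch using standard Unix tools. The 1,000,000-line chunk size is an arbitrary guess to tune, not a tested recommendation, and the write flags are simply copied from your original command:

# Split the lp file on line boundaries so no record is cut in half;
# the chunk size here is an assumption, to be tuned against available memory.
split -l 1000000 "${FULL_PATH_TO_FILE}" /tmp/lp_chunk_

# Write each chunk with the same options as the original single-file write.
for chunk in /tmp/lp_chunk_*; do
  influxdb3 write --database my_db --precision ns --accept-partial --file "$chunk" \
    || { echo "write failed on $chunk" >&2; break; }
done

Each invocation then only has one chunk's worth of data in flight instead of the whole 25 GB file.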

OK, I can split the files. Best practices, such as a maximum input file size, would be great to know.

Thanks