I have installed InfluxDB 3 Core for the first time, using the Docker container to run it on my Intel Mac. After loading and querying a small line protocol (lp) file, I tried a much larger one, about 25 GB.
The same file loads fine in InfluxDB 2, also running in a container on the same laptop.
In InfluxDB 3 the load quickly uses up all 24 GB allocated to Docker. The 'write' command then emits a "killed" message and exits.
I imagine that InfluxDB is loading the entire file into memory, or building the entire database in memory, or both.
I am running with --object-store file, and --data-dir points to a Docker volume.
The only .wal file I can find is tiny:
-rw-r--r-- 1 influxdb3 influxdb3 1747 May 9 17:43 00000000001.wal
Is there something I should do differently so this will work?
A memory or swap setting, perhaps? (If so, I sketched below, after my run command, what I think that might look like.)
I would rather not split my lp files, since each one represents a unit of data from my source, but I can if there is no better option (a sketch of how I would split is at the end of this post).
My docker run command looks like this:
docker run -it \
  -v /var/lib/data/influxdb3:/home/influxdb3/.influxdb3 \
  influxdb3:latest influxdb3 serve \
  --node-id localhost \
  --object-store file \
  --data-dir /home/influxdb3/.influxdb3
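
If the answer is a Docker-level memory setting, this is the kind of variation I have in mind. The --memory and --memory-swap values here are guesses on my part, not something I found in the docs, and I am not sure container-level flags even matter when Docker Desktop's own VM limit is the ceiling:

docker run -it \
  --memory 24g \
  --memory-swap 48g \
  -v /var/lib/data/influxdb3:/home/influxdb3/.influxdb3 \
  influxdb3:latest influxdb3 serve \
  --node-id localhost \
  --object-store file \
  --data-dir /home/influxdb3/.influxdb3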
My write command (run from inside the container, reading the lp file from the mounted drive) looks like this:
influxdb3 write --database my_db --precision ns --accept-partial --file ${FULL_PATH_TO_FILE}
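
And in case splitting really is the only option, this is roughly what I would try, assuming GNU split is available alongside the CLI. The 1 GB chunk size and the lp_chunk_ prefix are placeholders I picked; -C keeps each lp line intact rather than cutting mid-line:

# split the 25 GB file into ~1 GB pieces without breaking lines,
# then write each piece in turn
split -d -C 1G "${FULL_PATH_TO_FILE}" lp_chunk_
for f in lp_chunk_*; do
  influxdb3 write --database my_db --precision ns --accept-partial --file "$f"
done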