I’m bulk importing some existing data from Postgres into InfluxDB v2.
Originally, I’d query the Postgres table daily for a time-series value and generate annotated CSV files, which I’d subsequently ingest into Influx from the command line using influx write.
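For reference, each file is ingested with something like this (bucket and org names are placeholders; influx detects annotated CSV from the .csv extension):

```
# Ingest one day's annotated CSV export (names are illustrative)
influx write --bucket my-bucket --org my-org --file daily_export.csv
```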
This works fine for files of up to around 100k lines, but when I try to ingest larger files, either individually or in sequence (I’m using a shell script to call influx write repeatedly; see the sketch after the error message), I get the following error:
Error: Failed to write data: unexpected error writing points to database: engine: write /Users/account/.influxdbv2/engine/wal/76d17c4d98ca948a/autogen/3/_00001.wal: file too large.
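The script is nothing fancy, roughly:

```
#!/bin/sh
# Ingest each exported CSV in turn (paths and names are illustrative)
for f in exports/*.csv; do
  influx write --bucket my-bucket --org my-org --file "$f" || exit 1
done
```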
I’ve searched around a bit and can’t find anything useful. I’m using InfluxDB v2.0.3.
I’ve tried playing around with the number of tags I have (usually 2) and their cardinality (from over a thousand distinct values down to one), and the result is the same.
I also tried ingesting in batches of 5,000 lines (sketched below), and the error usually appears after 20-25 rounds. I suspect the ingestion itself goes fine, but something I’m unaware of ‘fills up’ and prevents me from ingesting anything further until I either wait a prolonged period of time or restart influxd itself.
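For the batched runs I split the data into 5,000-line chunks and feed them in one at a time, roughly like this (file names are illustrative; each line of line protocol is self-contained, so a plain split is safe):

```
# Split line protocol into 5,000-line chunks, then ingest chunk by chunk
split -l 5000 full_export.lp chunk_
for f in chunk_*; do
  influx write --bucket my-bucket --org my-org --format lp --file "$f"
done
```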
I also tried taking the line protocol produced by influx write dryrun and ingesting it via the Influx UI: same error message.
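The conversion step is roughly:

```
# Convert the annotated CSV to line protocol without writing anything
influx write dryrun --file daily_export.csv > daily_export.lp
```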
Conclusion: it’s not the format of what I’m ingesting; it’s something internal.
What am I missing?