Hi,
I am migrating from InfluxDB v2 to v3.7.0 Core on RHEL 9.6 using file-based storage.
I imported 25 line protocol files totaling around 96K records (no single file exceeds 10K records), but after the import I noticed about 12 GB of RAM in use (~80% of the server's 15 GB).
We still have 6 to 7 million records to import. Memory keeps growing and is not released. No Parquet files have been created yet even after waiting more than 6 hours; I can only see WAL files being created.
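For reference, my import loop is roughly the sketch below (the host, database name, batch size, and pause are placeholders, and I'm assuming the standard v3 `/api/v3/write_lp` HTTP write endpoint; correct me if there is a better way):

```python
import time
import urllib.request

# Placeholder host and database name
INFLUX_URL = "http://localhost:8181/api/v3/write_lp?db=mydb"

def batch_lines(lines, batch_size):
    """Split line protocol records into fixed-size batches."""
    for i in range(0, len(lines), batch_size):
        yield lines[i:i + batch_size]

def write_batch(batch):
    """POST one batch of line protocol to the v3 write endpoint."""
    data = "\n".join(batch).encode("utf-8")
    req = urllib.request.Request(INFLUX_URL, data=data, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status

def import_file(path, batch_size=5000, pause_s=1.0):
    """Import one line protocol file in batches, pausing between writes."""
    with open(path) as f:
        lines = [ln.rstrip("\n") for ln in f if ln.strip()]
    for batch in batch_lines(lines, batch_size):
        write_batch(batch)
        time.sleep(pause_s)  # guessed throttle; is a delay like this needed?

# Sanity check of the batching logic only (no server needed):
sample = [f"m,host=h{i} v={i}" for i in range(12)]
print(sum(1 for _ in batch_lines(sample, 5)))  # → 3
```

I picked 5000 records per batch and a 1-second pause arbitrarily; that is exactly what I'm hoping to get guidance on.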
Setup:
InfluxDB: v3.7.0 Core
Storage: File-based (--object-store file)
Config: --force-snapshot-mem-threshold 70%, --wal-snapshot-size 100
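For completeness, the server is started roughly like this (node ID and data directory are placeholders; the snapshot/WAL flags are the ones listed above):

```shell
influxdb3 serve \
  --node-id host01 \
  --object-store file \
  --data-dir /var/lib/influxdb3 \
  --force-snapshot-mem-threshold 70% \
  --wal-snapshot-size 100
```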
What is the recommended batch size (records per write) for imports to optimize memory?
Should there be a time interval/delay between file imports?
Will Parquet files eventually be created to free memory?
Any guidance on import strategy for large migrations would be appreciated. Thanks.
