I’m currently in the middle of testing some other databases (we’ve been using Cassandra for time series storage for 3 years), and I’m trying to load at least a subset of the ~15 billion points we have in order to test InfluxDB. I realize it should take a while to load that much data, but at the rate I’m going it’s going to take several weeks.

I’ve got a Python script running on an EC2 instance in the same region as my InfluxCloud-based database. It just pulls the data from CSV files and uses the influx Python client library to write those values to Influx. I’m writing in batches of 8,000 points, and each write to Influx is taking about 4–8 seconds.

I’ve read that performance is best with batches of 5,000–8,000 points, and the points I’m pulling from each CSV file (around 750,000 per file) are all for the same measurement + tag set, in chronological order. From what I’ve read, that should give me really good performance. Any other tips, or am I doing something wrong?
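For reference, here’s a stripped-down sketch of what my loader is doing. The column names and helper functions are simplified placeholders (my real files have more columns), but the `write_points` call is the one from the `influxdb` Python client:

```python
import csv
from itertools import islice


def csv_to_points(rows, measurement, tags, time_col="time", field_col="value"):
    """Turn CSV dict rows into point dicts in the shape write_points expects."""
    for row in rows:
        yield {
            "measurement": measurement,
            "tags": tags,
            "time": row[time_col],
            "fields": {field_col: float(row[field_col])},
        }


def batched(iterable, size):
    """Yield lists of up to `size` items from an iterable."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk


def load_file(client, path, measurement, tags, batch_size=8000):
    """Stream one CSV file into Influx in fixed-size batches.

    `client` is an influxdb.InfluxDBClient already pointed at the database.
    """
    with open(path, newline="") as f:
        points = csv_to_points(csv.DictReader(f), measurement, tags)
        for batch in batched(points, batch_size):
            client.write_points(batch)
```

Each file maps to one `measurement` + `tags` combination, so `load_file` gets called once per CSV with a different tag set.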