I’m using the /write API and a Java HTTP connection to migrate some data. It’s 10- and 30-minute interval data, spanning anywhere from a few months up to 12-16 months, written to 8-12 shared measurements, distinguished by tags for each datasource. We’re writing the data oldest to newest. This is a one-time operation to migrate some data into Influx measurements.
I’m also using batching, flushing the data out every 5 seconds. Each flush normally covers about 3-4 days of timestamps, and each timestamp has about 1000 values (it can vary depending on the datasource, but the data is always written to the same field keys in a measurement, though not all the field keys always have values).
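In case it helps to see the write path, here is roughly what the batching loop looks like on my side. This is a simplified sketch; the host, database, retention policy, measurement, tag, and field names below are placeholders rather than the real ones.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

/**
 * Simplified sketch of the batched /write approach described above.
 * Host, db, rp, measurement, tag, and field names are placeholders.
 */
public class InfluxBatchWriter {
    private static final String WRITE_URL =
            "http://localhost:8086/write?db=mydb&rp=rp_30m&precision=s";
    private static final long FLUSH_INTERVAL_MS = 5_000;

    private final List<String> batch = new ArrayList<>();
    private long lastFlush = System.currentTimeMillis();

    /** Queue one line-protocol point; flush roughly every 5 seconds. */
    public void add(String datasource, double value, long epochSeconds) throws Exception {
        // line protocol: measurement,tag=... field=value timestamp
        batch.add("energy_summary,datasource=" + datasource
                + " kwh=" + value + " " + epochSeconds);
        if (System.currentTimeMillis() - lastFlush >= FLUSH_INTERVAL_MS) {
            flush();
        }
    }

    /** POST the accumulated lines to the /write endpoint in one request. */
    public void flush() throws Exception {
        if (batch.isEmpty()) return;
        byte[] body = String.join("\n", batch).getBytes(StandardCharsets.UTF_8);
        HttpURLConnection conn = (HttpURLConnection) new URL(WRITE_URL).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        int status = conn.getResponseCode(); // 204 expected on success
        if (status != 204) {
            throw new RuntimeException("write failed with HTTP " + status);
        }
        conn.disconnect();
        batch.clear();
        lastFlush = System.currentTimeMillis();
    }
}
```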
In this picture, the system was at steady state, just writing some live data. Then we began this import/migration of summarized data around 9 AM. It ran until 2 AM the next day, when it was OOM-killed for using too much memory. The green line is total memory; the red line is resident memory. The server had 8 GB of memory. All the data should be going into about 7 different retention policies.
Any thoughts on why the total memory for the influxd process continues to increase slowly over the 17-hour time frame? It’s as if something isn’t being flushed or cleared in terms of memory. I’m just trying to get an idea of what the cause is.