I have been using InfluxDB with some stock-options minute data, and it was working fine on an 8c/16GB AWS instance, consistently using about 80% of available RAM.
I have only one measurement and one tag (the stock symbol), which has around 500 values, plus around 9 float64 fields per measurement.
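For context, each point looks roughly like this in line protocol (the measurement and field names here are placeholders; in reality there are ~9 float fields):

```
option_minutes,symbol=AAPL open=1.23,high=1.25,low=1.21,close=1.24 1577836800000000000
```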
It was working fine until I decided to import more points: roughly 1 million minute points since 2010.
I use the Go client library and, following the recommendations, write 1,000 points per batch, doing around 1,000 batch requests per stock. It inserts fast, but the RAM goes through the roof and crashes the server after a few minutes of running.
Basically, I started the import at the time I started writing this post (on a fresh install of Influx), and looking at it now, Influx is using 12GB of RAM.
Once it crashes, Influx restarts and instantly fills up the RAM again; then, after a few dozen minutes, RAM usage starts decreasing.
What are my options? Do I have to rent a huge AWS machine just to populate the dataset, and then migrate to a smaller instance afterwards?
Is there anything wrong with my setup?
If nothing is wrong, maybe the docs should be updated to explain what happens when inserting a huge number of points.