Well, running with the SSD wasn’t much better; in fact, I had a whole lot more crashes towards the end of the run. As in, influxd went from 6GB to 70GB of memory in 60 seconds! It would only live for 3-4 minutes before the OOM killer killed it.
Here’s the memory chart. The different colors are there because they are different process IDs.
Here is the disk % busy and the IOs for the same time range for the SSD (where the Influx data is stored).
Some decent IO numbers.
Now, interestingly, the drive I used yesterday, sdc, is the main Linux LVM drive.
It also showed extremely high IO. As I think about it more, I bet this is the OS paging the 70GB of memory that Influx wants to use but can only get 20GB of. The obvious solution is to add more memory to this system, but I wanted to see if someone could tell me: is this normal behavior, or potentially a bug in Influx? My cardinality is low (around 105,000), and I have it on an 8-core system with 32GB of RAM, using solid state storage. I’m not sure what my points-per-second insert rate is. Is that somewhere in the _internal database?
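From what I’ve read, something like this might pull the write rate out of _internal on 1.x, though I haven’t confirmed the field names against my version. This assumes the default "monitor" retention policy and a "pointReq" counter in the "write" measurement; "mydb" below is just a placeholder database name:

```sql
-- Approximate points-per-second insert rate from _internal
-- (assumes InfluxDB 1.x with the default "monitor" retention
-- policy and the "pointReq" field in the "write" measurement)
SELECT non_negative_derivative(max("pointReq"), 1s)
FROM "_internal"."monitor"."write"
WHERE time > now() - 1h
GROUP BY time(10s)

-- Sanity-check series cardinality per database
-- ("mydb" is a placeholder for the actual database name)
SHOW SERIES CARDINALITY ON "mydb"
```

If someone can confirm whether that query is the right way to get insert rate, or point me at a better measurement in _internal, I’d appreciate it.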