Hi @Jeffery_K
I'm not sure about a direct writes-per-second measurement, but you can find out how many writes (in bytes) there have been using the _internal database. From there you could work out an average writes per second over a set time period.
You can check that in influx: SELECT * FROM "write" ORDER BY time DESC LIMIT 10 (the LIMIT is just so it doesn't query the whole measurement) after switching to the _internal database with USE _internal.
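If you want an actual per-second rate, the fields in that write measurement are cumulative counters, so a derivative over them gives a rate. A rough sketch, assuming the pointReq field (points requested to be written) exists in your version's _internal schema:

SELECT non_negative_derivative(max("pointReq"), 1s) FROM "_internal"."monitor"."write" WHERE time > now() - 1h GROUP BY time(10s)

That should give points written per second, averaged over each 10-second bucket.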
I use the following queries with Grafana to work out bytes read and written:
SELECT difference(mean("read_bytes")) FROM "autogen"."diskio" WHERE ("host" =~ /^$datasource$/)
SELECT difference(mean("write_bytes")) FROM "autogen"."diskio" WHERE ("host" =~ /^$datasource$/)
If you swap out the regex part for your data source you should be able to get some info and work from there. I have a dashboard that shows this in MB/GB so I can see how much data is being written.
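For reference, here's roughly the shape Grafana runs these in. The GROUP BY time($__interval) clause is needed for difference(mean(...)) to return a series, and 'myhost' is just a stand-in for your actual host tag value (this also assumes Telegraf's diskio input is feeding that measurement):

SELECT difference(mean("write_bytes")) FROM "autogen"."diskio" WHERE ("host" = 'myhost') AND $timeFilter GROUP BY time($__interval)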
As far as I can remember, InfluxDB will use as much memory as it can get. I think it's to do with caching when it starts (I did find out from the support people at one point, but I honestly can't remember for certain). After a while it should settle down.
For the SSD, did you change the Influx config to use the disk-based index instead of the in-memory one? I think the default is inmem, but you can change it to tsi1. Still, I'm not sure that will help with the initial system start, as Influx still needs the memory to get going.
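If you haven't done it yet, the index type is set in the [data] section of influxdb.conf (this is the 1.x setting; paths below are just the usual defaults, adjust for your install):

[data]
  dir = "/var/lib/influxdb/data"
  wal-dir = "/var/lib/influxdb/wal"
  # Use the disk-based TSI index instead of the default in-memory one
  index-version = "tsi1"

Note that existing shards keep their old index until converted; influx_inspect buildtsi can rebuild them on disk, but check the docs for your version before running it.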
The problem is that if it OOMs every time, it won't settle down. Giving it more RAM will just mean Influx uses it, which might get it past the initial loading, but I had 2 nodes with 200 GB in each and that would still fill over time.
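One thing that might at least bound the cache growth is tuning the cache settings, also in the [data] section (1.x again; treat the values as examples to tune, not recommendations):

[data]
  # Max size a shard's write cache can reach before it starts rejecting writes
  cache-max-memory-size = "1g"
  # Size at which the cache snapshots to disk, freeing memory
  cache-snapshot-memory-size = "25m"

That only caps the write cache, though. With the inmem index the series index itself still grows with cardinality, so high-cardinality data will keep eating RAM either way.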