What is the best option for InfluxDB internal metrics (I have 3 databases, each with 3 retention policies, inside 1 InfluxDB instance), and I want to monitor all InfluxDB activity?
The second option?
2. /etc/influxdb/influxdb.conf
[monitor]
# Whether to record statistics internally.
store-enabled = true
# The destination database for recorded statistics.
store-database = "_internal"
# The interval at which to record statistics.
store-interval = "10s"
I use the _internal database and have installed Telegraf on both data nodes, which sends metrics to a separate InfluxDB and Kapacitor instance.
_internal is decent for query times, series counts, cardinality, and general database stats. If you are running the monitoring on a separate InfluxDB and Kapacitor instance, then you will need to add your main cluster as a data source in Chronograf or Grafana; the system input plugins in Telegraf will then get you disk I/O, network, and so on.
In short, I use both Telegraf on each node and the _internal database. And if you're using Grafana to visualise this, you can build a pretty decent dashboard based on some of the Linux and InfluxDB Grafana templates.
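For reference, a minimal telegraf.conf sketch of that setup: the host-level input plugins plus an output pointing at the separate monitoring instance. The URL and database name here are placeholders, not values from this thread:

```toml
# Send collected metrics to the separate monitoring InfluxDB instance
# (URL and database name are placeholders -- substitute your own).
[[outputs.influxdb]]
  urls = ["http://monitoring-host:8086"]
  database = "telegraf"

# Host-level metrics: load/uptime, CPU, memory, disk I/O, network.
[[inputs.system]]
[[inputs.cpu]]
[[inputs.mem]]
[[inputs.diskio]]
[[inputs.net]]
```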
In my case I’m using Telegraf -> InfluxDB -> Grafana.
And I was curious about the best approach for a single-VM InfluxDB instance (with multiple databases inside it): whether _internal is the best way, or whether it is better to use [[inputs.influxdb]] in telegraf.conf.
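For comparison, a minimal sketch of the [[inputs.influxdb]] alternative, assuming InfluxDB's default /debug/vars endpoint on localhost (adjust the URL if your instance listens elsewhere):

```toml
# Scrape InfluxDB's own runtime statistics from its /debug/vars endpoint.
[[inputs.influxdb]]
  urls = ["http://localhost:8086/debug/vars"]
  timeout = "5s"
```

This pushes the same kind of internal statistics through Telegraf instead of relying on the _internal database, which keeps the monitoring data out of the instance being monitored.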
Grafana and InfluxDB are on the same instance (1 VM), and I want to monitor the InfluxDB internal metrics with that same InfluxDB + Grafana dashboard.
That sounds a bit like my testing stack, but I run Kapacitor on there as well. I couldn't say for definite which would be the best approach, but I'm fairly certain you can't get all of the information you want from just the one database (_internal).
In the end I worked from two Grafana dashboards: one I had set up for Linux OS information and another for InfluxDB. I just merged the two together from there.
Those dashboards aren't mine, but they are the ones I used to work out what I needed. Ultimately, though, the best approach depends on the sort of data you are after; considering Telegraf's small footprint, it's worth using both.
I found that combining them both and using template variables to filter the hosts and data sources gave me some good info about the individual databases.
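As an illustration of that per-database filtering, a Grafana panel query against _internal might look something like the following. The "database" measurement and "numSeries" field follow the InfluxDB 1.x _internal schema, and $host and $db are assumed Grafana template variables, so treat this as a sketch rather than a drop-in query:

```sql
-- Series count per database, filtered by Grafana template variables.
SELECT last("numSeries")
FROM "database"
WHERE "host" =~ /^$host$/ AND "database" =~ /^$db$/ AND $timeFilter
GROUP BY time($__interval), "database"
```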
Thank you for your remarks.
I know those dashboards; I've created some improved dashboards for a lot of topics, e.g. System, HAProxy, Redis, etc.
I'll publish them on Grafana this month.
I'm always searching for the best approach (best practices) for each of them.
I'd be interested in taking a look once you have published them, if that's OK. I found the dashboards on there helpful, but as with a lot of things, it's never quite one size fits all.