Hi,
I need to monitor servers whose OS is not supported by Influx (AIX/Solaris).
I would like to know how Influx grabs the CPU/disk data and so on, so that I could trigger a script from inputs.exec that uses curl once the CPU/disk measurements are done.
I imagined something like
#!/bin/bash
myserver=$(hostname)
# get the values the same way influx/telegraf does -- this is the part I am asking about
cpu_load=0.0     # placeholder
disk_used=0      # placeholder
disk_total=0     # placeholder
# line protocol: measurement,tags<space>fields
curl -XPOST "http://localhost:8086/write?db=mydb" --data-binary "cpu,host=${myserver},region=uswest load=${cpu_load}"
curl -XPOST "http://localhost:8086/write?db=mydb" --data-binary "disk,host=${myserver} disk_used=${disk_used},disk_total=${disk_total}"
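(If I understand the exec input correctly, a script launched by inputs.exec would not even need curl: Telegraf runs the command at each collection interval set in telegraf.conf and, with data_format = "influx", parses whatever the script prints to stdout as line protocol. A minimal sketch, with placeholder values:)
#!/bin/bash
# for inputs.exec: just print line protocol to stdout, Telegraf does the write
myserver=$(hostname)
cpu_load=0.42      # placeholder
disk_used=1024     # placeholder, e.g. KB used on /
echo "cpu,host=${myserver},region=uswest load=${cpu_load}"
echo "disk,host=${myserver} disk_used=${disk_used}"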
This should work in theory. Perhaps you could read the Solaris implementations in gopsutil to determine how to collect cpu and disk on Solaris; this is the library we use in Telegraf to collect this information.
Thanks for your reply.
Ok, I see. I could use psutil (as I don't know anything about Go, but a little bit more about Python).
But how are the CPU/disk measurements supplied?
I mean, every xx seconds defined in telegraf.conf, do you trigger a call to gopsutil to get the system data and then do something that looks like the curl line I wrote?
I am asking to be sure that, if I follow that path, Influx and Telegraf will not miss anything.
You will have to read the source code for details, but it looks like the disk input reads /etc/mnttab, and unfortunately there is currently no cpu implementation for Solaris.
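If you script it yourself in the meantime, a rough, untested sketch with native Solaris tools could look like the following; the vmstat and df -k column positions are assumptions and may differ between Solaris releases, so check them on your box.
#!/bin/bash
# untested Solaris stopgap: cpu busy % from vmstat's idle column, disk usage of / from df -k
myserver=$(hostname)
# assume vmstat's last column is idle %, so busy = 100 - idle (take the second sample)
cpu_busy=$(vmstat 1 2 | tail -1 | awk '{print 100 - $NF}')
# assume df -k prints: Filesystem kbytes used avail capacity Mounted-on
disk_total=$(df -k / | tail -1 | awk '{print $2}')
disk_used=$(df -k / | tail -1 | awk '{print $3}')
curl -XPOST "http://localhost:8086/write?db=mydb" --data-binary "cpu,host=${myserver} load=${cpu_busy}"
curl -XPOST "http://localhost:8086/write?db=mydb" --data-binary "disk,host=${myserver} disk_used=${disk_used},disk_total=${disk_total}"
The same lines could be echoed to stdout instead of POSTed if you run the script via inputs.exec.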
You can collect 600 stats from AIX and inject them into InfluxDB using njmon.
It is open source and precompiled for various AIX versions; see nmon for Linux | Site / Njmon.