Telegraf: diskio plugin timing issue on linux

According to the diskio plugin documentation, read_time and write_time are in milliseconds. I’m not sure this is correct. I’ve tested on RHEL 6.8 and RHEL 7.3 using telegraf-1.3.2-1.x86_64.rpm.

The times are implausible for milliseconds, but if they were actually microseconds, I could believe them.
For some fairly small reads and writes I’m seeing times of 4 seconds when treating the values as milliseconds (via Grafana); treated as microseconds, that becomes roughly 4 ms for a write to spinning disk, which I can believe.

Is this plugin documentation correct, or do I need to chase this further? Does anyone else see anything like this?

On Linux these stats come from /proc/diskstats; according to the kernel documentation, these readings are in milliseconds.
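A quick way to eyeball those counters yourself is to parse /proc/diskstats directly. A minimal sketch, assuming the field layout from the kernel’s Documentation/iostats.txt (three identifier columns followed by eleven counters); `parse_diskstats` is my own helper name, not anything from Telegraf:

```python
def parse_diskstats(text):
    """Parse /proc/diskstats-style text into {device: (read_time_ms, write_time_ms)}.

    Per iostats.txt, after the major/minor/name columns the 4th counter is
    "milliseconds spent reading" and the 8th is "milliseconds spent writing".
    """
    times = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) < 14:  # 3 identifier fields + 11 counters
            continue
        dev = parts[2]
        read_time_ms = int(parts[6])    # counter 4: ms spent reading
        write_time_ms = int(parts[10])  # counter 8: ms spent writing
        times[dev] = (read_time_ms, write_time_ms)
    return times

# Example with a made-up line in /proc/diskstats format:
sample = "   8       0 sda 1200 30 9600 450 800 20 6400 380 0 500 830"
print(parse_diskstats(sample))  # {'sda': (450, 380)}
```

Comparing these raw numbers against what lands in InfluxDB is a decent sanity check that Telegraf is passing the counters through unchanged.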

Right. I’ve read that already.

I figured that’s where telegraf gets those values. I see exactly those values being stored in InfluxDB, as I expect, which is why I wondered if it’s simply a documentation issue (perhaps on both the Linux kernel and Telegraf side).

That document you link, @daniel, refers to a pretty ancient kernel, relatively speaking. The RHELs I’m looking at are on kernels 3.10.0 and 4.1.12. I’m wondering if the older 2.4/2.5/2.6 kernels had a different timing resolution?

I suppose I need a Linux kernel geek, not a Telegraf geek.

Or I can try a “vintage” kernel to test my theory…


I would be very surprised if they changed this in the kernel, they are very strict about changes visible to userspace. One thing I do not understand is what exactly “(as measured from __make_request() to end_that_request_last())” is referring to.

Heh! Not breaking user space. I wouldn’t want to get on the bad side of Linus. I’m probably speaking blasphemy already. :slight_smile:

@Daniel - Do you see the same phenomenon I’m seeing? (if you are on a Linux box…) - Or am I totally wet on this?

tl;dr: There is a Telegraf documentation gap: no mapping is given from Telegraf keys to the kernel counter names.

Paging @daniel

This horse isn’t quite dead yet.

Indeed, all measurements are in milliseconds.

After reading the Linux kernel document stat.txt, I have more detail about what the counters actually are and how to interpret them. It pertains to /sys/block/device/stat, but as stated previously (“don’t break user space”), these are the same values exposed in /proc/diskstats anyway.

My misunderstanding comes from the mapping of Telegraf “keys” to the kernel counters, and from fundamentally understanding what each kernel counter actually means (due to this disconnect).

For example, Telegraf’s “read_time” actually maps (well, I think it does…) to the following, excerpted from stat.txt:

read ticks, write ticks
These values count the number of milliseconds that I/O requests have
waited on this block device.  If there are multiple I/O requests waiting,
these values will increase at a rate greater than 1000/second; for
example, if 60 read requests wait for an average of 30 ms, the read_ticks
field will increase by 60*30 = 1800.
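That accounting can be sketched numerically; the figures below are exactly the ones from the stat.txt excerpt, nothing more. The point is that read_ticks accumulates the wait of every outstanding request, so with concurrency it grows much faster than wall-clock time:

```python
# Per stat.txt: if multiple I/O requests are waiting, the counter advances
# once per waiting request, so it can exceed 1000 ms per real second.
requests = 60      # concurrent read requests (stat.txt example)
avg_wait_ms = 30   # each waits ~30 ms on average

read_ticks_increase = requests * avg_wait_ms
print(read_ticks_increase)  # 1800 ms accumulated, even though only ~30 ms
                            # of wall-clock time may have elapsed
```

This is why interpreting read_time as a per-request latency makes healthy disks look catastrophically slow.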

This means read_ticks (aka read_time) is inclusive of the items on the disk I/O queue; that’s why the value appears to be very high. To understand a single I/O’s service time, one would also need the queue length. However, Telegraf doesn’t seem to expose the kernel’s “in_flight” value, which contains the number of I/Os currently in flight; I believe this to be the queue depth. (If it does expose it, I don’t know what it’s called; see, there’s that documentation problem I’m talking about…)

Since this is an average queue length and not an instantaneous one, we need to consider applying Little’s Law from queueing theory; the result is only valid when the system is in equilibrium, i.e. when the average and instantaneous queue lengths are equal.
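To make that concrete, here is my own sketch of how Little’s Law (L = λ·W) could be applied to deltas of these counters over a sampling interval. This is my interpretation, not Telegraf code; the helper names are hypothetical, and the assumption is that the system is roughly in equilibrium over the window:

```python
def per_io_wait_ms(ticks_delta_ms, completed_delta):
    """Average wait per completed I/O over the interval (W in L = lambda * W)."""
    return ticks_delta_ms / completed_delta if completed_delta else 0.0

def avg_queue_len(ticks_delta_ms, interval_ms):
    """Average number of requests queued over the interval (L in L = lambda * W)."""
    return ticks_delta_ms / interval_ms if interval_ms else 0.0

# Example using the stat.txt figures: read_ticks grew by 1800 ms while 60
# reads completed, sampled over a 10-second (10000 ms) window.
print(per_io_wait_ms(1800, 60))    # 30.0 ms average wait per read
print(avg_queue_len(1800, 10000))  # 0.18 requests queued on average
```

Note the arrival rate λ here is completed_delta / interval, so the two helpers are consistent: L = λ·W = (60 / 10000) · 30 = 0.18.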

Ultimately, in my opinion, this is a documentation shortfall of the diskio plugin: it should clearly state the mapping of Telegraf keys to the kernel counters. That would alleviate the confusion.


We should definitely try to improve the documentation here; let’s link to some of the documents we have found and perhaps mention the source file for Linux, but not go into too much depth directly in our docs. This plugin is also meant to be multi-platform, so that needs to be kept in mind.

Do you think you could open a pull request with an update to the documentation?

I don’t mind updating the document (I’ve done so in the past for other Telegraf plugins), but if I knew the mappings, I wouldn’t be raising the issue, right!?

If someone can tell me what these map to, I can open a PR, but I’m at an impasse until that happens.
