Metric tracking with external plugin


I am running into quite a problem using metric tracking in an external input plugin that acts as a service.

First of all, the plugin starts various goroutines and is notified whenever metrics are available in the source.

I add these metrics using:
where m.acc is an instance of type telegraf.TrackingAccumulator.

The problem: m.acc.Delivered() never fires.

I have added various prints in the source code of accumulator.go and can see that onDelivery is never called.

The Telegraf docs (telegraf/docs/ on GitHub) describe that this notification should, in principle, be sent by the output. But this does not seem to happen with an external plugin.
My metrics arrive at the output and are also successfully stored in my InfluxDB instance, but my plugin does not get notified of this happening.
I would expect that the shim has to handle this somehow…

To reproduce this, any input plugin that consumes metrics on its own can be converted into an external input plugin.
I have tested this with mqtt_consumer, for example (note that a parser must be added explicitly in this case).
In this setup, only max_undelivered_messages messages are processed; after that, the plugin stalls completely.

See the following test outputs with max_undelivered_messages set to 10:

telegraf | 2024-06-25T12:36:23Z E! [inputs.execd] stderr: "[mqtt_consumer][onMessage] id: 7"
telegraf | 2024-06-25T12:36:28Z E! [inputs.execd] stderr: "[mqtt_consumer][onMessage] id: 8"
telegraf | 2024-06-25T12:36:32Z D! [outputs.influxdb_v2] Wrote batch of 2 metrics in 13.371321ms
telegraf | 2024-06-25T12:36:32Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
telegraf | 2024-06-25T12:36:33Z E! [inputs.execd] stderr: "[mqtt_consumer][onMessage] id: 9"
telegraf | 2024-06-25T12:36:38Z E! [inputs.execd] stderr: "[mqtt_consumer][onMessage] id: 10"
telegraf | 2024-06-25T12:36:42Z D! [outputs.influxdb_v2] Wrote batch of 2 metrics in 5.369446ms
telegraf | 2024-06-25T12:36:42Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
telegraf | 2024-06-25T12:36:52Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
telegraf | 2024-06-25T12:37:02Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
telegraf | 2024-06-25T12:37:12Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
telegraf | 2024-06-25T12:37:22Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics

Hmm, I have not dug into this too much, but I do not believe external plugins work with tracking metrics. Thinking about how execd works: we send signals to whatever is run and have no other path to communicate with it, and I believe it is the same with shim. So we would have no way to say "go read more" or "the metrics were delivered".

That is true; I know that an external input plugin is expected to write its metrics to stdout.
Which is perfectly fine, as the system can be implemented very easily for external plugins.

But, then I would still expect that either
a) shim takes care that at least writing to stdout is acknowledged, or
b) metric tracking is not allowed in an external plugin at all

After all, the accumulator used is provided by shim itself:

In general, I don’t see any problem with having shim do this.
Yes, I would have no guarantee that the metrics have really been processed by the output, but the functionality would at least be more reliable than before: when using the accumulator, my plugin would know whether the metrics were actually written to stdout or not.
Nobody says that metric tracking in an external plugin must follow the same definition. The functionality here can be completely different, which is perfectly fine, as long as this is documented somewhere.

So I assume that all metrics end up in the metricCh channel when the accumulator is used.
The processing of this pipeline and the final output via stdout happen here:

So what would be the problem with simply adding an m.Accept() (or maybe even better an m.Drop()) after line 124? I honestly don't know.