I was trying to set up a telemetry system for our product and wanted to make it modular. We used the snap-telemetry framework earlier, but since it has been discontinued, we were looking for alternatives.
From my understanding, Telegraf could be a pretty good alternative. It has appropriate streaming-type input plugins, and output plugins for all the third-party applications we care about (like InfluxDB and Prometheus).
I was wondering, though: like Snap, is there a possibility in Telegraf to have the plugins changed on the fly? And to be able to mux the metrics from the input plugin to multiple output plugins through a CLI/API-type configuration?
is there a possibility in Telegraf to have the plugins changed on the fly
Not really; the best way currently is to change the config file and send a SIGHUP (e.g. kill -HUP <telegraf pid>). Telegraf will flush all pending data and then restart itself. We have plans to improve this functionality to reduce downtime, but in most cases the interruption caused by reloading is less than a second.
be able to mux the metrics from the input plugin to multiple output plugins through a CLI/API-type configuration
Telegraf sends all input data to all outputs by default; this can be restricted using the metric filtering selectors (namepass/namedrop, tagpass/tagdrop). There isn't a CLI for Telegraf; all changes are made in the configuration file.
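For example, here is a minimal, untested sketch of that muxing (the plugin choices, URL, and file path are arbitrary placeholders): everything goes to InfluxDB, while only the cpu metrics also go to a file.

[[inputs.cpu]]
[[inputs.mem]]

# no selectors: receives all metrics from both inputs
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]

# receives only the metrics named "cpu"
[[outputs.file]]
  files = ["/tmp/cpu_metrics.out"]
  namepass = ["cpu"]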
Hi @daniel, I am getting sflow data using the sflow input plugin and sending it to Elasticsearch using the elasticsearch output plugin. Now my requirement is to have Kafka in between the sflow data and Elasticsearch: first the data should go to a Kafka topic using the kafka output plugin, then we consume that data from the topic using the kafka_consumer input plugin, and finally it should go to Elasticsearch using the elasticsearch output plugin. Is this possible in Telegraf? It can be achieved using Logstash; in Logstash we have the multiple-pipeline concept using different conf files.
Normally I'd expect most setups to use two layers of Telegraf for this, but for smaller workloads a single Telegraf instance can handle it.
The idea is to add tags that you can use for routing, and then remove them before writing to the final destination. Here is the configuration with only the routing options (untested):
[[inputs.sflow]]
  # options

[[outputs.kafka]]
  # options
  # don't handle anything tagged with output_dest=elastic
  [outputs.kafka.tagdrop]
    output_dest = ["elastic"]

[[inputs.kafka_consumer]]
  # options
  # add the output_dest=elastic tag
  [inputs.kafka_consumer.tags]
    output_dest = "elastic"

[[outputs.elasticsearch]]
  # options
  # remove the tag we used for routing
  tagexclude = ["output_dest"]
  # only write items with the output_dest=elastic tag
  [outputs.elasticsearch.tagpass]
    output_dest = ["elastic"]
@daniel Hi, this configuration is not working. With it I am getting data on the Kafka topic, but I am not able to get it in Elasticsearch; it looks like the kafka_consumer plugin is not working. I am attaching the conf file, can you please look at it?
One more thing: I am able to get data into Elasticsearch if I run two instances of Telegraf, one configured to get data onto Kafka from sflow (sflow → kafka) and a second with a conf file to get data into Elasticsearch from Kafka (kafka_consumer → elastic). I wanted to do it with one Telegraf instance; running two instances is overhead. final_telegraf.txt (14.4 KB)
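For reference, the two conf files look roughly like this (trimmed down; the broker address, topic name, and Elasticsearch URL stand in for our actual values):

# instance 1: sflow → kafka
[[inputs.sflow]]
  service_address = "udp://:6343"
[[outputs.kafka]]
  brokers = ["localhost:9092"]
  topic = "sflow"

# instance 2: kafka → elasticsearch
[[inputs.kafka_consumer]]
  brokers = ["localhost:9092"]
  topics = ["sflow"]
[[outputs.elasticsearch]]
  urls = ["http://localhost:9200"]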