Kafka Consumer Input Plugin: writing properties in telegraf.conf

Good day,
Can you tell me if this is right in telegraf.conf?

SERVICE INPUT PLUGINS
[[inputs.kafka_consumer]]
  ## kafka servers
  brokers = ["kafka:9092"] - **do I need to write localhost or kafka here?**
  ## topic(s) to consume
  topics = ["telegraf"] - **does this topic need to stay like this, or do I need to write the same topics that my Kafka service has?**
  ## the name of the consumer group
  consumer_group = "telegraf_metrics_consumers" - **where can I find out what this means? Do I need to write the same name that my Kafka consumer logs show?**

Hi @ArCr, Have you tried using the default config from the documentation? If so, what happens? It’s easy enough to edit the config file, so it may be worth it to try the default config. You will know pretty quickly if Telegraf is collecting metrics as expected. https://github.com/influxdata/telegraf/tree/master/plugins/inputs/kafka_consumer

I had this configuration:

[[inputs.kafka_consumer]]
  ## kafka servers
  brokers = ["kafka:9092"]
  ## topic(s) to consume
  topics = ["telegraf"]
  consumer_group = "customerGtwConsumerGroup"
  ## Offset (must be either "oldest" or "newest")
  offset = "oldest"
  max_message_len = 1000000
  data_format = "influx"
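One detail I'm assuming from the docs: with `data_format = "influx"`, the consumer expects each Kafka message to already be in InfluxDB line protocol. A hypothetical message (the measurement, tag, field, and timestamp here are made up):

```
gatling_ws,scenario=checkout response_time=42 1558681948000000000
```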

The docker-compose logs were:

telegraf_1      | 2019-05-24T07:12:28Z I! Started the kafka consumer service, brokers: [kafka:9092], topics: [telegraf]
kafka_1         | [2019-05-24 07:12:28,664] INFO [KafkaApi-2] Auto creation of topic telegraf with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
kafka_1         | [2019-05-24 07:12:28,698] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions telegraf-0 (kafka.server.ReplicaFetcherManager)
kafka_1         | [2019-05-24 07:12:28,702] INFO Loading producer state from offset 0 for partition telegraf-0 with message format version 2 (kafka.log.Log)
kafka_1         | [2019-05-24 07:12:28,703] INFO Completed load of log telegraf-0 with 1 log segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
kafka_1         | [2019-05-24 07:12:28,704] INFO Created log for partition [telegraf,0] in /kafka/kafka-logs-b925db59e624 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
kafka_1         | [2019-05-24 07:12:28,706] INFO Partition [telegraf,0] on broker 2: No checkpointed highwatermark is found for partition telegraf-0 (kafka.cluster.Partition)
kafka_1         | [2019-05-24 07:12:28,707] INFO Partition [telegraf,0] on broker 2: telegraf-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)

These logs confuse me, 'cause I don't understand how I can use them.

I want to write some metrics from Kafka to InfluxDB when I send messages over WSS with Gatling.

I saw this line: #consumer_group = "telegraf_metrics_consumers". What does it mean?
I have two Kafka topics and two consumer groups. Do I need to write consumer_group = "my first test consumer group, my second consumer group name"?
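To make my question concrete: as far as I can tell, `topics` is an array but `consumer_group` is a single string, so I would guess it has to look something like one of these (the topic and group names below are placeholders):

```toml
## one plugin block, one group, consuming both topics
[[inputs.kafka_consumer]]
  brokers = ["kafka:9092"]
  topics = ["first_topic", "second_topic"]
  consumer_group = "telegraf_metrics_consumers"

## ...or two blocks, if each topic should be read under its own group
[[inputs.kafka_consumer]]
  brokers = ["kafka:9092"]
  topics = ["first_topic"]
  consumer_group = "first_group"

[[inputs.kafka_consumer]]
  brokers = ["kafka:9092"]
  topics = ["second_topic"]
  consumer_group = "second_group"
```

Is that right?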

As I can see at this URL https://github.com/bsm/sarama-cluster#deprecation-notice, the imported library describes itself as "Cluster extensions for Sarama, the Go client library for Apache Kafka 0.9 [DEPRECATED]".
Maybe the kafka-consumer input plugin doesn't work?

What version of Kafka can I use in my tests? I use version 0.11.0.0. Or does this plugin work only with Kafka 0.9.0?
This is important 'cause my services work with version 0.11.0.

Hi @ArCr. Some reading indicates that "the use of a consumer group is optional, but if you have a large volume of data to pull from Kafka, you can set up multiple Telegraf instances each pulling from the same consumer group. This will allow you to pull more data and not have duplicate data from Kafka, as the consumer group will keep track of the topic offsets for each consumer client." From https://dzone.com/articles/multiple-data-center-replication-with-influxdb-inf
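If I read that quote correctly, the scale-out setup it describes would look roughly like this, with the same config on every Telegraf instance (the broker, topic, and group names below just mirror your example):

```toml
## identical telegraf.conf on instance A and instance B
[[inputs.kafka_consumer]]
  brokers = ["kafka:9092"]
  topics = ["telegraf"]
  ## same group name on every instance: Kafka assigns each partition
  ## to only one member of the group, so messages are not duplicated
  consumer_group = "telegraf_metrics_consumers"
  offset = "oldest"
  data_format = "influx"
```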
