Telegraf and InfluxDB configuration for ingesting bulk data

I am using a Kafka broker as input, from which I am ingesting a huge bulk of data arriving every 5 minutes. But most of the metrics are getting dropped. What should I refer to in order to ingest this much data without significant data loss?


Do you have measurements with the same timestamp and tag values? I have had issues when loading data from log files where there are log entries with the same timestamp and tag values but different field values. InfluxDB overwrites these and only keeps the last record. If this is your issue, you need another tag value to make the series unique (which can impact cardinality), or you can manipulate the timestamp slightly; if you don't need nanosecond granularity, you have some room to play with there.
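To illustrate the overwrite behavior described above, here is a hypothetical line-protocol example: both points share the measurement, tag set, and timestamp, so InfluxDB treats them as the same point and keeps only the last one written.

```
# Same measurement (app_log), same tag (host=web01), same timestamp:
app_log,host=web01 message="error A" 1700000000000000000
app_log,host=web01 message="error B" 1700000000000000000
# Only the second point survives. Adding a distinguishing tag
# (e.g. a line number) makes each series unique:
app_log,host=web01,line=1 message="error A" 1700000000000000000
app_log,host=web01,line=2 message="error B" 1700000000000000000
```

Note that a high-cardinality tag like a line number can inflate series cardinality quickly, which is the trade-off mentioned above.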

The series coming in are unique. I think making some changes to the Telegraf agent settings, such as metric_batch_size and related options, will help, but I am not sure how to use them properly.
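A sketch of the agent-level settings that are usually tuned for high-volume ingestion; the values below are illustrative starting points, not recommendations, and need to be sized against your actual throughput. If the buffer fills faster than the output can flush, Telegraf drops metrics, which matches the symptom described in the question.

```toml
[agent]
  ## How many metrics are sent to the output in one write.
  ## Larger batches mean fewer, bigger writes to InfluxDB.
  metric_batch_size = 5000

  ## Maximum number of metrics held in memory while waiting to flush.
  ## Metrics beyond this limit are dropped; with data arriving in
  ## 5-minute bursts, this likely needs to be large.
  metric_buffer_limit = 100000

  ## How often buffered metrics are flushed to the output.
  flush_interval = "10s"
  flush_jitter = "5s"

[[inputs.kafka_consumer]]
  brokers = ["localhost:9092"]   ## placeholder broker address
  topics = ["metrics"]           ## placeholder topic
  ## Maximum messages read from Kafka before delivery to the output
  ## is confirmed; raising this can help keep up with bursts.
  max_undelivered_messages = 5000
```

A rough sizing rule: metric_buffer_limit should comfortably exceed the number of metrics that arrive in one burst, and metric_batch_size times the flush rate should exceed your average ingest rate. Check Telegraf's own logs for "metric buffer overflow" messages to confirm whether dropping is happening at this layer.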