MQTT cached messages

Hi,

I have set up Telegraf subscribed to MQTT, writing the data to InfluxDB. I have a lot of data coming in and it works well, up until the moment the MQTT broker caches messages and sends them all at once. Telegraf sees all the messages and its logs show that it writes them to InfluxDB, but InfluxDB does not show them in the database. Can someone point me in the right direction on how to troubleshoot this issue, or how to change the configuration accordingly? The incoming data is in JSON format and I am parsing it as such.

My Telegraf config is:

[agent]
precision = "ms"
interval = "2s"
round_interval = true
debug = true
flush_buffer_when_full = true
collection_jitter = "0s"
flush_interval = "30s"
flush_jitter = "5s"
quiet = false
metric_batch_size = 1000
metric_buffer_limit = 12000000

[[outputs.influxdb_v2]]
urls = ["influxdb"]

## Token for authentication.
#token = "xxxxxxxxxxxxxxxxxxxx"
organization = "XXxxxXXX"
bucket = "Xxxxxx"
[[inputs.mqtt_consumer]]
servers = ["tcp://XXXXXXXXXXXXXX"]
topics = [
"#",
]
qos = 2

## Data format
data_format = "json"
json_string_fields = ["event", "state"]

I see no issues in the InfluxDB logs, nor in Telegraf's.

Does anyone have any idea what I am missing?

Thanks in advance.

It might be due to the data having the same key, which is made of the timestamp plus the tags (both form the key).
If a point with that key already exists, the write becomes an update.

My idea is that you don’t have an explicit timestamp in the message, therefore InfluxDB uses the time of the insert.

As an example, if you submit the data below without a timestamp, all at the same moment, you will get one insert and three updates:

bucket,host=a cpu=10   --insert
bucket,host=a cpu=11   --update
bucket,host=a cpu=13   --update
bucket,host=a cpu=8   --update

leaving you with just one point in the end.

To avoid this, you could include an explicit timestamp in the data itself.
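As a sketch, the same points written with explicit timestamps appended (in line protocol the trailing value is the timestamp, nanoseconds by default; these timestamp values are made up for illustration) stay distinct:

```
bucket,host=a cpu=10 1626888371000000000
bucket,host=a cpu=11 1626888373000000000
bucket,host=a cpu=13 1626888375000000000
bucket,host=a cpu=8 1626888377000000000
```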

Hi @Giovanni_Luisotto

Thank you for the reply. The message sent is JSON and includes a "time" field as a Unix timestamp. Should I add this as a tag, or would that not matter?

I did change the JSON to use Unix time at some point, at least I read about this in some documentation, but nothing really changed.

You must specify it as the time field in the JSON parser (json_time_key = "_name_"), otherwise it won't be used.
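For reference, a minimal sketch of the parser settings (the broker address and field names here are placeholders, adjust to your payload):

```toml
[[inputs.mqtt_consumer]]
servers = ["tcp://broker:1883"]
topics = ["#"]
data_format = "json"
json_string_fields = ["event", "state"]
## Take the metric timestamp from the "time" field of the payload
json_time_key = "time"
## "unix_ms" because the payload carries milliseconds, e.g. 1626888371866
json_time_format = "unix_ms"
```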

Hi Giovanni,

I did try to put "time" as json_time_key and change the time to Unix, even to unix_ms.

This is what is sent to Influx:

“time”: 1626888371866

but sadly I have no data - Telegraf shows that it writes the data, and again it is being overwritten.

There is another setting that affects time precision:
precision: collected metrics are rounded to the precision specified as an interval.

Have a look at the precision option; maybe that's causing the points to be rounded to the same timestamp.
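A small sketch of how that rounding can collapse distinct points: three metrics that arrive within the same second, once rounded to 1 s precision, share one timestamp, so only the last write survives (the timestamp values are illustrative, not from your data).

```python
def round_ts(ts_ns: int, precision_ns: int) -> int:
    """Round a nanosecond timestamp down to the given precision."""
    return ts_ns - (ts_ns % precision_ns)

# Three points a few tens of milliseconds apart, within the same second
points = [1626888371866000000, 1626888371901000000, 1626888371950000000]

# Rounded to 1 s precision, they all map to the same timestamp
rounded = {round_ts(ts, 1_000_000_000) for ts in points}
print(len(rounded))  # 1 -> three writes, one surviving point
```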

Are you sure InfluxDB is performing an update? How do you query your data?

Hi, thank you for your help. I figured it out. When I receive all the messages at once, Telegraf writes them to InfluxDB successfully, but since it writes them in batches and uses the InfluxDB timestamp, they are written at every possible nanosecond. As I use Grafana, I set the query interval to 1 ms, and then to 1 ns, which actually crashes InfluxDB. If I could use the JSON timestamp that would be great, but sadly I have had no success.