Error in plugin: must be an object or an array of objects

Hi everyone, I'm trying to configure the Telegraf MQTT plugin on Windows to subscribe to topics on an MQTT server, but I get this error in cmd:

```
2021-06-18T00:55:02Z E! [inputs.mqtt_consumer] Error in plugin: must be an object or an array of objects.
```

Would you please help me?

Please paste your Telegraf configuration and a small sample of the data you're trying to receive.

@Milad
This is unreadable; please post the config in proper markdown format in the forum:

```toml
put your telegraf config snippet here
and btw, we only need the mqtt part, not the whole config file
```

Sorry about that, @Franky1. Here it is:


```toml
[[inputs.mqtt_consumer]]
  ## Broker URLs for the MQTT server or cluster.  To connect to multiple
  ## clusters or standalone servers, use a separate plugin instance.
  ##   example: servers = ["tcp://localhost:1883"]
  ##            servers = ["ssl://localhost:1883"]
  ##            servers = ["ws://localhost:1883"]
  servers = ["tcp://130.136.2.70:1883"]

  ## Topics that will be subscribed to.
  topics = [
    "#"
  ]

  ## The message topic will be stored in a tag specified by this value.  If set
  ## to the empty string no topic tag will be created.
  # topic_tag = "topic"

  ## QoS policy for messages
  ##   0 = at most once
  ##   1 = at least once
  ##   2 = exactly once
  ##
  ## When using a QoS of 1 or 2, you should enable persistent_session to allow
  ## resuming unacknowledged messages.
  # qos = 0

  ## Connection timeout for initial connection in seconds
  # connection_timeout = "30s"

  ## Maximum messages to read from the broker that have not been written by an
  ## output.  For best throughput set based on the number of metrics within
  ## each message and the size of the output's metric_batch_size.
  ##
  ## For example, if each message from the queue contains 10 metrics and the
  ## output metric_batch_size is 1000, setting this to 100 will ensure that a
  ## full batch is collected and the write is triggered immediately without
  ## waiting until the next flush_interval.
  # max_undelivered_messages = 1000

  ## Persistent session disables clearing of the client session on connection.
  ## In order for this option to work you must also set client_id to identify
  ## the client.  To receive messages that arrived while the client is offline,
  ## also set the qos option to 1 or 2 and don't forget to also set the QoS when
  ## publishing.
  # persistent_session = false

  ## If unset, a random client ID will be generated.
  # client_id = ""

  ## Username and password to connect MQTT server.
  username = "my user name"
  password = "my password"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  json_string_fields = ["uplink_message_frm_payload"]
  data_format = "json"
```

Any more information about this error?
Does this error occur immediately when Telegraf loads the config? Then I assume there is a typo in your config.
If it occurs during data fetching from the broker, we need more information, for example a sample of the raw data of your MQTT JSON payload.

@Franky1 Exactly, this error occurs immediately when Telegraf loads the config in cmd.
As I'm a beginner in this field, I'm not sure which info you want, so I will explain the project completely: there is an ESP32 microcontroller which sends sensor data at a variable sample frequency to an MQTT server, and with MQTT Explorer I can see the data flowing through the server. Now I get this error when I run Telegraf in cmd to subscribe to the topics sent by the ESP32.
Here is the complete config:

```toml
# Configuration for telegraf agent
[agent]
  ## Default data collection interval for all inputs
  interval = "10s"
  ## Rounds collection interval to 'interval'
  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
  round_interval = true

  ## Telegraf will send metrics to outputs in batches of at most
  ## metric_batch_size metrics.
  ## This controls the size of writes that Telegraf sends to output plugins.
  metric_batch_size = 1000

  ## For failed writes, telegraf will cache metric_buffer_limit metrics for each
  ## output, and will flush this buffer on a successful write. Oldest metrics
  ## are dropped first when this buffer fills.
  ## This buffer only fills when writes fail to output plugin(s).
  metric_buffer_limit = 10000

  ## Collection jitter is used to jitter the collection by a random amount.
  ## Each plugin will sleep for a random time within jitter before collecting.
  ## This can be used to avoid many plugins querying things like sysfs at the
  ## same time, which can have a measurable effect on the system.
  collection_jitter = "0s"

  ## Default flushing interval for all outputs. Maximum flush_interval will be
  ## flush_interval + flush_jitter
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. This is primarily to avoid
  ## large write spikes for users running a large number of telegraf instances.
  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"

  ## By default or when set to "0s", precision will be set to the same
  ## timestamp order as the collection interval, with the maximum being 1s.
  ##   ie, when interval = "10s", precision will be "1s"
  ##       when interval = "250ms", precision will be "1ms"
  ## Precision will NOT be used for service inputs. It is up to each individual
  ## service input to set the timestamp at the appropriate precision.
  ## Valid time units are "ns", "us" (or "µs"), "ms", "s".
  precision = ""

  ## Logging configuration:
  ## Run telegraf with debug log messages.
  debug = false
  ## Run telegraf in quiet mode (error log messages only).
  quiet = false
  ## Specify the log file name. The empty string means to log to stderr.
  logfile = ""

  ## Override default hostname, if empty use os.Hostname()
  hostname = ""
  ## If set to true, do not set the "host" tag in the telegraf agent.
  omit_hostname = false

[[outputs.influxdb_v2]]
  ## The URLs of the InfluxDB cluster nodes.
  ##
  ## Multiple URLs can be specified for a single cluster, only ONE of the
  ## urls will be written to each interval.
  ## urls exp: http://127.0.0.1:9999
  urls = ["https://europe-west1-1.gcp.cloud2.influxdata.com"]

  ## Token for authentication.
  token = "my token in influx"

  ## Organization is the name of the organization you wish to write to; must exist.
  organization = "mvtkurd@gmail.com"

  ## Destination bucket to write into.
  bucket = "IOT PROJECT"

[[inputs.cpu]]
  ## Whether to report per-cpu stats or not
  percpu = true
  ## Whether to report total system cpu stats or not
  totalcpu = true
  ## If true, collect raw CPU time metrics.
  collect_cpu_time = false
  ## If true, compute and report the sum of all non-idle CPU states.
  report_active = false

[[inputs.disk]]
  ## By default stats will be gathered for all mount points.
  ## Set mount_points will restrict the stats to only the specified mount points.
  # mount_points = ["/"]
  ## Ignore mount points by filesystem type.
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "overlay", "aufs", "squashfs"]

[[inputs.diskio]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]

[[inputs.mqtt_consumer]]
  ## Broker URLs for the MQTT server or cluster.  To connect to multiple
  ## clusters or standalone servers, use a separate plugin instance.
  ##   example: servers = ["tcp://localhost:1883"]
  ##            servers = ["ssl://localhost:1883"]
  ##            servers = ["ws://localhost:1883"]
  servers = ["tcp://130.136.2.70:1883"]

  ## Topics that will be subscribed to.
  topics = [
    "#"
  ]

  ## The message topic will be stored in a tag specified by this value.  If set
  ## to the empty string no topic tag will be created.
  # topic_tag = "topic"

  ## QoS policy for messages
  ##   0 = at most once
  ##   1 = at least once
  ##   2 = exactly once
  ##
  ## When using a QoS of 1 or 2, you should enable persistent_session to allow
  ## resuming unacknowledged messages.
  # qos = 0

  ## Connection timeout for initial connection in seconds
  # connection_timeout = "30s"

  ## Maximum messages to read from the broker that have not been written by an
  ## output.  For best throughput set based on the number of metrics within
  ## each message and the size of the output's metric_batch_size.
  ##
  ## For example, if each message from the queue contains 10 metrics and the
  ## output metric_batch_size is 1000, setting this to 100 will ensure that a
  ## full batch is collected and the write is triggered immediately without
  ## waiting until the next flush_interval.
  # max_undelivered_messages = 1000

  ## Persistent session disables clearing of the client session on connection.
  ## In order for this option to work you must also set client_id to identify
  ## the client.  To receive messages that arrived while the client is offline,
  ## also set the qos option to 1 or 2 and don't forget to also set the QoS when
  ## publishing.
  # persistent_session = false

  ## If unset, a random client ID will be generated.
  # client_id = ""

  ## Username and password to connect MQTT server.
  username = "my user"
  password = "my pass"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  json_string_fields = ["uplink_message_frm_payload"]
  data_format = "json"
```

The mqtt plugin can be loaded and the connection to the broker seems to work.
I suspect that the json parser cannot process the payload, perhaps the payload is not in json format at all.
But as long as we don’t see the payload of the mqtt messages, we can only guess.
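If the payload turns out to be plain text, i.e. just a single raw number or string per message, then the `value` data format would be the parser to try instead of `json`. A sketch, with the `data_type` assumed:

```toml
# Sketch: parse each MQTT payload as one raw value instead of JSON.
[[inputs.mqtt_consumer]]
  servers = ["tcp://130.136.2.70:1883"]
  topics = ["#"]
  data_format = "value"
  data_type = "string"  # assumption; use "float" or "integer" if every payload is numeric
```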

I'm using the PubSubClient library to send data to the MQTT server from Arduino, so maybe its GitHub page gives us more info about the data format. If possible, would you please have a look at it?

I don't have the time to read the docs of this lib, but I assume that it does not produce JSON payloads.
You can check the mqtt message payloads if you go to the dashboard of your broker or connect another mqtt client application to your broker.


I solved the issue by changing these lines, but now I have a new issue in Influx (screenshot uploaded).
I also changed the aggregation function, but it just shows a straight line.

```toml
  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  # json_string_fields = ["uplink_message_frm_payload"]
  data_format = "value"
  data_type = "string"
```

As I already said, you have to know the payload of your mqtt messages.
Otherwise you won’t be able to choose and configure the parser.
Garbage in - garbage out :wink:
Forget about the influxdb dashboard for now.
Fix the data ingress in Telegraf first.
Add this for debugging purposes to your Telegraf config file:

```toml
# file output only for debugging
[[outputs.file]]
  files = ["mqtt.out"]
  influx_sort_fields = true
```

Run Telegraf and post a snippet of this file here.

@Franky1 I got two results for debugging, which I attached here.

```
2021-06-18T22:21:08Z I! Loaded inputs: mqtt_consumer
2021-06-18T22:21:08Z I! Loaded aggregators:
2021-06-18T22:21:08Z I! Loaded processors:
2021-06-18T22:21:08Z I! Loaded outputs: file influxdb_v2
2021-06-18T22:21:08Z I! Tags enabled: host=DESKTOP-4GIC52S
2021-06-18T22:21:08Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"DESKTOP-4GIC52S", Flush Interval:10s
2021-06-18T22:21:08Z D! [agent] Initializing plugins
2021-06-18T22:21:08Z D! [agent] Connecting outputs
2021-06-18T22:21:08Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2021-06-18T22:21:08Z D! [agent] Successfully connected to outputs.influxdb_v2
2021-06-18T22:21:08Z D! [agent] Attempting connection to [outputs.file]
2021-06-18T22:21:08Z D! [agent] Successfully connected to outputs.file
2021-06-18T22:21:08Z D! [agent] Starting service inputs
2021-06-18T22:21:08Z I! [inputs.mqtt_consumer] Connected [tcp://130.136.2.70:1883]
2021-06-18T22:21:19Z D! [outputs.file] Wrote batch of 10 metrics in 339.4µs
2021-06-18T22:21:19Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:21:19Z D! [outputs.influxdb_v2] Wrote batch of 10 metrics in 438.9345ms
2021-06-18T22:21:19Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:21:29Z D! [outputs.file] Wrote batch of 7 metrics in 0s
2021-06-18T22:21:29Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:21:29Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 257.531ms
2021-06-18T22:21:29Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:21:39Z D! [outputs.file] Wrote batch of 7 metrics in 0s
2021-06-18T22:21:39Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:21:39Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 189.1935ms
2021-06-18T22:21:39Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:21:49Z D! [outputs.file] Wrote batch of 8 metrics in 0s
2021-06-18T22:21:49Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:21:49Z D! [outputs.influxdb_v2] Wrote batch of 8 metrics in 215.267ms
2021-06-18T22:21:49Z D! [outputs.influxdb_v2] Buffer fullness: 2 / 10000 metrics
2021-06-18T22:21:59Z D! [outputs.file] Wrote batch of 13 metrics in 94.4µs
2021-06-18T22:21:59Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:21:59Z D! [outputs.influxdb_v2] Wrote batch of 13 metrics in 251.7228ms
2021-06-18T22:21:59Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:22:09Z D! [outputs.file] Wrote batch of 7 metrics in 0s
2021-06-18T22:22:09Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:22:09Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 278.0282ms
2021-06-18T22:22:09Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:22:19Z D! [outputs.file] Wrote batch of 7 metrics in 101.4µs
2021-06-18T22:22:19Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:22:19Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 200.8531ms
2021-06-18T22:22:19Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:22:29Z D! [outputs.file] Wrote batch of 7 metrics in 177.3µs
2021-06-18T22:22:29Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:22:29Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 228.5967ms
2021-06-18T22:22:29Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:22:39Z D! [outputs.file] Wrote batch of 7 metrics in 475.6µs
2021-06-18T22:22:39Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:22:39Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 259.2422ms
2021-06-18T22:22:39Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:22:49Z D! [outputs.file] Wrote batch of 7 metrics in 0s
2021-06-18T22:22:49Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:22:49Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 183.0992ms
2021-06-18T22:22:49Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:22:59Z D! [outputs.file] Wrote batch of 7 metrics in 0s
2021-06-18T22:22:59Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:22:59Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 213.4737ms
2021-06-18T22:22:59Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:23:09Z D! [outputs.file] Wrote batch of 7 metrics in 0s
2021-06-18T22:23:09Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:23:09Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 142.9851ms
2021-06-18T22:23:09Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:23:19Z D! [outputs.file] Wrote batch of 7 metrics in 0s
2021-06-18T22:23:19Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:23:19Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 143.3764ms
2021-06-18T22:23:19Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:23:29Z D! [outputs.file] Wrote batch of 7 metrics in 0s
2021-06-18T22:23:29Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:23:29Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 141.0954ms
2021-06-18T22:23:29Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:23:39Z D! [outputs.file] Wrote batch of 10 metrics in 0s
2021-06-18T22:23:39Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:23:39Z D! [outputs.influxdb_v2] Wrote batch of 10 metrics in 144.2035ms
2021-06-18T22:23:39Z D! [outputs.influxdb_v2] Buffer fullness: 2 / 10000 metrics
2021-06-18T22:23:49Z D! [outputs.file] Wrote batch of 11 metrics in 0s
2021-06-18T22:23:49Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:23:49Z D! [outputs.influxdb_v2] Wrote batch of 11 metrics in 143.4867ms
2021-06-18T22:23:49Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:23:59Z D! [outputs.file] Wrote batch of 5 metrics in 0s
2021-06-18T22:23:59Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:23:59Z D! [outputs.influxdb_v2] Wrote batch of 5 metrics in 139.0503ms
2021-06-18T22:23:59Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:24:09Z D! [outputs.file] Wrote batch of 9 metrics in 0s
2021-06-18T22:24:09Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:24:09Z D! [outputs.influxdb_v2] Wrote batch of 9 metrics in 141.6609ms
2021-06-18T22:24:09Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:24:19Z D! [outputs.file] Wrote batch of 7 metrics in 258.3µs
2021-06-18T22:24:19Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:24:19Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 139.807ms
2021-06-18T22:24:19Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:24:29Z D! [outputs.file] Wrote batch of 7 metrics in 0s
2021-06-18T22:24:29Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:24:29Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 143.5782ms
2021-06-18T22:24:29Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:24:39Z D! [outputs.file] Wrote batch of 7 metrics in 0s
2021-06-18T22:24:39Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:24:39Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 140.9941ms
2021-06-18T22:24:39Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:24:49Z D! [outputs.file] Wrote batch of 7 metrics in 0s
2021-06-18T22:24:49Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:24:49Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 142.5805ms
2021-06-18T22:24:49Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:24:59Z D! [outputs.file] Wrote batch of 7 metrics in 216.2µs
2021-06-18T22:24:59Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:24:59Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 141.9163ms
2021-06-18T22:24:59Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:25:09Z D! [outputs.file] Wrote batch of 7 metrics in 0s
2021-06-18T22:25:09Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:25:09Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 142.3981ms
2021-06-18T22:25:09Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:25:19Z D! [outputs.file] Wrote batch of 7 metrics in 0s
2021-06-18T22:25:19Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:25:19Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 142.7248ms
2021-06-18T22:25:19Z D! [outputs.influxdb_v2] Buffer fullness: 1 / 10000 metrics
2021-06-18T22:25:29Z D! [outputs.file] Wrote batch of 11 metrics in 222.2µs
2021-06-18T22:25:29Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:25:29Z D! [outputs.influxdb_v2] Wrote batch of 11 metrics in 141.0086ms
2021-06-18T22:25:29Z D! [outputs.influxdb_v2] Buffer fullness: 2 / 10000 metrics
2021-06-18T22:25:39Z D! [outputs.file] Wrote batch of 10 metrics in 134.2µs
2021-06-18T22:25:39Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:25:39Z D! [outputs.influxdb_v2] Wrote batch of 10 metrics in 140.8738ms
2021-06-18T22:25:39Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:25:49Z D! [outputs.file] Wrote batch of 7 metrics in 0s
2021-06-18T22:25:49Z D! [outputs.file] Buffer fullness: 0 / 10000 metrics
2021-06-18T22:25:49Z D! [outputs.influxdb_v2] Wrote batch of 7 metrics in 140.8107ms
2021-06-18T22:25:49Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
```

and another one:

```
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/mois value="22" 1624053607188255000
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/temp value="24.80" 1624053607699446300
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/hum value="51.90" 1624053607801700900
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/RSSI value="-52" 1624053607904044100
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/SHI value="19" 1624053608112040000
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/ID_SMART_POT value="001" 1624053608211545200
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/GPS value="44.490429839047955, 11.348976588503946" 1624053608211545200
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/mois value="22" 1624053616301435500
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/temp value="24.80" 1624053616814251900
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/hum value="52.00" 1624053617018706700
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/RSSI value="-53" 1624053617120988000
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/SHI value="19" 1624053617120988000
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/ID_SMART_POT value="001" 1624053617222709100
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/GPS value="44.490429839047955, 11.348976588503946" 1624053617325055000
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/mois value="22" 1624053625416580900
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/temp value="24.80" 1624053625926977600
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/hum value="52.10" 1624053626233996300
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/RSSI value="-54" 1624053626336185700
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/SHI value="19" 1624053626336185700
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/ID_SMART_POT value="001" 1624053626439376400
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/GPS value="44.490429839047955, 11.348976588503946" 1624053626543126600
mqtt_consumer,host=DESKTOP-4GIC52S,topic=Milad/mois value="22" 1624053634632338700
```

Also, I can see the data in table format in Influx, but only in the table view, nothing else :thinking:

I only need the second log.

The problem is that all payloads are read as strings. In addition, the payloads are also formatted differently. Strings can’t be displayed in the InfluxDB dashboard, of course, because how do you want to plot strings on a graph?

You have to convert the payloads into numeric values. There are several ways to do this. Personally I would format the data correctly as json right at the beginning in the ESP32 devices and then send it as mqtt message.
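For example, if the ESP32 published one JSON object per message, the `json` parser would then produce proper numeric fields right away. A sketch, where the combined topic name and the payload shape (e.g. `{"temp": 24.8, "hum": 51.9, "mois": 22, "ID_SMART_POT": "001"}`) are assumptions:

```toml
# Sketch, assuming the device publishes a single JSON object per message.
[[inputs.mqtt_consumer]]
  servers = ["tcp://130.136.2.70:1883"]
  topics = ["Milad/data"]                 # hypothetical combined topic
  data_format = "json"
  json_string_fields = ["ID_SMART_POT"]   # keep the ID as a string field
```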

Another possibility is to do the conversion in Telegraf.
Here is one possibility as an example.
This should be a good starting point.

```toml
[agent]
  omit_hostname = true  # add this to your agent config

[[inputs.mqtt_consumer]]
  name_override = "mqtt"  # add this to your mqtt config

[[processors.converter]]
  [processors.converter.fields]
    float = ["value"]
  [processors.converter.tagpass]
    topic = ["Milad/temp", "Milad/hum"]

[[processors.converter]]
  [processors.converter.fields]
    integer = ["value"]
  [processors.converter.tagpass]
    topic = ["Milad/mois", "Milad/RSSI", "Milad/SHI", "Milad/ID_SMART_POT"]

[[processors.parser]]
  parse_fields = ["value"]
  drop_original = false
  merge = "override"
  data_format = "csv"
  csv_column_names = ["lat", "lon"]
  csv_trim_space = true
  [processors.parser.tagpass]
    topic = ["Milad/GPS"]

[[outputs.file]]  # only for debugging
  files = ["mqtt.out"]
  influx_sort_fields = true
```

@Franky1 Thank you so much for your advice. I'll update my config.
Another question: in this project I have to set GPS and ID_SMART_POT as tags and the rest of the data as fields, so how can I do that?
Should I set them in the Telegraf config, or is it possible to set them in Influx?

I got an error with the new configuration.
Here is the config:

```toml
# Configuration for telegraf agent
[agent]
  ## Default data collection interval for all inputs
  interval = "10s"
  ## Rounds collection interval to 'interval'
  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
  round_interval = true

  ## Telegraf will send metrics to outputs in batches of at most
  ## metric_batch_size metrics.
  ## This controls the size of writes that Telegraf sends to output plugins.
  metric_batch_size = 1000

  ## For failed writes, telegraf will cache metric_buffer_limit metrics for each
  ## output, and will flush this buffer on a successful write. Oldest metrics
  ## are dropped first when this buffer fills.
  ## This buffer only fills when writes fail to output plugin(s).
  metric_buffer_limit = 10000

  ## Collection jitter is used to jitter the collection by a random amount.
  ## Each plugin will sleep for a random time within jitter before collecting.
  ## This can be used to avoid many plugins querying things like sysfs at the
  ## same time, which can have a measurable effect on the system.
  collection_jitter = "0s"

  ## Default flushing interval for all outputs. Maximum flush_interval will be
  ## flush_interval + flush_jitter
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. This is primarily to avoid
  ## large write spikes for users running a large number of telegraf instances.
  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"

  ## By default or when set to "0s", precision will be set to the same
  ## timestamp order as the collection interval, with the maximum being 1s.
  ##   ie, when interval = "10s", precision will be "1s"
  ##       when interval = "250ms", precision will be "1ms"
  ## Precision will NOT be used for service inputs. It is up to each individual
  ## service input to set the timestamp at the appropriate precision.
  ## Valid time units are "ns", "us" (or "µs"), "ms", "s".
  precision = ""

  ## Logging configuration:
  ## Run telegraf with debug log messages.
  debug = true
  ## Run telegraf in quiet mode (error log messages only).
  quiet = false
  ## Specify the log file name. The empty string means to log to stderr.
  logfile = ""

  ## Override default hostname, if empty use os.Hostname()
  hostname = "mqtt"
  ## If set to true, do not set the "host" tag in the telegraf agent.
  omit_hostname = true

[[outputs.influxdb_v2]]
  ## The URLs of the InfluxDB cluster nodes.
  ##
  ## Multiple URLs can be specified for a single cluster, only ONE of the
  ## urls will be written to each interval.
  ## urls exp: http://127.0.0.1:9999
  urls = ["https://europe-west1-1.gcp.cloud2.influxdata.com"]

  ## Token for authentication.
  token = "my token"

  ## Organization is the name of the organization you wish to write to; must exist.
  organization = "mvtkurd@gmail.com"

  ## Destination bucket to write into.
  bucket = "IOT PROJECT"

[[inputs.mqtt_consumer]]
  ## Broker URLs for the MQTT server or cluster.  To connect to multiple
  ## clusters or standalone servers, use a separate plugin instance.
  ##   example: servers = ["tcp://localhost:1883"]
  ##            servers = ["ssl://localhost:1883"]
  ##            servers = ["ws://localhost:1883"]
  servers = ["tcp://130.136.2.70:1883"]

  ## Topics that will be subscribed to.
  topics = ["#"]

  ## The message topic will be stored in a tag specified by this value.  If set
  ## to the empty string no topic tag will be created.
  # topic_tag = "GPS"

  ## QoS policy for messages
  ##   0 = at most once
  ##   1 = at least once
  ##   2 = exactly once
  ##
  ## When using a QoS of 1 or 2, you should enable persistent_session to allow
  ## resuming unacknowledged messages.
  # qos = 0

  ## Connection timeout for initial connection in seconds
  # connection_timeout = "30s"

  ## Maximum messages to read from the broker that have not been written by an
  ## output.  For best throughput set based on the number of metrics within
  ## each message and the size of the output's metric_batch_size.
  ##
  ## For example, if each message from the queue contains 10 metrics and the
  ## output metric_batch_size is 1000, setting this to 100 will ensure that a
  ## full batch is collected and the write is triggered immediately without
  ## waiting until the next flush_interval.
  # max_undelivered_messages = 1000

  ## Persistent session disables clearing of the client session on connection.
  ## In order for this option to work you must also set client_id to identify
  ## the client.  To receive messages that arrived while the client is offline,
  ## also set the qos option to 1 or 2 and don't forget to also set the QoS when
  ## publishing.
  persistent_session = false

  ## If unset, a random client ID will be generated.
  # client_id = ""

  ## Username and password to connect MQTT server.
  username = "my user"
  password = "my pass"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

[[processors.converter]]
  [processors.converter.fields]
    float = ["value"]
  [processors.converter.tagpass]
    topic = ["Milad/temp", "Milad/hum"]

[[processors.converter]]
  [processors.converter.fields]
    integer = ["value"]
  [processors.converter.tagpass]
    topic = ["Milad/mois", "Milad/RSSI", "Milad/SHI", "Milad/ID_SMART_POT"]

[[processors.parser]]
  parse_fields = ["value"]
  drop_original = false
  merge = "override"
  data_format = "csv"
  csv_column_names = ["lat", "lon"]
  csv_trim_space = true
  [processors.parser.tagpass]
    topic = ["Milad/GPS"]

[[outputs.file]]  # only for debugging
  files = ["mqtt5.out"]
  influx_sort_fields = true

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  # json_string_fields = ["uplink_message_frm_payload"]
  # data_format = "value"
  # data_type = "string"
  # influx_sort_fields = true
```


Hi @Milad,

Does this help?

@MarcV @Franky1 Thank you guys :heart_eyes: :+1:, it works correctly now. As a final question: how can I set GPS and ID_SMART_POT as tags and the rest of the data as fields?

Theoretically, this is also possible with a processors.converter plugin. But in this case it makes no sense, because the datapoint then only consists of tags and has no field anymore. A datapoint must have at least one field, otherwise it will be discarded.

```toml
[[processors.converter]]
  [processors.converter.fields]
    tag = ["value"]
  [processors.converter.tagpass]
    topic = ["Milad/ID_SMART_POT"]
# datapoint will be discarded after this step, because it has no field anymore
```

I don’t quite understand why you would use the GPS coordinates as the tag. Unless the value is fixed and does not change. But then I would map that with the processors.enum plugin if necessary.

```toml
[[processors.enum]]
  [[processors.enum.mapping]]
    field = "value"
    dest = "device"
    [processors.enum.mapping.value_mappings]
      "44.490429839047955, 11.348976588503946" = 1
  [processors.enum.tagpass]
    topic = ["Milad/GPS"]
```

Nevertheless, I still advocate solving the problems at the source. Have your devices send meaningful mqtt payloads right away and don’t fix the problems afterwards, it gets more and more complicated…


100%. I'd upvote this a million times if I could.
