How to use the json_v2 parser

I am using Telegraf to transport data from an instance of "THE THINGS STACK Cloud" to InfluxDB Cloud 2.0. Telegraf is running on a small Debian 10 (buster) VM instance in Google Cloud. I am using the MQTT integration in THE THINGS STACK and am streaming data from one end device with two active sensors; the end device in question can have more than two sensors. I am "storing" the Telegraf config file in InfluxDB Cloud 2.0. The relevant parts of the Telegraf MQTT config file look like this:

 [[inputs.mqtt_consumer]]
  ## Broker URLs for the MQTT server or cluster.  To connect to multiple
  ## clusters or standalone servers, use a separate plugin instance.
  ##   example: servers = ["tcp://localhost:1883"]
  ##            servers = ["ssl://localhost:1883"]
  ##            servers = ["ws://localhost:1883"]
  servers = ["tcp://eu1.cloud.thethings.industries:1883"]

  ## Topics that will be subscribed to.
  topics = ["v3/red001@bluebeam/devices/eui-24e124445d083682/up"]

  ## The message topic will be stored in a tag specified by this value.  If set
  ## to the empty string no topic tag will be created.
  topic_tag = "only2002"

  ## QoS policy for messages
  ##   0 = at most once
  ##   1 = at least once
  ##   2 = exactly once
  ##
  ## When using a QoS of 1 or 2, you should enable persistent_session to allow
  ## resuming unacknowledged messages.
  qos = 0

  ## Connection timeout for initial connection in seconds
  connection_timeout = "30s"

  ## Maximum messages to read from the broker that have not been written by an
  ## output.  For best throughput set based on the number of metrics within
  ## each message and the size of the output's metric_batch_size.
  ##
  ## For example, if each message from the queue contains 10 metrics and the
  ## output metric_batch_size is 1000, setting this to 100 will ensure that a
  ## full batch is collected and the write is triggered immediately without
  ## waiting until the next flush_interval.
  # max_undelivered_messages = 1000

  ## Persistent session disables clearing of the client session on connection.
  ## In order for this option to work you must also set client_id to identify
  ## the client.  To receive messages that arrived while the client is offline,
  ## also set the qos option to 1 or 2 and don't forget to also set the QoS when
  ## publishing.
  # persistent_session = false

  ## If unset, a random client ID will be generated.
  # client_id = ""

  ## Username and password to connect MQTT server.
  username = "USER"
  password = "PASSWORD"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "json"

  ## Enable extracting tag values from MQTT topics
  ## _ denotes an ignored entry in the topic path
  ## Value supported is int, float, unit
  #   [[inputs.mqtt_consumer.topic.types]]
  #      key = type

The device defined under topics (topics = ["v3/red001@bluebeam/devices/eui-24e124445d083682/up"]) is streaming data to the InfluxDB 2.0 instance.
However, I want to parse out only the relevant signals from the message, which are:
"pt100_1": 20.1,
"pt100_2": 19.4
The JSON message from the relevant device in THE THINGS STACK looks like this:

{
  "name": "as.up.data.forward",
  "time": "2023-09-07T18:22:39.714926502Z",
  "identifiers": [
    {
      "device_ids": {
        "device_id": "eui-24e124445d083682",
        "application_ids": {
          "application_id": "red001"
        },
        "dev_eui": "24E124445D083682",
        "join_eui": "24E124C0002A0001",
        "dev_addr": "2608D24F"
      }
    }
  ],
  "data": {
    "@type": "type.googleapis.com/ttn.lorawan.v3.ApplicationUp",
    "end_device_ids": {
      "device_id": "eui-24e124445d083682",
      "application_ids": {
        "application_id": "red001"
      },
      "dev_eui": "24E124445D083682",
      "join_eui": "24E124C0002A0001",
      "dev_addr": "2608D24F"
    },
    "correlation_ids": [
      "as:up:01H9RE3J2XFC1FYKBCMZCZK6YY",
      "gs:conn:01H8XY3ATBBX3ZG68MPW3RDETS",
      "gs:up:host:01H8XY3ATHG3F50W0BA9HTKXKZ",
      "gs:uplink:01H9RE3HWBYA7W5F40M09N5RB7",
      "ns:uplink:01H9RE3HWCVKTM596ATEECP4K9",
      "rpc:/ttn.lorawan.v3.GsNs/HandleUplink:01H9RE3HWCG2V0Y8NJRJ0VSNS4",
      "rpc:/ttn.lorawan.v3.NsAs/HandleUplink:01H9RE3J2WZHXW52CW0R4Q0WY6"
    ],
    "received_at": "2023-09-07T18:22:39.709109152Z",
    "uplink_message": {
      "session_key_id": "AYgMSk4JwCpe2+phmK+1aw==",
      "f_port": 85,
      "f_cnt": 176280,
      "frm_payload": "AwAABAAABQAABgAABwEACAEACWfJAApnwgA=",
      "decoded_payload": {
        "gpio_in_1": "off",
        "gpio_in_2": "off",
        "gpio_in_3": "off",
        "gpio_in_4": "off",
        "gpio_out_1": "off",
        "gpio_out_2": "off",
        "pt100_1": 20.1,
        "pt100_2": 19.4
      },
      "rx_metadata": [
        {
          "gateway_ids": {
            "gateway_id": "eui-647fdafffe01549f",
            "eui": "647FDAFFFE01549F"
          },
          "time": "2023-09-07T18:22:39.391Z",
          "timestamp": 3709817516,
          "rssi": -108,
          "channel_rssi": -108,
          "snr": 1.5,
          "uplink_token": "CiIKIAoUZXVpLTY0N2ZkYWZmZmUwMTU0OWYSCGR/2v/+AVSfEKyl/egNGgwI76nopwYQtaGW7gEg4N/rkvz+ygEqDAjvqeinBhDA37i6AQ==",
          "channel_index": 6,
          "gps_time": "2023-09-07T18:22:39.391Z",
          "received_at": "2023-09-07T18:22:39.409261810Z"
        }
      ],
      "settings": {
        "data_rate": {
          "lora": {
            "bandwidth": 125000,
            "spreading_factor": 7,
            "coding_rate": "4/5"
          }
        },
        "frequency": "867700000",
        "timestamp": 3709817516,
        "time": "2023-09-07T18:22:39.391Z"
      },
      "received_at": "2023-09-07T18:22:39.500501394Z",
      "consumed_airtime": "0.082176s",
      "network_ids": {
        "net_id": "000013",
        "tenant_id": "bluebeam",
        "cluster_id": "eu1",
        "cluster_address": "eu1.cloud.thethings.industries",
        "tenant_address": "bluebeam.eu1.cloud.thethings.industries"
      }
    }
  },
  "correlation_ids": [
    "as:up:01H9RE3J2XFC1FYKBCMZCZK6YY",
    "gs:conn:01H8XY3ATBBX3ZG68MPW3RDETS",
    "gs:up:host:01H8XY3ATHG3F50W0BA9HTKXKZ",
    "gs:uplink:01H9RE3HWBYA7W5F40M09N5RB7",
    "ns:uplink:01H9RE3HWCVKTM596ATEECP4K9",
    "rpc:/ttn.lorawan.v3.GsNs/HandleUplink:01H9RE3HWCG2V0Y8NJRJ0VSNS4",
    "rpc:/ttn.lorawan.v3.NsAs/HandleUplink:01H9RE3J2WZHXW52CW0R4Q0WY6"
  ],
  "origin": "ip-10-23-6-94.eu-west-1.compute.internal",
  "context": {
    "tenant-id": "CghibHVlYmVhbQ=="
  },
  "visibility": {
    "rights": [
      "RIGHT_APPLICATION_TRAFFIC_READ"
    ]
  },
  "unique_id": "01H9RE3J32D2KCS6N1TZC13KVF"
}

I have tried several times to use the Telegraf json_v2 parser, without any luck. Can anybody support me?

Hello @Luhrern,
What would be extra helpful is if you can share your expected output in line protocol.

You can use the json plugin too, specifically the query option:

I think yours would look like:
json_query = "data.uplink_message.decoded_payload.pt100*"

If you wanted just the pt values and nothing else.
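Here is a minimal sketch of how that could look in the json parser config. Note this is an assumption on my part: a GJSON key wildcard like pt100* only returns the first match, so it may be safer to select the whole decoded_payload object with json_query and then filter with Telegraf's standard fieldpass option:

```toml
[[inputs.mqtt_consumer]]
  servers = ["tcp://eu1.cloud.thethings.industries:1883"]
  topics = ["v3/red001@bluebeam/devices/eui-24e124445d083682/up"]
  username = "USER"
  password = "PASSWORD"

  data_format = "json"
  ## Select only the decoded payload object from the uplink message
  json_query = "data.uplink_message.decoded_payload"
  ## Standard Telegraf metric filtering: keep only the pt100 fields.
  ## The gpio_* values are strings and are dropped by the json parser
  ## anyway unless listed in json_string_fields.
  fieldpass = ["pt100_*"]
```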

The query syntax supported is GJSON Path Syntax, you can go to this playground to test out your GJSON path here: gjson.dev/.

You can use that for both the json and json_v2 parsers.
With the json_v2 parser you can specify a path that way for the following:

  • measurement_name (OPTIONAL): Will set the measurement name to the provided string.
  • measurement_name_path (OPTIONAL): You can define a query with GJSON Path Syntax to set a measurement name from the JSON input. The query must return a single data value or it will use the default measurement name. This takes precedence over measurement_name.
  • timestamp_path (OPTIONAL): You can define a query with GJSON Path Syntax to set a timestamp from the JSON input. The query must return a single data value or it will default to the current time.
  • timestamp_format (OPTIONAL, but REQUIRED when timestamp_path is defined): Must be set to unix, unix_ms, unix_us, unix_ns, or the Go "reference time", which is defined to be the specific time: Mon Jan 2 15:04:05 MST 2006
  • timestamp_timezone (OPTIONAL, but requires timestamp_path): This option should be set to a Unix TZ value, such as America/New_York, to Local to utilize the system timezone, or to UTC. Defaults to UTC.

The same GJSON path syntax works for tags and fields.

But I really recommend using the playground to test out your paths with your given json.
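For this payload, a json_v2 configuration might look like the sketch below (untested; the field paths come from your JSON above, while the measurement name and the choice of data.received_at as the timestamp are my assumptions):

```toml
[[inputs.mqtt_consumer]]
  servers = ["tcp://eu1.cloud.thethings.industries:1883"]
  topics = ["v3/red001@bluebeam/devices/eui-24e124445d083682/up"]
  username = "USER"
  password = "PASSWORD"
  data_format = "json_v2"

  [[inputs.mqtt_consumer.json_v2]]
    ## Assumed measurement name; pick whatever fits your schema
    measurement_name = "pt100"
    ## Use the uplink's received_at as the metric timestamp
    ## (RFC 3339, written in Go reference-time notation)
    timestamp_path = "data.received_at"
    timestamp_format = "2006-01-02T15:04:05Z07:00"

    [[inputs.mqtt_consumer.json_v2.field]]
      path = "data.uplink_message.decoded_payload.pt100_1"
      type = "float"

    [[inputs.mqtt_consumer.json_v2.field]]
      path = "data.uplink_message.decoded_payload.pt100_2"
      type = "float"
```

With the sample message above, this should produce line protocol along the lines of pt100,only2002=v3/red001@bluebeam/devices/eui-24e124445d083682/up pt100_1=20.1,pt100_2=19.4 <timestamp>.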