How to avoid using a field? (InfluxDB - Telegraf)

Hi, I wish I didn’t need to send the “lat” and “lon” fields all the time, but when I set a default value and don’t include “lat” and “lon” in my MQTT message, the values don’t go into InfluxDB.

The code:

The error:

E! [inputs.mqtt_consumer] Error in plugin: the path fields.lon doesn't exist

Hi,

Where did you see this “default” setting? To be honest, I did not think that was something we had.

If you want to drop the fields, then you can use metric modifiers, like fielddrop, to make sure those two fields are never used.
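For example, a minimal sketch of dropping the fields at the input (the server and topic are taken from your config; note that newer Telegraf releases deprecate fielddrop in favour of fieldexclude):

```toml
# Hypothetical sketch: drop lat/lon so they are never written.
# "fielddrop" is the older name; recent versions use "fieldexclude".
# Both accept glob patterns.
[[inputs.mqtt_consumer]]
  servers = ["tcp://10.107.10.98:1883"]
  topics = ["ardas_0000-028/data"]
  data_format = "json_v2"
  fielddrop = ["lat", "lon"]
```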

If you want to include the field, then using a defaults processor to set up some default values would help.
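And a minimal defaults-processor sketch (the field names and values are assumed from your question):

```toml
# Hypothetical sketch: supply lat/lon whenever the incoming metric
# lacks them; metrics that already carry these fields are untouched.
[[processors.defaults]]
  [processors.defaults.fields]
    lat = 50.452151
    lon = 3.953129
```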

I don’t know; it was on another forum.

So if I want to use “lat” and “lon” in a static way, I just need to use the defaults processor?
Will this work with MQTT and then show the info in InfluxDB?

Yes

Will this work with MQTT and then show the info in InfluxDB?

Yes, note that processors operate on all metrics. So you may want to use a namepass to only send in the mqtt metrics if you have other metrics.
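For instance, assuming the MQTT metrics keep the parser’s default measurement name mqtt_consumer (it will differ if measurement_name is set), a sketch might be:

```toml
# Hypothetical sketch: namepass restricts the processor to the named
# metrics, so metrics from other inputs (e.g. tail) pass through untouched.
[[processors.defaults]]
  namepass = ["mqtt_consumer"]
  [processors.defaults.fields]
    lat = 50.452151
    lon = 3.953129
```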

I think I’m using it badly, because it stopped with this code.

I think I’m using it badly, because it stopped with this code.

Logs and an example of the data you are using, or the output before the defaults processor, would help please.

Logs with processors.defaults:

And without it, you can see this is working pretty well:
(I can’t post two media items because I’m a new member.)

The complete telegraf.conf:

[agent]
omit_hostname = true
debug = true
logfile = "/var/log/telegraf/telegraf.log"
#[[inputs.file]]
  ## Path to the file from the container.
  ## TODO: eventually mount a volume to the /temp/ directory in case more files should be uploaded.
  #files = ["/etc/telegraf/0000"]
  #data_format = "influx"

# Parse the new lines appended to a file
[[inputs.tail]]
  ## File names or a pattern to tail.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". ie:
  ##   "/var/log/**.log"  -> recursively find all .log files in /var/log
  ##   "/var/log/*/*.log" -> find all .log files with a parent dir in /var/log
  ##   "/var/log/apache.log" -> just tail the apache log file
  ##   "/var/log/log[!1-2]*  -> tail files without 1-2
  ##   "/var/log/log[^1-2]*  -> identical behavior as above
  ## See https://github.com/gobwas/glob for more examples
  ##
  files = ["/etc/telegraf/splitted_export_files/01"]

  ## Read file from beginning.
  from_beginning = true

  ## Whether file is a named pipe
  # pipe = false

  ## Method used to watch for file updates.  Can be either "inotify" or "poll".
  watch_method = "inotify"

  ## Maximum lines of the file to process that have not yet be written by the
  ## output.  For best throughput set based on the number of metrics on each
  ## line and the size of the output's metric_batch_size.
  max_undelivered_lines = 10000

  ## Character encoding to use when interpreting the file contents.  Invalid
  ## characters are replaced using the unicode replacement character.  When set
  ## to the empty string the data is not decoded to text.
  ##   ex: character_encoding = "utf-8"
  ##       character_encoding = "utf-16le"
  ##       character_encoding = "utf-16be"
  ##       character_encoding = ""
  # character_encoding = ""

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"

  ## Set the tag that will contain the path of the tailed file. If you don't want this tag, set it to an empty string.
  path_tag = ""

  ## Filters to apply to files before generating metrics
  ## "ansi_color" removes ANSI colors
  # filters = []

  ## multiline parser/codec
  ## https://www.elastic.co/guide/en/logstash/2.4/plugins-filters-multiline.html
  #[inputs.tail.multiline]
    ## The pattern should be a regexp which matches what you believe to be an indicator that the field is part of an event consisting of multiple lines of log data.
    #pattern = "^\s"

    ## The field's value must be previous or next and indicates the relation to the
    ## multi-line event.
    #match_which_line = "previous"

    ## The invert_match can be true or false (defaults to false).
    ## If true, a message not matching the pattern will constitute a match of the multiline filter and the what will be applied. (vice-versa is also true)
    #invert_match = false

    #After the specified timeout, this plugin sends the multiline event even if no new pattern is found to start a new event. The default is 5s.
    #timeout = 5s

[[inputs.mqtt_consumer]]
  servers = ["tcp://10.107.10.98:1883"]
  topics = [ "ardas_0000-028/data","bardas_001/#" ]
  data_format = "json_v2"

  username = "mqttuser"
  password = "mqttpassword"

  # Parsing the JSON data
  [[inputs.mqtt_consumer.json_v2]]

    #measurement_name = "" # A string that will become the new measurement name
    #measurement_name_path = "" # A string with valid GJSON path syntax, will override measurement_name
    timestamp_path = "time"
     # A string with valid GJSON path syntax to a valid timestamp (single value)
    
    timestamp_format = "unix" 
    # A string with a valid timestamp format (see below for possible values)
    
    timestamp_timezone = "UTC"
    # A string with with a valid timezone (see below for possible values)

    [[inputs.mqtt_consumer.json_v2.field]]
        path = "fields.value"
        # A string with valid GJSON path syntax
	
        #rename = "new name" # A string with a new name for the tag key
        type = "float"
        # A string specifying the type (int,uint,float,string,bool)
    [[inputs.mqtt_consumer.json_v2.field]]
        path = "fields.lon"
        type = "float"
#       optional = true
#       value = [3.953129,3.952439]
    [[inputs.mqtt_consumer.json_v2.field]]
        path = "fields.lat"
        type = "float"
#       optional = true
#       value = [50.452637,50.452151]
    [[inputs.mqtt_consumer.json_v2.tag]]
        path = "tags.sensor"

 
#[[processors.defaults]]

#    [processors.defaults.fields]
#	fields.value = "0"
#	fields.lon = "3.953129"
#	fields.lat = "50.452151"
       # A string with valid GJSON path syntax
        #rename = "new name" # A string with a new name for the tag key

    #[[inputs.mqtt_consumer.json_v2.object]]
    #    path = "" # A string with valid GJSON path syntax
    #    timestamp_key = "" # A JSON key (for a nested key, prepend the parent keys with underscores) to a valid timestamp
    #    timestamp_format = "" # A string with a valid timestamp format (see below for possible values)
    #    timestamp_timezone = "" # A string with with a valid timezone (see below for possible values)
    #    disable_prepend_keys = false (or true, just not both)
    #    included_keys = [] # List of JSON keys (for a nested key, prepend the parent keys with underscores) that should be only included in result
    #    excluded_keys = [] # List of JSON keys (for a nested key, prepend the parent keys with underscores) that shouldn't be included in result
    #    tags = [] # List of JSON keys (for a nested key, prepend the parent keys with underscores) to be a tag instead of a field
    #    [inputs.file.json_v2.object.renames] # A map of JSON keys (for a nested key, prepend the parent keys with underscores) with a new name for the tag key
    #        key = "new name"
    #    [inputs.file.json_v2.object.fields] # A map of JSON keys (for a nested key, prepend the parent keys with underscores) with a type (int,uint,float,string,bool)
    #        key = "int"
[[outputs.influxdb_v2]]
  # For the URL, could also use mg3d-dev's IP adress but docker-compose allows us to use an internal network
  urls = ["http://influx-app:8086"]
  token = "token nom"
  organization = "my-org"  
  namepass = []
  # bucket = "my-bucket"
  bucket = "bucket"


(I can’t post two media items because I’m a new member.)

Avoid screenshots; copy and paste wrapped in three ``` symbols is best.

Consider this config:

[agent]
   omit_hostname = true

[[inputs.exec]]
  commands = ["echo metric,tag=tag1 value=42"]
  data_format = "influx"

[[outputs.file]]

it produces a metric like:

metric,tag=tag1 value=42 1679069672000000000

Then if I want to add those lat and long fields I would add:

[[processors.defaults]]
  [processors.defaults.fields]
    lon = 3.953129
    lat = 50.452151

which produces:

metric,tag=tag1 value=42,lon=3.953129,lat=50.452151 1679069763000000000

Is that what you are after?

Thanks, it works, but do you have any idea what I’m supposed to do if I want each sensor to have a different lat/lon? Because right now they all get the same processors.defaults.fields values.

How many sensors do you have?
Are they all in the same config, or different configs?
How would you differentiate between them?

I have 4 sensors.
They all have the same configuration.
I differentiate them with “sensor” tags.

But now that I have commented out (#) the previous “lat” and “lon” field lines so I could get them from processors.defaults, I don’t know how to do it.

Example:

 [[inputs.mqtt_consumer.json_v2.field]]
        path = "fields.value"
        # A string with valid GJSON path syntax

        #rename = "new name" # A string with a new name for the tag key
        type = "float"
        # A string specifying the type (int,uint,float,string,bool)
#    [[inputs.mqtt_consumer.json_v2.field]]
#        path = "fields.lon"
#       type ="float"
#       optional = true
#       value = [3.953129,3.952439]
#    [[inputs.mqtt_consumer.json_v2.field]]
#        path = "fields.lat"
#       type = "float"
#       optional = true
#       value = [50.452637,50.452151]
    [[inputs.mqtt_consumer.json_v2.tag]]
        path = "tags.sensor"

 
[[processors.defaults]]

    [processors.defaults.fields]
#       fields.value = "0"
        lon = 3.953129
        lat = 50.452151

I differentiate them with “sensor” tags.

What I would do is have 4 different processors set up with metric selectors, so only metrics with specific tags get what they need:

[[processors.defaults]]
  tagpass = {"sensor" = ["sensor1name"]}
  [processors.defaults.fields]
    lon = 3.953129
    lat = 50.452151

[[processors.defaults]]
  tagpass = {"sensor" = ["sensor2name"]}
  [processors.defaults.fields]
    lon = 44.22444
    lat = 13.22992
....

Unfortunately, when I put in this line it doesn’t start; it keeps rebooting.

root@mg3d-dev:~/rcl_monitoring/telegraf_config# docker exec -it rcl_monitoring_telegraf-plugin_1 tail -f /var/log/telegraf/telegraf.log
2023-03-20T14:15:55Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2023-03-20T14:15:56Z D! [agent] Stopping service inputs
2023-03-20T14:15:56Z D! [inputs.mqtt_consumer] Disconnecting [tcp://10.107.10.98:1883]
2023-03-20T14:15:56Z D! [inputs.mqtt_consumer] Disconnected [tcp://10.107.10.98:1883]
2023-03-20T14:15:56Z D! [agent] Input channel closed
2023-03-20T14:15:56Z D! [agent] Processor channel closed
2023-03-20T14:15:56Z I! [agent] Hang on, flushing any cached metrics before shutdown
2023-03-20T14:15:56Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2023-03-20T14:15:56Z I! [agent] Stopping running outputs
2023-03-20T14:15:56Z D! [agent] Stopped Successfully

2023-03-20T14:15:56Z D! [inputs.mqtt_consumer] Disconnecting [tcp://10.107.10.98:1883]
2023-03-20T14:15:56Z D! [inputs.mqtt_consumer] Disconnected [tcp://10.107.10.98:1883]

if you cannot connect to an input, telegraf will not start in most cases.

I would also suggest including the full logs, so either run this outside of Docker during development or capture everything, not just the last few lines.
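One way to do that during development is Telegraf’s --test mode, which parses the config, runs the inputs once, and prints metrics to stdout instead of starting the agent (the config path here is assumed from your container setup):

```shell
# Run once and print gathered metrics; any config error is reported
# immediately instead of the container silently restarting.
telegraf --config /etc/telegraf/telegraf.conf --test
```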

Without tagpass:

2023-03-20T14:45:29Z I! Starting Telegraf 1.23.4
2023-03-20T14:45:29Z I! Loaded inputs: mqtt_consumer tail
2023-03-20T14:45:29Z I! Loaded aggregators: 
2023-03-20T14:45:29Z I! Loaded processors: defaults
2023-03-20T14:45:29Z I! Loaded outputs: influxdb_v2
2023-03-20T14:45:29Z I! Tags enabled: 
2023-03-20T14:45:29Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"", Flush Interval:10s
2023-03-20T14:45:29Z D! [agent] Initializing plugins
2023-03-20T14:45:29Z D! [agent] Connecting outputs
2023-03-20T14:45:29Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2023-03-20T14:45:29Z D! [agent] Successfully connected to outputs.influxdb_v2
2023-03-20T14:45:29Z D! [agent] Starting service inputs
2023-03-20T14:45:29Z I! [inputs.mqtt_consumer] Connected [tcp://10.107.10.98:1883]
2023-03-20T14:45:39Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2023-03-20T14:45:49Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics

With tagpass:

2023-03-20T14:47:49Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2023-03-20T14:47:58Z D! [agent] Stopping service inputs
2023-03-20T14:47:58Z D! [inputs.mqtt_consumer] Disconnecting [tcp://10.107.10.98:1883]
2023-03-20T14:47:58Z D! [inputs.mqtt_consumer] Disconnected [tcp://10.107.10.98:1883]
2023-03-20T14:47:58Z D! [agent] Input channel closed
2023-03-20T14:47:58Z D! [agent] Processor channel closed
2023-03-20T14:47:58Z I! [agent] Hang on, flushing any cached metrics before shutdown
2023-03-20T14:47:58Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2023-03-20T14:47:58Z I! [agent] Stopping running outputs
2023-03-20T14:47:58Z D! [agent] Stopped Successfully


It just doesn’t want to connect to the input with the tagpass line.

Sorry, but the with-tagpass logs are not complete. Can you:

a) provide the full log? Where are the startup message and the list of inputs?
b) provide the full config as well?

So here is the full telegraf.conf :

root@mg3d-dev:~/rcl_monitoring/telegraf_config# cat telegraf.conf
[agent]
omit_hostname = true
debug = true
logfile = "/var/log/telegraf/telegraf.log"
#[[inputs.file]]
  ## Path to the file from the container.
  ## TODO: eventually mount a volume to the /temp/ directory in case more files should be uploaded.
  #files = ["/etc/telegraf/0000"]
  #data_format = "influx"

# Parse the new lines appended to a file
[[inputs.tail]]
  ## File names or a pattern to tail.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". ie:
  ##   "/var/log/**.log"  -> recursively find all .log files in /var/log
  ##   "/var/log/*/*.log" -> find all .log files with a parent dir in /var/log
  ##   "/var/log/apache.log" -> just tail the apache log file
  ##   "/var/log/log[!1-2]*  -> tail files without 1-2
  ##   "/var/log/log[^1-2]*  -> identical behavior as above
  ## See https://github.com/gobwas/glob for more examples
  ##
  files = ["/etc/telegraf/splitted_export_files/01"]

  ## Read file from beginning.
  from_beginning = true

  ## Whether file is a named pipe
  # pipe = false

  ## Method used to watch for file updates.  Can be either "inotify" or "poll".
  watch_method = "inotify"

  ## Maximum lines of the file to process that have not yet be written by the
  ## output.  For best throughput set based on the number of metrics on each
  ## line and the size of the output's metric_batch_size.
  max_undelivered_lines = 10000

  ## Character encoding to use when interpreting the file contents.  Invalid
  ## characters are replaced using the unicode replacement character.  When set
  ## to the empty string the data is not decoded to text.
  ##   ex: character_encoding = "utf-8"
  ##       character_encoding = "utf-16le"
  ##       character_encoding = "utf-16be"
  ##       character_encoding = ""
  # character_encoding = ""

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"

  ## Set the tag that will contain the path of the tailed file. If you don't want this tag, set it to an empty string.
  path_tag = ""

  ## Filters to apply to files before generating metrics
  ## "ansi_color" removes ANSI colors
  # filters = []

  ## multiline parser/codec
  ## https://www.elastic.co/guide/en/logstash/2.4/plugins-filters-multiline.html
  #[inputs.tail.multiline]
    ## The pattern should be a regexp which matches what you believe to be an indicator that the field is part of an event consisting of multiple lines of log data.
    #pattern = "^\s"

    ## The field's value must be previous or next and indicates the relation to the
    ## multi-line event.
    #match_which_line = "previous"

    ## The invert_match can be true or false (defaults to false).
    ## If true, a message not matching the pattern will constitute a match of the multiline filter and the what will be applied. (vice-versa is also true)
    #invert_match = false

    #After the specified timeout, this plugin sends the multiline event even if no new pattern is found to start a new event. The default is 5s.
    #timeout = 5s

[[inputs.mqtt_consumer]]
  servers = ["tcp://10.107.10.98:1883"]
  topics = [ "ardas_0000-028/data","bardas_001/#" ]
  data_format = "json_v2"

  username = "mqttuser"
  password = "mqttpassword"

  # Parsing the JSON data
  [[inputs.mqtt_consumer.json_v2]]

    #measurement_name = "" # A string that will become the new measurement name
    #measurement_name_path = "" # A string with valid GJSON path syntax, will override measurement_name
    timestamp_path = "time"
     # A string with valid GJSON path syntax to a valid timestamp (single value)
    
    timestamp_format = "unix" 
    # A string with a valid timestamp format (see below for possible values)
    
    timestamp_timezone = "UTC"
    # A string with with a valid timezone (see below for possible values)

    [[inputs.mqtt_consumer.json_v2.field]]
        path = "fields.value"
        # A string with valid GJSON path syntax
	
        #rename = "new name" # A string with a new name for the tag key
        type = "float"
        # A string specifying the type (int,uint,float,string,bool)
#    [[inputs.mqtt_consumer.json_v2.field]]
#        path = "fields.lon"
#	type ="float"
#	optional = true
#	value = [3.953129,3.952439]
#    [[inputs.mqtt_consumer.json_v2.field]]
#        path = "fields.lat"
#	type = "float"
#	optional = true
#	value = [50.452637,50.452151]
    [[inputs.mqtt_consumer.json_v2.tag]]
        path = "tags.sensor"

 
[[processors.defaults]]
    tagpass = {"sensor" = "255-1113"}
    [processors.defaults.fields]
#	fields.value = "0"
	lon = 3.953129
	lat = 50.452151

#[[processors.defaults]]
#    tagpass = {"sensor" = "255-1114"}
#    [processors.defaults.fields]
#       fields.value = "0"
#        lon = 5.953129
#        lat = 50.452151
       # A string with valid GJSON path syntax
        #rename = "new name" # A string with a new name for the tag key

    #[[inputs.mqtt_consumer.json_v2.object]]
    #    path = "" # A string with valid GJSON path syntax
    #    timestamp_key = "" # A JSON key (for a nested key, prepend the parent keys with underscores) to a valid timestamp
    #    timestamp_format = "" # A string with a valid timestamp format (see below for possible values)
    #    timestamp_timezone = "" # A string with with a valid timezone (see below for possible values)
    #    disable_prepend_keys = false (or true, just not both)
    #    included_keys = [] # List of JSON keys (for a nested key, prepend the parent keys with underscores) that should be only included in result
    #    excluded_keys = [] # List of JSON keys (for a nested key, prepend the parent keys with underscores) that shouldn't be included in result
    #    tags = [] # List of JSON keys (for a nested key, prepend the parent keys with underscores) to be a tag instead of a field
    #    [inputs.file.json_v2.object.renames] # A map of JSON keys (for a nested key, prepend the parent keys with underscores) with a new name for the tag key
    #        key = "new name"
    #    [inputs.file.json_v2.object.fields] # A map of JSON keys (for a nested key, prepend the parent keys with underscores) with a type (int,uint,float,string,bool)
    #        key = "int"
[[outputs.influxdb_v2]]
  # For the URL, could also use mg3d-dev's IP adress but docker-compose allows us to use an internal network
  urls = ["http://influx-app:8086"]
  token = "token code"
  organization = "my-org"  
  namepass = []
  # bucket = "my-bucket"
  bucket = "import_from_file_telegraf_20221003"

And the full log (the beginning is without tagpass, but after that it just keeps restarting so I can’t get more logs):

2023-03-21T07:57:50Z D! [agent] Starting service inputs // (previous logs without tagpass)
2023-03-21T07:57:50Z I! [inputs.mqtt_consumer] Connected [tcp://10.107.10.98:1883]
2023-03-21T07:58:01Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2023-03-21T07:58:11Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2023-03-21T07:58:17Z D! [agent] Stopping service inputs
2023-03-21T07:58:17Z D! [inputs.mqtt_consumer] Disconnecting [tcp://10.107.10.98:1883]
2023-03-21T07:58:17Z D! [inputs.mqtt_consumer] Disconnected [tcp://10.107.10.98:1883]
2023-03-21T07:58:17Z D! [agent] Input channel closed
2023-03-21T07:58:17Z D! [agent] Processor channel closed
2023-03-21T07:58:17Z I! [agent] Hang on, flushing any cached metrics before shutdown
2023-03-21T07:58:17Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2023-03-21T07:58:17Z I! [agent] Stopping running outputs
2023-03-21T07:58:17Z D! [agent] Stopped Successfully
root@mg3d-dev:~/rcl_monitoring/telegraf_config# 


Using your config shows an error:

tagpass = {"sensor" = "255-1113"}

2023-03-21T13:32:29Z E! error loading config file config.toml: error parsing defaults, line 31:{767 828}: found unexpected format while parsing “tagpass”, expecting string array/slice format on each entry

Change the format to:

tagpass = {"sensor" = ["255-1113"]}
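Applied to your config, the full corrected processor block would be:

```toml
[[processors.defaults]]
  tagpass = {"sensor" = ["255-1113"]}   # note the array brackets around the value
  [processors.defaults.fields]
    lon = 3.953129
    lat = 50.452151
```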

I’ve updated my original response as well.

Thank you,

Now this is working well.
