XML Parser code not working

Hi all,

I have been trying to parse three values from separate URLs with the XML parser, but nothing shows up in the X2 bucket in InfluxDB. My config file already contains another input source (X1) that is saved to the X1 bucket in the same database. I have tried to separate the two input/output pairs via `tagexclude` and `tagpass`. Does anybody see where the problem lies?
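For context, the routing pattern I am going for looks roughly like this (simplified sketch, plugin bodies omitted): each input attaches a marker tag, and each output accepts only metrics carrying its tag via `tagpass`, stripping the routing tag with `tagexclude` before writing.

```toml
# Input tags its metrics with the routing tag ...
[[inputs.http]]
  # ...
  [inputs.http.tags]
    tag1 = "X1"

# ... and the matching output only accepts metrics carrying that tag,
# dropping the routing tag before writing to the X1 bucket.
[[outputs.influxdb_v2]]
  bucket = "X1"
  tagexclude = ["tag1"]
  [outputs.influxdb_v2.tagpass]
    tag1 = ["X1"]
```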

Many thanks!

Please see below my entire config file:

# Configuration for telegraf agent
[agent]
  ## Default data collection interval for all inputs
  interval = "5s"
  round_interval = true

  ## Telegraf will send metrics to outputs in batches of at most
  ## metric_batch_size metrics.
  ## This controls the size of writes that Telegraf sends to output plugins.
  metric_batch_size = 1000

  ## Maximum number of unwritten metrics per output.  Increasing this value
  ## allows for longer periods of output downtime without dropping metrics at the
  ## cost of higher maximum memory usage.
  metric_buffer_limit = 10000

  ## Collection jitter is used to jitter the collection by a random amount.
  ## Each plugin will sleep for a random time within jitter before collecting.
  ## This can be used to avoid many plugins querying things like sysfs at the
  ## same time, which can have a measurable effect on the system.
  collection_jitter = "0s"

  ## Default flushing interval for all outputs. Maximum flush_interval will be
  ## flush_interval + flush_jitter
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. This is primarily to avoid
  ## large write spikes for users running a large number of telegraf instances.
  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"

  ## By default or when set to "0s", precision will be set to the same
  ## timestamp order as the collection interval, with the maximum being 1s.
  ##   ie, when interval = "10s", precision will be "1s"
  ##       when interval = "250ms", precision will be "1ms"
  ## Precision will NOT be used for service inputs. It is up to each individual
  ## service input to set the timestamp at the appropriate precision.
  ## Valid time units are "ns", "us" (or "µs"), "ms", "s".
  precision = ""

  ## Log at debug level.
  # debug = false
  ## Log only error level messages.
  # quiet = false

  ## Log target controls the destination for logs and can be one of "file",
  ## "stderr" or, on Windows, "eventlog".  When set to "file", the output file
  ## is determined by the "logfile" setting.
  # logtarget = "file"

  ## Name of the file to be logged to when using the "file" logtarget.  If set to
  ## the empty string then logs are written to stderr.
  # logfile = ""

  ## The logfile will be rotated after the time interval specified.  When set
  ## to 0 no time based rotation is performed.  Logs are rotated only when
  ## written to, if there is no log activity rotation may be delayed.
  # logfile_rotation_interval = "0d"

  ## The logfile will be rotated when it becomes larger than the specified
  ## size.  When set to 0 no size based rotation is performed.
  # logfile_rotation_max_size = "0MB"

  ## Maximum number of rotated archives to keep, any older logs are deleted.
  ## If set to -1, no archives are removed.
  # logfile_rotation_max_archives = 5

  ## Pick a timezone to use when logging or type 'local' for local time.
  ## Example: America/Chicago
  # log_with_timezone = ""

  ## Override default hostname, if empty use os.Hostname()
  hostname = ""
  ## If set to true, do not set the "host" tag in the telegraf agent.
  omit_hostname = false
[[outputs.influxdb_v2]]

  urls = ["http://192.168.X.X:XXXX"]

  ## Token for authentication.
  token = "XXX"

  ## Organization is the name of the organization you wish to write to; must exist.
  organization = "XXX"

  ## Destination bucket to write into.
  bucket = "X1"
  
  tagexclude = ["tag1"]
  [outputs.influxdb_v2.tagpass]
  tag1 = ["X1"]



[[inputs.http]]
  urls = ["http://192.168.X.XX/getvar.csv"]
  data_format = "csv"
  csv_header_row_count = 1
  csv_measurement_column = "name"
  csv_tag_columns = ["id"]
  csv_column_types = ["string","float","string","string","string","string"]
  [inputs.socket_listener.tags]
  tag1 = "X1"

[[processors.converter]]
  [processors.converter.fields]
    float = ["val"]


#___________ X2 _________________


[[outputs.influxdb_v2]]

  urls = ["http://192.168.X.X:XXXX"]

  ## Token for authentication.
  token = "YYY"

  ## Organization is the name of the organization you wish to write to; must exist.
  organization = "XXX"

  ## Destination bucket to write into.
  bucket = "X2"
  
  tagexclude = ["tag2"]
  [outputs.influxdb_v2.tagpass]
  tag2 = ["X2"]



[[inputs.http]]
  urls = ["http://192.168.X.X/cgi-bin/webgui.fcgi?xmlitem=34.4001",
          "http://192.168.X.X/cgi-bin/webgui.fcgi?xmlitem=34.4022",
          "http://192.168.X.X/cgi-bin/webgui.fcgi?xmlitem=34.4033"]
  data_format = "xml"
  tagexclude = ["url", "host"]

  ## Multiple parsing sections are allowed
  [[inputs.tail.xml]]
    ## Optional: XPath-query to select a subset of nodes from the XML document.
    #metric_selection = "/pm5"

  [inputs.http.xml.tags]
      id = "id/@name"

    ## Non-integer field definitions using XPath queries.
    ## The field type is defined using XPath expressions such as number(), boolean() or string(). If no conversion is performed the field will be of type string.
    [inputs.http.xml.fields]
      type = "number(/item/@type)"
      label = "string(/item/@label)"
      unit = "string(/item/@unit)"
      value = "number(/item/@value)"

  [inputs.socket_listener.tags]
  tag2 = "X2"

The XML code looks like this:
[image: screenshot of the XML returned by the device]

Couple changes to try:

  1. Use the xpath parser to get rid of the deprecation warnings in the future
  2. Set the metric selection to point at the item found in pm5 via:
    metric_selection = "/pm5"

Using this xml:

<pm5>
    <item type="34" id="4033" label="T1" unit="C" value="28.3" />
</pm5>

and this config:

[[inputs.file]]
  files = ["metrics.json"]
  data_format = "xml"

  tagexclude = ["url", "host"]

  [[inputs.file.xpath]]
    metric_selection = "/pm5"

    [inputs.file.xpath.tags]
      id = "id/@name"

    [inputs.file.xpath.fields]
      type = "number(/item/@type)"
      label = "string(/item/@label)"
      unit = "string(/item/@unit)"
      value = "number(/item/@value)"

I got:

file type=34,label="T1",unit="C",value=28.3 1672760971000000000

You will need to change “file” to “http” to match your input plugin.
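As a quick sanity check outside Telegraf, you can confirm which attribute values the parser should pull out of that document with a few lines of Python's standard library (this is just an illustration of the expected result, not part of Telegraf):

```python
import xml.etree.ElementTree as ET

# Same sample document as above.
xml_doc = '<pm5><item type="34" id="4033" label="T1" unit="C" value="28.3" /></pm5>'

# Locate the <item> element under <pm5>.
item = ET.fromstring(xml_doc).find("item")

# Mirror the field definitions in the xpath config: type and value are
# converted to numbers, label and unit stay strings.
fields = {
    "type": float(item.get("type")),
    "label": item.get("label"),
    "unit": item.get("unit"),
    "value": float(item.get("value")),
}
print(fields)
```

Note that the line-protocol output above has no `id` tag; the tag query `id/@name` presumably matches nothing in this document, since the id is an attribute on `<item>` rather than a child element.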

Hi @jpowers ,

Many thanks for the reply!

I have adjusted my config as follows:

[[outputs.influxdb_v2]]

  urls = ["http://192.168.X.X:8086"]

  ## Token for authentication.
  token = "XXX"

  ## Organization is the name of the organization you wish to write to; must exist.
  organization = "XXX"

  ## Destination bucket to write into.
  bucket = "BAYROL"

  tagexclude =["tag2"]
  [outputs.influxdb_v2.tagpass]
	tag2 = ["BAYROL"]


[[inputs.http]]
  urls = ["http://192.168.X.X/cgi-bin/webgui.fcgi?xmlitem=34.4001"]
  data_format = "xml"

  ## Multiple parsing sections are allowed

  [[inputs.http.xpath]]
    metric_selection = "/pm5"

    [inputs.http.xpath.tags]
      id = "id/@name"

    [inputs.http.xpath.fields]
      type = "number(/item/@type)"
      label = "string(/item/@label)"
      unit = "string(/item/@unit)"
      value = "number(/item/@value)"
	
  [inputs.http.tags]
	tag2 = "BAYROL"

Thanks again.

Best,
Ben
