I’m using the Telegraf 1.27.2-alpine Docker container to connect to a Juniper network device. Initially I was receiving data like this:
{
"fields": {
"component/properties/property/state/value": 56030424
},
"name": "memory",
"tags": {
"/components/component/properties/property/name": "mem-util-packet-dma-bytes-allocated",
"host": "telegraf-agent",
"source": "device.mgt.net",
"tag-name": "FPC0:CPU0"
},
"timestamp": 1689125760
}
{
"fields": {
"component/properties/property/state/value": 49
},
"name": "memory",
"tags": {
"/components/component/properties/property/name": "mem-util-packet-dma-utilization",
"host": "telegraf-agent",
"source": "device.mgt.net",
"tag-name": "FPC0:CPU0"
},
"timestamp": 1689125760
}
I then applied the pivot processor to map the “…/property/name” tag onto the “…/property/state/value” field, which gives me data like this:
{
"fields": {
"mem-util-kernel-fpb-bytes-allocated": 56
},
"name": "memory",
"tags": {
"host": "telegraf-agent",
"source": "device.mgt.net",
"tag-name": "FPC0:CPU0"
},
"timestamp": 1689125850
}
{
"fields": {
"mem-util-kernel-fpb-allocations": 4
},
"name": "memory",
"tags": {
"host": "telegraf-agent",
"source": "device.mgt.net",
"tag-name": "FPC0:CPU0"
},
"timestamp": 1689125850
}
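Rendered as line protocol (my reading of the JSON above; the nanosecond timestamp is an assumption, since the JSON output only shows seconds), the two pivoted metrics now share the same measurement, tag set and timestamp and differ only in their field:
memory,host=telegraf-agent,source=device.mgt.net,tag-name=FPC0:CPU0 mem-util-kernel-fpb-bytes-allocated=56 1689125850000000000
memory,host=telegraf-agent,source=device.mgt.net,tag-name=FPC0:CPU0 mem-util-kernel-fpb-allocations=4 1689125850000000000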
Now I would like to use the merge aggregator to combine those fields into a single metric, so the output should look like this:
{
"fields": {
"mem-util-kernel-fpb-bytes-allocated": 56,
"mem-util-kernel-fpb-allocations": 4
},
"name": "memory",
"tags": {
"host": "telegraf-agent",
"source": "device.mgt.net",
"tag-name": "FPC0:CPU0"
},
"timestamp": 1689125850
}
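In the same line-protocol rendering, that would be a single metric carrying both fields:
memory,host=telegraf-agent,source=device.mgt.net,tag-name=FPC0:CPU0 mem-util-kernel-fpb-bytes-allocated=56,mem-util-kernel-fpb-allocations=4 1689125850000000000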
However, merge is not working: each field still comes through as its own separate event.
This is what my Telegraf conf looks like:
[global_tags]
[agent]
round_interval = true
metric_batch_size = 1000
metric_buffer_limit = 150000
flush_interval = "10s"
debug = true
quiet = false
hostname = "$containerName-telegraf-agent"
omit_hostname = false
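# Subscribe to Juniper linecard CPU memory statistics over gNMI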
[[inputs.gnmi]]
addresses = ["device.mgt.net:50051"]
username = "$user"
password = "$password"
redial = "10s"
enable_tls = true
tls_ca = "/etc/telegraf/router_ca.pem"
insecure_skip_verify = true
tagexclude = ["path"]
[inputs.gnmi.aliases]
memory = "/components"
[[inputs.gnmi.subscription]]
name = "memory"
origin = "openconfig"
path = "/junos/system/linecard/cpu/memory"
subscription_mode = "sample"
sample_interval = "10s"
# Rotate a single-valued metric into a multi-field metric
[[processors.pivot]]
namepass = ["memory"]
## Tag to use for naming the new field.
tag_key = "/components/component/properties/property/name"
## Field to use as the value of the new field.
value_key = "component/properties/property/state/value"
# Perform some field and tag name changes
[[processors.rename]]
[[processors.rename.replace]]
field = "name"
dest = "field-name"
[[processors.rename.replace]]
tag = "name"
dest = "tag-name"
# Set the timestamp to nanosecond precision so the merge aggregator doesn't throw errors.
[[processors.starlark]]
namepass = ["qmon", "routingEngine", "memory"]
source = '''
load('time.star', 'time')
def apply(metric):
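    # Overwrite the device-reported timestamp with the agent's current time in nanoseconds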
    metric.time = time.now().unix_nano
    return metric
'''
# Aggregate fields for the following sensors
[[aggregators.merge]]
namepass = ["memory"]
drop_original = true
grace = "10s"
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout"]
data_format = "json"
I’m not sure why merge isn’t combining these events into a single metric.