Dynamic mapping to tags

I have a column `node` which has many MAC addresses. I want to map each MAC address to a location and a site, e.g. mac-address = 0005BCA, site = mumbai, location = 123. I am using processors.enum.mapping to map the MAC addresses, but the problem is that I have to define each MAC address manually in "[processors.enum.mapping.value_mappings]". Is there a way to do this dynamically, so that a MAC address gets its site and location without me defining it by hand? Would storing key-value pairs in Redis or some other DB work, so that I don't need to define the keys manually and can fetch all key-value pairs from the Redis DB?
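For reference, this is roughly what the static approach looks like with the enum processor: one mapping block per target tag, and every MAC address listed by hand. The tag names here ("mac", "site", "location") are assumptions for illustration, taken from the example values in the question:

```toml
# Static mapping: every MAC address must be listed manually.
[[processors.enum]]
  [[processors.enum.mapping]]
    tag = "mac"
    dest = "site"
    [processors.enum.mapping.value_mappings]
      "0005BCA" = "mumbai"
      "0005BCB" = "delhi"

  [[processors.enum.mapping]]
    tag = "mac"
    dest = "location"
    [processors.enum.mapping.value_mappings]
      "0005BCA" = 123
      "0005BCB" = 124
```

This is exactly the part that does not scale: each new MAC address means editing the config and restarting Telegraf.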

Could it be that we have the same problem, but in a different context? I want to add tags from one table to another. Maybe something similar can help you, or we can help each other.
Here is the link to my topic: Join two different sized Tables by a tag and not by time

As far as I know, no - that would be a very special use case for the enum processor.
But you could write your own processor plugin that does exactly this dynamic mapping, using a processors.execd plugin.

I am not sure.
@Pratik_Das_Baghel wants to solve this at the Telegraf level.


Ok, thanks. I wasn't sure about this.

Thanks. I will look for that.


Hello @Pratik_Das_Baghel,
This might also be useful


I will briefly outline a working example here.
It can serve as a starting point for you and must of course be adapted!
The processor plugin is not perfect - I would make it a bit more robust against errors.

You will need three external Python 3 libraries:

pip install redis
pip install influxdb-client
pip install line-protocol-parser

First I start a local Redis instance with Docker:

docker pull redis
docker run -p 6379:6379 --name redis -d redis

Then we initially populate the Redis database with the dynamic mappings using this Python 3 script:

import json
import redis

macs = dict()
macs["0005BCA"] = {'site': 'mumbai', 'location': 123}
macs["0005BCB"] = {'site': 'delhi', 'location': 124}
macs["0005BCC"] = {'site': 'bangalore', 'location': 125}

r = redis.Redis()  # default connection localhost:6379
print(r.ping())  # check connection

for key, value in macs.items():
    r.set(key, json.dumps(value))  # write keys
for key in macs:
    print(json.loads(r.get(key)))  # read back keys

Then we have some input data (MAC addresses) as an example in macs.log, one MAC address per line (matching the output further below):

0005BCA
0005BCB
0005BCC

This is my telegraf.conf snippet:

[[inputs.tail]]  # only for testing
  files = ["macs.log"]
  from_beginning = true
  data_format = "value"
  data_type = "string"
  name_override = "macs"

[[processors.execd]]
  namepass = ["macs"]
  command = ["python", "execd-redis.py"]

[[outputs.file]]  # only for testing
  files = ["macs.out"]
  influx_sort_fields = true

This is the Python 3 processors.execd plugin in the file execd-redis.py:

import json
import redis
from influxdb_client import Point
from line_protocol_parser import parse_line

r = redis.Redis()  # default connection localhost:6379
r.ping()  # check connection

while True:
    try:
        input_line = input()  # read from stdin
    except EOFError:  # catch EOF error
        break
    except KeyboardInterrupt:  # catch KeyboardInterrupt
        break
    else:
        data = parse_line(input_line)  # parse input line
        mac = data['fields'].get('value')
        if mac:
            data['tags'].update({'mac': mac})  # make tag
            data['fields'].update({'mac': mac})  # make field
            data['fields'].pop('value')  # remove field
            if r.exists(mac):
                details = json.loads(r.get(mac))  # look up mapping in Redis
                data['fields'].update(details)  # add site and location fields
        point = Point.from_dict(data)  # metric from dict
        print(point.to_line_protocol())  # write to stdout

And voila, here is the output in macs.out after I ran Telegraf:

macs,mac=0005BCA,path=macs.log location=123i,mac="0005BCA",site="mumbai" 1617384804599588100
macs,mac=0005BCB,path=macs.log location=124i,mac="0005BCB",site="delhi" 1617384804599588100
macs,mac=0005BCC,path=macs.log location=125i,mac="0005BCC",site="bangalore" 1617384804599588100
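If you want to sanity-check the enrichment logic without running Telegraf or Redis at all, the core transformation can be reproduced with a plain dict standing in for the Redis lookup. This is a minimal sketch; `enrich` is a hypothetical helper I use for illustration, not part of the plugin above:

```python
import json

# Stand-in for the Redis database: mac -> JSON-encoded details
store = {"0005BCA": json.dumps({"site": "mumbai", "location": 123})}

def enrich(data):
    """Apply the same steps as the execd plugin: promote the raw
    'value' field to a 'mac' tag/field and merge in the stored details."""
    mac = data["fields"].get("value")
    if mac:
        data["tags"]["mac"] = mac       # make tag
        data["fields"]["mac"] = mac     # make field
        del data["fields"]["value"]     # remove raw field
        if mac in store:                # corresponds to r.exists(mac)
            data["fields"].update(json.loads(store[mac]))
    return data

sample = {"measurement": "macs", "tags": {}, "fields": {"value": "0005BCA"}, "time": 0}
print(enrich(sample))
```

The dict layout (`measurement`, `tags`, `fields`, `time`) mirrors what line-protocol-parser produces, so the same logic drops into the plugin unchanged.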

Happy telegrafing :nerd_face:

Hello @Franky1,
Thank you so much for this detailed answer. You rock!

Great work. Thanks man