Best practices for automating Telegraf config generation

I am currently writing a script to manage Telegraf config generation for a number of SNMP devices and I’m trying to decide the best way to lay out my config.

My first thought is that a single monolithic file is bad, mostly for readability issues. This then leaves me with the issue of deciding how to break it up.

In my current non-automated config I have a separate config file for each device type that I am trying to collect data from, which may look like this:

```toml
[[inputs.snmp]]
  agents = ["device1", "device2", "device3"]
  version = 1
  community = "private"
  name = "device_type"

  [[inputs.snmp.field]]
    name = "oid_name"
    oid = "some::oid"
```

Going forward, my plan is to create a separate file for each device, so that only a single device appears in each agents field. This seems fairly clean to me, except for one thing: looking at how Telegraf loads plugins, every plugin definition I add shows up as a separately loaded instance in the log file.
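For reference, a per-device file in this layout would just be the existing snippet with a single agent; the hostnames and OIDs below are the same placeholders as above:

```toml
# telegraf.d/device1.conf — one agent per file
[[inputs.snmp]]
  agents = ["device1"]
  version = 1
  community = "private"
  name = "device_type"

  [[inputs.snmp.field]]
    name = "oid_name"
    oid = "some::oid"
```

Telegraf can then be pointed at the directory with --config-directory, and each file is loaded as its own plugin instance.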

This makes me think the layout I am planning to move to will generate a bunch of unnecessary overhead. I don't believe it would be a problem at our current scale, but I would hate to introduce a bottleneck that causes issues down the line.

If you split each device into its own plugin definition, it will require a bit more memory to hold the separated configuration, but unless you are monitoring a very high number of devices the difference should not be noticeable.

Thanks for the info. Since separate plugin definitions would be easier to manage, I'll try that route.

Hi, what would be the best practice for 14k SNMP-enabled switches? Has anyone tried automating this? Right now I use a Perl script to generate YAML configs for each switch model from a SQL DB, and another script reads those configs and inserts the data directly into InfluxDB.

@Naumis1 the basics of automating this are similar to any other infrastructure / configuration automation. The details will depend on your architecture, your team’s skill set, and your business model.

You’ll want a central source of truth for information about your switches. It sounds like you’re using a SQL DB for this purpose, which is fine. Other options include storing this information in a git repository or configuration management database.

You’ll also want a method of generating and deploying configuration files and software based on this data. If using Perl scripts for this is working for you, then stick with it! Configuration management tools like Ansible, Chef, Puppet, or SaltStack might also be a good choice.
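As a minimal sketch of the "generate configs from your source of truth" step, the following Python writes one Telegraf .conf file per switch record. The record fields, the example OID, and the output directory are assumptions; adapt them to your SQL schema and to wherever your Telegraf --config-directory points:

```python
# Generate one Telegraf SNMP config file per device record.
# The device dict keys (agent, community, device_type) are assumptions —
# map them from whatever your SQL DB actually stores.
from pathlib import Path

TEMPLATE = """[[inputs.snmp]]
  agents = ["{agent}"]
  version = 2
  community = "{community}"
  name = "{device_type}"

  [[inputs.snmp.field]]
    name = "uptime"
    oid = "RFC1213-MIB::sysUpTime.0"
"""

def render_config(device: dict) -> str:
    """Fill the TOML template with one device record."""
    return TEMPLATE.format(**device)

def write_configs(devices: list, out_dir: str) -> list:
    """Write one <hostname>.conf per device into a telegraf.d-style directory."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for dev in devices:
        path = out / f"{dev['agent']}.conf"
        path.write_text(render_config(dev))
        written.append(path)
    return written
```

The same approach works whether the script is driven by cron, a CI pipeline, or a configuration management tool.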

Finally, you’ll want to decide whether you need “continuous configuration automation”, which is analogous to continuous deployment for software. This is a process that detects changes in your environment, such as new data arriving in your central source of truth, and then automatically deploys new configurations based on those changes.