Managing Kapacitor alerts at scale

We are working on productionizing how we create Kapacitor tasks, templates, etc. We are planning on using the load directory service (Load directory service | Kapacitor 1.4 Documentation) plus Ansible. One issue we expect to run into is preventing the tasks from ballooning into a large number of nearly identical files.
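For reference, this is roughly how the load service gets wired up in kapacitor.conf; the path below is just an example, and as I recall the directory is expected to contain tasks, templates, and handlers subdirectories:

    [load]
      enabled = true
      dir = "/etc/kapacitor/load"   # example path; holds tasks/, templates/, handlers/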

For example, we have dozens of queues we want to monitor. We have a generic TICKscript that pulls out the queue depth; it primarily takes the queue name as its input. But we have different thresholds for each queue, so our initial solution is to render a task file per queue containing the queue name and threshold (sketched below). The problem is that this results in a lot of files, especially when multiplied across all the different kinds of alerts we want to generate.
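To make the problem concrete, here is a minimal sketch of one rendered per-queue task, assuming the measurement, tag, and field names that appear later in this thread; Ansible would stamp out one copy of this per queue, varying only the queue name and threshold:

    // hypothetical rendered task for a single queue
    dbrp "telegraf"."autogen"

    var queue = 'QUEUE_1'      // substituted per queue by Ansible
    var threshold = 1000       // per-queue threshold

    stream
        |from()
            .measurement('infra_nix_mq_queue')
            .where(lambda: "queues" == queue)
        |alert()
            .crit(lambda: int("depth") > threshold)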

Does anyone have any experience with a better way to manage kapacitor alerts?


The best way to manage Kapacitor alerts at scale is with sideload().

Thanks for the response. If I follow the documentation correctly, I could have a single task that sideloads a file containing a YAML list of queue names and thresholds, rather than a task per queue.

Do you know of any examples using sideload? I did some Google searching and couldn’t find one.

Thanks.

There’s a start on an example in this description.

Sure, I’ve reviewed that example, but I was hoping for something more in depth. Some questions: What is the format of the files at ‘file:///path/to/dir’? Is it just key: value? What is replaced by {{ host }}?

I’m having trouble visualizing how this works beyond the concept of loading fields/tags from a file.

Thanks.

The format of the files is key-value data, either JSON or YAML. The {{ .host }} is replaced by the host tag value. This allows you to create specific overrides per host, or per whatever other tags you might need.
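A minimal sketch of what that can look like, assuming per-host override files in a directory of your choosing (paths, file names, and values here are made up):

    /path/to/dir/default.yml
        threshold: 90

    /path/to/dir/host1.example.com.yml
        threshold: 95

with a corresponding node in the TICKscript:

    |sideload()
        .source('file:///path/to/dir')
        .order('{{.host}}.yml', 'default.yml')
        .field('threshold', 80)

As I recall, the paths in .order() are filled in from the point's tag values and searched in order, with the first value found winning and the default given to .field() used if no file provides the key; check the SideloadNode docs to confirm the precedence.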


Really appreciate this info, Katy. I’m new here, so this kind of information is very valuable for me. I was wondering if I might ask you other questions as well? Thanks again!


Of course. We’re here to help!

Is it possible to reference a field within a YAML/JSON file, rather than have one file per queue in our case? E.g. have a file called threshold.yml with the format:

QUEUE_1:
  threshold: 1000
QUEUE_2:
  threshold: 2000

I did get a file-per-queue setup working, though, with the following.
TICKscript:
dbrp "telegraf"."autogen"

var data = stream
    |from()
        .measurement('infra_nix_mq_queue')
        .groupBy('queues')
    |window()
        .period(5m)
        .every(10s)
    |sideload()
        .source('file:/app/uid/kapacitor/sideload')
        .order('{{.queues}}.yml')
        .field('threshold', 9999)
    |alert()
        .crit(lambda: int("depth") > "threshold")
        .exec('/usr/bin/python', '/tmp/zenoss_event.py')

where each YAML file is named after a queue, with the format:
threshold: 1000
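To make the lookup concrete: because the data is grouped by the queues tag, a point in group queues=QUEUE_1 makes the sideload node look for QUEUE_1.yml under the source directory, and if no file is found the default of 9999 from .field('threshold', 9999) applies. An illustrative layout (queue names and values borrowed from the earlier question):

    /app/uid/kapacitor/sideload/QUEUE_1.yml    # contains: threshold: 1000
    /app/uid/kapacitor/sideload/QUEUE_2.yml    # contains: threshold: 2000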

This looks related.

Going to try it out.


Unfortunately, he only solved escaping his partition names; using something other than a flat file was only requested there, not actually implemented.

I think it’s a hack; it works, but having a hierarchical tree within the .yml files would be superior.
I’ve looked at the sideload source code and found no other way around it.

Cheers
Benjamin