Log processing best practices

We are parsing all logs with severity 0-3 (well, not quite all: some logs are filtered out and never recorded to InfluxDB).
What are the best practices for dealing with new incoming logs? I would like to alert on the logs recorded to InfluxDB in a Slack channel. I guess some aggregation has to be used so that one day we don't receive hundreds of similar messages. I'd appreciate any config samples as well :slight_smile:
Thanks in advance.

Ideally your logs use a structured data format that you can easily parse with Telegraf's logparser plugin, which lets you convert them into tagged InfluxDB measurements. Telegraf's metric filtering can then control what data gets recorded to InfluxDB, as sketched below.
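For example (a rough sketch, not a drop-in config): assuming plain-text log lines that start with a numeric severity, such as "2 disk is almost full", a logparser input plus output-side filtering could look like the following. The file path and grok pattern are assumptions, and the field names (levelint, message) are chosen to match the TICKscript further down:

[[inputs.logparser]]
  ## Assumed path; point this at your real log files.
  files = ["/var/log/app.log"]
  from_beginning = false
  [inputs.logparser.grok]
    ## Name the measurement to match the TICKscript below.
    measurement = "syslog"
    ## Assumed line shape: "<severity> <text>"; casts severity to an integer field.
    patterns = ['%{NUMBER:levelint:int} %{GREEDYDATA:message}']

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"
  ## Metric filtering: only the parsed "syslog" measurement gets written.
  namepass = ["syslog"]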

Once you have data in InfluxDB, you can use Kapacitor to send alerts to a Slack channel (or other output). First, set up a Slack event handler, and then create a Kapacitor alert using that handler.
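For reference, the Slack handler lives in kapacitor.conf; a minimal section (the webhook URL and channel here are placeholders) might look like:

[slack]
  enabled = true
  ## Placeholder: use your own Slack incoming-webhook URL.
  url = "https://hooks.slack.com/services/XXX/YYY/ZZZ"
  channel = "#alerts"
  ## With global = false, a task must call .slack() explicitly on its alert node.
  global = false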

So I'm on the last step: creating a Kapacitor alert.
This is what I have so far.

dbrp "telegraf"."autogen"

var info = 2

var data = stream
    |from()
        .measurement('syslog')
    |eval(lambda: "levelint", lambda: "message")
        .as('value', 'message')

var trigger = data
    |alert()
        .id('{{ index .Tags "host"}}/syslog')
        .message('{{ index .Fields "value" }} loglevel at {{ index .Tags "host"}} with message: {{ index .Fields "message" }}')
        .info(lambda: "value" <= info)
        .slack()

So for now I alert to the Slack channel on every message with log level 0-2 (later it will be 0-3, once I've dealt with the current logs so that nothing noisy is being recorded to InfluxDB). But some day something will break and I'll receive hundreds of notifications in the Slack channel. What can I do to avoid that?
Is there any workaround for this issue?
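One knob that can help, sketched under the assumption of a Kapacitor 1.x alert node: .stateChangesOnly() suppresses repeat notifications while the alert stays in the same state, and its optional duration keeps a periodic reminder (roughly one per interval) while the state persists. Applied to the script above:

var trigger = data
    |alert()
        .id('{{ index .Tags "host" }}/syslog')
        .message('{{ index .Fields "value" }} loglevel at {{ index .Tags "host" }} with message: {{ index .Fields "message" }}')
        .info(lambda: "value" <= info)
        // Notify only when the alert state changes; while it stays the same,
        // re-send a reminder about once per hour instead of per log line.
        .stateChangesOnly(1h)
        .slack()

Beyond that, more recent Kapacitor releases also offer an aggregate event handler that collects events on an alert topic over an interval and emits them as a single message, which is closer to true aggregation.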