Avoiding spamming notifications

I’m currently using InfluxDB 2.0.5, but I’m receiving a lot of repeated notifications in PagerDuty and Slack. How can I set up a monitor task (for example, a disk or memory alert) that doesn’t trigger PagerDuty/Slack if it has already been triggered in, say, the last 60 minutes?
Please note that I’m not asking how to set up a custom check task; I’m asking for the best way to avoid repeated notifications.

Hello @vvilaplana,
I would create a custom notification rule: query recent statuses, use the elapsed() function to compute the time between consecutive statuses, and notify only on statuses that were written more than 60m apart.

import "influxdata/influxdb/monitor"
import "influxdata/influxdb/secrets"
import "slack"

token = secrets.get(key: "SLACK_TOKEN")

slack_endpoint = slack.endpoint(url: "https://slack.com/api/chat.postMessage") // or your Slack webhook URL

option task = {name: "my custom notification rule", 
               every: 1h, 
               offset: 0s}

notification_data = {
	_notification_rule_id: "0000000000000001",  //alphanumeric, 16 characters 
	_notification_rule_name: "example-rule-name",
	_notification_endpoint_id: "0000000000000002",  //alphanumeric, 16 characters 
	_notification_endpoint_name: "example-endpoint-name",
}

from(bucket: "_monitoring")
  |> range(start: -task.every)
  |> filter(fn: (r) => r["_measurement"] == "statuses")
  |> filter(fn: (r) => r["_check_name"] == "my check name")
  |> elapsed(unit: 1m)
  |> filter(fn: (r) => r.elapsed > 60)
  |> monitor.notify(
      data: notification_data,
      endpoint: slack_endpoint(
          mapFn: (r) => ({
              channel: "",
              text: "Notification Rule: ${r._notification_rule_name} triggered by check: ${r._check_name}: ${r._message}",
              color: if r["_level"] == "crit" then "danger" else if r["_level"] == "warn" then "warning" else "good",
          }),
      ),
  )

However, please note that if one status is written at -1.5h and a second at -0.5h, and your task runs at -1h, the task won’t see the status that falls just outside its query window. One workaround is to query a wider window than the task interval (for example, run the task every hour but query the last 2 hours of data), but under certain circumstances this can yield a duplicate alert: if statuses are written at -2h, -1h, and 0h and the task runs every hour, the pair at -2h and -1h will be alerted on twice.
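If you go the wider-window route, only the range() call changes. For example, keeping the task at every: 1h but querying the last 2 hours (the rest of the pipeline is the same as in the task above):

```
from(bucket: "_monitoring")
  |> range(start: -2h) // twice the task interval, so statuses near the window edge aren't missed
  |> filter(fn: (r) => r["_measurement"] == "statuses")
  |> filter(fn: (r) => r["_check_name"] == "my check name")
  |> elapsed(unit: 1m)
  |> filter(fn: (r) => r.elapsed > 60)
```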
If occasional duplicate alerts are a problem, you can instead write a custom task that finds statuses spaced more than 60m apart over the entire day, writes those special statuses to a new measurement in your _monitoring bucket, and then create a basic notification rule that fires each time a new status is written there.

import "date"
import "influxdata/influxdb/tasks"
today = date.truncate(t: now(), unit: 1d)
cutoff = tasks.lastSuccess(orTime: -60m) // the time the task last ran successfully, or 60m ago if it never has
from(bucket: "_monitoring")
  |> range(start: today)
  |> filter(fn: (r) => r["_measurement"] == "statuses")
  |> filter(fn: (r) => r["_check_name"] == "my check name")
  |> elapsed(unit: 1m)
  |> filter(fn: (r) => r.elapsed > 60 and r._time >= cutoff)
// convert the elapsed column to a string so it can be written as a field to a new measurement
  |> map(fn:(r) => ({ r with elapsed: string(v: r.elapsed)}))
  |> map(fn:(r) => ({r with _measurement: "statuses_greater_than_60m_measurement"}))
  |> to(bucket: "_monitoring")

This task queries data from the whole day and writes a new status to the “statuses_greater_than_60m_measurement” measurement every time the duration between two statuses exceeds 60m. You can then create a simple notification rule that filters for that measurement, and run it as frequently as you want.
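As a rough sketch of that notification rule as a custom task (reusing the endpoint setup from the first example; the task name, rule name, and rule ID here are made-up placeholders):

```
import "influxdata/influxdb/monitor"
import "influxdata/influxdb/secrets"
import "slack"

option task = {name: "notify on spaced statuses", every: 10m, offset: 0s}

token = secrets.get(key: "SLACK_TOKEN")

slack_endpoint = slack.endpoint(url: "https://slack.com/api/chat.postMessage")

notification_data = {
    _notification_rule_id: "0000000000000003",
    _notification_rule_name: "spaced-statuses-rule",
    _notification_endpoint_id: "0000000000000002",
    _notification_endpoint_name: "example-endpoint-name",
}

from(bucket: "_monitoring")
  |> range(start: -task.every)
  |> filter(fn: (r) => r["_measurement"] == "statuses_greater_than_60m_measurement")
  |> monitor.notify(
      data: notification_data,
      endpoint: slack_endpoint(
          mapFn: (r) => ({channel: "", text: "Check ${r._check_name}: ${r._message}", color: "danger"}),
      ),
  )
```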

Please let me know if this makes sense or if I can help in any way.
Finally, if you haven’t seen this already, I highly recommend giving it a look: InfluxDB’s Checks and Notifications System | InfluxData

Thanks!