I am using the latest Grafana 9 alerts, which send me notifications on NoData. I would like to ignore these, but I can’t, since I can’t fill 0 when no metric is sent by the backend.
How can I achieve this?
This metric is sent only when there is a problem, so NoData should equal 0, but I can’t make this work, even though I used a count aggregation, which IMO should return 0 if there are no metrics.
InfluxQL can fill() only an existing series, meaning at least one data point must be returned; then all the other intervals can be filled accordingly.
As you said, since you are handling events, data will exist only in case of issues, and having no data at all means everything is fine.
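For illustration, here is a minimal sketch of the problem, assuming a hypothetical `app_errors` measurement with a `value` field that is written only on failures:

```sql
-- Hypothetical measurement written only when an error occurs.
-- If no point exists in the queried window, InfluxQL returns no series at all,
-- so fill(0) has nothing to fill and Grafana sees NoData instead of 0.
SELECT count("value") FROM "app_errors"
WHERE time > now() - 1h
GROUP BY time(5m) fill(0)
```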
Grafana Way
This must be handled by Grafana (which is perfectly able to do so). In the latest version, open the alert rule’s “Configure no data and error handling” section and set “Alert state if no data or all values are null” to OK.
If you are still using the old, now so-called “legacy alerts”, the interface might be different, but this behavior was configurable even in previous versions, so it has to be there.
Data Way
The other option is to actually collect a 0 at the source, which still makes sense to me from the point of view of continuous monitoring, as 0 vs no-data might not strictly be the same:
- 0 means we are fine and everything is working…
- no data means everything should be fine, but is it also working? (I can’t tell… what if there are data collection issues?)
Simply put, my question to the system is “Hey system, how many errors do I have?” and I expect to always get a response, even if it is “There are 0 errors”.
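As a rough sketch (reusing the hypothetical `app_errors` measurement from above): have the collector write an explicit 0 on every interval, e.g. `INSERT app_errors,host=web01 value=0` in the influx CLI, or the equivalent line-protocol write from the app. The alert query then aggregates the values instead of counting points:

```sql
-- A point now always exists in the window, so the series is returned and
-- fill(0) pads any remaining gaps. Note the switch from count() to sum():
-- count() would also count the 0-valued heartbeat points.
SELECT sum("value") FROM "app_errors"
WHERE time > now() - 1h
GROUP BY time(5m) fill(0)
```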
Legacy alerts worked fine; I am now using the new alerts.
The setting you showed doesn’t work correctly (it did in Grafana < v9). Grafana generates OK alerts in that case, so the only difference from the other options is that there will be 1 alert instead of 2 (nodata/error → ok vs. only ok). I think this is a bug in the latest Grafana: Grafana 9.1.4 NoData reduce problem · Issue #55085 · grafana/grafana · GitHub
The “Error” option is for InfluxDB data collection issues (network, etc.), but I don’t expect the sending app to have issues, or it would quickly be fixed (unless it is a network issue, in which case no metrics would get through anyway).
I think the only option you have is to wait for Grafana to be fixed…
The only other option I see is to always have the series. But since it’s not a continuous stream, the only way is to query the last 1–3 weeks/months of data, or in any case a time window big enough to contain at least one error; this way the series will be returned and fill will work.
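As a sketch, with the same hypothetical measurement as above, that would look like this (12w is an arbitrary 12-week window):

```sql
-- If even one error point falls inside the (huge) window, the series exists
-- and fill(0) pads every empty bucket; if none does, you still get NoData.
SELECT count("value") FROM "app_errors"
WHERE time > now() - 12w
GROUP BY time(5m) fill(0)
```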
As you can imagine, this approach is flawed: you needlessly query lots of data and still have no guarantee anything will be returned…
Thanks, it is what I suspected. This particular error happens a few times per year at most, so no range is good enough.