I am trying to write a query to summarize the monitoring status of an alert task, but I keep getting ‘expected int but found bool’. I got the same error whether I used a map or a reduce function. I saw this documentation, but it covers the opposite of what I am doing: converting floats into strings. What am I missing? I plan on writing these values to another bucket because I cannot access _monitoring without giving Grafana a full-access token. It also helps me determine if there is a lack of records (via the count) and gives me a weighted value which hopefully won’t flap.
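Roughly what I am attempting (a sketch; the bucket and measurement names are placeholders, and the thresholds are made up):

```flux
// Sketch of the summary task. Every branch of a Flux conditional must
// return the same type; mixing a number with a bare comparison is one
// way to hit "expected int but found bool".
from(bucket: "_monitoring")
    |> range(start: -5m)
    |> filter(fn: (r) => r._measurement == "statuses")
    // First map: convert the string level to a numeric weight.
    |> map(fn: (r) => ({r with level_value:
        if r._level == "crit" then 3
        else if r._level == "warn" then 2
        else if r._level == "ok" then 0
        else 1
    }))
```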
OK, so one step forward: you cannot name the accumulator variables whatever you like in the identity definition. I cannot find any documentation that specifies which names are available to me, either. The ones I have seen in examples are sum, count, total, and product.
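For anyone following along, this is roughly the shape I ended up with (a sketch; names and filters are placeholders). One thing I did confirm: the identity record's field names must match the record returned by fn exactly, and each field's value fixes the accumulator's type (0 is an int, 0.0 would be a float):

```flux
// Sketch: count records and accumulate a weighted total per series.
from(bucket: "_monitoring")
    |> range(start: -5m)
    |> filter(fn: (r) => r._measurement == "statuses")
    |> map(fn: (r) => ({r with level_value: if r._level == "crit" then 3 else 0}))
    |> reduce(
        // identity defines the accumulator columns and their types
        identity: {total: 0, count: 0},
        fn: (r, accumulator) => ({
            total: accumulator.total + r.level_value,
            count: accumulator.count + 1,
        }),
    )
```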
I am not sure yet whether map or reduce would be better here. Also, the reduce has to come before the grouping.
Yes, I figured out the if/else problem. If I want to optimize it (remove the first map), I am thinking I should move the task away from the monitor feature and just write to my own bucket with the level already set numerically instead of as a string. I am expecting to have hundreds of these types of tasks in the future, so removing the first map will compound the savings in CPU usage.
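Something like this is what I have in mind for the replacement task (everything here is a sketch; the "my_statuses" bucket, the CPU source, and the thresholds are made-up examples):

```flux
// Write the level numerically at check time so the summary task
// can drop its first map entirely.
from(bucket: "telegraf")
    |> range(start: -10s)
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_idle")
    |> map(fn: (r) => ({
        _time: r._time,
        _measurement: "statuses",
        _field: "level_value",
        host: r.host,
        // numeric level written directly; no string round-trip later
        _value: if r._value < 5.0 then 3 else if r._value < 20.0 then 2 else 0,
    }))
    |> to(bucket: "my_statuses")
```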
Yes, then I agree: you might want to bypass the monitor.check() functions and write data to a separate measurement in the _monitoring bucket (or wherever you want). This is an interesting use case, though.
This issue is related to what you’re requesting. I encourage you to comment on it:
Thanks for the follow-ups. I commented on the first issue above, and I will need to take a look at the profiler.
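For anyone else landing here, the profiler is enabled per query with an option; the query below is just a minimal sketch of where it attaches:

```flux
// Enabling the Flux profiler adds profiler/* tables to the results
// with per-query and per-operator timings.
import "profiler"

option profiler.enabledProfilers = ["query", "operator"]

from(bucket: "_monitoring")
    |> range(start: -5m)
    |> filter(fn: (r) => r._measurement == "statuses")
```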
Right now I run the above query every 5 seconds with a 5-minute window, which should have 10 records per monitored item. I expect to have many hundreds of monitored items as the project grows. In Grafana I query the last record for a large group of items to get the current status, which of course is very fast since I preprocess it with this query. So I could be mapping 5,000 values every 5 seconds in the near future.
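The Grafana-side read is then just a last() over the preprocessed series (a sketch; the bucket and field names are assumptions carried over from the examples above):

```flux
// Reading only the last preprocessed value per series keeps the panel
// fast regardless of how many raw records the summary task processes.
from(bucket: "my_statuses")
    |> range(start: -10m)
    |> filter(fn: (r) => r._measurement == "statuses" and r._field == "level_value")
    |> last()
```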
I will run it through the optimizer, but in the future I could move to using my own monitoring tasks to get rid of the map. I don't need to optimize early, though.