Kapacitor practical limits

Hello,

I apologize if this has been covered; I searched and looked around but couldn't find a reference to it.

We are planning to use Kapacitor for stream processing to send alerts from our sensors. The plan is to have a separate task/TICKscript for each sensor that publishes data, since the trigger conditions differ per sensor depending on location and other variables (one in location A might be val > 50, another in location B might be val > 60, etc.).
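For context, each per-sensor task would be something like the sketch below (the measurement name sensor_data, field val, tag sensor_id, and the endpoint URL are placeholders, not our real schema):

```
// Minimal per-sensor alert sketch; names are placeholders.
stream
    |from()
        .measurement('sensor_data')
        .where(lambda: "sensor_id" == 'sensor-location-a')
    |alert()
        // Threshold differs per sensor/location.
        .crit(lambda: "val" > 50.0)
        // POST alert events to our internal API (hypothetical URL).
        .post('https://alerts.example.internal/api/v1/events')
```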

My question is whether there is any kind of practical limit to Kapacitor: if we had 100, 1,000, or n devices and therefore n separate tasks, would this present a problem, assuming there is adequate compute/memory capacity on the server? At what point might scalability become a concern?

The alerts will be sent from Kapacitor via HTTP POST to an API that will de-dupe and handle the notifications from there.
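(For the API side, we're assuming the body Kapacitor POSTs looks roughly like the following; the exact fields may vary by version, and the values here are just illustrative:)

```json
{
  "id": "sensor-location-a:val",
  "message": "val is CRITICAL",
  "details": "...",
  "time": "2024-01-01T00:00:00Z",
  "duration": 0,
  "level": "CRITICAL",
  "data": { "series": [] }
}
```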

Is this the “right” way to do it, or is there a better solution for handling this type of setup? Obviously, if we have 1,000 or even 100 devices, some of them will probably share trigger conditions, but I'd think it would be a management nightmare to group devices by trigger condition and keep modifying tasks to include the various sensor IDs, especially if the rules get more complex in the future.
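Would Kapacitor's template tasks be the recommended alternative here? I.e. a single parameterized TICKscript, with one lightweight task defined per sensor from a vars file. A sketch of what I mean (names are placeholders again):

```
// Template sketch; sensor_id and threshold are supplied per task.
var sensor_id string
var threshold float

stream
    |from()
        .measurement('sensor_data')
        .where(lambda: "sensor_id" == sensor_id)
    |alert()
        .crit(lambda: "val" > threshold)
        .post('https://alerts.example.internal/api/v1/events')
```

Each sensor would then be a small JSON vars file plus a define call, rather than a full copy of the script:

```
# sensor_a.json:
#   {"sensor_id": {"type": "string", "value": "sensor-location-a"},
#    "threshold": {"type": "float", "value": 50.0}}
kapacitor define-template sensor_threshold -tick sensor_threshold.tick
kapacitor define sensor_a_alert -template sensor_threshold -vars sensor_a.json -dbrp mydb.autogen
```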

I’d appreciate any thoughts or advice on this type of setup and how we might design a scalable approach.

Thank you!