Hi all, I’m new to the Influx community and still getting familiar with how things work here, so if my post duplicates an existing one, please let me know.
My colleague and I went to Paul Dix’s time series meetup last night at Wayfair Boston, and we chatted with Paul, Ryan, and Noah after the talk about how we want to use Influx to build our new monitoring solution.
To summarize: our IoT devices collect data from all kinds of sensors, forming various sensor streams, and the client is responsible for setting up alerts and the groups of recipients who will receive them. All sensor data will naturally be saved in InfluxDB, but we are not sure what the best design for alert evaluation is. Currently we are considering two designs using InfluxDB and Kapacitor:
Design 1 - a self-contained solution
We would save everything in InfluxDB: sensor data, alert settings, and recipient settings. Strictly speaking, alerts and recipients are not time series data, but whenever they are created, deleted, or modified we can attach a timestamp and save them as data points, as if they were time series. Conceptually we just need to join the sensor, alert, and recipient measurements, and with some magic UDFs we should be able to get each alert routed to its configured group of recipients, with only one TICKScript for all use cases. Here are the pros and cons:
Pros:
- Only one TICKScript to manage and maintain.
- Self-contained solution.
- Whenever alerts or recipients are modified, we just write a new point to the alerts or recipients measurement. No new TICKScript needed.

Cons:
- The biggest concern is feasibility: is this the right way to use InfluxDB and Kapacitor?
- The TICKScript and UDFs could become very complicated and hard to debug.
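To make the "settings as time series" idea concrete, here is a minimal sketch of how an alert-setting change could be serialized as an InfluxDB line-protocol point. The measurement name, tag keys, and field keys (`alert_settings`, `alert_id`, `sensor`, `threshold`, `action`) are all hypothetical placeholders, not anything from our actual schema:

```python
import time

def alert_setting_point(alert_id, sensor, threshold, action, ts_ns=None):
    """Serialize an alert-setting change as an InfluxDB line-protocol point.

    Every create/update/delete of an alert becomes a timestamped point in
    its own measurement, so the latest point per alert_id is the current
    configuration. All names here are illustrative assumptions.
    """
    if ts_ns is None:
        ts_ns = time.time_ns()  # InfluxDB timestamps default to nanoseconds
    tags = f"alert_id={alert_id},sensor={sensor}"
    fields = f'threshold={threshold},action="{action}"'
    return f"alert_settings,{tags} {fields} {ts_ns}"

# Recording that hypothetical alert 42 on the "temperature" stream was created:
print(alert_setting_point(42, "temperature", 75.0, "created",
                          ts_ns=1_700_000_000_000_000_000))
# alert_settings,alert_id=42,sensor=temperature threshold=75.0,action="created" 1700000000000000000
```

The string would be written via the usual `/write` endpoint; the hard part (joining this measurement against the live sensor stream inside Kapacitor) is exactly the feasibility question above.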
Design 2 - Dynamically generate TICKScripts
Say we store alert settings and recipients in a relational database, Postgres. A service will generate TICKScripts from the data in Postgres and use the Kapacitor HTTP API to create tasks. Here are the pros and cons:
Pros:
- Conceptually, the logic is easier to understand.
- We can use TICKScript template variables when generating the scripts.
- Each TICKScript is static and tailored to one device’s sensor data, so it is easy to debug.

Cons:
- There will probably be thousands of tasks on Kapacitor; is that normal? And whenever a user updates an alert, we have to find the corresponding task and modify it.
- We have to implement and maintain the TICKScript generation service.
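A rough sketch of what the generation service might do, assuming a simple per-device threshold alert. The TICKScript body, measurement/field names, task id scheme, and the `sensors`/`autogen` db/rp are all assumptions standing in for whatever Postgres holds; the `POST /kapacitor/v1/tasks` endpoint is Kapacitor's documented task-creation API. (Kapacitor also has its own template tasks; this sketch just uses Python string templating for brevity.)

```python
import json
from string import Template
from urllib import request

# Hypothetical threshold alert rendered per device.
TICK_TEMPLATE = Template("""\
stream
    |from()
        .measurement('$measurement')
        .where(lambda: "device_id" == '$device_id')
    |alert()
        .crit(lambda: "value" > $threshold)
        .message('$message')
""")

def build_task(device_id, measurement, threshold, message):
    """Render a TICKScript and wrap it in a Kapacitor task-creation payload."""
    script = TICK_TEMPLATE.substitute(
        device_id=device_id, measurement=measurement,
        threshold=threshold, message=message)
    return {
        "id": f"alert-{device_id}-{measurement}",  # assumed naming scheme
        "type": "stream",
        "dbrps": [{"db": "sensors", "rp": "autogen"}],  # assumed db/rp
        "script": script,
        "status": "enabled",
    }

def create_task(kapacitor_url, task):
    """POST the task to Kapacitor's HTTP API (untested network sketch)."""
    req = request.Request(
        f"{kapacitor_url}/kapacitor/v1/tasks",
        data=json.dumps(task).encode(),
        headers={"Content-Type": "application/json"},
        method="POST")
    return request.urlopen(req)

task = build_task("dev-17", "temperature", 80, "temp too high on dev-17")
print(task["id"])  # alert-dev-17-temperature
```

Updating an alert would then mean regenerating the script and `PATCH`ing the task with the same id, which is where the "find the corresponding task" bookkeeping comes in.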
I imagine dynamic alert configuration is very common among IoT solutions, and I’m curious how people have solved this problem creatively with the TICK stack. Thanks in advance!