I am working on processing sensor values that I store in InfluxDB with Kapacitor. I have a UI that enables users to define transform functions for sensor values. The approach I implemented is:
- First write all sensor data to InfluxDB
- Then, using TICKscripts, read that data, apply the transforms (I am using a stream node), and write the results into another database. A sketch of one such script is below.
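
For reference, this is roughly what each generated per-sensor task looks like. The database, measurement, and field names here are placeholders, and the lambda is just an example of a user-defined transform:

```
// Rough sketch of one auto-generated per-sensor task (names are placeholders).
// Reads raw points from the source database, applies the user-defined
// transform, and writes the result into the processed database.
stream
    |from()
        .database('sensors_raw')
        .retentionPolicy('autogen')
        .measurement('temperature')
    // Example user-defined transform: convert Celsius to Fahrenheit.
    |eval(lambda: "value" * 1.8 + 32.0)
        .as('value')
    |influxDBOut()
        .database('sensors_processed')
        .retentionPolicy('autogen')
        .measurement('temperature')
```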
I am not sure whether the data can be replaced in place; I couldn't find a way to do this, so I am writing the processed data to another database with the same measurement and field names.
I don’t like this approach since:
- I am storing data in two different databases. Even though one stores processed data, some sensor values may not need any transform at all (the transform functions are defined by users through the UI I implemented). In that case, my automatically generated TICKscript just copies the same data into the other database, which duplicates the data and wastes storage unnecessarily.
- If I have many sensors, let's say 1000, then I will have at least 1000 TICKscripts. I am concerned that this will put too much load on the CPU (since I am using stream nodes to transform data points).
I couldn't think of a more efficient way. Maybe someone has encountered these problems as well and can help me, or can share ideas for a better approach.
Thanks in advance,