I started working with InfluxDB (and Grafana) a few weeks ago and have already fallen in love with them.
Currently I collect 8 integer values (load cell data, AD-converted by 8 HX711s) every 25 ms. This works fine, including the nice dashboard possibilities in Grafana. But a strange effect in the data source produces glitches in the data every now and then. Normal values are below 1000 without load and go up to 50000 with load on the load cells. Single values in between are simply measured wrong. The wrong values are different for the 8 cells but more or less static for each one of them (cell 1: 34618-34621, cell 2: 152776 or 152778, …). In other words, whenever a wrong value appears, it is roughly the same wrong value per load cell. Strange and hard to avoid, unfortunately.
Logically, handling these records is not too complex:
If a value differs by more than GLITCH_MIN_DIFF from both its direct predecessor and its direct successor, delete it (or replace it with the average of predecessor and successor). This does not even exploit the fact that the wrong values are quite predictable per load cell.
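To make the intended logic concrete, here is a minimal Python sketch of the filter described above. The function name, the example threshold, and the sample data are my own illustrations, not anything from an existing library; in practice such a routine could run in a periodic cleanup job that reads a recent window of points and writes back corrected values.

```python
GLITCH_MIN_DIFF = 10000  # assumed threshold; tune for the actual setup

def deglitch(values, min_diff=GLITCH_MIN_DIFF):
    """Smooth single-sample spikes: a point is a glitch if it differs
    by more than min_diff from BOTH its direct predecessor and successor.
    Glitches are replaced by the average of their neighbors."""
    cleaned = list(values)
    for i in range(1, len(cleaned) - 1):
        prev, cur, nxt = cleaned[i - 1], cleaned[i], cleaned[i + 1]
        if abs(cur - prev) > min_diff and abs(cur - nxt) > min_diff:
            cleaned[i] = (prev + nxt) / 2  # or drop the point entirely
    return cleaned

# Cell 1 readings with one of its characteristic glitch values (~34620):
readings = [480, 512, 34620, 530, 498]
print(deglitch(readings))  # → [480, 512, 521.0, 530, 498]
```

Note that requiring the value to differ from *both* neighbors keeps legitimate fast load changes intact (a real step up to 50000 stays close to its successor), while isolated spikes are removed.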
Can you help me implement such a filter in InfluxDB (it could run periodically to clean up new data) or in Grafana (filtering at reporting time)?
Thanks and best regards