We are sending data through a Kafka cluster with multiple partitions using round-robin partitioning, and consuming from Kafka with multiple consumers.
In case of maintenance or another disruption that causes a queue buildup, consumers may resume reading the backlog from multiple partitions at different speeds and send the points to Kapacitor out of timestamp order.
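To make the scenario concrete, here is a minimal, self-contained sketch (plain Python, no real Kafka client; partition counts and timestamps are made up for illustration) of how round-robin production plus consumers draining partitions at different speeds yields an out-of-order stream:

```python
from itertools import cycle

NUM_PARTITIONS = 3

# Nine points with increasing timestamps, produced in order.
points = [{"ts": t, "value": t * 10} for t in range(9)]

# Producer side: round-robin assignment across partitions (no key),
# so consecutive timestamps land on different partitions.
partitions = [[] for _ in range(NUM_PARTITIONS)]
for p, point in zip(cycle(range(NUM_PARTITIONS)), points):
    partitions[p].append(point)

# Consumer side after a backlog: suppose one consumer drains its
# whole partition before the others catch up. The merged stream
# seen downstream (e.g. by Kapacitor) is then not timestamp-ordered.
merged = []
for part in partitions:
    merged.extend(part)

timestamps = [pt["ts"] for pt in merged]
print(timestamps)  # [0, 3, 6, 1, 4, 7, 2, 5, 8] -- not sorted
```

In reality the interleaving would be less extreme, but any difference in per-partition consumption speed produces the same effect.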
What would be the effect of such unordered data points?
Which kinds of nodes may generate spurious alerts or miss them?
Are there any recommendations or thoughts on how to avoid this?