I have a client API, written in Go, that gathers chart metrics (from Binance) over a websocket. The keep-alive is unreliable and the websocket keeps dropping the connection, about once every 2-6 hours. What's the best way to ingest data with an unstable client without dropping data points?
- I don’t want to monitor my monitor (and then monitor my monitor’s monitor)
- I don’t want to drop or miss even a single data point
- I don’t want to severely impact query times on my dozens of connected grafana charts
With InfluxDB, duplicate writes (same measurement, tag set, and timestamp) replace each other, right?
Would it cause query issues later, or violate best practices, if I ran two or three processes that all ingested the exact same data and restarted themselves whenever their websocket connection dropped? With three such processes I’d rarely miss any data, but I’m concerned that writing the same values at (effectively) the same time would break things. Or maybe InfluxDB is built for exactly this use case?
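To check that I’m reasoning about the dedup rule correctly: as I understand it, points sharing a measurement, tag set, and timestamp collapse into one, with the last write winning. A toy Go simulation of that rule (the map stands in for InfluxDB’s storage; all the names here are mine, not the client library’s):

```go
package main

import "fmt"

// key is a point's identity in InfluxDB: measurement + tag set + timestamp.
type key struct {
	measurement, tags string
	ts                int64 // nanosecond timestamp
}

type write struct {
	k key
	v float64 // field value
}

// ingest mimics InfluxDB's dedup rule: writes sharing a key collapse
// into one stored point, last write wins.
func ingest(ws []write) map[key]float64 {
	store := make(map[key]float64)
	for _, w := range ws {
		store[w.k] = w.v
	}
	return store
}

func main() {
	k := key{"price", "symbol=BTCUSDT", 1700000000000000000}
	// Three redundant ingesters all report the same tick.
	store := ingest([]write{{k, 42000.5}, {k, 42000.5}, {k, 42000.5}})
	fmt.Println(len(store), store[k]) // 1 42000.5
}
```

If that model is right, redundant identical writes should be indistinguishable from a single write, which is what makes me think the multi-process approach is safe.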
Is it safe to ingest data redundantly and concurrently?
Is there a better approach, perhaps through Telegraf, tasks, or some other InfluxDB feature designed for this problem?