We have 20k non-uniformly sampled (1–20 s intervals) time series going back roughly 10 years. Currently, we store these as separate measurements, using the default retention policy with a shard group duration of 1 week. We do not want to delete any of our raw data points, since we need the full-resolution data for our analysis.
In addition to the raw data, we store downsampled versions of the time series as fields in the same measurements as the raw data to make graph queries faster.
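For context, the downsampling we describe is done along these lines, with a continuous query writing the averaged values back into the same measurement as a separate field (measurement, database, and field names here are placeholders, not our real schema):

```sql
-- Hypothetical sketch: downsample into a new field on the same measurement
CREATE CONTINUOUS QUERY "cq_downsample_10m" ON "mydb"
BEGIN
  SELECT mean("value") AS "value_10m"
  INTO "mydb"."autogen"."my_measurement"
  FROM "my_measurement"
  GROUP BY time(10m)
END
```

Graph queries then read `value_10m` instead of scanning every raw point.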
Lately, InfluxDB's memory usage has maxed out our current setup (64 GB RAM + 100 GB swap) during data insertion/compactions, making it nearly impossible to query the database. As a temporary mitigation, we have increased the `compact-full-write-cold-duration` parameter to 2000 hours.
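Concretely, the change we made lives in the `[data]` section of `influxdb.conf` (the default for this setting is much lower, on the order of hours):

```toml
[data]
  # Delay full compaction of shards that have stopped receiving writes.
  # Raised from the default to reduce compaction-driven memory spikes.
  compact-full-write-cold-duration = "2000h"
```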
Compactions still use a lot of memory, and even when InfluxDB is not compacting, the database uses much more memory than expected based on the hardware guidelines: https://docs.influxdata.com/influxdb/v0.10/guides/hardware_sizing/#general-hardware-guidelines-for-a-single-node
This has led us to the following questions:
- How does the number of shards affect InfluxDB’s memory usage?
- Should we increase shard group duration, if so, are there any disadvantages we should be aware of?
- Our data is stored in many separate files, so it is not inserted in chronological order. We realize this is not an ideal use case for InfluxDB — what kind of performance penalties might this entail?
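Regarding the second question: if increasing the shard group duration is the right move, we assume it would be done with something like the statement below (database and retention policy names are placeholders). As we understand it, this only affects newly created shard groups, so existing 1-week shards would keep their current duration:

```sql
-- Hypothetical sketch: widen shard groups from 1 week to 4 weeks
ALTER RETENTION POLICY "autogen" ON "mydb" SHARD DURATION 4w
```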