Schema/shard configuration for storing dense long-term historical data

We have 20k non-uniformly sampled (1-20 s intervals) time series going back roughly 10 years. Currently, we store these as separate measurements, using the default retention policy with a shard group duration of 1 week. We do not want to delete any of our raw data points, since we need the full-resolution data for our analysis.

In addition to the raw data, we store downsampled versions of the time series as fields in the same measurements as the raw data to make graph queries faster.

Lately, InfluxDB's memory usage has maxed out our current setup (64 GB RAM + 100 GB swap) during data insertion/compaction, making it nearly impossible to query the database. As a temporary mitigation, we have increased the compact-full-write-cold-duration parameter to 2000 hours.
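For reference, this setting lives in the [data] section of influxdb.conf; a sketch of the change we made, assuming an otherwise default 1.x configuration:

[data]
  # Default is "4h"; raising it means shards that have gone cold for writes
  # are fully compacted far less often.
  compact-full-write-cold-duration = "2000h"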

Compactions still use a lot of memory, and even when InfluxDB is not compacting, the database seems to use much more memory than we would expect from the hardware sizing guidelines (Hardware Sizing Guidelines | InfluxData Documentation Archive).

This has led us to the following questions.

  • How does the number of shards affect InfluxDB’s memory usage?
  • Should we increase the shard group duration, and if so, are there any disadvantages we should be aware of? (A sketch of how we assume we would change it follows this list.)
  • Our data is stored in a lot of separate files, so it is not inserted in chronological order; this is of course not an ideal use case for InfluxDB. What kind of performance penalties might this entail?
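For reference, we assume increasing the shard group duration would be done with InfluxQL along these lines (the database and retention policy names here are placeholders for ours), and that it would only apply to shard groups created after the change:

ALTER RETENTION POLICY "autogen" ON "mydb" SHARD DURATION 12w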

@mikaanes Those docs are a little old; they are for the 0.10 version. Can you share some schema information? How many measurements do you have? From the above question it sounds like there are 20k measurements. That is a suboptimal schema; we recommend that folks differentiate data with tags instead of encoding it in the measurement names.
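For example, instead of encoding the signal in the measurement name, you would move it into a tag. In line protocol that looks something like the following (the measurement and tag names here are only illustrative):

mymeas_12345 value=3.44,quality=192i 1491656400000

becomes

signals,signal_id=12345 value=3.44,quality=192i 1491656400000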

Also, I don’t recommend that folks run the database with swap enabled. It can lead to major CPU contention, and during CPU-intensive operations like compactions in particular it can produce a situation where swap keeps growing and can’t be drained because of that contention. These kinds of issues are generally due to suboptimal schema design.

Sorry, for some reason Google keeps returning old versions of the InfluxDB documentation. It seems like the RAM guidelines haven’t changed that much since 0.10, though.

Our current schema has 20k measurements, each with approximately 30 fields (28 of which are different aggregations of the raw data and are very sparse).

Does having many measurements affect memory consumption any differently than having many different tag values?

When we ran the database without swap enabled, we got OOM errors approximately every 4 hours (the full compaction interval?): “Out of memory: Kill process 4589 (influxd) score 969 or sacrifice child”. Is this something that can be avoided with a different schema, or is it an inevitable consequence of backfilling data into cold shards?

@mikaanes Yeah I think that documentation needs some upgrading. 20k measurements is most definitely a suboptimal schema. Can you post some examples of both raw and downsampled measurements? How many series do you estimate for each measurement?

This is how a typical measurement looks, including both raw and downsampled/aggregated fields. All the columns are fields.

@mikaanes That’s a bit difficult to read. Can you post an example point in line protocol?

We parse data from files and calculate a number of aggregations ourselves, since InfluxDB currently doesn’t give us the timestamp for min/max aggregates (only the timestamp of the bucket).
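For example, a grouped selector query along these lines (just an illustration against one of our measurements):

SELECT MAX("value") FROM "mymeas" WHERE time >= '2017-04-08T00:00:00Z' GROUP BY time(1m)

returns the start of each 1-minute bucket as the timestamp rather than the time of the maximal point itself, which is why we precompute these aggregates before writing.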

This is written to the database using line protocol along the lines of:

mymeas value=3.44,quality=192i 1491656400000
mymeas value=3.45,quality=192i 1491656410000
mymeas value=3.11,quality=192i 1491656420000
mymeas value=3.04,quality=192i 1491656430000
mymeas value=2.11,quality=192i 1491656440000
mymeas value=2.02,quality=192i 1491656450000

mymeas MAX1MIN_value=3.44,MAX1MIN_quality=192i 1491656400000
mymeas MAX1MIN_value=3.99,MAX1MIN_quality=192i 1491655680000

mymeas MIN1MIN_value=2.02,MIN1MIN_quality=192i 1491656450000
mymeas MIN1MIN_value=2.06,MIN1MIN_quality=192i 1491656460000

mymeas AVG1MIN_value=2.86,AVG1MIN_quality=192i 1491656400000
mymeas AVG1MIN_value=2.86,AVG1MIN_quality=192i 1491656400000

mymeas MAX4HOURS_value=5.22,MAX4HOURS_quality=192i 1491663600000

I.e., the aggregations are done for MAX, MIN, and AVG over the time periods 1 minute, 5 minutes, 20 minutes, 1 hour, and 4 hours, and they are all written as fields in the same measurement as the raw data, so most entries for these fields are empty.