High memory consumption on TSM compactions (Influx 1.7)

Hi,

We currently have InfluxDB deployed on Kubernetes as a 4-pod StatefulSet with the following setup:

  • 40-day retention policy
  • Shard duration of 1 day
  • Series cardinality of approximately 61,026

Memory usage is already high, around 4 GB per pod/instance, although performance is fine. However, every night when TSM compactions take place, RAM usage grows sharply, to the point where it brings down our pods.

The configuration is entirely default (/etc/influxdb/influxdb.conf):

[meta]
dir = "/var/lib/influxdb/meta"

[data]
dir = "/var/lib/influxdb/data"
engine = "tsm1"
wal-dir = "/var/lib/influxdb/wal"

We have played with different configurations, but TSM compactions still overwhelm our instances in terms of memory usage. Any idea which configuration settings might reduce this memory pressure?

Best,

E

You can try limiting the number of parallel compactions; this solved the problem for us. See max-concurrent-compactions (a sketch of what that could look like is below).
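For reference, here is a minimal sketch of the relevant [data] settings in /etc/influxdb/influxdb.conf. The paths mirror the original post; the specific values are only illustrative assumptions and need tuning for your workload, but max-concurrent-compactions and the cache limits are the usual knobs for compaction-related memory in 1.x:

[data]
dir = "/var/lib/influxdb/data"
engine = "tsm1"
wal-dir = "/var/lib/influxdb/wal"
# Limit how many compactions may run concurrently.
# 0 (the default) allows up to 50% of runtime.GOMAXPROCS(0).
max-concurrent-compactions = 1
# Maximum size a shard's cache may reach before writes are rejected (default 1g).
cache-max-memory-size = "1g"
# Size at which the cache is snapshotted to a TSM file, freeing memory (default 25m).
cache-snapshot-memory-size = "25m"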



@ourumov did that solve the problem?