InfluxDB v1.3.1: how to reduce memory usage?

Hello everyone,

While trying to reduce the memory usage of a small-scale database by reviewing how different config parameters change its behavior, I noticed some strange effects:

The starting point is an idle, plain database system (neither writes nor reads are active):
(a) A minimal setup, with any unused services such as subscriber and continuous_queries turned off (see the config sketch after this list),
(b) With a completely empty database (together with step a), InfluxDB uses about 300 MB of virtual memory.
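For reference, the relevant part of influxdb.conf for step (a) looks roughly like this (a minimal sketch, only the sections I turned off, values illustrative):

```toml
# Sketch of the services disabled for step (a); only these sections are shown.

[subscriber]
  enabled = false          # no subscriptions are used on this instance

[continuous_queries]
  enabled = false          # no continuous queries are used
```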

Importing a database from backup (size 350 MB) leads to a virtual memory usage of 600-650 MB.

That means that more than 85% of the data is effectively held in memory (roughly 300-350 MB of additional virtual memory for a 350 MB backup). Does anyone have a clue how to reduce this memory allocation?

Some trial-and-error tuning steps (a config sketch covering all three follows the list):
(1) changing the shard index from inmem to tsi1:
more or less no effect (difference of less than 20 MB)

(2) lowering or increasing cache-max-memory-size and/or cache-snapshot-memory-size:
(of course) no effect on virtual memory

(3) disabling retention and shard-precreation:
no visible effect
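
For completeness, this is roughly what steps (1) to (3) look like in influxdb.conf (a sketch; the sizes are illustrative, not recommendations):

```toml
# Sketch of the settings touched in steps (1)-(3); values are illustrative.

[data]
  index-version = "tsi1"                 # step (1): disk-based index instead of "inmem"
  cache-max-memory-size = 524288000      # step (2): cache limit in bytes (here ~500 MB)
  cache-snapshot-memory-size = 26214400  # step (2): snapshot threshold in bytes (here ~25 MB)

[retention]
  enabled = false                        # step (3): disable the retention enforcement service

[shard-precreation]
  enabled = false                        # step (3): disable shard precreation
```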

Are there any parameters that can be changed to reduce the amount of virtual memory used?

Any pointers appreciated! Thanks in advance.


If the following statement from the announcement of April 4, 2017 ( Path to 1 Billion Time Series: InfluxDB High Cardinality Indexing Ready for Testing | InfluxData ) is still true,

If you already have data, all old shards will continue to use the in-memory index. New shards will use the new disk based time series index. For testing purposes it’ll be best to start with a fresh database. To verify that you’re using disk based indexing, do a few writes and look at your data/<database>/<retention policy>/<shard id> directory. You should see a subdirectory called index.

then the above-mentioned tuning step (1) was all in vain: without copying the data into a new database (created with tsi1 already enabled), no new behaviour can be expected. Only after the database copy do index files appear within each shard's directory.

But finally, after removing the old database and restarting the daemon (this is really strange), the virtual memory usage increased to 1800 MB. This is not what I had expected when switching from inmem to tsi1.

Has anyone seen the same behavior?

Hi,

I have been experimenting with tsi1 vs inmem.
I have a number of databases on the same Influx instance, but only two are high cardinality, with 300 and 500 series respectively.
The whole database is 80G, and with no queries running, using tsi1, it sits at 18G of memory and 80G of virtual memory!
With inmem the same database uses around 16G of memory, and almost no virtual memory.
I dropped and created new databases when changing between the two modes.

I am exploring tsi1 because some user queries would make Influx crash due to OOM.
I like InfluxDB, but boy does it need memory!
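
For anyone hitting the same OOM crashes from queries, the [coordinator] limits in influxdb.conf can cap what a single query is allowed to do, independently of the index choice. A rough sketch (all values illustrative, 0 means unlimited):

```toml
# Sketch: query limits that can help avoid OOM from expensive user queries.
# All values are illustrative; 0 (or "0s") means unlimited.

[coordinator]
  max-concurrent-queries = 20     # cap the number of queries running at once
  query-timeout = "60s"           # abort queries running longer than this
  max-select-point = 50000000     # cap the points a SELECT may process
  max-select-series = 1000000     # cap the series a SELECT may touch
  max-select-buckets = 100000     # cap the GROUP BY time() buckets per SELECT
```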
