High memory usage in TSI mode

We are running a stress test that imports a large number of history records from scratch into InfluxDB v1.8.3 OSS. The goal of the test is to discover the parameters and hardware spec needed to get the best out of InfluxDB. By switching to TSI mode, we expected memory usage to decrease. Unfortunately, we're pretty much stuck in the same situation: memory usage is still high and the InfluxDB service goes down and restarts periodically. We've tried suggestions from other threads, but they haven't helped us find a way out.

We gained slightly better memory usage by increasing the shard group duration (4w). But that won't help much once we scale up the number of records we import; memory will still exceed our limit (64 GB). Even though we've switched to TSI, RAM usage is not dropping as sharply as we expected, for some reason. Our schema design creates 3000+ databases with 10000+ points per database, spanning one year of data (we're aiming to hold 3+ years). We'd like to find out whether there's some other piece we're missing here, and why TSI mode drains more memory than inmem mode.
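For reference, this is how we set the longer shard group duration. A minimal sketch using the `influx` CLI; the database name `mydb` and the `autogen` retention policy are placeholders for whatever your schema uses:

```
# Hypothetical example: lengthen the shard group duration on an existing
# retention policy so fewer shards (and fewer index structures) are open at once.
influx -execute 'ALTER RETENTION POLICY "autogen" ON "mydb" SHARD DURATION 4w'
```

Note that this only affects newly created shard groups; existing shards keep the duration they were created with.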

We have started tweaking parameters, lowering max-index-log-file-size to 64k and tripling vm.max_map_count to 2048000, hoping these tweaks can relieve the memory pressure. Does anyone have a formula for estimating those numbers?
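These are the two knobs we changed, shown here as a sketch (the values are our test settings, not recommendations):

```
# influxdb.conf
[data]
  index-version = "tsi1"
  # Smaller index WAL log files mean the in-memory index log is compacted
  # into on-disk TSI files more often (the default is "1m").
  max-index-log-file-size = "64k"
```

The mmap limit was raised with `sysctl -w vm.max_map_count=2048000` (persisted via `/etc/sysctl.d/`), since TSI keeps many memory-mapped index files open.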

Other than that, we found that with more workers inserting data, InfluxDB consumes much more memory. Sending compressed data in batch mode is slightly worse than sending it uncompressed. With a custom RP set, the data processing rate also drops. Our client issues an InfluxQL CREATE DATABASE each time it inserts points; could that be triggering RP creation on every write? Our InfluxDB node runs Ubuntu 20.04 with 8 cores, 64 GB RAM, and a gp2 SSD. Any comments on these issues are appreciated. Thanks.
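One thing we are considering: creating each database (with its retention policy and shard duration) once at setup time, instead of issuing CREATE DATABASE on every insert. A hedged sketch; the database name, RP name, and durations below are placeholders, not our actual schema:

```
-- Create the database and its retention policy up front, one time.
-- CREATE DATABASE is idempotent in InfluxDB 1.x, but running it on every
-- write still hits the meta store each time.
CREATE DATABASE "history" WITH DURATION 156w SHARD DURATION 4w NAME "three_years"
```

After setup, the write path would only send points, never DDL.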


We had very similar issues. The version of Go we were using doesn't release memory back to the OS, which caused us a ton of issues.

Try adding this to /etc/default/influxdb
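The environment line in question (quoted again later in this thread) is the Go runtime flag:

```
# /etc/default/influxdb
# Tell the Go runtime to return freed memory to the OS promptly
# (MADV_DONTNEED) instead of lazily (MADV_FREE).
GODEBUG=madvdontneed=1
```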


This will tell Go to release memory, and you should see usage decrease over time. (A service restart is required for this to take effect.)

Hello! Have you tried this option?
"GODEBUG=madvdontneed=1" in "/etc/default/influxdb"
Have you solved the problem with high memory usage?
I have a similar problem.