High memory usage after moving to TSI index

I have an InfluxDB instance with 7 logical databases inside (36G on disk total, cardinality from 10k to 312k per database), running the 1.7.9-alpine Docker image. It was working fine on a 4G RAM AWS server until I decided to migrate the index engine from in-memory (inmem) to TSI (with influx_inspect buildtsi). After the migration it just won't start with that amount of memory; it crashes with an OOM after 2-3 minutes. I changed the instance type to 16G RAM and limited the Docker container to 8G. It starts, but runs for a few hours and then restarts with an OOM. If I remove the limit it settles at about 13G of RAM and works fine. Swap is disabled.
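For context, the migration and the container limit looked roughly like this (paths, volume layout and container name below are the stock image defaults, not necessarily exactly what I ran):

# Stop the container before rebuilding the index
docker stop influxdb

# Rebuild the TSI index from the TSM data and WAL on the mounted volume
influx_inspect buildtsi -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal

# buildtsi is usually run as root, so restore ownership afterwards
chown -R influxdb:influxdb /var/lib/influxdb

# Start again with a hard memory limit on the container (8G in my case)
docker run -d --name influxdb --memory=8g \
  -v /var/lib/influxdb:/var/lib/influxdb \
  influxdb:1.7.9-alpine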
So I'm trying to find answers to the following questions: why was the TSM in-memory index (inmem) fine with 4 GB of RAM, while TSI (which is supposed to be RAM-optimized) doesn't work even with double that amount?
Second: why doesn't InfluxDB have an option to cap memory usage, even if that affects performance?
I tried to recreate the TSI index multiple times, even with a decreased max-index-log-file-size.
Shard duration is the default (7 days) and the RP is forever.
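For reference, the retention policy / shard duration and the per-database cardinality can be checked from the CLI like this (the database name is just a placeholder):

influx -execute 'SHOW RETENTION POLICIES ON "mydb"'
influx -execute 'SHOW SERIES CARDINALITY ON "mydb"'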
Here are some of my db settings:

[data]
index-version = "tsi1"
cache-max-memory-size = "1g"
cache-snapshot-memory-size = 26214400   # 25 MiB
compact-throughput = 50331648           # 48 MiB/s
compact-throughput-burst = 50331648     # 48 MiB/s
max-series-per-database = 1000000
max-values-per-tag = 100000
max-concurrent-compactions = 0          # 0 = default (half of the CPU cores)
max-index-log-file-size = 131072        # 128 KiB

A similar problem is described here: TSM-based vs. TSI-based memory usage

I'd appreciate any help, thanks!

Though I can't answer your questions, I may be able to help tweak your settings.
As a starting point, I'd recommend changing this parameter to 1:
max-concurrent-compactions = 1

Trust me, one compaction at a time is more than enough… unless you're writing millions of points each minute.
Also, read through influxdb.log (or journalctl if you have systemd) and check what kind of compactions happen before it runs OOM.
I suspect it's either a cache snapshot or a TSM compaction that causes this problem.
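Something like this is usually enough to spot the compaction activity right before the crash (container/service names are just examples):

# If InfluxDB runs in Docker, the engine logs go to stderr:
docker logs --tail 1000 influxdb 2>&1 | grep -iE 'compact|snapshot|cache'

# On a systemd host, the same filter over the journal:
journalctl -u influxdb --since "1 hour ago" | grep -iE 'compact|snapshot|cache'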