Aborted compaction

Hi,

I’m using InfluxDB 1.7.9 on a Raspberry Pi, and now I have some problems with compaction. I didn’t have any problems before. This is from the syslog:

Dec 2 00:21:03 pi-server influxd[1321]: [httpd] 2001:16b8:c215:cd00:61cb:d50c:cadb:fdce - admin [02/Dec/2019:00:21:02 +0100] "POST /write?db=telegraf HTTP/1.1" 204 0 "-" "Telegraf/1.12.6" 3deaa4bd-1491-11ea-875f-b827eb5ed7a5 27091
Dec 2 00:21:03 pi-server influxd[1321]: ts=2019-12-01T23:21:03.628706Z lvl=info msg="TSM compaction (end)" log_id=0JSvpwlG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0JSx7Pdl000 op_name=tsm1_compact_group op_event=end op_elapsed=1001.551ms
Dec 2 00:21:04 pi-server influxd[1321]: ts=2019-12-01T23:21:04.627120Z lvl=info msg="TSM compaction (start)" log_id=0JSvpwlG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0JSx7XSl000 op_name=tsm1_compact_group op_event=start
Dec 2 00:21:04 pi-server influxd[1321]: ts=2019-12-01T23:21:04.627250Z lvl=info msg="Beginning compaction" log_id=0JSvpwlG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0JSx7XSl000 op_name=tsm1_compact_group tsm1_files_n=10
Dec 2 00:21:04 pi-server influxd[1321]: ts=2019-12-01T23:21:04.627310Z lvl=info msg="Compacting file" log_id=0JSvpwlG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0JSx7XSl000 op_name=tsm1_compact_group tsm1_index=0 tsm1_file=/var/lib/influxdb/data/telegraf/autogen/93/000000032-000000003.tsm
Dec 2 00:21:04 pi-server influxd[1321]: ts=2019-12-01T23:21:04.627374Z lvl=info msg="Compacting file" log_id=0JSvpwlG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0JSx7XSl000 op_name=tsm1_compact_group tsm1_index=1 tsm1_file=/var/lib/influxdb/data/telegraf/autogen/93/000000064-000000003.tsm
Dec 2 00:21:04 pi-server influxd[1321]: ts=2019-12-01T23:21:04.627436Z lvl=info msg="Compacting file" log_id=0JSvpwlG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0JSx7XSl000 op_name=tsm1_compact_group tsm1_index=2 tsm1_file=/var/lib/influxdb/data/telegraf/autogen/93/000000072-000000002.tsm
Dec 2 00:21:04 pi-server influxd[1321]: ts=2019-12-01T23:21:04.627497Z lvl=info msg="Compacting file" log_id=0JSvpwlG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0JSx7XSl000 op_name=tsm1_compact_group tsm1_index=3 tsm1_file=/var/lib/influxdb/data/telegraf/autogen/93/000000081-000000002.tsm
Dec 2 00:21:04 pi-server influxd[1321]: ts=2019-12-01T23:21:04.627559Z lvl=info msg="Compacting file" log_id=0JSvpwlG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0JSx7XSl000 op_name=tsm1_compact_group tsm1_index=4 tsm1_file=/var/lib/influxdb/data/telegraf/autogen/93/000000089-000000002.tsm
Dec 2 00:21:04 pi-server influxd[1321]: ts=2019-12-01T23:21:04.627620Z lvl=info msg="Compacting file" log_id=0JSvpwlG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0JSx7XSl000 op_name=tsm1_compact_group tsm1_index=5 tsm1_file=/var/lib/influxdb/data/telegraf/autogen/93/000000090-000000001.tsm
Dec 2 00:21:04 pi-server influxd[1321]: ts=2019-12-01T23:21:04.627681Z lvl=info msg="Compacting file" log_id=0JSvpwlG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0JSx7XSl000 op_name=tsm1_compact_group tsm1_index=6 tsm1_file=/var/lib/influxdb/data/telegraf/autogen/93/000000091-000000001.tsm
Dec 2 00:21:04 pi-server influxd[1321]: ts=2019-12-01T23:21:04.627742Z lvl=info msg="Compacting file" log_id=0JSvpwlG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0JSx7XSl000 op_name=tsm1_compact_group tsm1_index=7 tsm1_file=/var/lib/influxdb/data/telegraf/autogen/93/000000092-000000001.tsm
Dec 2 00:21:04 pi-server influxd[1321]: ts=2019-12-01T23:21:04.627803Z lvl=info msg="Compacting file" log_id=0JSvpwlG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0JSx7XSl000 op_name=tsm1_compact_group tsm1_index=8 tsm1_file=/var/lib/influxdb/data/telegraf/autogen/93/000000093-000000001.tsm
Dec 2 00:21:04 pi-server influxd[1321]: ts=2019-12-01T23:21:04.627864Z lvl=info msg="Compacting file" log_id=0JSvpwlG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0JSx7XSl000 op_name=tsm1_compact_group tsm1_index=9 tsm1_file=/var/lib/influxdb/data/telegraf/autogen/93/000000094-000000001.tsm
Dec 2 00:21:04 pi-server influxd[1321]: ts=2019-12-01T23:21:04.628324Z lvl=info msg="Aborted compaction" log_id=0JSvpwlG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0JSx7XSl000 op_name=tsm1_compact_group error="compaction in progress: open /var/lib/influxdb/data/telegraf/autogen/93/000000094-000000002.tsm.tmp: file exists"

InfluxDB is using a lot of RAM and CPU. I restarted InfluxDB, but nothing changed.

Hi,

You can delete the .tmp file and retry. This cleanup should happen automatically, though there was an issue we fixed recently that you could be hitting. If you delete the .tmp files and this keeps happening, please let us know by opening an issue.

It would also help if you confirm which version of InfluxDB you are running.

Thanks.

NB: This command should help delete .tmp files:

WARNING: I have not tested this command, so run it without "-delete" first to see a list of files it finds.

find /var/lib/influxdb -name "*.tmp" -type f -delete
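For reference, the dry run (same match, just listing instead of deleting) would be:

find /var/lib/influxdb -name "*.tmp" -type f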


Thx.

I did this yesterday. Then I converted my indexes from TSM to TSI. I will check again today, but it already looks better.

Today I had the same error in syslog:

root@pi-server:/var/log# grep "Aborted compaction" syslog | wc -l
26111

Why does this happen? And what can be done to prevent it?

Try this process (see the command sketch after the list):
1. Stop the influxdb service.
2. Delete the .tmp files like @rawkode said earlier.
3. Run a chown -R on /var/lib/influxdb to make sure all the permissions are set correctly.
4. Restart the influxdb service.
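A rough sketch of those steps as shell commands, assuming a systemd-managed service and the packaged influxdb user/group (untested, so review before running):

# 1. Stop the influxdb service
sudo systemctl stop influxdb
# 2. List the leftover .tmp files first, then delete them
sudo find /var/lib/influxdb -name "*.tmp" -type f
sudo find /var/lib/influxdb -name "*.tmp" -type f -delete
# 3. Fix ownership on the data directory (assumes the package default influxdb:influxdb)
sudo chown -R influxdb:influxdb /var/lib/influxdb
# 4. Start the service again
sudo systemctl start influxdb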

If that doesn’t work, I would try rebuilding your tsi index. Make sure the service is stopped when you do this!
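If it helps, a rebuild on 1.7 looks roughly like this (untested here, so double-check the TSI rebuild steps in the 1.7 docs before deleting anything; only the index and _series directories are removed, never the .tsm files):

sudo systemctl stop influxdb
# Remove the per-database _series directories and the per-shard index directories
sudo find /var/lib/influxdb/data -maxdepth 2 -type d -name _series -prune -exec rm -rf {} +
sudo find /var/lib/influxdb/data -type d -name index -prune -exec rm -rf {} +
# Rebuild the TSI index from the existing TSM/WAL data, running as the influxdb user
sudo -u influxdb influx_inspect buildtsi -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal
sudo systemctl start influxdb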

I did this. But after 2-3 days the problem with "Aborted compaction" is back.

Why so often?!

I’m using it on a Raspberry Pi with an SQLite DB.

I know the problem now. It is the 32-bit problem: the process fails with "cannot allocate memory" during compaction.

My experience is that you need to pay close attention to your shard size on these SBCs due to their limited RAM. If you use too large a shard size, you will eventually end up in an OOM (Out Of Memory) loop while trying to compact shards.
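For what it’s worth, a quick way to see how big the shards actually are on disk (assuming the default data directory and the telegraf/autogen names from the log above):

du -sh /var/lib/influxdb/data/telegraf/autogen/* | sort -h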

dg


Can you give some tips here?

Here are the docs on it: Database management using InfluxQL | InfluxDB OSS 1.7 Documentation. But it all depends on how much data you are writing to the DB. If you are writing a lot of data, your shards will be large, so you’ll want a shorter shard duration.

It’s really a function of how long your retention policy is, and how much data is in each shard, based on the data ingestion rate.
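For example, checking the existing shards and shortening the shard group duration of the default retention policy could look like this with the influx CLI ("telegraf" and "autogen" are just the names from the log above; pick a duration that fits your own ingestion rate):

# Show the current shards and their time ranges
influx -execute 'SHOW SHARDS'
# Shorten the shard group duration; only newly created shards pick up the new value
influx -execute 'ALTER RETENTION POLICY "autogen" ON "telegraf" SHARD DURATION 1d'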

dg