InfluxDB 2 engine folder keeps growing despite having retention set

Hello!

We set up InfluxDB 2 about two years ago (originally 2.0, currently updated to 2.2) and use it to store iDRAC and Proxmox metrics. The VM hosting the engine has a 64 GB disk, and the engine folder currently takes up 43 GB. No new buckets or VMs were created in the past two weeks that would explain the extra usage.

There are several buckets, and all of them have a retention period set. I’ve already double checked: data does get deleted once it falls out of the set period.
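For the record, I verified the retention settings with the influx CLI (a quick sketch assuming the default host and an operator token; your connection details may differ):

# List all buckets and their retention periods
influx bucket list --host http://localhost:8086 --token $INFLUX_TOKEN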

We have Prometheus node-exporter set up on the VM. Every night the disk usage drops a bit, but not back to the same level: each dip ends up about 0.7% (roughly 500 MB) higher than the last one. The free space just keeps disappearing.

The VM runs CentOS, the log folder is less than 200 MB, and there are no other large folders on the VM besides the engine folder.
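In case it helps, this is how I’m checking where the space actually sits; the path is the default for a package install, so adjust it if yours differs:

# Split the engine folder into TSM data and WAL
du -sh /var/lib/influxdb2/engine/data /var/lib/influxdb2/engine/wal
# Per-bucket usage (each subfolder under data/ is named after a bucket ID)
du -sh /var/lib/influxdb2/engine/data/*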

Sadly, I have no way of telling when the growth started, since we only keep two weeks of VM monitoring data, but the pattern is clear within those two weeks.

Any idea where I’m leaking free space?

Hi,
I have the same issue.
Please try a full backup followed by a full restore; the size will drop dramatically.
It seems that InfluxDB has an issue with compression.
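Something along these lines should do it (just a sketch assuming a local instance and an operator token; adjust the host, path, and token for your setup):

# Full backup of all buckets and metadata
influx backup /path/to/backup --host http://localhost:8086 --token $INFLUX_TOKEN
# Full restore, replacing the existing data and metadata
influx restore /path/to/backup --full --host http://localhost:8086 --token $INFLUX_TOKEN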
Regards
Phil

Thanks for the tip, Phil. I will give it a go in the coming days.

Meanwhile, it occurred to me that we had no such problem with 2.0. At a 0.7% daily increase we should have hit the alerting threshold in about three months, yet that version ran for two years. Maybe something is acting up in 2.2, or something went wrong during the update.

Nonetheless, I will give the backup a go, and also update to 2.4 while I’m at it.
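For the update itself, the plan is roughly this (assuming the InfluxData yum repository is configured and the stock systemd unit; package and service names may differ on your install):

# Upgrade the influxdb2 package and restart the service
sudo yum update influxdb2
sudo systemctl restart influxdb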

Which version are you on? Did you also update an existing engine, or are you running a fresh install?

Hi, I am running the Docker version.
The version is 2.4.0:

InfluxDB v2.4.0
Server: de247ba
Frontend: a2bd1f3

Phil

The backup-restore method did not work. The backup itself succeeded and came to 24 GB, about half the size of the original data. I removed all buckets from the frontend, started the restore, and it restored all the data back to exactly the same size.

And in the process, it broke two of the three data feeders :smiley:

I will try the update to 2.4 and pray that it magically fixes my problem.

Spoke too soon, it might have done something after all. The restore brought everything back at the original size, but since then the usage has started to decline on every rollout, with the remaining data about 1.2% smaller each time.

Or it might have been the update. It’s getting more and more interesting :smiley: