I have been having this issue crop up often. I am running an older(ish) version of InfluxDB with compression enabled, but the size of my database is monstrous! Is there any way to set a policy that will delete content after a period of time? I'd rather not lose my data, but I need to get this manageable, and I have already expanded the partition this is on once before.
Look here: TIG Stack - Data Retention Policy
Thanks @LordOfTheSnow for answering this! Fully agree with Snow. @lastb0isct, check out retention policies in the docs:
Create retention policies with CREATE RETENTION POLICY
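For instance, in InfluxDB 1.x (InfluxQL) a policy that keeps only the last 30 days could look like the sketch below; "mydb", the policy name, and the duration are placeholders for your setup:

```sql
-- Keep only the last 30 days of data; older shards are dropped automatically.
CREATE RETENTION POLICY "thirty_days" ON "mydb" DURATION 30d REPLICATION 1 DEFAULT

-- If your database already uses the default "autogen" policy, shorten it instead:
ALTER RETENTION POLICY "autogen" ON "mydb" DURATION 30d DEFAULT
```

Note that shortening a policy means InfluxDB will start dropping shards older than the new duration, so make sure that is really what you want before running it.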
As the previous answers pointed out, changing the retention policy will probably solve your issue. But since you don't want to lose data, you may also want to have a look at downsampling, which allows you to keep old data by aggregating it. You lose precision, but you still have the data; see the sketch below.
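To make that concrete: in InfluxDB 1.x, downsampling is typically done with a continuous query that writes aggregates into a longer-lived retention policy. A minimal sketch, assuming a database "mydb" with a short-lived default policy, plus a second policy "a_year" and the measurement names below as placeholders:

```sql
-- Long-lived policy that will hold the downsampled data.
CREATE RETENTION POLICY "a_year" ON "mydb" DURATION 52w REPLICATION 1

-- Runs automatically at the GROUP BY interval (hourly here): rolls raw values
-- up into hourly means stored under "a_year", while the raw points age out
-- with the short default retention policy.
CREATE CONTINUOUS QUERY "cq_hourly_mean" ON "mydb"
BEGIN
  SELECT mean("value") AS "value"
  INTO "a_year"."downsampled_measurement"
  FROM "raw_measurement"
  GROUP BY time(1h), *
END
```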
To my mind, downsampling is one of the most useless functions in InfluxDB, as you can't downsample data in place in a bucket/database; you have to write into a different target bucket/database, which leaves you with two instances for the same kind of data.
It's rather disappointing that in-place downsampling is still not possible in InfluxDB 2+.
I have looked at data retention before, but I'm not sure my changes have stuck. I'm running InfluxDB in a Docker container. Is there anything different needed for retention policies in a Docker container?
No, there is no difference. InfluxDB doesn't know whether it runs in a container or not.
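One common reason changes seem to vanish, though, is recreating the container without a persistent volume: retention policies live in the metadata under /var/lib/influxdb, so if that path isn't mounted from the host, a recreated container starts fresh. You can verify whether your policy actually persisted with a quick check (a sketch, assuming InfluxDB 1.x and a database named "mydb"):

```sql
-- Run inside the influx CLI (e.g. `docker exec -it <container> influx`);
-- "mydb" is a placeholder. The output lists each policy with its duration
-- and whether it is the default.
SHOW RETENTION POLICIES ON "mydb"
```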