I have InfluxDB installed on a Raspberry Pi, and I have some Python scripts running that feed three databases with sensor data. Every day I run a backup script: a `cp` command in crontab that copies the changes in the volumes folder, including the InfluxDB data, to my NAS. Recently I discovered that the SD card of my Raspberry Pi is running out of space. I checked the data on my NAS: the InfluxDB data there is 27 GB, more than the 16 GB of the SD card.
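For context, the scripts write individual points roughly like this (a simplified sketch; the database, measurement and tag names are placeholders, not my real ones):

```python
from influxdb import InfluxDBClient  # InfluxDB 1.x Python client

# One client per database; 'sensors' is a placeholder database name
client = InfluxDBClient(host='localhost', port=8086, database='sensors')

def write_reading(temperature_c):
    # Every sensor reading is stored as an individual point
    point = {
        "measurement": "temperature",
        "tags": {"location": "living_room"},
        "fields": {"value": float(temperature_c)},
    }
    client.write_points([point])
```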
A `df` command showed that the SD card is completely full and there is no free space left. I read some InfluxDB docs and articles and decided to implement a retention policy, set up so that a substantial part of one of the databases should be removed. The retention policy was accepted and appears in the overview when I list the policies. Unfortunately, nothing seems to happen: I gave the RPi some time and restarted it, but the data on the SD card was not reduced, and Grafana shows that the data from before the retention period is still there.
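The policy was created roughly like this (the name and the duration are placeholders, and I show the equivalent call through the Python client rather than the exact command I ran):

```python
from influxdb import InfluxDBClient

client = InfluxDBClient(host='localhost', port=8086, database='sensors')

# Keep roughly the last 30 days on this single-node setup (replication factor 1).
# Equivalent InfluxQL:
#   CREATE RETENTION POLICY "one_month" ON "sensors" DURATION 30d REPLICATION 1 DEFAULT
client.create_retention_policy('one_month', '30d', '1', database='sensors', default=True)

# The new policy does show up in the overview
print(client.get_list_retention_policies('sensors'))
```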
I have some questions about this:
- Is the execution of the retention policy blocked by the lack of free space on the SD card?
- How can I reduce the amount of data in this situation? I intend to change my Python scripts to store aggregated data (see the sketch after this list), but I have to reduce the existing data first, as the development environment is not working.
- Why does the NAS end up storing more data than the content of the InfluxDB volume? And what is the right way to back up the InfluxDB data to a network drive?
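For reference, the aggregation I have in mind for the rewritten scripts is something like this (again a simplified sketch with placeholder names):

```python
import statistics

from influxdb import InfluxDBClient

client = InfluxDBClient(host='localhost', port=8086, database='sensors')

def write_hourly_average(readings):
    # Instead of writing every raw reading, write one averaged point per hour
    point = {
        "measurement": "temperature_hourly",
        "tags": {"location": "living_room"},
        "fields": {"mean": statistics.mean(readings), "count": len(readings)},
    }
    client.write_points([point])
```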