Running DB - cleanup

Hello, this is my first post on this forum. I need some help cleaning up my InfluxDB instance, which runs as an add-on in Home Assistant (HAOS).

It is v1 and has been running for around 5 years now, surviving many updates. The DB recently grew to 50 GB, and that is an issue (for backups etc.).

I am not a DBA in any form, but I am technical.

I started reading up on what can be done, and I did a few things.

  1. I changed from “log all” to “log specific”.

  2. Connected via InfluxDBStudio-0.2.0 and deleted many entities and some measurements. This freed up around 5 GB, so my DB is now 45 GB. (Yellow arrows mark the things I deleted; they are no longer there in this screenshot.)

  3. Tried asking several AI assistants for help, but none were reliable for me…

What I want to achieve is:
a. Clean up the DB readings for entities that I deleted. From what I understand, deleting entities does not necessarily delete their readings or measurements.

b. Set a retention policy where it is now “never”.
Picture in next post as I am new.

I asked my retention question here, but I think the HA forum does not have that many DBAs:
InfluxDB - Setup to Compress Data older than 6 Months / 2 Years - Third party integrations - Home Assistant Community

I really hope to find some help on this forum to clean up my DB…

Picture of current retention in InfluxDB:

This is what ChatGPT tells me. Is that correct?

Steps to Configure

1. Define Retention Policies

We need three RPs:

  1. autogen: Keeps raw data for the last 6 months.
  2. rp_6m_2y: Keeps 5-minute averages for data between 6 months and 2 years.
  3. rp_2y_inf: Keeps 15-minute averages for data older than 2 years.

Run the following commands in the InfluxDB CLI:

-- Retention Policy for data between 6 months and 2 years
CREATE RETENTION POLICY "rp_6m_2y" ON "home_assistant" DURATION 18M REPLICATION 1

-- Retention Policy for data older than 2 years
CREATE RETENTION POLICY "rp_2y_inf" ON "home_assistant" DURATION INF REPLICATION 1

The autogen policy remains active and retains raw data for up to 6 months.


2. Create Continuous Queries

Now set up Continuous Queries (CQs) to automatically compress data and store it in the appropriate Retention Policies.

Compression for data between 6 months and 2 years:
CREATE CONTINUOUS QUERY "cq_5m" ON "home_assistant"
BEGIN
  SELECT mean(value) AS value, mean(crit) AS crit, mean(warn) AS warn
  INTO "home_assistant"."rp_6m_2y".:MEASUREMENT
  FROM "home_assistant"."autogen"./.*/ 
  WHERE time > now() - 2y AND time <= now() - 6M
  GROUP BY time(5m), *
END
Compression for data older than 2 years:
CREATE CONTINUOUS QUERY "cq_15m" ON "home_assistant"
BEGIN
  SELECT mean(value) AS value, mean(crit) AS crit, mean(warn) AS warn
  INTO "home_assistant"."rp_2y_inf".:MEASUREMENT
  FROM "home_assistant"."autogen"./.*/ 
  WHERE time <= now() - 2y
  GROUP BY time(15m), *
END

3. Verify the Configuration

After creating the RPs and CQs, verify everything is set up correctly:

  • To check your RPs:
SHOW RETENTION POLICIES ON "home_assistant"
  • To list your CQs:
SHOW CONTINUOUS QUERIES

Summary

  1. Raw data (last 6 months): Stored in autogen.
  2. Compressed data (5-minute intervals, 6 months to 2 years): Stored in rp_6m_2y.
  3. Compressed data (15-minute intervals, older than 2 years): Stored in rp_2y_inf.

I’d say ChatGPT got close…

I suggest you have a look at the docs regardless of what ChatGPT tells you.

a. Clean up the DB readings for entities that I deleted. From what I understand, deleting entities does not necessarily delete their readings or measurements.

I’m not sure what you mean by that: if you delete something, it actually gets deleted… the cleanest delete is dropping a whole measurement, which is quite extreme… but you can also delete data points by filtering on specific tags and time (useful for anomalies)
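
A couple of sketches (the measurement and tag names here are invented, adjust them to your schema; as far as I know the HA InfluxDB integration stores the entity id in an entity_id tag):

-- drop an entire measurement (extreme: removes all of its data)
DROP MEASUREMENT "kWh"

-- delete only the points of one entity before a given date
-- (DELETE accepts filters on tags and time, but not on fields)
DELETE FROM "kWh" WHERE "entity_id" = 'old_sensor' AND time < '2023-01-01'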

b. Set a retention policy where it is now “never”.

That’s just a simple command, which can look like this:

ALTER RETENTION POLICY "_RP_" ON "_DB_" DURATION 180d SHARD DURATION 1d DEFAULT

Every so often InfluxDB will enforce it and delete the data outside the retention window. If you want to keep the oldest data, do not run this before moving/aggregating that data into a new RP (and that is done via a one-shot, manually run query, not via CQs).
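
Such a one-shot backfill can look like this sketch (the rp_1 name, the value field and the 180d/5m values are just examples matching the plan discussed here):

-- manually aggregate the old raw data into the longer-lived RP, run once
SELECT mean(value) AS value
INTO "home_assistant"."rp_1".:MEASUREMENT
FROM "home_assistant"."autogen"./.*/
WHERE time < now() - 180d
GROUP BY time(5m), *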

To CREATE new RPs the command is almost the same. I would discourage names like rp_2y_inf: putting the duration and aggregation frequency in the name itself will make no sense if tomorrow you decide to change them (but that’s a personal consideration).
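
For example, a sketch with a deliberately neutral name:

CREATE RETENTION POLICY "mid_res" ON "home_assistant" DURATION 78w REPLICATION 1 SHARD DURATION 1w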

About the CQs, I doubt the one you posted will work for several reasons:

  • 6M is not a valid duration literal
  • it filters data that is already outside the retention window, or at best borderline inside it
  • leaving the previous point aside, every time the query runs it will aggregate ALL the data and update the already existing “snapshots”… that is not needed and is also bad performance-wise
  • generally speaking, CQs do not work that way. By default you generate the aggregated data immediately: practically, if I run a CQ every 5m, it creates the aggregate of the last 5m of data (meaning I’ll have the “current” data in 2 RPs, with different precision). You can use a wider window (CQ advanced syntax, see the sketch below this list) to re-generate more than one interval, which is useful if your data are subject to delays
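
A minimal sketch of that advanced syntax (the names and intervals are arbitrary examples, and I am assuming a value field as in the CQs you posted):

-- runs every 5 minutes, but re-computes the 5m intervals of the last 30 minutes
CREATE CONTINUOUS QUERY "cq_example" ON "home_assistant"
RESAMPLE EVERY 5m FOR 30m
BEGIN
  SELECT mean(value) AS value
  INTO "home_assistant"."mid_res".:MEASUREMENT
  FROM "home_assistant"."autogen"./.*/
  GROUP BY time(5m), *
END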

Have a look at the docs linked above to have a better understanding of the possibilities and behavior of CQs

Hi, thanks for taking the time to react and respond. Much appreciated!

I have read the docs, but I am unsure of the correct order and exact syntax of the commands.

I hoped there was a simple set of step-by-step instructions. Now I am still very unsure what to do with those RPs. I have 1 now… what do I need to do, and where, to get 3 and have data automatically moved over? And what does the RP syntax look like? And where do I apply it?

There is a step-by-step example of the whole downsampling process in the third link above; have a look there.

Ok, I am going in circles here. I have read the docs and conceptually I get them.

But…

As described in the 1st post, I am not a DBA. I have some guts, though, but since this might potentially break my entire DB (perhaps only discovered later), I asked for help on the actual steps and commands.

So my understanding of the docs is twofold:

  1. create the Retention Policies (RPs).
  2. create the Continuous Queries (CQ).

For 2, I think I read that if these are not there, data is not automatically moved to another RP, and even with 3 RPs all data will end up in the autogen RP and stay there forever (correct?).
Data will only be moved to another RP if there are proper CQs (correct?).

Let’s first focus on step 1 then: creating the 3 RPs as I proposed.

STEP 1

1. Define Retention Policies

Three RPs:

  1. autogen: Keeps raw data for the last 6 months.
  2. rp_1: Keeps 5-minute averages for data between 6 months and 2 years.
  3. rp_2: Keeps 15-minute averages for data older than 2 years.

Run the following commands in the InfluxDB CLI:

-- Retention Policy for data between 6 months and 2 years
CREATE RETENTION POLICY "rp_1" ON "home_assistant" DURATION 18M REPLICATION 1

-- Retention Policy for data older than 2 years
CREATE RETENTION POLICY "rp_2" ON "home_assistant" DURATION INF REPLICATION 1

The autogen policy remains active and retains raw data for up to 6 months.

Since I run Home Assistant as an OS and have InfluxDB as an add-on (docker container), it is pretty exciting to get to the CLI (and prone to error) - for reference: How to access InfluxDB cli form Hassio - Installation / Home Assistant OS - Home Assistant Community

So my question is: Can I add the 2 new RPs like here:

And do I need to change the autogen RP then?

What is the correct format to enter the above RPs there?

See next post for additional picture.

I can add a new one (the 2 new RPs) and edit the existing one:

I had to fight a little with ChatGPT to specify my question, but then I got this answer.

Is that correct for STEP 1???

Here’s how to configure retention policies in the InfluxDB Admin GUI, where there is a button for “Add Retention Policy” and fields for “Retention Policy” and “Duration”:


Instructions for the GUI:

1. For autogen (raw data for the last 6 months):

  • Retention Policy: autogen
    (This is pre-filled when editing the default autogen policy.)
  • Duration: 26w
    (26 weeks = 6 months.)

Click Save or Apply after entering the values.


2. For rp_1 (5-minute averages for data between 6 months and 2 years):

  • Click the Add Retention Policy button.
  • Retention Policy: rp_1
    (Provide a unique name.)
  • Duration: 78w
    (78 weeks ≈ 18 months, i.e. the window between 6 months and 2 years.)

Click Save or Apply.


3. For rp_2 (15-minute averages for data older than 2 years):

  • Click the Add Retention Policy button again.
  • Retention Policy: rp_2
  • Duration: INF
    (This means infinite duration, keeping the data forever.)

Click Save or Apply.
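
For my own reference, I believe the CLI equivalents of these three GUI steps would be something like this (please correct me if I got the syntax wrong):

ALTER RETENTION POLICY "autogen" ON "home_assistant" DURATION 26w DEFAULT
CREATE RETENTION POLICY "rp_1" ON "home_assistant" DURATION 78w REPLICATION 1
CREATE RETENTION POLICY "rp_2" ON "home_assistant" DURATION INF REPLICATION 1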

To anticipate STEP 2:

if the above 3 new RPs are in place, would I only have to enter the following InfluxQL commands? Are they correct?

Set Up Continuous Queries (CQs)

Continuous Queries (CQs) in InfluxDB are used to automatically aggregate data from one retention policy (RP) into another. This ensures your existing raw data in the autogen RP is processed and stored appropriately in the new RPs.

a) For rp_1 (5-minute averages for data between 6 months and 2 years):

This CQ will calculate the average of all measurements every 5 minutes and write the results into the rp_1 retention policy.

CREATE CONTINUOUS QUERY "cq_rp1" ON "home_assistant"
BEGIN
  SELECT mean(*) INTO "home_assistant"."rp_1".:MEASUREMENT
  FROM "home_assistant"."autogen"./.*/ 
  GROUP BY time(5m), *
END
  • What this does:
    • Takes all measurements (.*) from the autogen RP.
    • Calculates the 5-minute averages (mean(*)).
    • Writes the results into the corresponding measurements in rp_1.

b) For rp_2 (15-minute averages for data older than 2 years):

This CQ will take 5-minute averages from rp_1 and further aggregate them into 15-minute averages for rp_2.

CREATE CONTINUOUS QUERY "cq_rp2" ON "home_assistant"
BEGIN
  SELECT mean(*) INTO "home_assistant"."rp_2".:MEASUREMENT
  FROM "home_assistant"."rp_1"./.*/ 
  GROUP BY time(15m), *
END

What this does:

  • Takes data already processed into 5-minute averages in rp_1.
  • Aggregates this data into 15-minute averages.
  • Stores the results in rp_2.

In here?: