InfluxDB reset after Pod is killed

I am seeing really strange behaviour with InfluxDB version 2.0.9 and also with 2.1.1.
My InfluxDB is running on Kubernetes as a StatefulSet. The folder /root/.influxdbv2 is mounted on a PVC for persistence.
Only one Pod is running in Kubernetes right now.

I can reproduce the following steps every time:

  • Set up InfluxDB with an admin user
  • Write some data to it; everything is fine
  • Delete the Pod; it is then created again
  • When InfluxDB is back up and running, it is in its initial state. I have to set up a user again, and after I have done this, the data is also gone
  • If I check the /root/.influxdbv2 folder on the newly created Pod, all the data seems to be there. For example, it still contains 10 GB of data.

So to summarize: if I restart/delete my InfluxDB Pod, the newly created Pod behaves like a fresh, empty instance, even though the data in the folder /root/.influxdbv2 still seems to be there.
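One thing worth checking (a sketch, not verified against this setup): InfluxDB 2.x keeps users, organizations, and tokens in a BoltDB metadata file (influxd.bolt, by default next to the engine directory under ~/.influxdbv2), so if the server starts with a bolt path outside the mounted volume, it comes up looking empty even though the TSM data is present. Assuming the Pod is named influxdb-0 (a hypothetical name):

```shell
# Exec into the recreated Pod (the pod name "influxdb-0" is an assumption).
kubectl exec -it influxdb-0 -- sh -c '
  # Show where influxd believes its metadata and engine live.
  # "influxd print-config" exists in 2.0.x; on newer versions the
  # equivalent settings are listed by "influxd server --help".
  influxd print-config | grep -E "bolt-path|engine-path"
  # Compare with what is actually on the mounted volume.
  ls -la /root/.influxdbv2
'
```

If bolt-path points somewhere outside /root/.influxdbv2, that path also needs to be on the PVC.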

Is there any other folder I need to persist to prevent that from happening?

Thank you,
Best regards,

This is a bit outside the scope of this forum, as it is more container/Kubernetes oriented, but anyway: if you can paste your container config and your InfluxDB config inside a code block (use the </> button in the toolbar), I'll try to help.
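For reference, a minimal StatefulSet sketch of the kind of config being asked for, using hypothetical names (influxdb, influxdb-data); the important part is that /root/.influxdbv2 is backed by a volumeClaimTemplate rather than the container's ephemeral filesystem:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: influxdb              # hypothetical name
spec:
  serviceName: influxdb
  replicas: 1
  selector:
    matchLabels:
      app: influxdb
  template:
    metadata:
      labels:
        app: influxdb
    spec:
      containers:
        - name: influxdb
          image: influxdb:2.1.1
          volumeMounts:
            - name: influxdb-data
              mountPath: /root/.influxdbv2   # default data dir in the official image
  volumeClaimTemplates:
    - metadata:
        name: influxdb-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi    # sized for the ~10 GB of data mentioned above
```

If your StatefulSet differs from this shape (for example, mounting a subPath or overriding INFLUXD_BOLT_PATH), that difference is the first thing to look at.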