InfluxDB _internal overflows with Kubernetes liveness probe

influxdb
#1

I have an instance of InfluxDB (v1.3.7) running on Kubernetes on Azure Cloud.

The storage is an Azure File share (one of Microsoft's cloud storage offerings) mounted via SMB at /var/lib/influxdb.
Kubernetes also provides so-called liveness and readiness probes, which periodically check InfluxDB. My probes are defined as follows:

    livenessProbe:
      httpGet:
        path: /ping
        port: 8086
      initialDelaySeconds: 90
      timeoutSeconds: 5
      periodSeconds: 15
    readinessProbe:
      httpGet:
        path: /ping
        port: api
      initialDelaySeconds: 60
      periodSeconds: 15
      timeoutSeconds: 5

As you can see, the /ping API gets called quite often. Initially I had set periodSeconds to 1 second. InfluxDB's memory (RAM) usage grew to several GBs within a short time, and InfluxDB finally crashed. After I purged the contents of /var/lib/influxdb it started up again.
Now that I have set the interval to 15 seconds, the situation has relaxed. But the storage usage under _internal still grows constantly.

(screenshot: _internal storage usage growing over time)

After running this for one day I can see that sooner or later I will run into trouble with that interval as well. Is there a way to ignore calls to the /ping API instead of disabling the whole [monitor] module?
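For reference, this is roughly what the [monitor] section in influxdb.conf looks like on InfluxDB 1.x — disabling store-enabled stops the internal statistics from being written to _internal entirely, which is the all-or-nothing switch I was hoping to avoid (values shown are the defaults as I understand them):

```toml
[monitor]
  # Write internal runtime statistics to a database.
  # Setting this to false stops _internal from growing, but
  # disables all internal monitoring, not just /ping-related stats.
  store-enabled = true

  # The database the statistics are written to.
  store-database = "_internal"

  # How often statistics snapshots are recorded.
  # Raising this reduces the write volume into _internal.
  store-interval = "10s"
```

An alternative I considered is shortening the retention policy on the _internal database so old statistics are expired faster, rather than turning monitoring off.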

#2

In case anybody else runs into this:

It is about Azure Files (CIFS driver) as data storage for InfluxDB in the cloud.

It seems that the CIFS settings predefined by Azure are incompatible with, or insufficient for, running a database on top of them. I simply could not get any configuration working properly with Azure Files; InfluxDB always broke down after a certain amount of time.

My problem had nothing to do with the Kubernetes liveness probe. I now use Managed Disks as the database storage back-end instead. Keep away from Azure Files in combination with InfluxDB.
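For anyone wanting to reproduce the Managed Disks setup, here is a minimal sketch of a StorageClass and PersistentVolumeClaim using the in-tree Azure Disk provisioner (the names and size are my own choices — adjust them to your cluster):

```yaml
# StorageClass backed by Azure Managed Disks.
# "kubernetes.io/azure-disk" is the in-tree provisioner; newer
# clusters may use the CSI driver "disk.csi.azure.com" instead.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium        # name chosen for this example
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
---
# Claim that the InfluxDB pod mounts at /var/lib/influxdb.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb-data          # name chosen for this example
spec:
  accessModes:
    - ReadWriteOnce            # Managed Disks attach to a single node
  storageClassName: managed-premium
  resources:
    requests:
      storage: 50Gi            # example size, adjust as needed
```

Note that a Managed Disk, unlike an Azure File share, can only be attached to one node at a time (ReadWriteOnce), which is fine for a single-instance InfluxDB.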