InfluxDBv2 session and data handling

I have InfluxDB version 2.7.11 hosted in an AWS EKS pod, with persistent storage on EFS, and accessed externally through a LoadBalancer service.
My deployment.yaml is:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: influxdb-test
  name: intel-influxdb-test
  namespace: test-env
spec:
  replicas: 1
  selector:
    matchLabels:
      app: influxdb-test
  template:
    metadata:
      labels:
        app: influxdb-test
    spec:
      containers:
        - image: influxdb:latest
          name: influxdb-test
          resources:
            requests:
              memory: "1200Mi"
              cpu: "800m"
            limits:
              memory: "2000Mi"
              cpu: "1200m"
          ports:
            - containerPort: 8086
              name: http-influxdb
              protocol: TCP
          env:
            - name: DOCKER_INFLUXDB_DATA_DIR
              value: /var/lib/influxdb2/engine/data
            - name: DOCKER_INFLUXDB_INIT_MODE
              value: setup
            - name: DOCKER_INFLUXDB_INIT_USERNAME
              value: intel
            - name: DOCKER_INFLUXDB_INIT_PASSWORD
              value: "1234"
            - name: DOCKER_INFLUXDB_INIT_ORG
              value: intel
            - name: DOCKER_INFLUXDB_INIT_BUCKET
              value: intel-bucket
            - name: INFLUXDB_HTTP_LOG_ENABLED
              value: "true"
            - name: INFLUXDB_HTTP_WRITE_TRACING
              value: "true"
            - name: INFLUXDB_DATA_ENGINE_QUERY_LOGGING_ENABLED
              value: "true"
            - name: INFLUXDB_DATA_QUERY_LOG_ENABLED
              value: "true"
            - name: DOCKER_INFLUXDB_DATA_DELETE_DATA
              value: "never"
            - name: INFLUX_TOKEN
              valueFrom:
                secretKeyRef:
                  name: influxdb-static-token-test
                  key: token
          volumeMounts:
            - mountPath: /var/lib/influxdb2/engine/data
              subPath: data
              name: efs-influxdb-test
            - mountPath: /var/lib/influxdb2/engine/meta
              name: efs-influxdb-test
              subPath: meta
            - mountPath: /etc/influxdb/
              name: intel-influxdb-config-test
              readOnly: true
            - name: intel-influxdb-secrets-test
              mountPath: /app/influxdb
              readOnly: true
          envFrom:
            - secretRef:
                name: intel-influxdb-secrets-test
            - configMapRef:
                name: intel-influxdb-config-test
      volumes:
        - name: intel-influxdb-secrets-test
          secret:
            secretName: intel-influxdb-secrets-test
        - name: intel-influxdb-config-test
          configMap:
            name: intel-influxdb-config-test
        - name: efs-influxdb-test
          persistentVolumeClaim:
            claimName: efs-influxdb-pvc-test

My service.yaml is:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: influxdb-test
  name: intel-influxdb-test
  namespace: test-env
spec:
  type: LoadBalancer
  ports:
    - port: 8086
      targetPort: http-influxdb
      nodePort: 3****
  selector:
    app: influxdb-test
  sessionAffinity: None

My influxdb.conf is:

reporting-disabled = true
bind-address = "0.0.0.0:8086"
storage-directory = "/var/lib/influxdb2/engine/data"
enabled = true
auth-enabled = true
log-enabled = true
write-tracing = false
pprof-enabled = false
pprof-auth-enabled = false
debug-pprof-enabled = false
ping-auth-enabled = true
wal-flush-interval = 50s
compaction-interval = 2h
write-buffer-size = 2000M

[data]

dir = "/var/lib/influxdb2/engine/data"
wal-dir = "/var/lib/influxdb2/engine/wal"
query-log-enabled = false
trace-logging-enabled = false

[meta]

dir = "/var/lib/influxdb2/engine/meta"
retention-autocreate = true
logging-enabled = true

[http]

flux-enabled = true
base-path = "/influx"

[query]

cache-enabled = true
cache-size = 1000000000

My PersistentVolumeClaim and PersistentVolume are:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-influxdb-pvc-test
  namespace: test-env
spec:
  accessModes:
    - ReadWriteMany # Ensure it is ReadWriteMany for EFS
  resources:
    requests:
      storage: 10Gi
  storageClassName: efs-sc-test

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-influxdb-pv-test
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany # Change from ReadWriteOnce to ReadWriteMany
  storageClassName: efs-sc-test
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-**** # Your EFS volume ID

I have also set up Horizontal Pod Autoscaling with CPU and memory utilization thresholds of 75%, so scaling takes place once that limit is reached.
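
I have not pasted the HPA manifest here, but it is roughly like the sketch below (the HPA name and maxReplicas value are illustrative; the 75% thresholds are the ones mentioned above):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: intel-influxdb-hpa-test   # illustrative name, the real one may differ
  namespace: test-env
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: intel-influxdb-test
  minReplicas: 1
  maxReplicas: 3                   # illustrative value
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 75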

Here is the list of problems I face:

  1. When I access InfluxDB through the load balancer and the load increases, new pods get created, but the load is not distributed across them: even with 2/3 pods running continuously, a single pod takes the full load and eventually crashes. Once scaling starts, I get "session expired" errors, and in the browser developer tools (Application tab) I can see a cookie named "influxdb-oss-session".
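
     One thing I considered for the "session expired" part of this problem is pinning each client to a single pod with client-IP session affinity on the Service. A minimal sketch of that change (under spec of the Service above) is shown here; I have not applied it yet and am not sure it is the right fix:

        sessionAffinity: ClientIP
        sessionAffinityConfig:
          clientIP:
            timeoutSeconds: 10800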

  2. When I restart/redeploy the InfluxDB pod, the existing data is no longer shown in InfluxDB, even though it is still present under the pod's volume mount path "/var/lib/influxdb2/engine/data". While checking other topics on this, it was suggested to add an environment variable pointing to the influxd.bolt file path, so I added:

        - name: INFLUXD_BOLT_PATH
          value: /var/lib/influxdb2/engine/data/influxd.bolt
    

Now I am able to view the data after a restart/redeploy, but during scaling the new pods go into CrashLoopBackOff. The pod logs show:

2024-12-09T10:38:46.192426574Z info found existing boltdb file, skipping setup wrapper {"system": "docker", "bolt_path": "/var/lib/influxdb2/engine/data/influxd.bolt"}
2024-12-09T10:38:46.284257866Z info found existing boltdb file, skipping setup wrapper {"system": "docker", "bolt_path": "/var/lib/influxdb2/engine/data/influxd.bolt"}
ts=2024-12-09T10:38:46.422162Z lvl=info msg="Welcome to InfluxDB" log_id=*** version=v2.7.11 commit=fbf5d4ab5e build_date=2024-12-02T17:48:15Z log_level=info
ts=2024-12-09T10:38:46.422212Z lvl=warn msg="nats-port argument is deprecated and unused" log_id=***
Error: unable to open boltdb: timeout
See 'influxd -h' for help
ts=2024-12-09T10:38:47.394498Z lvl=error msg="Failed opening bolt" log_id=*** error="unable to open boltdb: timeout"

Please help me resolve these issues.