Chronograf connections from .src and .kap files not working

I am trying to automate deployment of the entire TICK stack, and I am having trouble getting Chronograf to connect automatically to InfluxDB and Kapacitor. I am running in Kubernetes and have tried configuring the connections both with environment variables in my pod definition and with .src and .kap files containing the connection info. Neither approach works.

For the .src and .kap files, I have verified that the RESOURCE_PATH variable is set on the pod, that the files are present at that location, and that they contain the correct data for both my InfluxDB pod and my Kapacitor pod.

The logs for the Chronograf pod give no indication that it is even trying to read these files.

Any help would be greatly appreciated.

Anyone able to help with this?

Thanks.

Hi Patrick,

Can you please share your deployment yaml?

Here is the deployment.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "chronograf.fullname" . }}
  labels:
    app: {{ template "chronograf.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  replicas: {{ .Values.service.replicas }}
  template:
    metadata:
      labels:
        app: {{ template "chronograf.fullname" . }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        {{- if .Values.oauth.enabled }}
        env:
        - name: TOKEN_SECRET
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: token_secret
        {{- if .Values.oauth.github.enabled }}
        - name: GH_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: gh_client_id
        - name: GH_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: gh_client_secret
        - name: GH_ORGS
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: gh_orgs
        {{- end }}
        {{- if .Values.oauth.heroku.enabled }}
        - name: HEROKU_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: he_client_id
        - name: HEROKU_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: he_client_secret
        - name: HEROKU_ORGS
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: he_orgs
        {{- end }}
        {{- if .Values.oauth.google.enabled }}
        - name: GOOGLE_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: go_client_id
        - name: GOOGLE_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: go_client_secret
        - name: GOOGLE_DOMAINS
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: go_domains
        - name: PUBLIC_URL
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: go_public_url
        {{- end }}
        {{- if .Values.oauth.auth0.enabled }}
        - name: AUTH0_DOMAIN
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: auth0_domain
        - name: AUTH0_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: auth0_client_id
        - name: AUTH0_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: auth0_client_secret
        - name: PUBLIC_URL
          valueFrom:
            secretKeyRef:
              name: {{ template "chronograf.fullname" . }}
              key: auth0_public_url
        {{- end }}
        {{- end }}
        ports:
        - containerPort: 8888
          name: api
        livenessProbe:
          httpGet:
            path: /ping
            port: api
        readinessProbe:
          httpGet:
            path: /ping
            port: api
        volumeMounts:
        - name: data
          mountPath: /var/lib/chronograf
        - name: config
          mountPath: /usr/share/chronograf/resources
        resources:
{{ toYaml .Values.resources | indent 10 }}
      volumes:
      - name: config
        configMap:
          name: {{ template "chronograf.fullname" . }}
      - name: data
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          claimName: {{ template "chronograf.fullname" . }}
      {{- else }}
        emptyDir: {}
      {{- end }}
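One thing worth flagging: this template mounts the resources ConfigMap at /usr/share/chronograf/resources but does not itself set RESOURCE_PATH (it is set on the pod separately, as mentioned above). If it were not set elsewhere, a fragment like this sketch (mount path taken from the volumeMounts above) would make it explicit alongside the other env entries:

```yaml
# Sketch only: point Chronograf at the mounted ConfigMap directory
# so it looks for .src/.kap files there.
env:
  - name: RESOURCE_PATH
    value: /usr/share/chronograf/resources
```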

Also, here is the configmap.yaml that is referenced in the volumes:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "chronograf.fullname" . }}
  labels:
    app: {{ template "chronograf.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
data:
  chronograf.src: |+
    {
      "id": {{ .Values.config.influxdb.id | quote }},
      "name": {{ .Values.config.influxdb.name | quote }},
      "username": {{ .Values.global.influxdb_username | quote }},
      "password": {{ .Values.global.influxdb_password | quote }},
      "url": "https://{{ template "influxdb.fullname" . }}.{{ .Release.Namespace }}:8086",
      "type": {{ .Values.config.influxdb.type | quote }},
      "insecureSkipVerify": {{ .Values.config.influxdb.insecureSkipVerify }},
      "default": {{ .Values.config.influxdb.default }},
      "telegraf": {{ .Values.config.influxdb.telegraf | quote }},
      "organization": {{ .Values.config.influxdb.organization | quote }}
    }
  chronograf.kap: |+
    {
      "id": {{ .Values.config.kapacitor.id | quote }},
      "srcID": {{ .Values.config.kapacitor.srcID | quote }},
      "name": {{ .Values.config.kapacitor.name | quote }},
      "url": "https://{{ template "kapacitor.fullname" . }}.{{ .Release.Namespace }}:9092",
      "active": {{ .Values.config.kapacitor.active }},
      "organization": {{ .Values.config.kapacitor.organization | quote }}
    }

You can see from inside the pod that the files show up in the expected location:

/ # ls -al /usr/share/chronograf/resources
total 12
drwxrwxrwx 3 root root 4096 Oct 2 20:17 .
drwxr-xr-x 1 root root 4096 Oct 2 20:17 ..
drwxr-xr-x 2 root root 4096 Oct 2 20:17 ..2019_10_02_20_17_12.067645013
lrwxrwxrwx 1 root root 31 Oct 2 20:17 ..data -> ..2019_10_02_20_17_12.067645013
lrwxrwxrwx 1 root root 21 Oct 2 20:17 chronograf.kap -> ..data/chronograf.kap
lrwxrwxrwx 1 root root 21 Oct 2 20:17 chronograf.src -> ..data/chronograf.src

And the contents are there:

/ # cat /usr/share/chronograf/resources/chronograf.src
{
  "id": "10000",
  "name": "Splice Influx",
  "username": "splice_admin",
  "password": "********",
  "url": "https://splice-tick-influxdb.default:8086",
  "type": "influx",
  "insecureSkipVerify": true,
  "default": true,
  "telegraf": "telegraf.autogen",
  "organization": "Default"
}
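One sanity check I ran along the way: Chronograf expects these files to be strict JSON, so it is worth confirming the rendered files actually parse. A quick sketch (using `python3 -m json.tool` as the validator, which is an assumption; `jq` would work equally well), with an inline sample standing in for the real file contents:

```shell
# Sketch: verify a rendered .src/.kap file is strict JSON.
# Inside the pod you would pipe in the real file instead, e.g.
#   python3 -m json.tool < /usr/share/chronograf/resources/chronograf.src
sample='{"id": "10000", "name": "Splice Influx", "default": true}'
if printf '%s' "$sample" | python3 -m json.tool > /dev/null 2>&1; then
  echo "valid JSON"
else
  echo "INVALID JSON"
fi
```

Curly "smart" quotes (the kind word processors and some forum renderers substitute for straight quotes) would fail this check, which is one easy way a generated file can look correct at a glance but not parse.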

I had an idea that the issue might be that the Chronograf pod was starting before the InfluxDB pod was up and running. I added an initContainer to the deployment.yaml so it would wait until InfluxDB responds, but that did not solve the problem.
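The initContainer was roughly this sketch (the busybox image and the hard-coded service address are specific to my setup; adjust to your chart values):

```yaml
# Sketch: block Chronograf startup until the InfluxDB service port accepts
# connections. A TCP check only; it does not verify InfluxDB is fully ready.
initContainers:
  - name: wait-for-influxdb
    image: busybox:1.31
    command:
      - sh
      - -c
      - >
        until nc -z splice-tick-influxdb.default 8086;
        do echo waiting for influxdb; sleep 2; done
```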

Bumping this up to see if I can get a response. This is still an issue for us.