Telegraf kube_inventory only gathering the kubernetes_node measurement

Hi @everyone.

We are trying to collect data with the kube_inventory plugin, but we are only getting one measurement (kubernetes_node).

I’ve deployed Telegraf as a Deployment in the “monitoring” namespace, accessing the kubernetes.default.svc API as described in (Access Clusters Using the Kubernetes API | Kubernetes).
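
For context, the Deployment itself looks roughly like this (a minimal sketch with assumed names; the image tag, the ServiceAccount name, and the Secret feeding the MONITOR_* variables are guesses, since I’m not pasting the full manifest here):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: telegraf-ki
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: telegraf-ki
  template:
    metadata:
      labels:
        k8s-app: telegraf-ki
    spec:
      serviceAccountName: telegraf-ki      # assumed ServiceAccount name
      containers:
      - name: telegraf
        image: telegraf:1.11.0             # matches the version shown in the log below
        env:
        - name: ENV
          value: "clusterA"
        envFrom:
        - secretRef:
            name: telegraf-ki-env          # assumed Secret holding the MONITOR_* variables
        volumeMounts:
        - name: config
          mountPath: /etc/telegraf
      volumes:
      - name: config
        configMap:
          name: telegraf-ki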

This is my ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: telegraf-ki
  namespace: monitoring
  labels:
    k8s-app: telegraf-ki
data:
  telegraf.conf: |+
    [global_tags]
      env = "$ENV"
    [agent]
      interval = "60s"
      round_interval = true
      metric_batch_size = 1000
      metric_buffer_limit = 10000
      collection_jitter = "0s"
      flush_interval = "10s"
      flush_jitter = "0s"
      precision = ""
      debug = true
      quiet = false
      logfile = ""
      omit_hostname = true

    [[outputs.influxdb]]
      urls = ["$MONITOR_HOST"] 
      database = "$MONITOR_DATABASE" 
      timeout = "5s"
      username = "$MONITOR_USERNAME"
      password = "$MONITOR_PASSWORD"
      user_agent = "telegraf_ki_k8s"
    [[outputs.file]]
      files = ["stdout"] 
      data_format = "influx"

    [[inputs.kube_inventory]]
      url = "https://kubernetes.default.svc"
      bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
      insecure_skip_verify = true

And this is my output log on stdout, where you can see one measurement and no errors, but nothing about:

  • daemonsets
  • deployments
  • persistentvolumes
  • persistentvolumeclaims
  • pods (containers)
  • statefulsets

all of which are listed in telegraf/plugins/inputs/kube_inventory at master · influxdata/telegraf · GitHub.

2019-06-21T06:53:55Z I! Starting Telegraf 1.11.0
2019-06-21T06:53:55Z I! Using config file: /etc/telegraf/telegraf.conf
2019-06-21T06:53:55Z I! Loaded inputs: kube_inventory
2019-06-21T06:53:55Z I! Loaded aggregators: 
2019-06-21T06:53:55Z I! Loaded processors: 
2019-06-21T06:53:55Z I! Loaded outputs: influxdb file
2019-06-21T06:53:55Z I! Tags enabled: env=clusterA
2019-06-21T06:53:55Z I! [agent] Config: Interval:1m0s, Quiet:false, Hostname:"", Flush Interval:10s
2019-06-21T06:53:55Z D! [agent] Connecting outputs
2019-06-21T06:53:55Z D! [agent] Attempting connection to output: influxdb
2019-06-21T06:53:55Z W! [outputs.influxdb] when writing to [http://10.0.2.4:9096]: database "caas_metrics2" creation failed: 400 Bad Request
2019-06-21T06:53:55Z D! [agent] Successfully connected to output: influxdb
2019-06-21T06:53:55Z D! [agent] Attempting connection to output: file
2019-06-21T06:53:55Z D! [agent] Successfully connected to output: file
2019-06-21T06:53:55Z D! [agent] Starting service inputs
2019-06-21T06:54:10Z D! [outputs.file] wrote batch of 1 metrics in 127.495µs
2019-06-21T06:54:10Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
kubernetes_node,env=clusterA,node_name=kubetst01 capacity_memory_bytes=6133161984i,capacity_pods=110i,capacity_cpu_cores=1i,allocatable_memory_bytes=6028304384i,allocatable_pods=110i,allocatable_cpu_cores=1i 1561100040000000000
2019-06-21T06:54:10Z D! [outputs.influxdb] wrote batch of 1 metrics in 55.832466ms
2019-06-21T06:54:10Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:54:20Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:54:20Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:54:30Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:54:30Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:54:40Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:54:40Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:54:50Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:54:50Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:55:00Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:55:00Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
kubernetes_node,env=clusterA,node_name=kubetst01 allocatable_pods=110i,capacity_cpu_cores=1i,capacity_memory_bytes=6133161984i,capacity_pods=110i,allocatable_cpu_cores=1i,allocatable_memory_bytes=6028304384i 1561100100000000000
2019-06-21T06:55:10Z D! [outputs.file] wrote batch of 1 metrics in 136.117µs
2019-06-21T06:55:10Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:55:10Z D! [outputs.influxdb] wrote batch of 1 metrics in 151.65821ms
2019-06-21T06:55:10Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:55:20Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:55:20Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:55:30Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:55:30Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:55:40Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:55:40Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:55:50Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:55:50Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:56:00Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:56:00Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
kubernetes_node,env=clusterA,node_name=kubetst01 allocatable_cpu_cores=1i,allocatable_memory_bytes=6028304384i,allocatable_pods=110i,capacity_cpu_cores=1i,capacity_memory_bytes=6133161984i,capacity_pods=110i 1561100160000000000
2019-06-21T06:56:10Z D! [outputs.file] wrote batch of 1 metrics in 72.724µs
2019-06-21T06:56:10Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:56:10Z D! [outputs.influxdb] wrote batch of 1 metrics in 36.22681ms
2019-06-21T06:56:10Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:56:20Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:56:20Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:56:30Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:56:30Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:56:40Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:56:40Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:56:50Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:56:50Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:57:00Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:57:00Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
kubernetes_node,env=clusterA,node_name=kubetst01 capacity_cpu_cores=1i,capacity_memory_bytes=6133161984i,capacity_pods=110i,allocatable_cpu_cores=1i,allocatable_memory_bytes=6028304384i,allocatable_pods=110i 1561100220000000000
2019-06-21T06:57:10Z D! [outputs.file] wrote batch of 1 metrics in 70.295µs
2019-06-21T06:57:10Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:57:10Z D! [outputs.influxdb] wrote batch of 1 metrics in 34.21583ms
2019-06-21T06:57:10Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:57:20Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:57:20Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:57:30Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:57:30Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:57:40Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:57:40Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:57:50Z D! [outputs.file] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:57:50Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:58:00Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics. 
2019-06-21T06:58:00Z D! [outputs.file] buffer fullness: 0 / 10000 metrics.

I’ve also tested changing the ServiceAccount as described here (telegraf/plugins/inputs/kube_inventory at master · influxdata/telegraf · GitHub), and with cluster-admin roles; in both cases the result is the same.
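
For reference, a ClusterRole/ClusterRoleBinding along these lines should cover the resources the plugin gathers (a sketch with placeholder names, not a verbatim copy of my manifests):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telegraf-ki-viewer
rules:
- apiGroups: [""]
  resources: ["nodes", "pods", "persistentvolumes", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["daemonsets", "deployments", "statefulsets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telegraf-ki-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: telegraf-ki-viewer
subjects:
- kind: ServiceAccount
  name: telegraf-ki              # assumed ServiceAccount name
  namespace: monitoring

One way to double-check the RBAC side is to ask what the ServiceAccount is allowed to do (assuming it is named telegraf-ki in the monitoring namespace), e.g.:

kubectl auth can-i list pods --all-namespaces --as=system:serviceaccount:monitoring:telegraf-ki
kubectl auth can-i list deployments.apps --as=system:serviceaccount:monitoring:telegraf-ki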

Any idea on how to fix it?

These are my lab versions

$ kubectl cluster-info
Kubernetes master is running at https://10.0.2.15:6443
KubeDNS is running at https://10.0.2.15:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://10.0.2.15:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
kubetst01   Ready     master    279d      v1.11.3

As described in [kube_inventory only gathers kubernetes_node measurement with cluster-admin role · Issue #6033 · influxdata/telegraf · GitHub], the namespace config param is mandatory. Once I added it, the plugin started gathering all the missing measurements.
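
For anyone hitting the same thing, this is roughly the input block that works for me now; the namespace value is the only change, and my understanding from the linked issue is that an empty string gathers from all namespaces (set a specific namespace instead if you only want one):

[[inputs.kube_inventory]]
  url = "https://kubernetes.default.svc"
  ## per the linked issue: "" should gather from all namespaces
  namespace = ""
  bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
  insecure_skip_verify = true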