Telegraf vSphere: Multiple Inputs, Multiple Outputs Possible?

influxdb
telegraf
#1

I was trying to configure 2 vCenters (Test and Prod) to send data to different DBs.

This is what I tried to do…

two different blocks of

[[inputs.vsphere]]

and

[[inputs.vsphere]]
[inputs.vsphere.tags]
influxdb_database = "other"

and 2 different outputs

[[outputs.influxdb]]

database = "telegraf"
urls = [ "http://127.0.0.1:8086" ]
username = "telegraf"
password = "passwords"
[outputs.influxdb.tagdrop]
influxdb_database = ["*"]

[[outputs.influxdb]]

database = "telegraf"
urls = [ "http://127.0.0.1:8086" ]
username = "telegraf_prod"
password = "password"
tagexclude = ["influxdb_database"]
[outputs.influxdb.tagpass]
influxdb_database = ["other"]

Am I doing this correctly?

I see this error

telegraf[964]: 2019-04-02T19:07:21Z E! Could not parse tags for input vsphere

#2

Hi, welcome :slight_smile:
Can you post the complete inputs.vsphere and outputs.influxdb blocks?

#3

Sure here it is!

###############################################################################
#                            OUTPUT PLUGINS                                   #
###############################################################################

# Configuration for sending metrics to InfluxDB
[[outputs.influxdb]]

  database = "telegraf"
  urls = [ "http://127.0.0.1:8086" ]
  username = "telegraf"
  password = "password"
  [outputs.influxdb.tagdrop]
   influxdb_database = ["*"]

[[outputs.influxdb]]

  database = "telegraf"
  urls = [ "http://127.0.0.1:8086" ]
  username = "telegraf_prod"
  password = "password"
  tagexclude = ["influxdb_database"]
  [outputs.influxdb.tagpass]
   influxdb_database = ["other"]

#4

# # Read metrics from VMware vCenter
 [[inputs.vsphere]]
   [inputs.vsphere.tags]
     influxdb_database = "other"
#   ## List of vCenter URLs to be monitored. These three lines must be uncommented
#   ## and edited for the plugin to work.
       vcenters = [ "https://myProdIP/sdk" ]
       username = "user"
       password = "password"



#
#   ## VMs
#   ## Typical VM metrics (if omitted or empty, all metrics are collected)
       vm_metric_include = []


       insecure_skip_verify = true


# # Read metrics from VMware vCenter
 [[inputs.vsphere]]

#   ## List of vCenter URLs to be monitored. These three lines must be uncommented
#   ## and edited for the plugin to work.
   vcenters = [ "https://myTestIP/sdk" ]
   username = "user"
   password = "password"



#
#   ## VMs
#   ## Typical VM metrics (if omitted or empty, all metrics are collected)
   vm_metric_include = [
     "cpu.demand.average",
     "cpu.idle.summation",
     "cpu.latency.average",
     "cpu.readiness.average",
     "cpu.ready.summation",
     "cpu.run.summation",
     "cpu.usagemhz.average",
     "cpu.used.summation",
     "cpu.wait.summation",
     "mem.active.average",
     "mem.granted.average",
     "mem.latency.average",
     "mem.swapin.average",
     "mem.swapinRate.average",
     "mem.swapout.average",
     "mem.swapoutRate.average",
     "mem.usage.average",
     "mem.vmmemctl.average",
     "net.bytesRx.average",
     "net.bytesTx.average",
     "net.droppedRx.summation",
     "net.droppedTx.summation",
     "net.usage.average",
     "power.power.average",
     "virtualDisk.numberReadAveraged.average",
     "virtualDisk.numberWriteAveraged.average",
     "virtualDisk.read.average",
     "virtualDisk.readOIO.latest",
     "virtualDisk.throughput.usage.average",
     "virtualDisk.totalReadLatency.average",
     "virtualDisk.totalWriteLatency.average",
     "virtualDisk.write.average",
     "virtualDisk.writeOIO.latest",
     "sys.uptime.latest",
   ]
   # vm_metric_exclude = [] ## Nothing is excluded by default
   # vm_instances = true ## true by default
#
#   ## Hosts
#   ## Typical host metrics (if omitted or empty, all metrics are collected)
   host_metric_include = [
     "cpu.coreUtilization.average",
     "cpu.costop.summation",
     "cpu.demand.average",
     "cpu.idle.summation",
     "cpu.latency.average",
     "cpu.readiness.average",
     "cpu.ready.summation",
     "cpu.swapwait.summation",
     "cpu.usage.average",
     "cpu.usagemhz.average",
     "cpu.used.summation",
     "cpu.utilization.average",
     "cpu.wait.summation",
     "disk.deviceReadLatency.average",
     "disk.deviceWriteLatency.average",
     "disk.kernelReadLatency.average",
     "disk.kernelWriteLatency.average",
     "disk.numberReadAveraged.average",
     "disk.numberWriteAveraged.average",
     "disk.read.average",
     "disk.totalReadLatency.average",
     "disk.totalWriteLatency.average",
     "disk.write.average",
     "mem.active.average",
     "mem.latency.average",
     "mem.state.latest",
     "mem.swapin.average",
     "mem.swapinRate.average",
     "mem.swapout.average",
     "mem.swapoutRate.average",
     "mem.totalCapacity.average",
     "mem.usage.average",
     "mem.vmmemctl.average",
     "net.bytesRx.average",
     "net.bytesTx.average",
     "net.droppedRx.summation",
     "net.droppedTx.summation",
     "net.errorsRx.summation",
     "net.errorsTx.summation",
     "net.usage.average",
     "power.power.average",
     "storageAdapter.numberReadAveraged.average",
     "storageAdapter.numberWriteAveraged.average",
     "storageAdapter.read.average",
     "storageAdapter.write.average",
     "sys.uptime.latest",
   ]
#   # host_metric_exclude = [] ## Nothing excluded by default
#   # host_instances = true ## true by default
#
#   ## Clusters
    cluster_metric_include = [] ## if omitted or empty, all metrics are collected
#   # cluster_metric_exclude = [] ## Nothing excluded by default
#   # cluster_instances = false ## false by default
#
#   ## Datastores
    datastore_metric_include = [] ## if omitted or empty, all metrics are collected
#   # datastore_metric_exclude = [] ## Nothing excluded by default
#   # datastore_instances = false ## false by default for Datastores only
#
#   ## Datacenters
   datacenter_metric_include = [] ## if omitted or empty, all metrics are collected
   #datacenter_metric_exclude = [ "*" ] ## Datacenters are not collected by default.
   #datacenter_instances = false ## false by default for Datastores only
#
#   ## Plugin Settings
#   ## separator character to use for measurement and field names (default: "_")
#   # separator = "_"
#
#   ## number of objects to retrieve per query for realtime resources (vms and hosts)
#   ## set to 64 for vCenter 5.5 and 6.0 (default: 256)
#   # max_query_objects = 256
#
#   ## number of metrics to retrieve per query for non-realtime resources (clusters and datastores)
#   ## set to 64 for vCenter 5.5 and 6.0 (default: 256)
#   # max_query_metrics = 256
#
#   ## number of go routines to use for collection and discovery of objects and metrics
#   # collect_concurrency = 1
#   # discover_concurrency = 1
#
#   ## whether or not to force discovery of new objects on initial gather call before collecting metrics
#   ## when true for large environments this may cause errors for time elapsed while collecting metrics
#   ## when false (default) the first collection cycle may result in no or limited metrics while objects are discovered
#   # force_discover_on_init = false
#
#   ## the interval before (re)discovering objects subject to metrics collection (default: 300s)
#   # object_discovery_interval = "300s"
#
#   ## timeout applies to any of the api request made to vcenter
#   # timeout = "60s"
#
#   ## Optional SSL Config
#   # ssl_ca = "/path/to/cafile"
#   # ssl_cert = "/path/to/certfile"
#   # ssl_key = "/path/to/keyfile"
#   ## Use SSL but skip chain & host verification
    insecure_skip_verify = true

#5

Hi ,
Can you try moving the following two lines

[inputs.vsphere.tags]
influxdb_database = "other"

to the end of the [[inputs.vsphere]] section?
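
For context: in TOML, every key that appears after a table header like [inputs.vsphere.tags] belongs to that table, so the tags sub-table has to come after the plugin's own settings. A minimal sketch (URL and credentials are placeholders):

```toml
[[inputs.vsphere]]
  ## Plugin settings first
  vcenters = [ "https://myProdIP/sdk" ]
  username = "user"
  password = "password"
  insecure_skip_verify = true

  ## Tags sub-table last, so it doesn't swallow the keys above
  [inputs.vsphere.tags]
    influxdb_database = "other"
```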

#6

I changed it like this, and now all the data is going to the "telegraf" DB instead of "telegraf_prod":

   [[inputs.vsphere]]
    #   ## List of vCenter URLs to be monitored. These three lines must be uncommented
    #   ## and edited for the plugin to work.
           vcenters = [ "https://MyProdIP/sdk" ]
           username = "user"
           password = "password"



    #
    #   ## VMs
    #   ## Typical VM metrics (if omitted or empty, all metrics are collected)
           vm_metric_include = []


           insecure_skip_verify = true
       [inputs.vsphere.tags]
         influxdb_database = "other"
#7

What if you create two telegraf configs, one for each vCenter, and then add the influxdb output section to each of those, pointing at the respective database? Then put both configs in /etc/telegraf/telegraf.d/ or the Windows equivalent if not on Linux.

I haven't tried that; we collect data from 2 vCenters but send it to the same database, then split it out based on tags.
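
For what it's worth, a sketch of that split-config idea (file names are just examples); note that if both files are loaded by the same Telegraf agent, every output still receives every input's metrics unless you filter, so this pattern really implies running two separate Telegraf services:

```toml
# /etc/telegraf/telegraf.d/vsphere-test.conf (example name, one service per file)
[[inputs.vsphere]]
  vcenters = [ "https://myTestIP/sdk" ]
  username = "user"
  password = "password"
  insecure_skip_verify = true

[[outputs.influxdb]]
  urls = [ "http://127.0.0.1:8086" ]
  database = "telegraf"

# A second file, e.g. vsphere-prod.conf, would point at myProdIP
# and use database = "telegraf_prod".
```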

#8

Can you share your telegraf.conf to see how you apply the tags? I basically want to have Test and Prod DBs for my 2 environments.

#9

The tags are included in the plugin, so if it was going to one database you could filter your query with the vcenter tag to get the data.

Then you could use the chronograf dashboards or grafana to visualise the data based on which VC it was.
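
Something like this, for example; the measurement and field names below are typical vsphere plugin names and may differ in your setup:

```sql
-- Filter by the vcenter tag the plugin adds to every metric
SELECT mean("usage_average")
FROM "vsphere_vm_cpu"
WHERE "vcenter" = 'myProdIP' AND time > now() - 1h
GROUP BY time(1m)
```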

I’ll try and check my config. It’s not local though

edit: you can add [global_tags] to your config. So if you had separate configs you could define

[global_tags]
vcenter = "test"

in the test config, and then change the value to "prod" in the other.

That's the easiest way I can think of to add extra tags.

#10

Hi fcocantu, that is because both of your
[[outputs.influxdb]]

sections contain the same database: database = "telegraf"

I guess one of them should be "telegraf_prod",

like this

[[outputs.influxdb]]

  database = "telegraf_prod"
  urls = [ "http://127.0.0.1:8086" ]
  username = "telegraf_prod"
  password = "password"
  tagexclude = ["influxdb_database"]
  [outputs.influxdb.tagpass]
   influxdb_database = ["other"]
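
Putting the pieces from this thread together, a minimal end-to-end sketch of the tag-based routing (URLs and credentials are placeholders):

```toml
# Prod input: tag its metrics so outputs can route them.
[[inputs.vsphere]]
  vcenters = [ "https://myProdIP/sdk" ]
  username = "user"
  password = "password"
  insecure_skip_verify = true
  [inputs.vsphere.tags]
    influxdb_database = "other"

# Test input: untagged, so it falls through to the default output.
[[inputs.vsphere]]
  vcenters = [ "https://myTestIP/sdk" ]
  username = "user"
  password = "password"
  insecure_skip_verify = true

# Default output: drop anything carrying the routing tag.
[[outputs.influxdb]]
  urls = [ "http://127.0.0.1:8086" ]
  database = "telegraf"
  [outputs.influxdb.tagdrop]
    influxdb_database = ["*"]

# Prod output: accept only the tagged metrics, and strip the
# routing tag before writing.
[[outputs.influxdb]]
  urls = [ "http://127.0.0.1:8086" ]
  database = "telegraf_prod"
  tagexclude = ["influxdb_database"]
  [outputs.influxdb.tagpass]
    influxdb_database = ["other"]
```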

#11

I just realized that let me try!

#12

@philb, hi Phil, you can split the data in Telegraf and send it to the desired database :slight_smile: