Collect data from an application with a Prometheus exporter

Hi,

I have to collect data from a software package that only provides a Prometheus exporter. I assumed this would be possible with the [[inputs.prometheus]] plugin, but after some tests it seems this plugin can only collect data from a Prometheus server.

Maybe I’m wrong, but the documentation is very thin on this topic, so it would be great to get more detailed advice on how to collect the data with Telegraf and store it in InfluxDB.

Maybe I can use another plugin?

The documentation states the following, which may be helpful:

For each service, metrics are available via the built-in HTTP server on the /prometheus page (for
example http://registry:7871/prometheus). The format is the Prometheus text format 0.0.4, which
is documented here.
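For reference, the text format 0.0.4 they mention is plain-text exposition along these lines (the metric names here are just illustrative, not from the actual software):

```
# HELP http_requests_total Total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="200"} 3
# HELP process_cpu_seconds_total Total user and system CPU time in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 12.47
```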

Thank you for your help.

Best Regards,

Stephan

Hello @SWalter,
Thanks for your question. Have you taken a look at the README on GitHub for the prometheus plugin? You can scrape metrics from an endpoint with it. It also has Kubernetes service discovery to build that target list dynamically.
What documentation are you looking at? Clearly, we need to update that info.
Is that what you’re looking for?

Also, with InfluxDB 2.0 you can use a scrape function and bypass Telegraf altogether. Let me know what you decide to use. If you go with Telegraf and are still having trouble, please share your config so we can debug it. Thanks :slight_smile:

Hi @Anaisdg

I had already checked the Telegraf prometheus plugin documentation on GitHub, but I am not really able to understand it. I am not even sure yet whether the plugin talks to a Prometheus exporter/endpoint or to a Prometheus server instance.

I don’t know of any other documentation for the Telegraf plugin, so if there is anything else, please share it. I also tried to google some examples, but failed.

InfluxDB 2.0 is not really an option for us, since we currently run InfluxDB 1.7.1 and plan to update to the newest 1.x version. Maybe 2.0 is something for one or two years from now, but not today.

We got a short description from the third-party software developers, which I have attached. In general I would assume that it is possible to use this information to write a valid Telegraf config and then gather all the data, but I failed…

11.4 Prometheus
For advanced monitoring purposes, XXX services and clients export their metrics for consumption
by external monitoring systems.
For each service, metrics are available via the built-in HTTP server on the /prometheus page (for
example http://registry:7871/prometheus). The format is the Prometheus text format 0.0.4 which
is documented here.
This format can be consumed by Prometheus 1.x and 2.x, but other monitoring systems are also
able to scrape Prometheus targets. XXX Registry services also export a minimal Consul-like
service discovery interface that allows Prometheus to automatically discover XXX services
and clients.
11.4.1 Prometheus Configuration Example
An example Prometheus configuration which lists services individually looks like this:

scrape_configs:
- job_name: "XXX_registry"
  metrics_path: "/prometheus"
  static_configs:
    - targets:
      - my_server_1:7871
      - my_server_2:7871
      - my_server_3:7871
      labels:
        XXX_cluster: clusterA
        service_type: XXX-registry

Using the service discovery interface, Prometheus discovers clients and services automatically:

scrape_configs:
- job_name: testing
  metrics_path: /prometheus
  consul_sd_configs:
    - server: registry:7871
      datacenter: mydatacenter
  relabel_configs:
    - source_labels: [job]
      target_label: XXX_cluster
    - source_labels: [__meta_consul_service]
      target_label: service_type
    - source_labels: [__meta_consul_service_address]
      target_label: instance

Make sure that a server points to an existing registry and the datacenter field is set (to any value).

@SWalter,
The prometheus input plugin gathers metrics from HTTP servers exposing metrics in Prometheus format.
In order to use Telegraf you need to:
- install it
- generate a Telegraf config file with your desired input and output plugins

 telegraf --input-filter prometheus --output-filter influxdb config > your_config.conf 

or, if you decide to use InfluxDB 2.x in the future:

 telegraf --input-filter prometheus --output-filter influxdb_v2 config > your_config.conf 

Then you need to edit the config:
- so that you’re scraping metrics from the correct URL under the INPUT PLUGINS section
- so that you’re writing data to the correct OUTPUTS

Finally, you can run Telegraf with your desired config:

telegraf --config telegraf.conf

I also recommend setting debug = true in the [agent] section of the Telegraf config to help you debug.
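For example, with the endpoints from the vendor documentation above, the two relevant sections might look like this (the hostnames, port, and database name are placeholders you would adapt to your environment):

```toml
[[inputs.prometheus]]
  ## An array of urls to scrape metrics from.
  urls = ["http://my_server_1:7871/prometheus",
          "http://my_server_2:7871/prometheus"]

[[outputs.influxdb]]
  ## The URL of your InfluxDB 1.x instance.
  urls = ["http://localhost:8086"]
  database = "telegraf"
```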

@SWalter,
Does that help at all?
What have you tried? What does your config look like? Can you share it please?

Not really,

since I have done all that, but I don’t know how to configure the prometheus input plugin.

It is really unclear from the available documentation.

I don’t understand how to translate the provided Prometheus scrape config into a valid Telegraf config.

Another problem is that the --test flag is not supported:

[agent] skipping plugin [[inputs.prometheus]]: service inputs not supported in --test mode

And debug=true is not really a help either:

2019-11-19 09:48:55,290 [DEBUG] /entry-point.py_INFRA.XXX: spawned Telegraf with PID = 30
2019-11-19T08:48:55Z I! Starting Telegraf 1.11.1
2019-11-19T08:48:55Z I! Loaded inputs: prometheus
2019-11-19T08:48:55Z I! Loaded aggregators:
2019-11-19T08:48:55Z I! Loaded processors:
2019-11-19T08:48:55Z I! Loaded outputs: influxdb
2019-11-19T08:48:55Z I! Tags enabled: host=XXX_monitoring
2019-11-19T08:48:55Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"XXX_monigoring", 
Flush Interval:10s
2019-11-19T08:48:55Z D! [agent] Connecting outputs
2019-11-19T08:48:55Z D! [agent] Attempting connection to output: influxdb
2019-11-19T08:48:55Z D! [agent] Successfully connected to output: influxdb
2019-11-19T08:48:55Z D! [agent] Starting service inputs
2019-11-19T08:49:00Z E! [inputs.prometheus]: Error in plugin: error making HTTP request to 
http:10.10.1.1:8071/prometheus: Get http:10.10.1.1:8071/prometheus: http: no Host in request URL
2019-11-19T08:49:10Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics.
2019-11-19T08:49:20Z D! [outputs.influxdb] buffer fullness: 0 / 10000 metrics.

The agent and InfluxDB output configs are the same as the ones that work for monitoring our other components, so they are definitely valid.

The question is what the inputs.prometheus part should look like. Right now it is:

[[inputs.prometheus]]
# scrape urls
urls = ["http:10.10.1.1:8071/prometheus"]

interval = "60s"

## Specify timeout duration for slower prometheus clients (default is 3s)
response_timeout = "30s"

Ok, I have found a mistake: the port is 7871, not 8071. But even now the result is the same.

2019-11-19T09:15:00Z E! [inputs.prometheus]: Error in plugin: error making HTTP request to http:admin1:7871/prometheus: Get http:admin1:7871/prometheus: http: no Host in request URL

But if I open admin1:7871/prometheus in my browser, I see the monitoring data. The same with 10.10.1.1:7871/prometheus.

So what is the problem?

Ok, the problem was that the two // were missing within the URL…

So this works…

[[inputs.prometheus]]
# scrape urls
urls = ["http://10.10.1.1:7871/prometheus"]

interval = "60s"

## Specify timeout duration for slower prometheus clients (default is 3s)
response_timeout = "30s"

But what is still missing is the auto discovery. Any suggestions?

@SWalter, this is what your third party developers wrote about service discovery in the software:

XXX Registry services also export a minimal Consul-like service discovery interface that allows Prometheus to automatically discover XXX services and clients.

Consul is a piece of software made by Hashicorp which provides, among other things, service discovery functionality. It sounds like the software you are using provides a compatible service. Unfortunately, service discovery using Consul is not currently supported in Telegraf, although there is an open issue which discusses the possible addition of this functionality in the future.
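As a workaround, you could query the Consul-like interface yourself and regenerate the urls list for the prometheus input from the result. Here is a rough Python sketch. It assumes the registry implements the standard Consul catalog endpoints (/v1/catalog/services and /v1/catalog/service/&lt;name&gt;) and the usual ServiceAddress/Address JSON fields; a "minimal Consul-like" interface may only implement a subset of these, so verify the endpoint names and fields against your registry first.

```python
import json
import urllib.request


def build_scrape_urls(addresses, port=7871, path="/prometheus"):
    """Turn a list of service addresses into Telegraf scrape URLs."""
    return ["http://{}:{}{}".format(addr, port, path) for addr in addresses]


def discover_addresses(registry="http://registry:7871"):
    """Query a Consul-compatible catalog for all registered service addresses.

    Uses the standard Consul catalog endpoints; a minimal Consul-like
    registry may not implement all of them.
    """
    with urllib.request.urlopen(registry + "/v1/catalog/services") as resp:
        services = json.load(resp)
    addresses = []
    for name in services:
        url = registry + "/v1/catalog/service/" + name
        with urllib.request.urlopen(url) as resp:
            for node in json.load(resp):
                # Consul puts the address in ServiceAddress, falling back
                # to the node-level Address when it is empty.
                addresses.append(node.get("ServiceAddress") or node.get("Address"))
    return addresses


if __name__ == "__main__":
    urls = build_scrape_urls(discover_addresses())
    # Print a TOML fragment that could be templated into telegraf.conf
    print("urls = {}".format(json.dumps(urls)))
```

You could run such a script periodically (e.g. from cron), rewrite the urls line in telegraf.conf, and reload Telegraf — not as elegant as native service discovery, but it keeps the target list current.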