I have metrics collected from a source that sets the metric type to Info or Stateset, for example:
```
# HELP capi_cluster_status_phase The clusters current phase.
# TYPE capi_cluster_status_phase stateset
capi_cluster_status_phase{customresource_group="cluster.x-k8s.io",customresource_kind="Cluster",customresource_version="v1beta1",name="somename",namespace="somenamespace",phase="Deleting",uid="50e401fb-9651-47e2-9baf-43d518cea952"} 0
```
Can someone help me understand how to transform the metric type in Telegraf?
Hello @rahav_jv ,
Unfortunately, Telegraf's prometheus input plugin doesn't natively support the stateset metric type.
However, you could try one of the following.

Metric filtering to exclude the problematic metrics:
```toml
[[inputs.prometheus]]
  # Your existing configuration
  # ...
  metric_version = 2

  ## With metric_version = 2 every scraped metric shares the measurement
  ## name "prometheus" and the original metric name becomes the field key,
  ## so exclude the field rather than the measurement:
  fieldexclude = ["capi_cluster_status_phase"]
```
Alternatively, use the enum processor to map the stateset labels to numeric values, similar to how Telegraf handles other enumerated states:
```toml
[[processors.enum]]
  ## Assumes metric_version = 1, where the measurement keeps the
  ## original metric name
  namepass = ["capi_cluster_status_phase"]

  [[processors.enum.mapping]]
    ## "phase" arrives as a tag (Prometheus label), not a field
    tag = "phase"
    dest = "phase_code"
    [processors.enum.mapping.value_mappings]
      Deleting = 0
      Failed = 1
      # Add other phases as needed
```
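If what you really want is a single sample per cluster rather than one series per phase, a Starlark processor can collapse the stateset down to its active state. This is only a sketch, assuming metric_version = 1 (so the 0/1 sample value lands in the "gauge" field); verify the field name against your actual stream:

```toml
[[processors.starlark]]
  namepass = ["capi_cluster_status_phase"]
  source = '''
def apply(metric):
    # Stateset samples carry 0 or 1 per phase; drop every inactive
    # phase sample and keep only the one whose value is 1. The "phase"
    # tag on the surviving metric then tells you the current state.
    if metric.fields.get("gauge") != 1:
        return None
    return metric
'''
```

This trades a per-phase stateset for one metric per cluster, which also reduces series cardinality downstream.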
Some resources that could help:
# Kubernetes Inventory Input Plugin
This plugin gathers metrics from [Kubernetes][kubernetes] resources.
> [!NOTE]
> This plugin requires Kubernetes version 1.11+.
The gathered resources include for example daemon sets, deployments, endpoints,
ingress, nodes, persistent volumes and many more.
> [!CRITICAL]
> This plugin produces high cardinality data, which when not controlled for will
> cause high load on your database. Please make sure to [filter][filtering] the
> produced metrics or configure your database to avoid cardinality issues!
⭐ Telegraf v1.10.0
🏷️ containers
💻 all
[kubernetes]: https://kubernetes.io/
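Given the cardinality warning in that README, here is a hedged sketch of how the gathered resources and tags might be narrowed (the option names follow the kube_inventory plugin; the resource list and tag name are illustrative, so check them against your Telegraf version):

```toml
[[inputs.kube_inventory]]
  ## Gather only the resource types you actually need (illustrative list)
  resource_include = ["deployments", "nodes"]
  ## Drop high-cardinality tags, e.g. per-object UIDs
  tagexclude = ["uid"]
```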
# Splunk Metrics serializer
The Splunk Metrics serializer outputs metrics in the [Splunk metric HEC JSON format][splunk-format].
It can be used to write to a file via the file output, or to send metrics to an HTTP Event Collector (HEC) using the standard Telegraf HTTP output.
When used with the HTTP output, this serializer batches metrics so you don't end up with one HTTP POST per metric.
[splunk-format]: http://dev.splunk.com/view/event-collector/SP-CAAAFDN#json
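Putting that together, a minimal sketch of an HTTP output configured with this serializer (the URL and token are placeholders you must replace; `splunkmetric_hec_routing` comes from the serializer's options):

```toml
[[outputs.http]]
  ## Placeholder endpoint; substitute your own Splunk host
  url = "https://splunk.example.com:8088/services/collector"
  data_format = "splunkmetric"
  ## Wraps each metric in the time/event/fields envelope HEC expects
  splunkmetric_hec_routing = true
  [outputs.http.headers]
    Content-Type = "application/json"
    ## Placeholder token
    Authorization = "Splunk YOUR-HEC-TOKEN"
```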
An example event looks like:
```javascript
{
  "time": 1529708430,
  "event": "metric",
  "host": "patas-mbp",
  "fields": {
    "_value": 0.6,
    "cpu": "cpu0",
    "dc": "mobile",
    ...
  }
}
```