Not seeing netflow/sflow metrics in Prometheus but can see it in InfluxDB

Hello,

I have been testing Telegraf as an sFlow collector. Originally I was testing with Telegraf’s inputs.sflow sending to InfluxDB and using Grafana to show the metrics, and this was working with no problems. However, our team uses Prometheus instead of InfluxDB, so we tried changing the outputs configuration to point to Prometheus. When we did this, we did not see any sflow metrics when we scraped http://telegraf.ip:9273/metrics, and we didn’t see anything in Prometheus either.

I then read that the sflow plugin is deprecated and that netflow should be used instead, so we set that up as an sFlow v5 collector listening on the same sflow port (6343). I can now see netflow metrics when I access http://telegraf.ip:9273, but only a few things show up. If I look at InfluxDB, there are WAY more metrics and tags.

Here is my /etc/prometheus/prometheus.yml:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'telegraf-sflow'
    static_configs:
      - targets: ['localhost:9273']

My telegraf.conf is the following:

[agent]
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = "0s"

# [[outputs.influxdb]]

[[outputs.prometheus_client]]
  listen = ":9273"
  path = "/metrics"

[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
  core_tags = false

[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]

[[inputs.diskio]]
  
[[inputs.kernel]]

[[inputs.mem]]

[[inputs.processes]]

[[inputs.swap]]

[[inputs.system]]

[[inputs.netflow]]
  service_address = "udp://:6343"
  protocol = "sflow v5"

# [[inputs.sflow]]
#   service_address = "udp://:6343"
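
(Side note: independent of any output, I can also sanity-check what the inputs produce with something like the command below; --test prints the gathered metrics as line protocol and exits, and --test-wait gives the netflow service input a few seconds to receive packets.)

telegraf --config /etc/telegraf/telegraf.conf --test --test-wait 10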

When I had InfluxDB and sflow enabled in my telegraf.conf, I was able to see sflow metrics in InfluxDB and could create queries against them in Grafana. However, when I tried to do the same with Prometheus, there are no sflow metrics at all, which is why we moved to netflow, and I could see those.

I’m curious as to why I couldn’t see any sflow metrics in Prometheus, since I feel I should have been able to. Also, when testing with netflow and InfluxDB, I could see a lot of metrics, but after moving to Prometheus only 8 show up.

I’m stuck at the moment trying to figure out how to do this. Any help would be greatly appreciated.

Thanks

Can you show a few metrics that are available in InfluxDB (in line protocol format, please) but aren’t in Prometheus? That might give a hint.

Also, check the logs from Telegraf.
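
If Telegraf runs under systemd, something like this should tail them live:

journalctl -u telegraf.service -f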

Hello,

So I was looking at the Telegraf log after adding the debug settings to my telegraf.conf (see the snippet after the log excerpt below) and restarting, and I’m seeing errors:

 2025-10-28T23:04:04.674816+00:00 testing-tpg telegraf[13533]: 2025-10-28T23:04:04Z D! [inputs.sflow] Unknown sample type: 4
2025-10-28T23:04:04.674976+00:00 testing-tpg telegraf[13533]: message repeated 5

And it repeats with different timestamps.
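
For reference, the debug setting in question is just something like this in the [agent] section of telegraf.conf (it is what turns on the D! lines in the log):

[agent]
  # enable debug logging so the D! lines show up in the Telegraf log
  debug = true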

I need to re-enable the InfluxDB output and let it run for a bit to provide those examples, but it looks like the Telegraf output plugin for Prometheus doesn’t understand the sample? Any idea what this error means?

Thanks

After some research I see that the sflow plugin is the issue. I re-enabled netflow and now I get the following logs:

 2025-10-28T23:13:34.231321+00:00 testing-tpg telegraf[13533]: 2025-10-28T23:13:34Z I! [agent] Stopping running outputs
2025-10-28T23:13:34.231416+00:00 testing-tpg systemd[1]: Stopping telegraf.service - Telegraf...
2025-10-28T23:13:34.231495+00:00 testing-tpg telegraf[13533]: 2025-10-28T23:13:34Z D! [agent] Stopped Successfully
2025-10-28T23:13:34.236278+00:00 testing-tpg systemd[1]: telegraf.service: Deactivated successfully.
2025-10-28T23:13:34.236409+00:00 testing-tpg systemd[1]: Stopped telegraf.service - Telegraf.
2025-10-28T23:13:34.240943+00:00 testing-tpg systemd[1]: Starting telegraf.service - Telegraf...
2025-10-28T23:13:34.251470+00:00 testing-tpg (telegraf)[13597]: telegraf.service: Referenced but unset environment variable evaluates to an empty string: TELEGRAF_OPTS
2025-10-28T23:13:34.302458+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z I! Loading config: /etc/telegraf/telegraf.conf
2025-10-28T23:13:34.303555+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z I! Starting Telegraf 1.36.3 brought to you by InfluxData the makers of InfluxDB
2025-10-28T23:13:34.303985+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z I! Available plugins: 239 inputs, 9 aggregators, 35 processors, 26 parsers, 65 outputs, 6 secret-stores
2025-10-28T23:13:34.304023+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z I! Loaded inputs: netflow
2025-10-28T23:13:34.304042+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z I! Loaded aggregators:
2025-10-28T23:13:34.304059+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z I! Loaded processors:
2025-10-28T23:13:34.304074+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z I! Loaded secretstores:
2025-10-28T23:13:34.304090+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z I! Loaded outputs: prometheus_client
2025-10-28T23:13:34.304101+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z I! Tags enabled: host=testing-tpg
2025-10-28T23:13:34.304116+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"testing-tpg", Flush Interval:10s
2025-10-28T23:13:34.304129+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z W! [agent] The default value of 'skip_processors_after_aggregators' will change to 'true' with Telegraf v1.40.0! If you need the current default behavior, please explicitly set the option
to 'false'!
2025-10-28T23:13:34.304142+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z D! [agent] Initializing plugins
2025-10-28T23:13:34.304153+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z D! [agent] Connecting outputs
2025-10-28T23:13:34.304165+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z D! [agent] Attempting connection to [outputs.prometheus_client]
2025-10-28T23:13:34.304182+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z I! [outputs.prometheus_client] Listening on http://[::]:9273/metrics
2025-10-28T23:13:34.304194+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z D! [agent] Successfully connected to outputs.prometheus_client
2025-10-28T23:13:34.304206+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z D! [agent] Starting service inputs
2025-10-28T23:13:34.304217+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:34Z I! [inputs.netflow] Listening on udp://[::]:6343
2025-10-28T23:13:34.304247+00:00 testing-tpg systemd[1]: Started telegraf.service - Telegraf.
2025-10-28T23:13:43.667769+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z D! [inputs.netflow] received 908 bytes
2025-10-28T23:13:43.667873+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] Unknown counter raw flow message 2001: 0000000d0000000500000001021018257e25000100000004000000010242860755ba00050000000800000001ba04c592574e00000000000600000001d6774cc9e
9b6000000000020000000014a89c38cc0e000000000000b000000017486e288e62700000000000a000000017486e288e62700000000001c00000001c2145402b6dc000000000002000000017486e288e625000000000009000000017486e288e627000000000007000000017486e288e62700000000001b000000013217a788bf290ade0000001e
00000001dabbbeccf4950000
2025-10-28T23:13:43.667954+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] This message is only printed once.
2025-10-28T23:13:43.667983+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] Unknown counter raw flow message 2010: 07b4342800000f7600000b4807b721ad00000b480000000000000000
2025-10-28T23:13:43.668006+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] This message is only printed once.
2025-10-28T23:13:43.668022+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] Unknown counter raw flow message 2009: 00000001000000c80001d4c0ffffffff00299f4c00410c26000001320004c7bd000001c806aa2de306dcf5100000093600000000000001a100000000
2025-10-28T23:13:43.668039+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] This message is only printed once.
2025-10-28T23:13:43.668055+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] Unknown counter raw flow message 2008: 0000004400000027000000000000003e0000000000000000000000000000000000000006000000000000000000000000000000000000000000000fb1000000000
0000fb10000000000000000000000000000000000000000000000000000000000000000
2025-10-28T23:13:43.668074+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] This message is only printed once.
2025-10-28T23:13:43.668089+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] Unknown counter raw flow message 2007: 00000001000000400e5c834900000000000000000000000000000000000000000e5c83470e4e6c320000000000000001000000000000000000000000000000000
00000000000000000000000
2025-10-28T23:13:43.668119+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] This message is only printed once.
2025-10-28T23:13:43.668140+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] Unknown counter raw flow message 2005: 000000038991e00000000001c9acd0000000135102b7c3840000001e3ecbe000003be57c00841c2d0000003abf60f40000a8c111
2025-10-28T23:13:43.668165+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] This message is only printed once.
2025-10-28T23:13:43.668201+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] Unknown counter raw flow message 2004: 00000001d252a00000000000604dc0000000000000000000000000000eef900000000000a1fc600000000000000000000000000000000000077f886e09e194560
000000000000000
2025-10-28T23:13:43.668223+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] This message is only printed once.
2025-10-28T23:13:43.668241+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] Unknown counter raw flow message 2003: 41687ae1415a666641
58a3d7000000090000042900000002000005dc00121c0567731d8a0000000015518e420f7c18d400047a6800000000006b512afa83a04c270b9443000000000000000000000000
2025-10-28T23:13:43.668260+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] This message is only printed once.
2025-10-28T23:13:43.668277+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] Unknown counter raw flow message 2006: 00000007c5b36d350429c55b000000000000000000000007c33a6faf000000000000000000000000
2025-10-28T23:13:43.668294+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] This message is only printed once.
2025-10-28T23:13:43.668315+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] Unknown counter raw flow message 2000: 000000085370696e65444b316571e19b6571519ba571e19b6571e19b00000003000000020000000f352e31302e302d32312d616d64363400
2025-10-28T23:13:43.668332+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:43Z W! [inputs.netflow] This message is only printed once.
2025-10-28T23:13:44.224286+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:44Z D! [inputs.netflow] received 1324 bytes
2025-10-28T23:13:44.304287+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:44Z D! [outputs.prometheus_client] Wrote batch of 10 metrics in 193.342µs
2025-10-28T23:13:44.304339+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:44Z D! [outputs.prometheus_client] Buffer fullness: 0 / 10000 metrics
2025-10-28T23:13:44.700026+00:00 testing-tpg telegraf[13597]: 2025-10-28T23:13:44Z D! [inputs.netflow] received 892 bytes

I hope someone can shed some light on these errors.

Thanks.

Okay, so the Netflow plugin not being able to decode some messages is a totally different problem; you might want to create a new issue for that on GitHub so that support for these types can be added.

The issue you were originally talking about concerns messages that are processed. So again, please give some examples of metrics (hint: also enable the file output next to Prometheus) that are not showing up in Prometheus.
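
A minimal sketch of that file output, assuming you just want to dump every metric as line protocol alongside the prometheus_client output:

[[outputs.file]]
  # write everything Telegraf actually produces, in line protocol, for comparison
  # (the file path is just an example)
  files = ["stdout", "/tmp/telegraf-metrics.lp"]
  data_format = "influx"

Then you can compare what the netflow input emits against what shows up on the /metrics endpoint.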

So here is the output you were requesting. From InfluxDB:

sflow,agent_address=10.168.7.14,dst_ip=172.16.17.12,dst_mac=74:86:e2:88:e6:27,dst_port=4789,ether_type=IPv4,header_protocol=ETHERNET-ISO88023,host=testing-tpg,input_ifindex=17,output_ifindex=13,sample_direction=ingress,source_id_index=17,source_id_type=0,src_ip=172.16.7.15,src_mac=e8:b5:d0:e3:50:4c,src_port=54733 udp_length=1032i,ip_ecn="0",bytes=107400000i,frame_length=1074i,drops=0i,sampling_rate=100000i,ip_dscp="0",ip_flags=0i,ip_fragment_offset=0i,ip_total_length=1052i,ip_ttl=253i,header_length=128i 1761755323864162802

And when I look at the metrics output from Telegraf for Prometheus by running curl http://testing-tpg:9273/metrics, I don’t see anything about sflow:

# HELP go_gc_duration_seconds A summary of the wall-time pause (stop-the-world) duration in garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 2.3309e-05
go_gc_duration_seconds{quantile="0.25"} 8.1813e-05
go_gc_duration_seconds{quantile="0.5"} 0.000100564
go_gc_duration_seconds{quantile="0.75"} 0.000121097
go_gc_duration_seconds{quantile="1"} 0.000214644
go_gc_duration_seconds_sum 0.00675111
go_gc_duration_seconds_count 65
# HELP go_gc_gogc_percent Heap size target percentage configured by the user, otherwise 100. This value is set by the GOGC environment variable, and the runtime/debug.SetGCPercent function. Sourced from /gc/gogc:percent.
# TYPE go_gc_gogc_percent gauge
go_gc_gogc_percent 100
# HELP go_gc_gomemlimit_bytes Go runtime memory limit configured by the user, otherwise math.MaxInt64. This value is set by the GOMEMLIMIT environment variable, and the runtime/debug.SetMemoryLimit function. Sourced from /gc/gomemlimit:bytes.
# TYPE go_gc_gomemlimit_bytes gauge
go_gc_gomemlimit_bytes 9.223372036854776e+18
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 25
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.25.3"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated in heap and currently in use. Equals to /memory/classes/heap/objects:bytes.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 1.5447024e+07
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated in heap until now, even if released already. Equals to /gc/heap/allocs:bytes.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.337662e+08
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table. Equals to /memory/classes/profiling/buckets:bytes.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.594888e+06
# HELP go_memstats_frees_total Total number of heap objects frees. Equals to /gc/heap/frees:objects + /gc/heap/tiny/allocs:objects.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 768642
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata. Equals to /memory/classes/metadata/other:bytes.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 3.948816e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and currently in use, same as go_memstats_alloc_bytes. Equals to /memory/classes/heap/objects:bytes.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 1.5447024e+07
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used. Equals to /memory/classes/heap/released:bytes + /memory/classes/heap/free:bytes.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 5.914624e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use. Equals to /memory/classes/heap/objects:bytes + /memory/classes/heap/unused:bytes
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 1.8464768e+07
# HELP go_memstats_heap_objects Number of currently allocated objects. Equals to /gc/heap/objects:objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 69208
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS. Equals to /memory/classes/heap/released:bytes.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 4.816896e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system. Equals to /memory/classes/heap/objects:bytes + /memory/classes/heap/unused:bytes + /memory/classes/heap/released:bytes + /memory/classes/heap/free:bytes.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 2.4379392e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.7617603903786979e+09
# HELP go_memstats_mallocs_total Total number of heap objects allocated, both live and gc-ed. Semantically a counter version for go_memstats_heap_objects gauge. Equals to /gc/heap/allocs:objects + /gc/heap/tiny/allocs:objects.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 837850
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures. Equals to /memory/classes/metadata/mcache/inuse:bytes.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 2416
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system. Equals to /memory/classes/metadata/mcache/inuse:bytes + /memory/classes/metadata/mcache/free:bytes.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 15704
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures. Equals to /memory/classes/metadata/mspan/inuse:bytes.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 220640
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system. Equals to /memory/classes/metadata/mspan/inuse:bytes + /memory/classes/metadata/mspan/free:bytes.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 261120
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place. Equals to /gc/heap/goal:bytes.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 3.6839346e+07
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations. Equals to /memory/classes/other:bytes.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 650062
# HELP go_memstats_stack_inuse_bytes Number of bytes obtained from system for stack allocator in non-CGO environments. Equals to /memory/classes/heap/stacks:bytes.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 786432
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator. Equals to /memory/classes/heap/stacks:bytes + /memory/classes/os-stacks:bytes.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 786432
# HELP go_memstats_sys_bytes Number of bytes obtained from system. Equals to /memory/classes/total:byte.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 3.1636414e+07
# HELP go_sched_gomaxprocs_threads The current runtime.GOMAXPROCS setting, or the number of operating system threads that can execute user-level Go code simultaneously. Sourced from /sched/gomaxprocs:threads.
# TYPE go_sched_gomaxprocs_threads gauge
go_sched_gomaxprocs_threads 2
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 8
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 3.45
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 524287
# HELP process_network_receive_bytes_total Number of bytes received by the process over the network.
# TYPE process_network_receive_bytes_total counter
process_network_receive_bytes_total 3.9951896e+08
# HELP process_network_transmit_bytes_total Number of bytes sent by the process over the network.
# TYPE process_network_transmit_bytes_total counter
process_network_transmit_bytes_total 1.10819185e+08
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 13
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.48307968e+08
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.76175282716e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 5.857308672e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19

Please let me know if you have any questions.

Thanks.