If I send multiple, independent, counter increments within the same Telegraf agent interval, it only results in 1 increment for the counter. The remaining requests for the same counter seem to be discarded.
Is this a bug or a configuration issue?
Which input are you using? If you are using the statsd input then data is aggregated over the length of the interval.
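To illustrate the aggregation: a minimal client-side sketch (hypothetical metric name `deploys`, assuming the statsd listener from the config below on UDP port 8125). Three independent increments sent within one agent interval are summed by Telegraf and flushed as a single counter value of 3, not dropped down to 1.

```python
import socket

def send_counter(name, value=1, host="127.0.0.1", port=8125):
    # statsd counter wire format: "<name>:<value>|c"
    packet = f"{name}:{value}|c".encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet, (host, port))
    sock.close()
    return packet

# Three separate UDP packets, same counter, same interval.
# Telegraf's statsd input sums them before flushing.
for _ in range(3):
    send_counter("deploys")  # each packet is b"deploys:1|c"
```

With `delete_counters = true`, the accumulated value is reset after each flush, so the next interval starts counting from zero again.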
I am using the statsd input, but the data is not being aggregated.
Below is the relevant part of my Telegraf configuration.
Any idea what could be wrong?
Thank you.
[[inputs.statsd]]
# Address and port to host UDP listener on
service_address = ":8125"
# Delete gauges every interval (default=false)
delete_gauges = true
# Delete counters every interval (default=false)
delete_counters = true
# Delete sets every interval (default=false)
delete_sets = false
# Delete timings & histograms every interval (default=true)
delete_timings = true
# Percentiles to calculate for timing & histogram stats
percentiles = [90]
# convert measurement names, "." to "_" and "-" to "__"
convert_names = false
templates = [
"* measurement.field"
]
# Number of UDP messages allowed to queue up, once filled,
# the statsd server will start dropping packets
allowed_pending_messages = 10000
# Number of timing/histogram values to track per-measurement in the
# calculation of percentiles. Raising this limit increases the accuracy
# of percentiles but also increases the memory usage and cpu time.
percentile_limit = 1000
# UDP packet size for the server to listen for. This will depend on the size
# of the packets that the client is sending, which is usually 1500 bytes.
udp_packet_size = 1500
Never mind my comments.
The issue was on the data visualization on Grafana.
Telegraf is aggregating the data over the length of the interval as expected.
If someone else encounters a similar issue, please check your queries when pulling the data.