"value_mean" "value_min" and "value_max" all the values are same when i do the aggregation. Why?

import os
import queue
import random
import time

import statsd  # PyPI `statsd` client

statsd_client = statsd.StatsClient('localhost', 8125)
q = queue.Queue()

print("process 1 Id :", os.getpid())
print("adding items to queue")
x = 0
upload_time = time.time()

while x < 1000:
    y = random.randint(0, 10)  # random payload (was overwriting the loop counter x)
    curr_time = time.time()
    statsd_client.set('random.randint', y)
    if curr_time - upload_time > 60:
        # report the queue depth roughly once a minute
        statsd_client.incr('queue_size_enqueing', q.qsize())
        print("metric sent")
        upload_time = curr_time
    q.put(y)
    print("added to queue")
    x = x + 1
# Keep the aggregate basicstats of each metric passing through.
[[aggregators.basicstats]]
  ## The period on which to flush & clear the aggregator.
  period = "0.1s"
  ## If true, the original metric will be dropped by the
  ## aggregator and will not get sent to the output plugins.
  drop_original = false
  ## Configures which basic stats to push as fields.
  stats = ["mean", "max", "min"]

This is what I did in Telegraf, but I want the mean number of items added to the queue per second. This aggregation has to reach InfluxDB, but I'm getting the same values.

I see you are sending using the statsd protocol. The statsd input plugin has built-in aggregation, in order to match other statsd implementations. Items are aggregated over the plugin's interval (10s by default), and for counters only the final count is emitted.
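That counter behaviour can be sketched in plain Python. This is an illustrative toy, not Telegraf's actual code; the class and metric names are made up:

```python
# Toy model of statsd-style counter aggregation over one flush interval
# (10s by default in Telegraf): each incr() folds into a running sum, and
# only the final total is emitted when the interval is flushed.
class CounterAggregator:
    def __init__(self):
        self.counters = {}

    def incr(self, name, value=1):
        # Every increment is merged into the running total immediately;
        # the individual samples are not kept.
        self.counters[name] = self.counters.get(name, 0) + value

    def flush(self):
        # At the end of the interval one point per counter is emitted,
        # then state is cleared for the next interval.
        emitted = dict(self.counters)
        self.counters.clear()
        return emitted


agg = CounterAggregator()
for _ in range(50):              # 50 enqueues during one interval
    agg.incr('items_enqueued')
print(agg.flush())               # -> {'items_enqueued': 50}
```

So a downstream aggregator only ever sees one point per counter per statsd interval, which is why a very short basicstats period finds a single value in each window.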

You could still further aggregate the output using the basicstats aggregator, but you would need a much larger period, at least long enough for two points to be received.
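For example, with the default 10s statsd interval, a period of 30s would let each window hold about three points (the exact value is illustrative; anything comfortably above two statsd intervals works):

```toml
[[aggregators.basicstats]]
  ## Must span at least two statsd flushes (10s each by default),
  ## otherwise every window contains a single point and
  ## mean == min == max.
  period = "30s"
  drop_original = false
  stats = ["mean", "max", "min"]
```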

What would you suggest?
I am not able to figure out how to get the average number of items enqueued.
Also, can you please explain how "rate" works? I went through the docs, but they don't cover it in much detail.