Hi everyone!
I want to use the [aggregators.starlark] plugin from Telegraf to calculate a delta metric every 5 minutes: (current_metric - last_5m_metric)/last_5m_metric.
This is the block of Starlark code:

```python
state = {
    "last": None,
    "delta_disk_usage": None
}

def add(metric):
    # Load the previously seen metric from the shared state,
    # then store the current metric for the next invocation
    last_metric = state["last"]
    state["last"] = deepcopy(metric)
    if last_metric != None:
        # Create a new metric named "disk_delta_usage"
        result = Metric("disk_delta_usage")
        # Set the field "value" to the relative change of "used"
        # between the last metric and the current one
        result.fields["value"] = (metric.fields["used"] - last_metric.fields["used"]) / last_metric.fields["used"]
        result.time = metric.time
        state["delta_disk_usage"] = result

def push():
    return state.get("delta_disk_usage")

def reset():
    state.clear()
```
It seems that when state.clear() is called in the reset() function, the whole state variable is wiped, and I lose the last value that I saved with state['last'] = deepcopy(metric).
With the [processors.starlark] plugin we can keep the old metric, as in this example: (telegraf/compare_metrics.star at master · influxdata/telegraf · GitHub). But I want to use [aggregators.starlark] because it has the period config. So, is there a way to keep the old metric when using the [aggregators.starlark] plugin?
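For reference, the calculation itself is a plain relative change. A minimal sketch in plain Python (the sample values are illustrative, not taken from Telegraf):

```python
def relative_change(current_used, last_used):
    """Relative change of the "used" field between two samples:
    (current - last) / last."""
    return (current_used - last_used) / last_used

# Example: usage grew from 20 units to 24 units -> 0.2 (i.e. +20%)
print(relative_change(24, 20))  # 0.2
```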
Hello @tantrinh0812,
I don’t know if that’s possible with Starlark; @jpowers might know.
I’d suggest using the execd processor plugin instead.
Aggregators work over a certain time period. At the end of that period, reset is called so collection starts over with a fresh slate.
If you want to save data across this reset, create a separate variable to store the last metric; that way you keep the last value at the end of every period. The script ultimately needs those three functions, but you can control everything else, such as adding other functions or variables.
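The suggested pattern can be sketched in plain Python, mirroring the aggregator's add/push/reset lifecycle. (Note this is only a lifecycle illustration: in Telegraf's actual Starlark runtime, globals other than the special `state` dict are frozen after load, so a literal translation of the second dict needs care.)

```python
state = {"disk_delta_usage": None}   # per-period state, cleared every period
carry = {"last": None}               # carried-over value, survives reset()

def add(used):
    # Compare the current "used" value with the one carried over
    last = carry["last"]
    carry["last"] = used
    if last is not None:
        state["disk_delta_usage"] = (used - last) / last

def push():
    return state.get("disk_delta_usage")

def reset():
    # Refresh only the per-period state; leave `carry` untouched
    state.clear()
    state["disk_delta_usage"] = None

# Period 1: first sample, nothing to compare against yet
add(20)
reset()
# Period 2: the previous sample is still available in `carry`
add(24)
print(push())  # 0.2
```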
Hi @jpowers,
I followed your suggestion and added another variable, metric_reserved:
```python
load("logging.star", "log")

state = {
    "disk_delta_usage": None,
}

metric_reserved = {
    "last": None
}

def add(metric):
    # Load the previously seen metric from metric_reserved,
    # then store the current metric for the next invocation
    last_metric = metric_reserved["last"]
    metric_reserved["last"] = deepcopy(metric)
    if last_metric != None:
        # Create a new metric named "disk_delta_usage"
        result = Metric("disk_delta_usage")
        # Set the field "value" to the relative change of "used"
        # between the last metric and the current one
        result.fields["value"] = (metric.fields["used"] - last_metric.fields["used"]) / last_metric.fields["used"]
        result.time = metric.time
        state["disk_delta_usage"] = result

def push():
    return state.get("disk_delta_usage")

def reset():
    state.clear()
```
In the reset() function I only refresh the state variable and keep metric_reserved, but Telegraf shows this error:
```
2023-03-28T06:00:00Z E! [aggregators.starlark] Traceback (most recent call last):
2023-03-28T06:00:00Z E! [aggregators.starlark] /opt/delta.star:14:18: in add
2023-03-28T06:00:00Z E! [aggregators.starlark] Error: cannot insert into frozen hash table
```
(the same traceback repeats for each incoming metric)
```
disk,cluster=cluster-k8s,device=mapper/ubuntu--vg-ubuntu--lv,fstype=ext4,host=bbdfbbb7f-gl4zd,mode=ro,path=/etc/telegraf total=49350922240i,free=22274949120i,used=24535904256i,used_percent=52.4149902991933,inodes_total=3080192i,inodes_free=2511710i,inodes_used=568482i 1679983200000000000
```
Do you think it is because we do not refresh the variable metric_reserved?
Thanks for your suggestion!
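For context (an inference from the error message, not stated in the thread): the Starlark runtime freezes all module-level globals once the script finishes loading, and Telegraf keeps only the special `state` dict mutable, which would explain why writes to `metric_reserved` fail while writes to `state` succeed. A workaround along the same lines would be to keep the reserved metric inside `state` and have `reset()` clear only the per-period keys instead of calling `state.clear()`. Python's read-only mapping proxy gives a loose analogy of the frozen dict:

```python
from types import MappingProxyType

# A read-only view of a dict: any attempt to insert or update raises,
# much like Starlark's "cannot insert into frozen hash table"
metric_reserved = MappingProxyType({"last": None})

try:
    metric_reserved["last"] = "new metric"
except TypeError as exc:
    print("mutation rejected:", exc)
```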
Hmm, I did not expect to see this. I’m wondering if we should add this percent-change calculation to the basicstats aggregator as an alternative.
So, shall we stop here and wait for support in the basicstats plugin?
Yeah, can you please file a feature request on GitHub?