Telegraf sending cached metrics to Prometheus

I have a scenario where the data from a Telegraf custom script sometimes gets cached, so old metrics are sent to Prometheus.
For example, I have a script check.sh that checks an HTTPS response every 5m and sends the metric to Prometheus. The HTTPS response can be 500 or 200. However, once a 500 response comes back, it seems to get cached, and the following 200 responses are not picked up.
The result is that both the 500 and the 200 response metrics are exposed at the same time, even though the correct metric at that moment is the 200 one…

Any idea why this is happening? Could someone please help?
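
For illustration only (the actual check.sh isn't shown, so the measurement, tag, and field names below are made up), an exec script like this prints influx line protocol on stdout, one line per run:

http_check,status=200 response_code=200i,up=1i
http_check,status=500 response_code=500i,up=0i

Depending on the HTTPS response, a run prints either the 200 or the 500 line, and the prometheus_client output exposes whatever it currently holds on :9273/metrics.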

stringData:
  telegraf.conf: |-
    [global_tags]
      pod_id = "XXX"
      namespace = "XXX"
    [[outputs.prometheus_client]]
      listen = ":9273"
      metric_version = 2
      expiration_interval = "60m"
    [[inputs.procstat]]
      exe = "telegraf"
    [[inputs.exec]]
      commands = [
        "/opt/scripts/check.sh"
      ]
      timeout = "15s"
      interval = "5m"
      data_format = "influx"
      [inputs.exec.tags]
        target = "test"
        alert_group = "http"


Why have you set expiration_interval = "60m", which is not the default? This will cause metrics to hang around for an hour.

Also, what version of Telegraf?

The Telegraf version is 1.29.4. I set expiration_interval to 60m because I had seen cases where the metric was unavailable in Prometheus for quite some time. What is the default value?

From the plugin README, it is 60 seconds.
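
In other words, leaving the option out of the output block behaves as if you had written:

[[outputs.prometheus_client]]
  # default when expiration_interval is not set explicitly
  expiration_interval = "60s"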

I had seen cases where the metric was unavailable in Prometheus for quite some time

That is because you also have different collection intervals for your data: the exec input runs at 5m, while the other inputs probably run at the default 10s. If you are not expiring your data for an hour, and the exec script, which only runs every 5 minutes, produces different data (e.g. a different response code), the old series may not disappear until the 60-minute expiration is hit. Try setting expiration_interval to 5m, your longest input interval, and see if that resolves the issue.
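
Something like this in your outputs section (same values as your config above, only expiration_interval changed):

[[outputs.prometheus_client]]
  listen = ":9273"
  metric_version = 2
  # expire a series if it has not been updated for one full exec interval,
  # so a stale 500 series disappears shortly after the next 200 run
  expiration_interval = "5m"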