Write data to bucket: _time is always off by 10 seconds

When I run the query below, it returns the desired _time truncated to the minute, and its value.

For example, the _time value is 2023-08-10T20:14:00.000Z. How can it be that when I look up the last insert in ProducedConsumed_Wh, its _time value is 2023-08-10T20:14:10.000Z?

The _time values in that ProducedConsumed_Wh bucket are always off by +10 seconds. Why?

import "date"

from(bucket: "VenusOS")
  |> range(start: date.truncate(t: -1m, unit: 1m), stop: date.truncate(t: 0m, unit: 1m))

  |> filter(fn: (r) => r["_measurement"] == "Shunt")
  |> filter(fn: (r) => r._field == "Power_W")
  |> aggregateWindow(
      every: 1m,
      fn: (tables=<-, column) => tables |> integral(unit: 1h, interpolate: "linear"),
  )
  |> keep(columns: ["_time", "_value"])
  |> timeShift(duration: -1m)
  |> map(fn: (r) => ({r with _measurement: "CQ_Integral_Wh", _field: "Battery_Wh"}))
  |> to(bucket: "ProducedConsumed_Wh")

Seems odd. Can you share the query you use to read the stored value from the ProducedConsumed_Wh bucket?

Maybe you are using an aggregate function to query the result, which samples every 10s or something like that?

I do some indexing and store new values from other queries all the time, and I've never had an issue with _time being different from the one I stored in a previous query.

Here is that query.

from(bucket: "ProducedConsumed_Wh")
  |> range(start: -5m)
  |> filter(fn: (r) => r["_measurement"] == "CQ-integral")
  |> filter(fn: (r) => r["_field"] == "Battery_Wh")
  |> aggregateWindow(every: v.windowPeriod, fn: last, createEmpty: false)
  |> yield(name: "last")

and its results:

@fercasjr

You are totally right. When I remove the aggregateWindow function, it doesn't overshoot anymore.

Thank you. Problem solved.
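For anyone landing here later: Flux's aggregateWindow stamps each output point with the *stop* boundary of its window, not the timestamp of the source point it selected, so reading back with a 10s window period restamps a point written at 20:14:00 as 20:14:10. A minimal Python sketch of that restamping logic (the window_stop helper is mine, for illustration only):

```python
from datetime import datetime, timedelta, timezone

def window_stop(ts: datetime, every: timedelta) -> datetime:
    """Return the stop boundary of the half-open window [start, stop)
    that contains ts, mirroring how aggregateWindow stamps its output."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    n = (ts - epoch) // every          # index of the window containing ts
    return epoch + (n + 1) * every     # _time of the aggregated point

# A point written at exactly 20:14:00 ...
written = datetime(2023, 8, 10, 20, 14, 0, tzinfo=timezone.utc)
# ... read back through a 10 s aggregateWindow gets restamped +10 s.
print(window_stop(written, timedelta(seconds=10)))
# 2023-08-10 20:14:10+00:00
```

That is exactly the +10s offset observed above: the stored value was correct all along, and only the read query's aggregateWindow was shifting the displayed _time.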