Unexpected timestamp after downsampling

I have created a task:

option task = {name: "TestTask", every: 5m, offset: 1m}

from(bucket: "HausTest")
|> range(start: -task.every)
|> filter(fn: (r) => r["_measurement"] == "fronius")
|> drop(columns: ["requestTimeStamp", "source"])
|> aggregateWindow(every: 5m, fn: mean, createEmpty: false)
|> to(bucket: "HausStundenArchiv")

In the bucket "HausStundenArchiv" I expect times like
2023-06-15T14:45:00.000Z but what I get looks like
2023-06-15T14:45:10.000Z

Is this expected? And how can I get the timestamps I expect?

Hello @Lutte,
Welcome!

The "every" option is the interval at which the task runs. It also determines when the task first starts to run, based on the specified duration literal.
Data type: Duration
For example, if you save or schedule a task at 2:30pm and run the task every hour (1h):
option task = {name: "aggregation", every: 1h}
the task first executes at 3:00pm, and subsequently every hour after that.

So most likely you saved the task at something like :45:06, and so it starts at the next 5-minute interval.
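
To illustrate with the task from the question (the exact run times are my reading of the docs quoted above, so treat them as an assumption):

option task = {name: "TestTask", every: 5m, offset: 1m}
// If the 1h example above generalizes, every: 5m schedules runs on
// 5-minute marks (:45, :50, :55, ...), and offset: 1m delays each run
// by one minute while preserving the original time range of the query.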

I don't know, but maybe you can achieve this with cron instead?

I haven't used it a lot, so I'm not sure. Maybe */5 * * * *
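
If you do try cron, a minimal sketch (untested, reusing the task name from the question; note that a task takes either every or cron, not both):

option task = {name: "TestTask", cron: "*/5 * * * *"}
// runs at every 5th minute of the hour (:00, :05, :10, ...)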

Thank you for the fast answer.
It was my mistake. The downsampled timestamps are as expected.
I was looking at the downsampled data with an aggregation in the query, which caused the unexpected timestamps.
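
For reference, querying the archive bucket directly without any aggregation shows the timestamps exactly as to() wrote them. A minimal sketch, assuming the bucket and measurement names from above (the -1h range is just for illustration):

from(bucket: "HausStundenArchiv")
|> range(start: -1h)
|> filter(fn: (r) => r["_measurement"] == "fronius")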