Target bucket with all "_time" values moved forward

Hello,
I have a primary bucket (source: raspberry4) that collects data from my Raspberry Pi 4 every 30 seconds, and a secondary bucket (target: raspberry4-intraday) where I want to downsample the data to a 15-minute mean. The task runs every 15 minutes; this is the code:

import "math"

option task = {name: "raspberry4_AVG15", every: 15m}

from(bucket: "raspberry4")
  |> range(start: -task.every)
  |> filter(fn: (r) => r._measurement == "cpu_temp")
  |> filter(fn: (r) => r._field == "value")
  |> aggregateWindow(every: 15m, fn: mean, createEmpty: false)
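  // round the 15-minute mean and divide by 1000 to get the value to store (cputemp_avg15)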
  |> map(fn: (r) => ({r with cputemp_avg15: math.round(x: r._value) / 1000.0}))
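  // write the result to the target bucket as field "AVG15min"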
  |> to(bucket: "raspberry4-intraday", fieldFn: (r) => ({"AVG15min": r.cputemp_avg15}))

If I query the source bucket over a 3-hour range with a 15-minute window, I get these records (starting from 7:30 AM CET):

The time and AVG15 values are correct; I have checked them.
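
For clarity, this is roughly the equivalent of that source query (I am using the UI, so the exact generated Flux may differ, and I am assuming the window aggregate is mean):

from(bucket: "raspberry4")
  |> range(start: -3h)
  |> filter(fn: (r) => r._measurement == "cpu_temp")
  |> filter(fn: (r) => r._field == "value")
  |> aggregateWindow(every: 15m, fn: mean, createEmpty: false)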
When I query the target bucket (3-hour range, window AUTO), I get these results (I cannot upload more than one image):

The record at 8:00 AM in the source bucket (38.30) becomes the record at 8:01 AM in the target, and so on for the following records.
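
To rule out the display, I suppose I could also read the target bucket back raw, without any window, to see exactly which "_time" values the task wrote (the field name is AVG15min per the fieldFn in the task, and I assume the measurement is still cpu_temp). A sketch:

// no aggregateWindow here, so _time is exactly what the task stored
from(bucket: "raspberry4-intraday")
  |> range(start: -3h)
  |> filter(fn: (r) => r._measurement == "cpu_temp")
  |> filter(fn: (r) => r._field == "AVG15min")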

When I query the target bucket with the window period set to 15 minutes, I get these results:

The record at 8:00 AM in the source bucket (38.30) becomes the record at 8:15 AM in the target, and so on. All records are shifted forward by 15 minutes.
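
For what it is worth, I understand aggregateWindow stamps each aggregate with its window's stop time by default (timeSrc: "_stop"). A variant of the task using timeSrc: "_start" is sketched below, although I have not verified whether it explains or removes the shift:

import "math"

option task = {name: "raspberry4_AVG15", every: 15m}

from(bucket: "raspberry4")
  |> range(start: -task.every)
  |> filter(fn: (r) => r._measurement == "cpu_temp")
  |> filter(fn: (r) => r._field == "value")
  // timeSrc: "_start" stamps each mean with its window's start time instead of its stop time
  |> aggregateWindow(every: 15m, fn: mean, timeSrc: "_start", createEmpty: false)
  |> map(fn: (r) => ({r with cputemp_avg15: math.round(x: r._value) / 1000.0}))
  |> to(bucket: "raspberry4-intraday", fieldFn: (r) => ({"AVG15min": r.cputemp_avg15}))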

Furthermore, if I change the range to 1 day, I get yet another set of timestamps:

The record at 8:00 AM in the source bucket (38.30) becomes the record at 8:04 AM in the target. None of the records keeps the original time.

Unless I have missed something, this is a real mess, and because of it I cannot trust the timestamps of the values reported in the target bucket.

Could you point me in the right direction on this matter?

Thank you for your support.

Regards,
Riccardo