I’m downsampling data to 15m intervals using a TICK script, but the script is not querying at exact 15m slots
Raw data
> SELECT "SensorId","2002","2004" FROM Reading WHERE time > now() - 1m LIMIT 5;
name: Reading
time SensorId 2002 2004
---- -------- ---- ----
2018-07-25T06:00:07Z 13867 0.15211888122558595 0.6903297305107117
2018-07-25T06:00:07Z 16474 0.009301428794860839 0.04855547845363617
2018-07-25T06:00:07Z 13852 222.72334375 373.1100769042969
2018-07-25T06:00:07Z 16432 18.61921501159668 34.59856033325195
2018-07-25T06:00:07Z 16419 16.664958984375 27.60993766784668
TICK Script
batch
    |query('SELECT mean(*) FROM "mydb"."autogen".Reading')
        .period(15m)
        .every(15m)
        .groupBy(time(15m), 'SensorId')
        .offset(1d)
    |influxDBOut()
        .database('mydb')
        .retentionPolicy('autogen')
        .measurement('Reading_Agg_15m')
        .precision('s')
This TICK script runs every 15 minutes, queries 15 minutes of yesterday's data (because of the 1d offset), calculates the mean, and stores it in the Reading_Agg_15m measurement.
For instance, when the TICK script runs at 2018-07-24 12:45:00, I expect it to query data from 2018-07-23 12:45:00 to 2018-07-23 13:00:00.
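To make my expectation concrete, this is the window I expected for that run (a minimal Python sketch of my understanding of period/offset, not actual Kapacitor internals):

```python
from datetime import datetime, timedelta

# Expected window for a run at 2018-07-24 12:45:00,
# assuming offset(1d) shifts the window back one day
# and period(15m) sets its length.
run_time = datetime(2018, 7, 24, 12, 45, 0)
start = run_time - timedelta(days=1)      # offset(1d)
end = start + timedelta(minutes=15)       # period(15m)
print(start, end)  # 2018-07-23 12:45:00 2018-07-23 13:00:00
```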
However, when I checked in InfluxDB, it shows a slightly different time range:
> show queries
SELECT mean(*) FROM mydb.autogen.Reading
WHERE time >= '2018-07-23T12:45:06.169978273Z' AND time < '2018-07-23T13:00:06.169978273Z'
GROUP BY time(15m, 0s), SensorId
Can anyone explain why the time selection moved by ~6 seconds?
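For context, what I expected is for the window start to be floored to a 15m boundary rather than carrying the task's start-time offset. This illustrative Python sketch (not Kapacitor internals) shows the flooring I had in mind:

```python
from datetime import datetime, timezone

PERIOD = 15 * 60  # 15m in seconds

def floor_to_period(ts: datetime) -> datetime:
    """Floor a timestamp down to the nearest 15m boundary."""
    epoch = int(ts.timestamp())
    return datetime.fromtimestamp(epoch - epoch % PERIOD, tz=timezone.utc)

# The shifted start from the observed query:
observed = datetime(2018, 7, 23, 12, 45, 6, tzinfo=timezone.utc)
print(floor_to_period(observed))  # 2018-07-23 12:45:00+00:00
```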