Downsample schema.fieldsAsCols() table

Hi all, I need to create a task that runs every midnight and downsamples the previous day's data into one point. I need to keep all columns in the downsampled output because my table has multiple columns that I use to filter the results.

import "influxdata/influxdb/schema"

option task = {name: "Downsample Kita PT", every: 1d, offset: 0m}

from(bucket: "kitalog")
    |> range(start: -task.every)
    |> filter(fn: (r) => r["_measurement"] == "KitaPT")
    |> schema.fieldsAsCols()
    |> filter(fn: (r) => r["pdc_pt_prod_acs"] == 1 and r["pdc_pt_defrosting"] == 0)
    |> aggregateWindow(every: 10s, fn: mean, createEmpty: false, column: "pdc_pt_in_w",...."other columns")
    |> aggregateWindow(every: task.every, fn: mean, createEmpty: false, column: "pdc_pt_in_w",...."other columns")
    |> to(bucket: "KitalogDaily", org: "xxxxxx")

How can I correct the query to obtain this result?
Thanks!!

Hi @TheMiloNet,
I would perform your downsample aggregations on your data before you pivot. Unless you have a specific need, I would apply the pivot when querying the downsampled bucket rather than inside the task.

import "influxdata/influxdb/tasks"
option task = {name: "Downsample Kita PT", every: 1d, offset: 0m}

from(bucket: "kitalog")
    |> range(start: -task.every)
    |> filter(fn: (r) => r["_measurement"] == "KitaPT")
   // |> filter(fn: (r) => r["pdc_pt_prod_acs"] == 1 and r["pdc_pt_defrosting"] == 0)
    |> aggregateWindow(every: 10s, fn: mean, createEmpty: false, column: "_value")
    |> aggregateWindow(every: task.every, fn: mean, createEmpty: false, column: "_value")
    |> to(bucket: "KitalogDaily", org: "xxxxxx")
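
With that task in place, the pivot and the column-based filters can move to query time against the downsampled bucket. A sketch of what that could look like, assuming the bucket and column names from this thread (the -30d range is just an example):

import "influxdata/influxdb/schema"

from(bucket: "KitalogDaily")
    |> range(start: -30d)  // example range; adjust to whatever window you query
    |> filter(fn: (r) => r["_measurement"] == "KitaPT")
    // pivot fields into columns here, after the downsample has already run
    |> schema.fieldsAsCols()
    |> filter(fn: (r) => r["pdc_pt_prod_acs"] == 1 and r["pdc_pt_defrosting"] == 0)

Note that after a mean aggregation, status-like fields such as pdc_pt_prod_acs hold averaged values, so an exact == 1 comparison only matches intervals where the flag was 1 the whole time.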

That's a great suggestion, thank you! I'll try it and let you know if I get the result I'm hoping for.