Task Failing after server reboot / update (unsupported aggregate column type string)

I had a downsampling task that ran happily for ages but is suddenly failing with "could not execute task run: runtime error: unsupported aggregate column type string".
Any idea what's going on?
InfluxDB v2.6.1
Server: 9dcf880

The task looks like

import "experimental"

option task = {name: "downsample30", every: 30m, offset: 5m}

fromBucket = "telegraf"

toBucket = "telegraf_1y"
toMinMaxBucket = "telegraf_minmax_1y"

all_data =
    from(bucket: fromBucket)
        |> range(start: -task.every)
        |> filter(
            fn: (r) =>
                r._measurement =~ /^cpu|^disk|^mem|ipmi_sensor|diskio|^processes|system|internal_.+|net|^pf|^libvirt_.+|^zfs_.+|zfs/,
        )

numeric_data =
    all_data
        |> filter(
            fn: (r) =>
                r._field !~ /^uptime_format|health_ok$|libvirt_domain_block_stats_read_bytes_total|ifDescr/,
        )

selective_data =
    from(bucket: fromBucket)
        |> range(start: -task.every)
        |> filter(fn: (r) => r._measurement =~ /^interface/)
        |> filter(
            fn: (r) =>
                r._field =~ /^ifInOctets|^ifHCInOctets|^ifOutOctets|^ifHCOutOctets|^ifOutErrors|^ifInErrors/,
        )

selective_data
    |> aggregateWindow(every: task.every, fn: mean)
    |> set(key: "aggregate", value: "mean")
    |> toInt()
    |> set(key: "rollup_interval", value: string(v: task.every))
    |> to(bucket: toBucket)

numeric_data
    |> aggregateWindow(every: task.every, fn: mean)
    |> set(key: "aggregate", value: "mean")
    |> toInt()
    |> set(key: "rollup_interval", value: string(v: task.every))
    |> to(bucket: toBucket)

numeric_data
    |> aggregateWindow(every: task.every, fn: min)
    |> set(key: "aggregate", value: "min")
    |> toInt()
    |> set(key: "rollup_interval", value: string(v: task.every))
    |> to(bucket: toMinMaxBucket)

numeric_data
    |> aggregateWindow(every: task.every, fn: max)
    |> set(key: "aggregate", value: "max")
    |> toInt()
    |> set(key: "rollup_interval", value: string(v: task.every))
    |> to(bucket: toMinMaxBucket)

and the logs are now showing

Started task from script: (the same script as above)
2023-03-03 16:08:02
Completed(failed)
2023-03-03 16:08:02
could not execute task run: runtime error: unsupported aggregate column type string

OK, I think I found the problem. The unanchored "|net|" alternative in the measurement regex was also matching libvirt_net, which does contain some string fields; those fields were added by a newer version of telegraf when I updated the hypervisor that InfluxDB was running on.
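For anyone hitting the same thing, there are two ways to guard against it (a sketch only, not tested against my schema): anchor the alternative so it matches exactly the measurement you intend, and/or explicitly drop string-typed values with the Flux `types` package before the data ever reaches `mean`/`min`/`max`:

```
import "types"

// Anchoring the alternative: /^net$/ matches only "net",
// not "libvirt_net" or "netstat".
//   r._measurement =~ /^cpu|^disk|...|^net$|.../

// Or filter by value type so aggregate functions never
// see a string column, regardless of what telegraf adds later:
numeric_data =
    all_data
        |> filter(fn: (r) => not types.isType(v: r._value, type: "string"))
```

The type-based filter is more robust than maintaining a field-name blocklist, since new string fields can appear whenever a telegraf plugin is updated.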


Hello @mdtancsa,
You might find the following example useful too:

I’m glad you solved it and thank you for sharing your solution

thanks very much @Anaisdg, that's a very helpful example in this case. I will certainly add it to my downsampling script at some point