Copy large buckets using to() function

Hi!
I need to copy data from a bucket to a new one (backup, move to another server, …).
I’m using a simple Flux query with the to() function at the end, pointing to another bucket on the same server or a bucket on a different server:

from(bucket: "original")
  |> range(start: -400d)
  |> to(bucket: "new_one")

The problem is that some records are not transferred. For example, I have three measurements (min, avg, max) and only min and max are copied; avg is somehow skipped. When I limit the range to just -4d, or even -40d, it is OK… Also, if I use filter() to select the specific measurement, it is copied.
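
For reference, this is the per-measurement workaround that works for me (the measurement name here is just an example):

from(bucket: "original")
  |> range(start: -400d)
  |> filter(fn: (r) => r._measurement == "avg")  // copy only the measurement that was skipped
  |> to(bucket: "new_one")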

Are there any limitations of the to() function? The bucket is pretty big, a few million records. I need to be sure all data is transferred between the buckets.

InfluxDB v2.7.10

If I run the same query from the command line, via “influx query”, it takes much, much longer to finish, but it seems to be complete… Are there any limitations on how the web GUI processes data?
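
For completeness, I run it roughly like this; the org name and file path are placeholders for my setup:

# my-org and copy.flux are placeholders
influx query --org my-org --file copy.flux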

@LPs, I’m not 100% certain, but it would make sense for the GUI to enforce client-side query timeouts that the CLI wouldn’t necessarily have.

Another option here would be to perform the migration in time-based batches instead of in one huge time range. There’s actually a Flux migration task you could use here: Migrate data from InfluxDB Cloud to InfluxDB OSS | InfluxDB OSS v2 Documentation

The article is mainly about migrating data from InfluxDB Cloud to InfluxDB OSS v2, but you could still use it to migrate data between two OSS v2 instances.
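
As a minimal sketch of the batching idea (bucket names taken from your example), you’d copy one bounded window at a time and shift start/stop on each run until the full 400 days are covered:

// first 30-day window; shift start/stop for each subsequent run
from(bucket: "original")
  |> range(start: -400d, stop: -370d)
  |> to(bucket: "new_one")

The linked migration task automates this kind of windowing for you, which makes it the safer option for a multi-million-record bucket.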