“You cannot write to the same bucket you are reading from.”

In the Tasks documentation are the following two statements:

In the vast majority of task use cases, once data is transformed, it needs to be sent and stored somewhere. This could be a separate bucket with a different retention policy, another measurement, or even an alert endpoint (Coming).

Important Notes
• You cannot write to the same bucket you are reading from.

The statement in the first block is entirely correct and mentions writing to another measurement. But the note bullet in the second block seems to suggest it’s not possible to calculate a third measurement from two source measurements and write the result back to the originating bucket.

Is this so? Why? We presently do this very thing in v1 using Kapacitor. In various TICK scripts, we calculate derived values from source measurements and write the result to another measurement in the same database. Surely this must be possible in v2, albeit mapped to buckets.

If the note bullet above is not true or is incomplete, what is the appropriate Flux means for writing the results of a calculation to a measurement in the same bucket as its inputs?

I’ve seen the to() function documentation. The examples don’t show using to() to write to a new measurement. I assume that by specifying _measurement among the columns it should be possible.
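For illustration, here is roughly what I have in mind, sketched as a Flux task. This is an assumption on my part, not something taken from the docs: the bucket names, measurement names, and the use of set() to override _measurement before to() are all hypothetical. Given the current restriction, the sketch writes to a second bucket rather than back to the source:

```flux
// Hypothetical task: derive a new measurement from two source measurements.
// Bucket and measurement names are made up for the example.
option task = {name: "derive-example", every: 1h}

from(bucket: "raw")
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "m1" or r._measurement == "m2")
    |> aggregateWindow(every: 1m, fn: mean)
    // Relabel the output rows as a new measurement before writing.
    |> set(key: "_measurement", value: "derived")
    // Must target a different bucket while the same-bucket limitation exists.
    |> to(bucket: "derived-data")
```

If the limitation were lifted, presumably only the bucket argument of to() would change.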

In summary…

  1. Is it possible in a Task to write a measurement to the same bucket from which the input measurements originated?
  2. If so, what is the proper usage of Flux to do so? Use to() with an appropriate manipulation of _measurement?

This was a safety measure put in place to make it harder for people to create a loop where they read data and write more data and then read it again ad infinitum. Although I think it’s probably one we should revisit as there are likely better ways to prevent people from shooting themselves in the foot. For now the limitation exists, but we’ll look at lifting it.

Ah ha! Thank you for the quick reply!

We would much appreciate revisiting this decision. The protection aspect makes lots of sense now that I know the thinking. If said protection were possible at the granularity of a measurement instead of a bucket, that would seem like a sensible revision.

In our case, it just feels a bit silly and overly complex to maintain one bucket for raw measurements and a second bucket for measurements containing derived calculations. If it were absolutely necessary I’m guessing we could live with it. But, I can already identify a handful of headaches this would cause when we update our own tools and other application configurations to use v2 down the road.