Hello,
I’ve got two buckets with similar tags (“name” in bucket 1 and “item” in bucket 2), but the tag values are the same, e.g. name:apples and item:apples.
Bucket 1 has the latest price for apples and is updated every hour. Bucket 2 has the quantity and the original amount spent on apples, and is updated only when the quantity changes.
I would like to update bucket 1 to add the calculated value of apples: the current price (from bucket 1) multiplied by the last qty value (from bucket 2) for the tag apples.
I would like to do this for every tag value in bucket 1 (apples, oranges, bananas, etc.) every time bucket 1 is updated.
P.S. The above is just a simplified example. I am actually trying to store data and build a dashboard for my cryptocurrency portfolio. I manually update the “holdings” bucket for each asset with the amount I’ve invested and the quantity of each asset that I own, and with help from fellow community users I’ve created a Telegraf http input that collects the latest prices for the currencies I’m holding and stores them in a different bucket every hour.
Progress report:
So far, I’ve got the following:
1. Query bucket 1 to get the id, name, qty and amount invested. Save the result in variable b1.
2. Query bucket 2 to get the id, name and current price. Save the result in variable b2.
3. Join b1 and b2 on id.
4. Perform the column calculation into a new column (_total_value).
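As a rough sketch, the steps above might look like the following in Flux. The bucket names, time ranges, and field names here are assumptions for illustration; note that join() suffixes the conflicting columns with the keys given in the tables: record (so {b1: b1, b2: b2} yields _value_b1 and _value_b2):

```flux
// Step 1: holdings (qty, amount invested) -- field names are assumptions
b1 = from(bucket: "bucket 1")
    |> range(start: -30d)
    |> filter(fn: (r) => r._field == "qty")
    |> last()

// Step 2: latest prices
b2 = from(bucket: "bucket 2")
    |> range(start: -1h)
    |> filter(fn: (r) => r._field == "price")
    |> last()

// Steps 3 and 4: join on the shared id tag, then compute the new column
join(tables: {b1: b1, b2: b2}, on: ["id"])
    |> map(fn: (r) => ({r with _total_value: r._value_b1 * r._value_b2}))
```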
Hi @anishpq
Thanks for your help. Would you be able to help with the code?
If I’m not mistaken, I need to join the columns from b1 and b2 first in order to create the new column _total_value, since it’s calculated from columns of both tables, right?
I was thinking that once the tables are joined and the new column is created, I would then use the schema.fieldsAsCols() function and output the result to “bucket 2”.
join(tables: {b1, b2}, on: ["id"])
|> map(fn: (r) => ({ r with _total_value: r._value_qty * r._value_price}))
|> schema.fieldsAsCols()
|> to(bucket: "bucket 2", org: "myorg")
Edit: running the above returns “undefined identifier schema”
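(The “undefined identifier schema” error typically means the schema package hasn’t been imported; Flux package functions need an explicit import at the top of the script. A sketch of the same query with the import added — column suffixes like _value_qty assume the keys used in the tables: record match:)

```flux
import "influxdata/influxdb/schema"

join(tables: {qty: b1, price: b2}, on: ["id"])
    |> map(fn: (r) => ({r with _total_value: r._value_qty * r._value_price}))
    |> schema.fieldsAsCols()
    |> to(bucket: "bucket 2", org: "myorg")
```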
Hello @t481,
Oh sorry, I didn’t notice that b1 returns just one value. Are you returning just one value in one table, or one value per table across multiple tables?
I was going to suggest using schema.fieldsAsCols() if you want to perform math across multiple fields at the same timestamp (but join works too).
You don’t need to use schema.fieldsAsCols() before the to() function.
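(For reference, schema.fieldsAsCols() is shorthand for a pivot that turns each field into its own column, which makes same-timestamp math straightforward when both fields live in the same bucket and measurement — an assumption in this sketch; in this thread the fields are in two buckets, hence the join:)

```flux
import "influxdata/influxdb/schema"

from(bucket: "holdings")  // hypothetical bucket containing both fields
    |> range(start: -1h)
    |> filter(fn: (r) => r._field == "qty" or r._field == "price")
    |> schema.fieldsAsCols()  // equivalent to pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
    |> map(fn: (r) => ({r with total_value: r.qty * r.price}))
```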
You can simply do:
join(tables: {b1, b2}, on: ["id"])
|> map(fn: (r) => ({ r with _total_value: r._value_qty * r._value_price}))
|> to(bucket: "bucket 2") //org is the org you're querying from by default
If you get an error about measurement already existing, then you might need to write the data to a new measurement by adding:
join(tables: {b1, b2}, on: ["id"])
|> map(fn: (r) => ({ r with _total_value: r._value_qty * r._value_price}))
|> set(key: "_measurement", value: "myNewMeasurement")
|> to(bucket: "bucket 2") //org is the org you're querying from by default
@Anaisdg
I’m not sure I fully understand your question, but I’m only selecting the qty field values from b1, hence the filter on _field == "qty".
I ran into the “measurement exists” error and overcame it by selecting the _measurement from b2.
However, it returned:
table has no _field column
Isn’t _field the column that the map() function generates as “_total_value”, with _value being the result of the calculation for every point?
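(A note on the “table has no _field column” error: to() expects _measurement, _field, and _value columns, but map() here creates an ordinary column named _total_value, not a _field/_value pair, and after a join the original _field columns are suffixed with the table keys. A hedged sketch of one way to satisfy to() — the _value_b1/_value_b2 suffixes assume tables: {b1: b1, b2: b2}:)

```flux
join(tables: {b1: b1, b2: b2}, on: ["id"])
    |> map(fn: (r) => ({r with
        _field: "total_value",              // to() requires a _field column
        _value: r._value_b1 * r._value_b2,  // and a _value column holding the result
    }))
    |> to(bucket: "bucket 2")
```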