How to divide all values by sum

Hi Community,

I would like to calculate the percentage of each value with the following query and map it to _valuepct. To do that, I first need the sum of all values mapped to a column called _valuesum. How is that possible?

My flux query so far:

  from(bucket:"VenusOS")
  |> range(start: -3mo)
  |> filter(fn: (r) => r["_measurement"] == "Shunt")
  |> filter(fn: (r) => r["_field"] == "SoC_1")
  |> aggregateWindow(every: 30m, fn: mean, createEmpty: true)
  |> fill(usePrevious: true)
  |> histogram(bins: [94.0, 95.0, 96.0, 97.0, 98.0, 99.0, 100.0])
  |> difference()
  |> keep(columns: ["le", "_value"])

This is the result I would like to get:

I know how to calculate the _valuepct with

|> map(fn: (r) => ({ r with _valuepct: r._value / r._valuesum }))

But I can’t figure out how to get the outcome of sum() into the _valuesum column.

Could somebody please help me? I have been racking my brain over this.

I just did that recently.

It can get tricky, however, depending on what you want. I believe you can pass a parameter to histogram() so the output is normalized, but keep in mind that histogram() in Flux produces cumulative values, so I'm not sure that alone is what you need.

|> histogram(bins: [94.0, 95.0, 96.0, 97.0, 98.0, 99.0, 100.0], normalize: true)
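
For example (just a sketch, I have not tested it against your data), the whole pipeline with normalize could look like this; difference() then turns the cumulative fractions into per-bin fractions, so multiplying by 100 gives the percentage directly:

  from(bucket:"VenusOS")
  |> range(start: -3mo)
  |> filter(fn: (r) => r["_measurement"] == "Shunt")
  |> filter(fn: (r) => r["_field"] == "SoC_1")
  |> aggregateWindow(every: 30m, fn: mean, createEmpty: true)
  |> fill(usePrevious: true)
  // normalize: true divides the cumulative bin counts by the total count
  |> histogram(bins: [94.0, 95.0, 96.0, 97.0, 98.0, 99.0, 100.0], normalize: true)
  // difference() converts the cumulative fractions into per-bin fractions
  |> difference()
  // scale to percent
  |> map(fn: (r) => ({ r with _valuepct: 100.0 * r._value }))
  |> keep(columns: ["le", "_valuepct"])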

First, the explanation:

You need to create a second table that is the same as the first one and take its sum. sum() drops _time and leaves a single row, so rename the _value column to _valuesum. Then use the union() function to merge the two tables, use sort() to move the total row to the top, and use fill() to copy that total down to every row in the table. After that, remove the row that came from the sum table by filtering on a column it lacks (here exists r._value, since the renamed row no longer has _value), and finally you can use your map() function:

 A = from(bucket:"VenusOS")
  |> range(start: -3mo)
  |> filter(fn: (r) => r["_measurement"] == "Shunt")
  |> filter(fn: (r) => r["_field"] == "SoC_1")
  |> aggregateWindow(every: 30m, fn: mean, createEmpty: true)
  |> fill(usePrevious: true)
  |> histogram(bins: [94.0, 95.0, 96.0, 97.0, 98.0, 99.0, 100.0])
  |> difference()
  |> keep(columns: ["le", "_value"])


B = A
  |> toFloat() // just in case your numbers were integers and you want to get the % value later on
  |> rename(columns: {_value: "_valuesum"})
  |> sum(column: "_valuesum")  
C = union(tables: [B, A])
  |> sort(columns: ["le"], desc: false)
  |> fill(column: "_valuesum", usePrevious: true)
  |> filter(fn: (r) => exists r._value)
  |> map(fn: (r) => ({ r with _valuepct: 100.0 * r._value / r._valuesum }))
  |> yield()

Give it a try and let me know 🙂

There is an easier way to do it by extracting a scalar value into _valuesum, but the script I gave you works with as many groups as you have in your table.
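If you only have a single table, the scalar version would look roughly like this (untested sketch; findRecord() just pulls the first record out of the summed table, so it does not handle multiple groups the way the union() version does):

  sumRecord = A
    |> sum()
    |> findRecord(fn: (key) => true, idx: 0)

  total = sumRecord._value

  A
    |> map(fn: (r) => ({ r with _valuepct: 100.0 * r._value / total }))
    |> yield()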

Thank you so much!

I needed this to create a “histogram” from a bar chart in Grafana. Currently, it’s not possible to create a histogram from 2 columns.

With a slight modification, I was able to create it like this:

A = from(bucket:"VenusOS")
  |> range(start: -3mo)
  |> filter(fn: (r) => r["_measurement"] == "Shunt")
  |> filter(fn: (r) => r["_field"] == "SoC_1")
  |> aggregateWindow(every: 30m, fn: mean, createEmpty: true)
  |> fill(usePrevious: true)
  |> histogram(bins: [94.0, 95.0, 96.0, 97.0, 98.0, 99.0, 100.0])
  |> difference()
  |> keep(columns: ["le", "_value"])

B = A
  |> toFloat() // just in case your numbers were integers and you want to get the % value later on
  |> rename(columns: {_value: "_valuesum"})
  |> sum(column: "_valuesum")  

C = union(tables: [B, A])
  |> sort(columns: ["le"], desc: false)
  |> fill(column: "_valuesum", usePrevious: true) 
  |> filter(fn: (r) => exists r._value)  
  |> map(fn: (r) => ({ r with _valuepct: 100.0 * r._value / r._valuesum }))
  |> map(fn: (r) => ({r with le: string(v: r.le)  }))
  |> keep(columns: ["le", "_valuepct"])
  |> yield()

The Result:

Thank you.


My pleasure.
You were lucky because I needed to do something similar last week.