Migrate query to calculate energy (kWh)

Hello,

I’m new to Flux, and I’m trying to calculate monthly energy usage in kilowatt-hours (kWh).

I’ve got this query, which works in InfluxQL, but InfluxQL doesn’t allow grouping data by calendar month, which Flux allegedly does.

SELECT integral("value", 1h) / 1000 FROM "power" WHERE $timeFilter GROUP BY time(1d,12) fill(null)

My data is essentially the current power draw in watts, written every 5 seconds.

time power.value
2020-04-18T22:22:35.000Z 179.97
2020-04-18T22:22:40.000Z 179.32
2020-04-18T22:22:45.000Z 178.77
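
For reference on the unit conversion: integrating power in watts over time with a 1h unit yields watt-hours, and dividing by 1000 converts to kilowatt-hours. For example, a constant draw of 180 W sustained for one hour is 180 Wh = 0.18 kWh, which is why the InfluxQL query above divides integral("value", 1h) by 1000.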

Here’s my attempt at a Flux query, which isn’t working.

A) How would I integrate the /1000 into the query?
B) I seemed to need the duplicate(column: "_stop", as: "_time") or the query would fail with an error about not finding the _time column. Have I done this correctly?

from(bucket: "iotaWatt/autogen")
  |> range($range)
  |> filter(fn: (r) => r._measurement == "power" and r._field == "value")
  |> window(every: 1m)
  |> integral(unit:1h)
  |> sort(columns: ["_time"])
  |> duplicate(column: "_stop", as: "_time")
  |> aggregateWindow(every: 1mo, fn: sum)

Cheers,
Mike

Hello,
Why do you include |> duplicate(column: "_stop", as: "_time")? Also, I’m not sure that you need to both window the data and then run aggregateWindow on it as well.
To divide by 1000, you would use
|> map(fn: (r) => ({ r with _value: r._value / 1000.0 }))

If I don’t include the duplicate function, I receive the error mentioned above about the query not finding the _time column.

The window is one minute (1m); the aggregate function is one month (1mo).

Hello @mabnz,
Did the map work for you?
Do you have another question?

Out of curiosity, can you please share the raw output from

|> range($range)
|> filter(fn: (r) => r._measurement == "power" and r._field == "value")

There should be a time column there already.

The map sort of worked, but it only returned one row of results instead of the three I was expecting for my time period. I’m aggregating by month over a three-month range, so I expected three rows (one per month), but instead I got:

#datatype,string,long,string,string,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,double
#group,false,false,true,true,true,true,false,false
#default,_result,,,,,,,
,result,table,_field,_measurement,_start,_stop,_time,_value
,,0,value,power,2020-01-23T20:41:06.354405687Z,2020-04-22T20:41:06.354405687Z,2020-02-01T00:00:00Z,
,,0,value,power,2020-01-23T20:41:06.354405687Z,2020-04-22T20:41:06.354405687Z,2020-03-01T00:00:00Z,
,,0,value,power,2020-01-23T20:41:06.354405687Z,2020-04-22T20:41:06.354405687Z,2020-04-01T00:00:00Z,
,,0,value,power,2020-01-23T20:41:06.354405687Z,2020-04-22T20:41:06.354405687Z,2020-04-22T20:41:06.354405687Z,601.9094843750224

For the query you suggested, here’s the raw output:

#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,double,string,string
#group,false,false,true,true,false,false,true,true
#default,_result,,,,,,,
,result,table,_start,_stop,_time,_value,_field,_measurement
,,0,2020-04-22T20:50:35.646933682Z,2020-04-22T20:51:35.646933682Z,2020-04-22T20:50:40Z,350.42,value,power
,,0,2020-04-22T20:50:35.646933682Z,2020-04-22T20:51:35.646933682Z,2020-04-22T20:50:45Z,349.81,value,power
,,0,2020-04-22T20:50:35.646933682Z,2020-04-22T20:51:35.646933682Z,2020-04-22T20:50:50Z,348.97,value,power
,,0,2020-04-22T20:50:35.646933682Z,2020-04-22T20:51:35.646933682Z,2020-04-22T20:50:55Z,347.85,value,power
,,0,2020-04-22T20:50:35.646933682Z,2020-04-22T20:51:35.646933682Z,2020-04-22T20:51:00Z,355.08,value,power

Anyone? No one knows about aggregate functions?

Hi @mabnz,

Can you share the latest version of your query, please?

I don’t understand why you’re windowing on 1m before the integral; can you help me understand?

Thanks

You had to do this because integral() is an aggregate function (reduces multiple rows into a single row) and drops the _time column. aggregateWindow() is designed to handle the complexity of adding the _time column back in after running an aggregate. You’re basically running aggregateWindow(), just in long form.
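
For reference, aggregateWindow() wraps roughly this long form (a conceptual sketch of its documented behavior, where data stands for the output of your range and filter calls):

data
  |> window(every: 1mo)                       // split the stream into month-long tables
  |> sum()                                    // aggregate each window, which drops _time
  |> duplicate(column: "_stop", as: "_time")  // restore _time from each window's stop boundary
  |> window(every: inf)                       // merge the windows back into a single table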

He’s trying to calculate the integral (area under the curve) for a given period of time to get kWh, but I think windowing by 1m is too aggressive. This will likely cause the query to fail due to memory constraints, especially if you’re querying a large period of time.

Try this:

from(bucket: "iotaWatt/autogen")
  |> range($range)
  |> filter(fn: (r) => r._measurement == "power" and r._field == "value")
  // Integrate watts over each hour (giving Wh), then convert to kWh
  |> aggregateWindow(
    every: 1h,
    fn: (tables=<-, column) =>
      tables
        |> integral(unit: 1h)
        |> map(fn: (r) => ({ r with _value: r._value / 1000.0 })))
  // Sum the hourly kWh values into monthly totals
  |> aggregateWindow(fn: sum, every: 1mo)

That didn’t seem to work; strangely, no results were returned and no error.

msg="Executed Flux query" log_id=0MFIGB2G000 service=httpd compiler_type=flux response_size=2 query="from(bucket: \"iotaWatt/autogen\")\n |> range(start: -40d)\n |> filter(fn: (r) => r._measurement == \"grid\" and r._field == \"value\")\n |> aggregateWindow(\n every: 1h,\n fn: (tables=<-, column) =>\n tables\n |> integral(unit: 1h)\n |> map(fn: (r) => ({ r with _value: r._value / 1000.0})))\n |> aggregateWindow(fn: sum, every: 1mo)" stat_total_duration=0.000ms stat_compile_duration=0.000ms stat_queue_duration=0.000ms stat_plan_duration=0.000ms stat_requeue_duration=0.000ms stat_execute_duration=0.000ms stat_max_allocated=0 stat_concurrency=0

What does $range resolve to?

In that reply, $range resolved to 40 days.

I’ve since changed it to 180d, and have seen different results.

I am now getting data back, but not what I’m expecting. The first two months are populated (with the wrong data, ‘802.70’ and ‘25.96’), and the remaining four months I expected results for have none.

Query:

data:"from(bucket: "iotaWatt/autogen") |> range(start: -180d) |> filter(fn: (r) => r._measurement == "grid" and r._field == "value") |> aggregateWindow( every: 1h, fn: (tables=<-, column) => tables |> integral(unit: 1h) |> map(fn: (r) => ({ r with _value: r._value / 1000.0}))) |> aggregateWindow(fn: sum, every: 1mo) "

Results:

#datatype,string,long,string,string,dateTime:RFC3339,dateTime:RFC3339,double,dateTime:RFC3339
#group,false,false,true,true,true,true,false,false
#default,_result,,,,,,,
,result,table,_field,_measurement,_start,_stop,_value,_time
,,0,value,grid,2019-11-02T19:54:14.277631668Z,2020-04-30T19:54:14.277631668Z,802.7061565208334,2019-12-01T00:00:00Z
,,0,value,grid,2019-11-02T19:54:14.277631668Z,2020-04-30T19:54:14.277631668Z,25.965423076388884,2020-01-01T00:00:00Z
,,0,value,grid,2019-11-02T19:54:14.277631668Z,2020-04-30T19:54:14.277631668Z,0,2020-02-01T00:00:00Z    
,,0,value,grid,2019-11-02T19:54:14.277631668Z,2020-04-30T19:54:14.277631668Z,0,2020-03-01T00:00:00Z
,,0,value,grid,2019-11-02T19:54:14.277631668Z,2020-04-30T19:54:14.277631668Z,0,2020-04-01T00:00:00Z
,,0,value,grid,2019-11-02T19:54:14.277631668Z,2020-04-30T19:54:14.277631668Z,0,2020-04-30T19:54:14.277631668Z

Can you provide some of the output from:

from(bucket: "iotaWatt/autogen")
  |> range(start: -180d)
  |> filter(fn: (r) => r._measurement == "grid" and r._field == "value")

For example:

#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,double,string,string
#group,false,false,true,true,false,false,true,true
#default,_result,,,,,,,
,result,table,_start,_stop,_time,_value,_field,_measurement
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:10:25Z,2987.63,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:10:30Z,2991.98,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:10:35Z,3013.92,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:10:40Z,3019.19,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:10:45Z,3012.22,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:10:50Z,3006.62,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:10:55Z,3000.75,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:11:00Z,3005.9,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:11:05Z,3011.02,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:11:10Z,3004.04,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:11:15Z,3012.32,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:11:20Z,3012.55,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:11:25Z,3013.5,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:11:30Z,3011.07,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:11:35Z,2096.74,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:11:40Z,1142.57,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:11:45Z,1135.42,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:11:50Z,1138.56,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:11:55Z,1135.42,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:12:00Z,1127.35,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:12:05Z,1138.57,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:12:10Z,1128.47,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:12:15Z,1132.65,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:12:20Z,1135.23,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:12:25Z,1132.41,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:12:30Z,1133.46,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:12:35Z,1128.35,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:12:40Z,1108.8,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:12:45Z,1108.18,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:12:50Z,1138.38,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:12:55Z,1146.94,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:13:00Z,1109.28,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:13:05Z,1115.97,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:13:10Z,1114.3,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:13:15Z,1112.5,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:13:20Z,1108.67,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:13:25Z,1115.18,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:13:30Z,1111.99,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:13:35Z,1123.2,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:13:40Z,1128.03,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:13:45Z,1116.68,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:13:50Z,1115.92,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:13:55Z,1110.78,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:14:00Z,1113.28,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:14:05Z,1115.82,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:14:10Z,1105.4,value,grid
,,0,2019-11-03T03:10:22.947788477Z,2020-05-01T03:10:22.947788477Z,2019-11-03T03:14:15Z,1115.96,value,grid

Hmm, I wonder if Flux is truncating the response because of its size and Grafana has no way to show that only partial results were returned. With points every 5 seconds, your 180-day query is likely returning, then aggregating, approximately 3.1 million points (180 days × 17,280 points per day). Although you should still get results for the 40-day time range…

What version of InfluxDB are you using?

InfluxDB 1.8.0, Grafana 6.7.3.

Why do you try to calculate monthly consumption from a database that has a point every 5 seconds? Build a continuous query that downsamples your data into 1h chunks and calculate your monthly consumption from that measurement, for example as sketched below.
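
A minimal sketch of such a continuous query in InfluxQL (continuous queries are a 1.x/InfluxQL feature; the database name iotaWatt, the CQ name, and the target measurement power_1h are assumptions, so adjust them to your setup):

CREATE CONTINUOUS QUERY "power_kwh_1h" ON "iotaWatt"
BEGIN
  SELECT integral("value", 1h) / 1000 AS "kwh"
  INTO "power_1h"
  FROM "power"
  GROUP BY time(1h)
END

The monthly totals then reduce to a cheap sum over power_1h instead of integrating millions of raw points.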

Hi @mabnz, did you resolve your issue? I need a similar solution: I have the current power consumption recorded every 5 seconds, and I need to build an “electricity meter” in Flux, with daily, monthly, and year-to-date total consumption.

Scott’s code worked great for me, although I left off the last line because I want to create hourly data first (I will also integrate to daily and monthly).
The only thing missing is changing the unit label from Watts to kWh in the output.

Can anyone show the working code?

Hello @Anaisdg,
I am currently switching from InfluxDB 1 to Version 2. At the moment I am sending the data from the electricity meter to both databases. In Grafana, I have already successfully visualized the data from V1. At the same time, I am now working on visualizing the data from V2 in Grafana. When I have successfully completed this, V1 can be switched off.
I still need some help here because the two visualizations give different results in Grafana, and at the moment I don’t know where the problem is.
The following V1 code is intended to display the averages per hour for the current day:

SELECT non_negative_difference(last("value")) FROM "stromzaehler" WHERE ("measurement" = 'KW/h') AND $timeFilter GROUP BY time($__interval) tz('CET')

Here the V2 code:

from(bucket: "stromzaehler")
  |> range(start: -1d)
  |> filter(fn: (r) => r["_measurement"] == "stromzaehler")
  |> aggregateWindow(every: 1h, fn: spread, createEmpty: false)
  |> drop(columns: ["measurement", "month", "week", "year"])

As already described, the results in Grafana are different. Could it be due to the time zone? The raw data is written to the DB in UTC (one way to check this is sketched below).
The visualization in Grafana takes place in each case as “Gauge” and “Time Series”.
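
If the time zone is the cause, recent Flux versions can evaluate window boundaries in a named location via the timezone package. A minimal sketch, assuming your InfluxDB 2.x build ships a Flux version that includes this package:

import "timezone"

// Evaluate aggregateWindow boundaries in CET/CEST instead of UTC
option location = timezone.location(name: "Europe/Berlin")

from(bucket: "stromzaehler")
  |> range(start: -1d)
  |> filter(fn: (r) => r["_measurement"] == "stromzaehler")
  |> aggregateWindow(every: 1h, fn: spread, createEmpty: false)
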
The most complicated part then follows: the resulting data should be linked to the costs. Here, “Consumption”, “Costs”, and “Credit” for the current year are to be visualized per month in a “Table” panel. Here is the V1 code:

SELECT spread("value") AS "Verbrauch", spread("value") * $price_per_unit / 100 + $basic_charge / 12 AS "Kosten", spread("value") * $price_per_unit / 100 + $basic_charge / 12 - $part_payment FROM "stromzaehler" WHERE ("year" =~ /^$year$/) AND $timeFilter GROUP BY "month" tz('CET')
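
A rough Flux counterpart might look like the sketch below. The Grafana variables are replaced by hypothetical hardcoded values, the Guthaben column name is my own stand-in for the unaliased third field, and it assumes the stored value is a cumulative meter reading (so spread() per month gives consumption); treat it as an illustration, not a tested translation:

import "timezone"

option location = timezone.location(name: "Europe/Berlin")

// Hypothetical placeholders for the Grafana variables
price_per_unit = 30.0  // ct/kWh, stands in for $price_per_unit
basic_charge = 120.0   // EUR/year, stands in for $basic_charge
part_payment = 50.0    // EUR/month, stands in for $part_payment

from(bucket: "stromzaehler")
  |> range(start: -1y)
  |> filter(fn: (r) => r["_measurement"] == "stromzaehler")
  // Per-month consumption from a monotonically increasing meter reading
  |> aggregateWindow(every: 1mo, fn: spread, createEmpty: false)
  |> map(fn: (r) => ({ r with
      Verbrauch: r._value,
      Kosten: r._value * price_per_unit / 100.0 + basic_charge / 12.0,
      Guthaben: r._value * price_per_unit / 100.0 + basic_charge / 12.0 - part_payment }))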

Many thanks for the support.
Best Regards
Thomas