Hey Louis,
Out of curiosity, where are you “looking”? Just random Google searches, or somewhere else? Obviously, we want to make your search experience better so that you can find the answer to this, so a few hints for us here would be helpful. Thanks!
Have a peek at the official documentation, in particular the Flux function reference.
I’m not sure you need integral() in this case; if your kWh value is a running total, a simple difference between points will probably suffice.
Also, if you aren’t sure about shaping your data, I’d strongly suggest reviewing line protocol and the elements which determine a particular series; the Handle duplicate data points when writing to InfluxDB | InfluxDB OSS 2.0 Documentation page is pretty helpful here as well.
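As a quick illustration of the series point, line protocol along these lines (the measurement name energy is just a placeholder I picked; the key part is that deviceId is a tag rather than a field) gives each device its own series:

```
energy,deviceId=12345 kWh=100 1577836800000000000
energy,deviceId=12345 kWh=120 1577923200000000000
energy,deviceId=67890 kWh=310 1577836800000000000
```

With that shape, two devices reporting at the same timestamp land in separate series instead of one point overwriting the other.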
In terms of the specifics, I used the array.from() function to dynamically build the sample data you supplied, since I don’t have access to what you’ve written in. I added deviceId as a tag to uniquely identify the series (I’m assuming you have more than one device supplying data?). Then I just used the difference() function to yield the result.
import "experimental/array"
values = [
{ _time: 2020-01-01T00:00:00Z, _field: "kWh", _value: 100, deviceId: "12345"},
{ _time: 2020-01-02T00:00:00Z, _field: "kWh", _value: 120, deviceId: "12345"},
{ _time: 2020-01-03T00:00:00Z, _field: "kWh", _value: 125, deviceId: "12345"},
{ _time: 2020-01-04T00:00:00Z, _field: "kWh", _value: 134, deviceId: "12345"},
{ _time: 2020-01-05T00:00:00Z, _field: "kWh", _value: 156, deviceId: "12345"}
]
array.from(rows: values)
|> difference(nonNegative: false, keepFirst: true, columns: ["_value"])
Result:
| table | _time | _field | _value | deviceId |
| --- | --- | --- | --- | --- |
| 0 | 2020-01-01T00:00:00Z | kWh |  | 12345 |
| 0 | 2020-01-02T00:00:00Z | kWh | 20 | 12345 |
| 0 | 2020-01-03T00:00:00Z | kWh | 5 | 12345 |
| 0 | 2020-01-04T00:00:00Z | kWh | 9 | 12345 |
| 0 | 2020-01-05T00:00:00Z | kWh | 22 | 12345 |
You’ll notice that the first value is null because there is no previous point to difference against. You can eliminate that row by setting `keepFirst: false` (or by just removing `keepFirst`, since `false` is the default).
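If you want to run the same thing against the data you’re actually writing in (rather than the array.from() sample), something along these lines should do it; the bucket name, measurement name, and time range below are just guesses on my part, so swap in whatever matches your schema:

```flux
from(bucket: "energy")
    |> range(start: -30d)
    |> filter(fn: (r) => r._measurement == "energy" and r._field == "kWh")
    |> filter(fn: (r) => r.deviceId == "12345")
    // keepFirst defaults to false, so the null first row is dropped automatically
    |> difference(nonNegative: false, columns: ["_value"])
```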
Does that help?