How to compare the current series of data with the previous series using Kapacitor (TICKscript)

kapacitor

#1

Hello,

We have a continuous stream of data flowing from InfluxDB to Kapacitor. InfluxDB is collecting a point every 3 seconds, but in the TICKscript I used a period of 1m with stream.
Query 1: Will Kapacitor get 20 series of data in 1 min, since InfluxDB is collecting every 3 sec? (I have seen only one series getting into Kapacitor.)
Query 2: How to compare the previous series with the current series?
Below is my script:
dbrp "store"."autogen"

var current = stream
    |from()
        .measurement('test')
        .groupBy()
    |window()
        .period(1m)
        .every(1m)

var old = current
    |shift(-1m)

|alert()
    .id('/jobs:jobs/telemetry-measurement:measurement/measurement=test/query=test')
    .crit(lambda: (('current.tag1' == 'old.tag1') AND ('current.tag2' == 'old.tag2') AND ('current.field1' != 'old.field1')))
    .log('/tmp/out.tmp')

In particular, I have a doubt about the lambda condition: tag1 and tag2 are tags and field1 is the field. And where should single or double quotes be used?

Query 3: Is it possible to join based on tags? If the join matches on timestamp, then the previous and current series will not match, because they have different timestamps. Or is the timestamp not taken into consideration in these kinds of joins?

Could you please provide the two variations (Query 2 and Query 3) for the above script? Your help will be much appreciated.

Thanks a lot !

Regards,

Rajesh


How to compare 2 consecutive series of data and the ways to access/compare variables in Tick?
#2

Kapacitor will receive 20 batches of data per minute through the stream.

I think it would be helpful to get familiar with a series in InfluxDB, which you can read about here or watch a video here.
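For concreteness, a series is the set of points that share a measurement name and tag set. Using hypothetical cpu data in line protocol:

```
cpu,host=A usage_user=10 1531698815000000000
cpu,host=A usage_user=12 1531698818000000000
cpu,host=B usage_user=11 1531698815000000000
```

The first two points belong to the same series (same measurement, same host tag); the third is a separate series because its host tag differs.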

What exactly are you trying to compare with previous and next?

Here is an example using batches comparing the minute from the previous day with the current minute.

// Query the current minute
var this_minute = batch
    |query('''
SELECT count("usage_user") FROM "telegraf"."autogen"."cpu"
''')
        .period(1m)
        .every(1m)

var last_minute = batch
    |query('''
SELECT count("usage_user") FROM "telegraf"."autogen"."cpu"
''')
        .period(1m)
        .every(1m)
        // offset(24h) queries the same minute from the previous day
        .offset(24h)
    // shift that minute forward 24h so its timestamps line up with the current minute
    |shift(24h)

this_minute
    |join(last_minute)
        .as('this', 'last')
    |log()
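To act on the joined data rather than just log it, you could end the chain with an alert instead. This is only a sketch; the "this.count"/"last.count" field names come from the count() queries above, and the condition and log path are just illustrations:

```
this_minute
    |join(last_minute)
        .as('this', 'last')
    |alert()
        // fires when this minute's count drops below the same minute yesterday
        .crit(lambda: "this.count" < "last.count")
        .log('/tmp/minute_compare.log')
```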

Try implementing some of this logic and let me know how it goes.


#3

Thanks Katy. As you said, we are receiving 20 batches of data, and I want to compare each batch with the previous one, instead of a fixed one minute or one hour back. Can we do the above using stream?


#4

For clarification, what do you want to compare within those batches?


#5

lambda: (('current.tag1' == 'old.tag1') AND ('current.tag2' == 'old.tag2') AND ('current.field1' != 'old.field1'))

Here, current is the latest batch of data and old is the former one (the batch just before the current).


#6

Here is something close to what you want.

dbrp "telegraf"."autogen"

var data = stream
    |from()
        .measurement('cpu')
        .groupBy(*)
    |window()
        .periodCount(61)
        .everyCount(1)
        .fillPeriod()

var current = data
    |last('usage_user')

var previous = data
    |first('usage_user')

current
    |join(previous)
        .as('curr', 'prev')

A batch task like the previous example would be simpler. The case that you’re talking about isn’t typical for Kapacitor usage.
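Extending the join above with an alert, a sketch of the comparison might look like this (the "curr.last"/"prev.first" field names come from the last()/first() nodes; the condition and log path are only examples):

```
current
    |join(previous)
        .as('curr', 'prev')
    |alert()
        // fires when the newest point in the window differs from the oldest
        .crit(lambda: "curr.last" != "prev.first")
        .log('/tmp/compare.log')
```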


#7

Thanks Katy. Do you mean it is not possible with Kapacitor to compare with just the previous one, or will the above suffice to try out?


#8

You should try it with the above script, but you still might need to experiment with it.


#9

Sure… Thanks a lot !


#10

Hi,

Scenario:
We have devices with multiple interfaces. Each device has interfaces INTF1, INTF2, INTF3, etc. These interface names can be the same across multiple devices. Each interface has state information (UP/DOWN). So we have to notify whenever the state information changes, keyed by device and interface name.

Devicename    Interfacename    state    in-pkt    out-pkt    ......
-------------------------------------------------------------------
Device1       INTF1            UP       500       0
Device1       INTF2            UP       500       10
Device2       INTF1            UP       500       0
Device1       INTF3            DOWN     545       3
Device2       INTF3            UP       564       10
Device3       INTF4            DOWN     788       90

So, whenever the state changes, with device name and interface name as the key, we have to log that information.

dbrp "test"."autogen"

var keys = ['Devicename', 'Interfacename']
var data = stream
    |from()
        .measurement('statechange')
        .groupBy(keys)
    |window()
        .period(1m)
        .every(1m)

|alert()
    .info(lambda: "state" == 'UP')

|alert()
    .warn(lambda: "state" == 'DOWN')

var result = data
    |alert()
        .id('/jobs:jobs/telemetry-measurement:measurement/measurement=statechange/query=statechange')
        .stateChangesOnly()
        .crit(lambda: 1 == 1)
        .log('/tmp/result.tmp')

Issue: It's giving only the first alert, when the first event happened. But we have to be notified whenever an event happens; it should be a continuous validation.

We have also tried changeDetect using the state information, but it is not giving any result.

dbrp "ncx"."autogen"
// Which measurement to consume
var keys = ['device', 'name']
var int_state = stream
    |from()
        .measurement('change')
        .groupBy(keys)
    |window()
        .period(1m)
        .every(1m)

var result = int_state
    |changeDetect('openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/admin-status')
    |alert()
        .id('/jobs:jobs/telemetry-measurement:measurement/measurement=change/query=change')
        .log('/tmp/changeDetect.tmp')
        .post('http://192.168.1.105:1983/atom/telemetry/kapacitor/publish')

Q1: Is this the correct way to use the changeDetect node?

Q2: Is any other change or approach required to achieve the goal? Let me know if you need any further info on this.

For us, the shift node is not very useful, because we have a continuous stream of data and we have to compare 2 consecutive points and identify the change.

Rajesh


#11

I think the issue here is that the .id() property on the |alert() node is set to a static value. Kapacitor uses the alert ID internally (it's not just an external designation) to keep track of separate alerts. Each alert needs its own specific ID so that each triggered alert can then be resolved when the threshold is no longer met. Right now, every alert is being assigned the same ID (/jobs:jobs/telemetry-measurement:measurement/measurement=change/query=change), so only one alert is technically being triggered.

In your case, you’ll probably want to break out your alerts by tag values device and name. The alert ID string would look something like this:

.id('kapacitor/{{ index .Tags "device" }}/{{ index .Tags "name" }}')

Check out the docs for the other template options for the ID.
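Putting it together, the alert node might look something like this; the "state" field name, the message template, and the log path are assumptions for illustration:

```
|alert()
    // per-series alert ID built from the device and name tags
    .id('kapacitor/{{ index .Tags "device" }}/{{ index .Tags "name" }}')
    // the message template can also reference tags and fields
    .message('{{ .ID }} is now {{ index .Fields "state" }}')
    .crit(lambda: TRUE)
    .log('/tmp/interface_state.log')
```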


#12

Hey Rajesh,

What are you trying to achieve with


var result = data
    |alert()
        .id('/jobs:jobs/telemetry-measurement:measurement/measurement=statechange/query=statechange')
        .stateChangesOnly()
        .crit(lambda: 1 == 1)
        .log('/tmp/result.tmp')

I’m not sure I totally follow what the point of this is.

I think if we change

dbrp "test"."autogen"

var keys = ['Devicename', 'Interfacename']
var data = stream
    |from()
        .measurement('statechange')
        .groupBy(keys)
    |window()
        .period(1m)
        .every(1m)
    |alert()
        .info(lambda: "state" == 'UP')
        .warn(lambda: "state" == 'DOWN')

where the info and warn are on the same alert node, we should see both types of alerts.


#13

Hi Michael,

Let's say we have only 2 states, UP and DOWN. Only when the state changes from one to the other should an alert happen. That's why we used 1==1 with the stateChangesOnly property: now an alert happens only when some change occurs.

As per your snippet, it will raise two alarms with different levels based on status.

Let me know if you need further info.
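As a sketch of that kind of pattern (assuming state is a string field), note that stateChangesOnly keys on the alert level, not on the field value:

```
|alert()
    .crit(lambda: "state" == 'DOWN')
    // the alert level changes OK -> CRITICAL when state flips to DOWN,
    // and back to OK on recovery; stateChangesOnly emits only those transitions
    .stateChangesOnly()
```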


#14

Hi,

Is this only the case with stateChangesOnly, or is it the generic behavior? We have used .id with a static value for all our other use cases, and we get a continuous alert whenever the condition matches. Below is one such example.

dbrp "test"."autogen"

var interface_status = stream
    |from()
        .measurement('test')
        .groupBy(*)
    |window()
        .period(1m)
        .every(1m)

var up_data = interface_status
    |where(lambda: "status" == 'UP')
    |count('status')

var down_data = interface_status
    |where(lambda: "status" == 'DOWN')
    |count('status')

up_data
    |join(down_data)
        .as('up', 'down')
    |alert()
        .id('/jobs:jobs/telemetry-measurement:measurement/measurement=test/query=test')
        .crit(lambda: "up.count" > 5 AND "down.count" >= 0)
        .log('/tmp/out.tmp')

#15

What about

dbrp "test"."autogen"

var keys = ['Devicename', 'Interfacename']
var data = stream
    |from()
        .measurement('statechange')
        .groupBy(keys)
    |window()
        .period(1m)
        .every(1m)
    |changeDetect('state')
    |alert()
        .info(lambda: TRUE)

#16

Hi Michael,

I tried changeDetect, but it gives continuous messages even though there is no change. What we want is event-based alerts, not periodic updates. Only if there is some change in the state of an interface should an alert happen. But it is continuously giving updates.

Requirement is like this:

We have thousands of devices (D1, D2, D3, and so on) – tags

Each device has a number of interfaces (Intf1, Intf2, Intf3, etc.) – tags

Each interface has a status (UP/DOWN/other) – fields

We want to get an alert whenever there is some change in the state of an interface on a device. That is why we tried grouping by Devicename and Interfacename, and also the combination Devicename, Interfacename, State.

However, I have also tried removing the period and every; still no luck.

Both of them give the correct result, but continuous alerts should not be the case, because the change is not happening continuously.

Query 1: How to make this trigger-based only, not continuous?

Query 2: How to access a variable's field? (More details on this are in the same thread above.)


#17

Hey Rajesh

Offhand, it sounds like we might not be grouping all the points into their respective series, and therefore it looks like we are continuously alerting, since points from different series are arriving together. One thing we could try would be to use groupBy(*).

dbrp "test"."autogen"

var data = stream
    |from()
        .measurement('statechange')
        .groupBy(*)
    |window()
        .period(1m)
        .every(1m)
    |changeDetect('state')
    |alert()
        .info(lambda: TRUE)

If that doesn't solve the issue, what would be helpful is a toy example using data expressed in line protocol, with examples of data that you would expect to trigger an alert.

Something like the following example expressing the average of a series for a time window:

With data

cpu,host=A value=10 T0
cpu,host=B value=10 T0

cpu,host=A value=12 T1
cpu,host=B value=20 T1

Where points are grouped into their individual series, and windowed into the period [T0,T1], I expect

cpu,host=A value=10 T0
cpu,host=A value=12 T1

not to alert because the average cpu value for host=A is 11 which is less than 15.

And I expect

cpu,host=B value=10 T0
cpu,host=B value=20 T1

to alert because the average cpu value for host=B is 15 which is equal 15.
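For reference, that toy alert could be sketched in TICKscript roughly like this; the window size is an assumption standing in for the [T0,T1] period:

```
stream
    |from()
        .measurement('cpu')
        .groupBy('host')
    |window()
        // stands in for the [T0,T1] period in the example
        .period(10s)
        .every(10s)
    |mean('value')
    |alert()
        // host=B averages 15 over the window and alerts; host=A averages 11 and does not
        .crit(lambda: "mean" >= 15.0)
```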


#18

Hi Michael,

The solution is not working for our use case, because in our case each series is the details of one interface with respect to a particular device.

The actual data is like below:

{“id”:"/jobs:jobs/telemetry-measurement:measurement/measurement=intf/query=intf",“message”:"/jobs:jobs/telemetry-measurement:measurement/measurement=intf/query=intf is INFO",“details”:"{\u0026#34;Name\u0026#34;:\u0026#34;intftest\u0026#34;,\u0026#34;TaskName\u0026#34;:\u0026#34;intf\u0026#34;,\u0026#34;Group\u0026#34;:\u0026#34;device=172.16.18.176,name=Null0,path=openconfig-interfaces:interfaces/interface,system_id=IOSXR02\u0026#34;,\u0026#34;Tags\u0026#34;:{\u0026#34;device\u0026#34;:\u0026#34;172.16.18.176\u0026#34;,\u0026#34;name\u0026#34;:\u0026#34;Null0\u0026#34;,\u0026#34;path\u0026#34;:\u0026#34;openconfig-interfaces:interfaces/interface\u0026#34;,\u0026#34;system_id\u0026#34;:\u0026#34;IOSXR02\u0026#34;},\u0026#34;ServerInfo\u0026#34;:{\u0026#34;Hostname\u0026#34;:\u0026#34;infra-kapacitor-6d4d66658b-jqb4f\u0026#34;,\u0026#34;ClusterID\u0026#34;:\u0026#34;84decbe8-63f8-4648-a849-96dd76a1d301\u0026#34;,\u0026#34;ServerID\u0026#34;:\u0026#34;2732b57a-46c5-4e29-bd02-44a9548e34b6\u0026#34;},\u0026#34;ID\u0026#34;:\u0026#34;/jobs:jobs/telemetry-measurement:measurement/measurement=intf/query=intf\u0026#34;,\u0026#34;Fields\u0026#34;:{\u0026#34;_sequence\u0026#34;:\u0026#34;2250230\u0026#34;,\u0026#34;_timestamp\u0026#34;:\u0026#34;1531698815825\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/index\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/admin-status\u0026#34;:\u0026#34;UP\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-broadcast-pkts\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-discards\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-errors\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interfac
e/subinterfaces/subinterface/state/counters/in-multicast-pkts\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-octets\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-unicast-pkts\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-unknown-protos\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/last-clear\u0026#34;:\u0026#34;2018-04-16T02:37:37Z\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/out-broadcast-pkts\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/out-discards\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/out-errors\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/out-multicast-pkts\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/out-octets\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/out-unicast-pkts\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/description\u0026#34;:\u0026#34;\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/index\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/last-change\u0026#34;:\u0026#34;7852595\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/sta
te/mtu\u0026#34;:\u0026#34;1500\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/oper-status\u0026#34;:\u0026#34;UP\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/type\u0026#34;:\u0026#34;iana-if-type:other\u0026#34;},\u0026#34;Level\u0026#34;:\u0026#34;INFO\u0026#34;,\u0026#34;Time\u0026#34;:\u0026#34;2018-07-15T23:53:35.825Z\u0026#34;,\u0026#34;Duration\u0026#34;:122551000000,\u0026#34;Message\u0026#34;:\u0026#34;/jobs:jobs/telemetry-measurement:measurement/measurement=intf/query=intf is INFO\u0026#34;}\n",“time”:“2018-07-15T23:53:35.825Z”,“duration”:122551000000,“level”:“INFO”,“data”:{“series”:[{“name”:“intftest”,“tags”:{“device”:“172.16.18.176”,“name”:“Null0”,“path”:“openconfig-interfaces:interfaces/interface”,“system_id”:“IOSXR02”},“columns”:[“time”,"_sequence","_timestamp",“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/index”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/admin-status”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-broadcast-pkts”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-discards”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-errors”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-multicast-pkts”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-octets”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-unicast-pkts”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-unknown-protos”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/last-clear”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/out-broadcast-pkts”,“openconfig-interfaces:interfa
ces/interface/subinterfaces/subinterface/state/counters/out-discards”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/out-errors”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/out-multicast-pkts”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/out-octets”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/out-unicast-pkts”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/description”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/index”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/last-change”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/mtu”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/oper-status”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/type”],“values”:[[“2018-07-15T23:53:35.825Z”,“2250230”,“1531698815825”,“0”,“UP”,“0”,“0”,“0”,“0”,“0”,“0”,“0”,“2018-04-16T02:37:37Z”,“0”,“0”,“0”,“0”,“0”,“0”,"",“0”,“7852595”,“1500”,“UP”,“iana-if-type:other”]]}]},“previousLevel”:“INFO”,“recoverable”:true}
{“id”:"/jobs:jobs/telemetry-measurement:measurement/measurement=intf/query=intf",“message”:"/jobs:jobs/telemetry-measurement:measurement/measurement=intf/query=intf is INFO",“details”:"{\u0026#34;Name\u0026#34;:\u0026#34;intftest\u0026#34;,\u0026#34;TaskName\u0026#34;:\u0026#34;intf\u0026#34;,\u0026#34;Group\u0026#34;:\u0026#34;device=172.16.18.176,name=PW-Ether7,path=openconfig-interfaces:interfaces/interface,system_id=IOSXR02\u0026#34;,\u0026#34;Tags\u0026#34;:{\u0026#34;device\u0026#34;:\u0026#34;172.16.18.176\u0026#34;,\u0026#34;name\u0026#34;:\u0026#34;PW-Ether7\u0026#34;,\u0026#34;path\u0026#34;:\u0026#34;openconfig-interfaces:interfaces/interface\u0026#34;,\u0026#34;system_id\u0026#34;:\u0026#34;IOSXR02\u0026#34;},\u0026#34;ServerInfo\u0026#34;:{\u0026#34;Hostname\u0026#34;:\u0026#34;infra-kapacitor-6d4d66658b-jqb4f\u0026#34;,\u0026#34;ClusterID\u0026#34;:\u0026#34;84decbe8-63f8-4648-a849-96dd76a1d301\u0026#34;,\u0026#34;ServerID\u0026#34;:\u0026#34;2732b57a-46c5-4e29-bd02-44a9548e34b6\u0026#34;},\u0026#34;ID\u0026#34;:\u0026#34;/jobs:jobs/telemetry-measurement:measurement/measurement=intf/query=intf\u0026#34;,\u0026#34;Fields\u0026#34;:{\u0026#34;_sequence\u0026#34;:\u0026#34;2250230\u0026#34;,\u0026#34;_timestamp\u0026#34;:\u0026#34;1531698815825\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/index\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/admin-status\u0026#34;:\u0026#34;DOWN\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-discards\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-unicast-pkts\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/out-discards\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfac
es/interface/subinterfaces/subinterface/state/counters/out-unicast-pkts\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/description\u0026#34;:\u0026#34;\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/index\u0026#34;:\u0026#34;0\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/last-change\u0026#34;:\u0026#34;1531698816\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/mtu\u0026#34;:\u0026#34;1514\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/oper-status\u0026#34;:\u0026#34;DOWN\u0026#34;,\u0026#34;openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/type\u0026#34;:\u0026#34;iana-if-type:ifPwType\u0026#34;},\u0026#34;Level\u0026#34;:\u0026#34;INFO\u0026#34;,\u0026#34;Time\u0026#34;:\u0026#34;2018-07-15T23:53:35.825Z\u0026#34;,\u0026#34;Duration\u0026#34;:122551000000,\u0026#34;Message\u0026#34;:\u0026#34;/jobs:jobs/telemetry-measurement:measurement/measurement=intf/query=intf is 
INFO\u0026#34;}\n",“time”:“2018-07-15T23:53:35.825Z”,“duration”:122551000000,“level”:“INFO”,“data”:{“series”:[{“name”:“intftest”,“tags”:{“device”:“172.16.18.176”,“name”:“PW-Ether7”,“path”:“openconfig-interfaces:interfaces/interface”,“system_id”:“IOSXR02”},“columns”:[“time”,"_sequence","_timestamp",“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/index”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/admin-status”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-discards”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/in-unicast-pkts”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/out-discards”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/counters/out-unicast-pkts”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/description”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/index”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/last-change”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/mtu”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/oper-status”,“openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/type”],“values”:[[“2018-07-15T23:53:35.825Z”,“2250230”,“1531698815825”,“0”,“DOWN”,“0”,“0”,“0”,“0”,"",“0”,“1531698816”,“1514”,“DOWN”,“iana-if-type:ifPwType”]]}]},“previousLevel”:“INFO”,“recoverable”:true}

I feel the solution you have given will work when we have a single series providing the data about all the interfaces of all devices.

Let me know, if you need any further info on this.

The tick script which i have used:

dbrp "atom"."autogen"

var data = stream
    |from()
        .measurement('intftest')
        .groupBy(*)
    |window()
        .period(1m)
        .every(1m)

|changeDetect('openconfig-interfaces:interfaces/interface/subinterfaces/subinterface/state/admin-status')
|alert()
    .id('/jobs:jobs/telemetry-measurement:measurement/measurement=intf/query=intf')
    .info(lambda: TRUE)
    .log('/tmp/rajesh.tmp')


#19

Hey Rajesh

Can you offer a bit more insight other than

The solution is not working for our usecase. Because in our case, each series will be one interface details w.r.t that particular device.

Specifically, what would be useful is a toy example with data expressed in line protocol.


#20

Here is an example with sample data:

test,Device=Device1,Interfacename=Interface1 status="UP",MTU=1500 T1 ***
test,Device=Device1,Interfacename=Interface2 status="DOWN",MTU=1500 T2
test,Device=Device2,Interfacename=Interface1 status="DOWN",MTU=1500 T2
test,Device=Device2,Interfacename=Interface2 status="UP",MTU=1505 T3
test,Device=Device1,Interfacename=Interface1 status="DOWN",MTU=1500 T3 ***
test,Device=Device1,Interfacename=Interface3 status="UP",MTU=1500 T6

For Device1 Interface1, the status changed at T3, so the first alert for that series should happen there.

Here test is the measurement name.

Basically, we are comparing the previous (history) series (line protocol) with the current one, and raising an alert if any change happens on that particular interface with respect to that device.
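One way to sketch that comparison, given the sample data above (assuming status is a field and Device/Interfacename are tags, per the line protocol), is changeDetect on the status field, grouped per interface, with a per-series alert ID:

```
dbrp "test"."autogen"

stream
    |from()
        .measurement('test')
        // one series per device + interface
        .groupBy('Device', 'Interfacename')
    // emit a point only when the status field differs from the previous point
    |changeDetect('status')
    |alert()
        // per-series alert ID so one interface's alert doesn't mask another's
        .id('{{ index .Tags "Device" }}/{{ index .Tags "Interfacename" }}')
        .crit(lambda: TRUE)
        .log('/tmp/statechange.log')
```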