Tolerance not working with join for Holt-Winters

I have an issue with the Kapacitor join node and its tolerance property. With the following script, data goes through the join node only if the two timestamps are exactly equal.

[holtwinters:log3] 2017/10/12 08:06:20 I!  {"Name":"measurement","Database":"","RetentionPolicy":"","Group":"","Dimensions":{"ByName":false,"TagNames":[]},"Tags":null,"Fields":{"rvalue":1560.73297119},"Time":"2017-10-12T08:06:20-07:00"}

[holtwinters:log7] 2017/10/12 08:06:20 I!  {"Name":"measurement","Database":"","RetentionPolicy":"","Group":"","Dimensions":{"ByName":false,"TagNames":[]},"Tags":null,"Fields":{"hw":1559.79650879},"Time":"2017-10-12T08:06:20-07:00"}

The following data points don't go through the join node, even with a tolerance of 10s:

[holtwinters:log3] 2017/10/12 08:03:00 I!  {"Name":"measurement","Database":"","RetentionPolicy":"","Group":"","Dimensions":{"ByName":false,"TagNames":[]},"Tags":null,"Fields":{"rvalue":1910.36984253},"Time":"2017-10-12T08:03:00-07:00"}

[holtwinters:log8] 2017/10/12 08:03:00 I!  {"Name":"measurement","Database":"","RetentionPolicy":"","Group":"","Dimensions":{"ByName":false,"TagNames":[]},"Tags":null,"Fields":{"hw":2172.644114545176},"Time":"2017-10-12T15:03:06.965293824Z"}

var real = batch
    |query('select kw_total from "db"."autogen"."measurement" where "name" = \'cr1\'')
        .period(10s)
        .every(10s)
        .align()
    |last('kw_total')
        .as('rvalue')
    |log()

var pred = batch
    |query('select kw_total from "db"."autogen"."measurement" where "name" = \'cr1\'')
        .offset(10s)
        .period(1m)
        .every(10s)
        .align()
    |shift(10s)
    |holtWinters('kw_total', 1, 0, 10s)
        .as('value')
    |last('value')
        .as('hw')
    |log()

real
    |join(pred)
        .as('real', 'pred')
        .tolerance(10s)
    |log()

Any help is greatly appreciated…

Thanks,
Kranthi

Can we get the output of kapacitor show <task>?

DOT:
digraph holtwinters {
graph [throughput="0.00 batches/s"];

query4 [avg_exec_time_ns="3.855294ms" batches_queried="7" errors="0" points_queried="33" working_cardinality="0" ];
query4 -> shift5 [processed="7"];

shift5 [avg_exec_time_ns="662ns" errors="0" working_cardinality="0" ];
shift5 -> holtWinters6 [processed="7"];

holtWinters6 [avg_exec_time_ns="0s" errors="0" working_cardinality="0" ];
holtWinters6 -> last7 [processed="7"];

last7 [avg_exec_time_ns="6.05µs" errors="0" working_cardinality="0" ];
last7 -> log8 [processed="6"];

log8 [avg_exec_time_ns="0s" errors="0" working_cardinality="0" ];
log8 -> join10 [processed="6"];

query1 [avg_exec_time_ns="1.737632ms" batches_queried="8" errors="0" points_queried="10" working_cardinality="0" ];
query1 -> last2 [processed="8"];

last2 [avg_exec_time_ns="0s" errors="0" working_cardinality="0" ];
last2 -> log3 [processed="8"];

log3 [avg_exec_time_ns="0s" errors="0" working_cardinality="0" ];
log3 -> join10 [processed="8"];

join10 [avg_exec_time_ns="2.067µs" errors="0" working_cardinality="1" ];
join10 -> log11 [processed="0"];

log11 [avg_exec_time_ns="0s" errors="0" working_cardinality="0" ];
}

Hmm. Very weird. Nothing is obviously wrong from what I can tell. What if we try to use a stream instead of a batch? There's no reason why that should make a difference, but it may help. Also, maybe add a groupBy(*); see the sketch below for where it would go.
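For clarity, here is roughly where a groupBy(*) would sit in the batch version above. This is only a sketch using the names from the original script; the same property would be added to the pred query node as well.

// Sketch only: groupBy(*) added on the batch query node so both branches
// carry the same group key into the join. Add it to the pred query too.
var real = batch
    |query('select kw_total from "db"."autogen"."measurement" where "name" = \'cr1\'')
        .period(10s)
        .every(10s)
        .align()
        .groupBy(*)
    |last('kw_total')
        .as('rvalue')
    |log()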

Sure. I will try and let you know.

Regards,
Kranthi

I did try with a stream, and it actually worked… I don't know why it is not working with the batch. I will keep trying with batch, as I need to use offset, which I can't use with the stream…

   stream0 [avg_exec_time_ns="0s" errors="0" working_cardinality="0" ];
stream0 -> from5 [processed="76"];
stream0 -> from1 [processed="76"];

from5 [avg_exec_time_ns="10.776µs" errors="0" working_cardinality="0" ];
from5 -> window6 [processed="38"];

window6 [avg_exec_time_ns="3.157µs" errors="0" working_cardinality="1" ];
window6 -> holtWinters7 [processed="29"];

holtWinters7 [avg_exec_time_ns="1.842287ms" errors="0" working_cardinality="0" ];
holtWinters7 -> last8 [processed="29"];

last8 [avg_exec_time_ns="1.99µs" errors="0" working_cardinality="0" ];
last8 -> log9 [processed="7"];

log9 [avg_exec_time_ns="0s" errors="0" working_cardinality="0" ];
log9 -> join11 [processed="7"];

from1 [avg_exec_time_ns="13.384µs" errors="0" working_cardinality="0" ];
from1 -> window2 [processed="38"];

window2 [avg_exec_time_ns="4.435µs" errors="0" working_cardinality="1" ];
window2 -> last3 [processed="29"];

last3 [avg_exec_time_ns="13.91µs" errors="0" working_cardinality="0" ];
last3 -> log4 [processed="29"];

log4 [avg_exec_time_ns="0s" errors="0" working_cardinality="0" ];
log4 -> join11 [processed="29"];

join11 [avg_exec_time_ns="3.154µs" errors="0" working_cardinality="1" ];
join11 -> log12 [processed="6"];

log12 [avg_exec_time_ns="0s" errors="0" working_cardinality="0" ];
}

TICK script:

var real = stream
    |from()
        .measurement('power_meter')
        .where(lambda: "name" == 'cr1_total')
    |window()
        .period(10s)
        .every(10s)
        .align()
    |last('kw_total')
        .as('rvalue')
    |log()

var pred = stream
    |from()
        .measurement('power_meter')
        .where(lambda: "name" == 'cr1_total')
    |window()
        .period(10s)
        .every(10s)
        .align()
    |holtWinters('kw_total', 1, 0, 10s)
        .as('value')
    |last('value')
        .as('hw')
    |log()

real
    |join(pred)
        .as('real', 'pred')
        .tolerance(10s)
    |log()

Looks like I am missing something. Let me know if I need to make any changes to the TICK script.

Taking another look at the logs:

[holtwinters:log3] 2017/10/12 08:03:00 I!  {"Name":"measurement","Database":"","RetentionPolicy":"","Group":"","Dimensions":{"ByName":false,"TagNames":[]},"Tags":null,"Fields":{"rvalue":1910.36984253},"Time":"2017-10-12T08:03:00-07:00"}

[holtwinters:log8] 2017/10/12 08:03:00 I!  {"Name":"measurement","Database":"","RetentionPolicy":"","Group":"","Dimensions":{"ByName":false,"TagNames":[]},"Tags":null,"Fields":{"hw":2172.644114545176},"Time":"2017-10-12T15:03:06.965293824Z"}

It looks like the timestamps are printed in different time zones (the first in local -07:00, the second in UTC). I don't think that is causing the issue, but it could be.

Any chance you could supply some example data so that I can try to reproduce the issue myself?

Yeah, I had the same doubt. Anyway, my issue is resolved: I created a UDF for Holt-Winters and it is working well…
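For anyone who hits the same issue: a UDF is called from TICKscript with the @ chaining operator and has to be registered in kapacitor.conf. The sketch below only shows the shape of the pipeline; the hwForecast name (and whatever options it would take) is a placeholder, not the actual implementation.

var pred = stream
    |from()
        .measurement('power_meter')
        .where(lambda: "name" == 'cr1_total')
    |window()
        .period(10s)
        .every(10s)
        .align()
    // '@' invokes a UDF; 'hwForecast' is a placeholder name registered under
    // [udf.functions.hwForecast] in kapacitor.conf, and its options depend on
    // how the UDF's handlers are written.
    @hwForecast()
    |log()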

Thanks for your support.

Regards,
Kranthi
