Kapacitor UDF: Recordings running and size is 0B

Hi all,

I am trying to get the Custom Anomaly Detection UDF example working, but I’m running into the same problem as this user in the Google Groups thread.

Basically, I define my task (it is also visible in Chronograf), but the recording never reaches the status finished when I run kapacitor list recordings.

ID                                   Type    Status    Size      Date
fe869dcc-15e4-4de7-98e8-c415b90ae859 stream  running   0 B       03 Jul 18 13:32 UTC

Running kapacitor stats ingress shows that the database printer with measurement temperatures has received 86402 points.

Any ideas?


edit: The UDF log I get:

ts=2018-07-03T12:14:04.882Z lvl=error msg="received error message" service=kapacitor task_master=main task=print_temps1 node=tTest7 err="error processing request of type snapshot: cannot write None response"

Hi,

There is no snapshot method in that UDF example (I haven’t tried that example myself). As far as I understand, a UDF has to send a response from its snapshot method.
You could try the moving average (or any other) example from the Kapacitor GitHub repo.


I am currently having the same issue. I tried simply reusing the code from the init method as the snapshot method. The error message did disappear, but the recording still didn’t reach the proper status or record any bytes. So I’m fairly sure the missing snapshot method is only a symptom, not the cause. Any other suggestions, maybe? Thank you!

The response from the init method is of type response.init, so for snapshot it should be response.snapshot.
You can use the snapshot method from the outlier.py example in the Kapacitor repo; it contains a template for both snapshot and restore.
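For reference, here is a minimal sketch of that snapshot/restore pattern. The commented handler methods follow the shape used in outlier.py (they need kapacitor.udf.udf_pb2 from the Kapacitor repo on your PYTHONPATH); the JSON state helpers are my own illustration, since Kapacitor just treats the snapshot payload as opaque bytes:

```python
import json

# State helpers (my own illustration): Kapacitor stores the snapshot
# as opaque bytes, so any serialization of your handler state works.
def encode_state(state):
    """Serialize handler state into bytes for the snapshot response."""
    return json.dumps(state).encode('utf-8')

def decode_state(blob):
    """Rebuild handler state from a restore request's snapshot bytes."""
    return json.loads(blob.decode('utf-8'))

# Inside the UDF Handler subclass, following the outlier.py pattern:
#
#     def snapshot(self):
#         response = udf_pb2.Response()
#         response.snapshot.snapshot = encode_state({'state': self._state})
#         return response
#
#     def restore(self, restore_req):
#         response = udf_pb2.Response()
#         try:
#             self._state = decode_state(restore_req.snapshot)['state']
#             response.restore.success = True
#         except Exception as e:
#             response.restore.success = False
#             response.restore.error = str(e)
#         return response
```

The key point for the error above is that snapshot must return a Response object rather than None, even if the handler has no state worth saving.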

That unfortunately doesn’t seem to solve the problem. I don’t think the snapshot method is the main issue; something else is wrong.

I have found the problem. It’s not working because the tutorial is heavily outdated. The recording tool only records entries that are inserted into the database after the recording has started, and it uses the timestamp to decide which entries are new. In the tutorial’s data generator, the timestamps start at 1/1/2016, which is not even remotely the current date. I deleted the old datetime code and replaced it with:

points.append("%s hotend=%f,bed=%f,air=%f %d" % ( measurement, hotend, bed, air, time.time() + i ))
Hope I could help.
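To flesh that fix out, here is a small self-contained sketch of the generator change, assuming (as in the tutorial’s printer data script) that points are written in InfluxDB line protocol with seconds precision. The function name and sample tuples are my own for illustration:

```python
import time

def make_points(measurement, samples, start=None, interval=1):
    """Build line-protocol entries stamped with current epoch seconds,
    so the Kapacitor recorder sees them as newly arriving data.

    samples: sequence of (hotend, bed, air) readings.
    start:   epoch seconds for the first point (defaults to now).
    """
    base = start if start is not None else time.time()
    points = []
    for i, (hotend, bed, air) in enumerate(samples):
        ts = int(base + i * interval)  # one point per `interval` seconds
        points.append("%s hotend=%f,bed=%f,air=%f %d"
                      % (measurement, hotend, bed, air, ts))
    return points
```

Because the timestamps now track the wall clock instead of starting at a fixed 2016 date, kapacitor record stream picks the points up and the recording size grows past 0 B.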