Passing record to custom function

Is it possible to pass a record to a custom function? I’d like to call a custom function from filter() that would work similarly to contains(), but it would need the whole row, because it would use an if / else if / else condition block and check different columns (tags) accordingly.

Given a custom function like

match = (r) => contains(value:r.g3, set: ["abc"])
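For reference, the multi-branch predicate described above could be sketched roughly like this (the column names and sets here are placeholders, not from an actual schema):

```flux
// Hypothetical row predicate: checks different tag columns depending on
// which ones exist in the record. Column names and sets are examples only.
match = (r) =>
    if exists r.g3 then contains(value: r.g3, set: ["abc"])
    else if exists r.g1 then contains(value: r.g1, set: ["a"])
    else false
```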

I can verify that it works on its own:

> row = { g3: "abc" }
> match(r:row)

With a dataset like


I get an empty table as the result when I include the function in filter():

data |> range(start: -1y) |> filter(fn: (r) => r._measurement == "m" and match(r))
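One thing that might be worth checking: as far as I know, Flux function calls take named parameters, so the call may need to be written with the argument name spelled out, e.g.:

```flux
// Same pipeline, but calling the custom function with a named argument.
data
    |> range(start: -1y)
    |> filter(fn: (r) => r._measurement == "m" and match(r: r))
```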

When I inline the function body, it works as expected:

> data |> range(start: -1y) |> filter(fn: (r) => r._measurement == "m" and contains(value: r.g3, set: ["abc"]))
Result: _result
Table: keys: [_start, _stop, _field, _measurement, g1, g2, g3]
                   _start:time                      _stop:time           _field:string     _measurement:string               g1:string               g2:string               g3:string                  _value:float                      _time:time  
------------------------------  ------------------------------  ----------------------  ----------------------  ----------------------  ----------------------  ----------------------  ----------------------------  ------------------------------  
2018-12-17T09:20:10.127752113Z  2019-12-17T13:45:00.491825337Z                       x                       m                       a                      ab                     abc                             1  2019-12-17T13:32:12.906454593Z  

@alespour What version of Flux are you using?

0.57.0, standalone interpreter (flux repl with a CSV source). When I try alpha 21 with a bucket source, I get “Error: panic: runtime error: invalid memory address or nil pointer dereference”.

2019-12-18T09:27:37.332393Z	info	Dispatcher panic	{"log_id": "0Jm0~XHW000", "service": "storage-reads", "component": "dispatcher", "error": "panic: runtime error: invalid memory address or nil pointer dereference"}
goroutine 21166 [running]:
runtime/debug.Stack(0xc0009b05a0, 0x2b0d000, 0x1c7bb3b)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
(*poolDispatcher).Start.func1.1(0xc0009b0600)
	/go/pkg/mod/ +0x196
panic(0x19a27a0, 0x42b3360)
	/usr/local/go/src/runtime/panic.go:522 +0x1b5
(*compilerScope).LocalRange(0xc00ff0ab00, 0xc00ff0acc0)
	<autogenerated>:1 +0x32
(0xc01378c640, 0x7fd95eeb5780, 0xc00ff0ab00, 0xc00ff0ab00, 0xc006239790)
	/go/pkg/mod/ +0x50
(0xc00ff0ab00, 0xc01378c640, 0x2b66700, 0xc0011c3ff0, 0x1968560, 0xc01342d1d0, 0xc0000c66c0, 0xc0000c66c0)
	/go/pkg/mod/ +0x297
(*CompilationCache).Compile(0xc012dae5c0, 0x2b66700, 0xc0011c3ff0, 0x1, 0xc0000c6748, 0x28, 0xc001a2ca88)
	/go/pkg/mod/ +0xc8
(*dynamicFn).prepare(0xc0009b06c0, 0xc0011f8700, 0x9, 0x9, 0xc01342d1d0, 0xd0, 0x1b43dc0)
	/go/pkg/mod/ +0x2d6
(*RowPredicateFn).Prepare(0xc0009b06c0, 0xc0011f8700, 0x9, 0x9, 0x47, 0xc012d36b48)
	/go/pkg/mod/ +0x58
(*filterTransformation).Process(0xc01342cb70, 0x7e58fd24279ba3c4, 0x38c6b017ca6f0aad, 0x7fd95eeb56e0, 0xc0001aa900, 0x7fd964288101, 0x200000003)
	/go/pkg/mod/ +0x61
(0xc01342c630, 0x2b55bc0, 0xc01342cb70, 0x2b2ae00, 0xc012daeee0, 0xc012d36f0a, 0xc001ade000, 0xc012d36f68)
	/go/pkg/mod/ +0x2db
(*consecutiveTransport).processMessages(0xc0009b0720, 0x2b4d100, 0xc01342c630, 0xa)
	/go/pkg/mod/ +0xc6
(*poolDispatcher).run(0xc0009b0600, 0x2b4d100, 0xc01342c630)
	/go/pkg/mod/ +0x65
(*poolDispatcher).Start.func1(0xc0009b0600, 0x2b4d100, 0xc01342c630)
	/go/pkg/mod/ +0x95
created by (*poolDispatcher).Start
	/go/pkg/mod/ +0x7e

I think I could use a slightly different (and perhaps better) approach and define my custom function as a wrapper around filter(), like

match = (tables=<-, some params...) =>
    tables
        |> filter(fn: (r) =>
            if exists r.tagName then ...

and then

data |> range(start: -1y) |> filter(fn: (r) => r._measurement == "m") |> match(some params)
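A fuller sketch of that wrapper approach might look like the following (the `set` parameter and the g1/g3 column names are hypothetical stand-ins for the real parameters):

```flux
// Hypothetical wrapper around filter(): keeps rows whose value in the
// g3 tag column is in `set`, falling back to g1 when g3 is absent.
match = (tables=<-, set) =>
    tables
        |> filter(fn: (r) =>
            if exists r.g3 then contains(value: r.g3, set: set)
            else if exists r.g1 then contains(value: r.g1, set: set)
            else false
        )

// Usage:
// data
//     |> range(start: -1y)
//     |> filter(fn: (r) => r._measurement == "m")
//     |> match(set: ["abc"])
```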

Thank you for the attention.

I’d definitely create an issue on the Flux project regarding the panic. The Flux team will want to know about it. That said, I have a feeling it’s an issue with the REPL; my guess is that it’s having trouble parsing the nested function. I know there are some outstanding issues with the REPL and often run into them myself. Another option would be to try the InfluxDB 2.0 UI’s Data Explorer.

Either of your proposed approaches should work. Nothing jumps out at me as to why they wouldn’t work.

Thank you, @scott. I reported the same/similar issue to the InfluxDB project some time ago (I thought it only occurred with a bucket source), so I have transferred the issue from there to the Flux project (#2288).