Limit on join function?

I need to join two tables using the new join functionality.

The join works until the data table has more than a certain number of entries, so my question is: is there any limit on the join function? If I apply limit 1000 to the data table it works; with limit 1001 it fails with "panic: arrow/array: index out of range".
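
Concretely, this is the boundary I see, using the data stream from the query further down:

```flux
// Observed behavior: the j.right call below succeeds when the left table
// is capped at 1000 rows and panics at 1001.
data |> limit(n: 1000, offset: 0)  // join succeeds
data |> limit(n: 1001, offset: 0)  // join panics: arrow/array: index out of range
```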

With best regards

This is my data:

Table METADATA:

| table | _time (no group, dateTime:RFC3339) | Name (no group, string) | topic (group, string) |
| --- | --- | --- | --- |
| 0 | 2022-01-01T00:00:00.000Z | Waermepumpe | GERNER/Teststation/Zaehler/1 |
| 1 | 2022-01-01T00:00:00.000Z | Elektroauto | GERNER/Teststation/Zaehler/2 |
| 2 | 2022-01-01T00:00:00.000Z | Netz Export | GERNER/Teststation/Zaehler/Netz_Export |

Table DATA:

| table | _value (no group, double) | _time (no group, dateTime:RFC3339) | topic (group, string) |
| --- | --- | --- | --- |
| 0 | 10 | 2022-11-28T10:53:10.000Z | GERNER/Teststation/Zaehler/1 |
| 0 | 10 | 2022-11-28T10:57:00.000Z | GERNER/Teststation/Zaehler/1 |
| 0 | 0 | 2022-11-28T11:00:50.000Z | GERNER/Teststation/Zaehler/1 |
| 0 | 10 | 2022-11-28T11:04:40.000Z | GERNER/Teststation/Zaehler/1 |

This is what should be achieved:

Table ENDTABELLE:

| table | _value (no group, double) | _time (no group, dateTime:RFC3339) | Name (no group, string) | topic (group, string) |
| --- | --- | --- | --- | --- |
| 0 | 10 | 2022-11-28T10:53:10.000Z | Waermepumpe | GERNER/Teststation/Zaehler/1 |
| 0 | 10 | 2022-11-28T10:57:00.000Z | Waermepumpe | GERNER/Teststation/Zaehler/1 |
| 0 | 0 | 2022-11-28T11:00:50.000Z | Waermepumpe | GERNER/Teststation/Zaehler/1 |
| 0 | 10 | 2022-11-28T11:04:40.000Z | Waermepumpe | GERNER/Teststation/Zaehler/1 |

This is my query:

```flux
import "strings"
import j "join"

metadata =
    from(bucket: "data")
        |> range(start: 2021-12-31T23:00:00.000Z, stop: 2022-01-01T22:00:00.000Z)
        // note: "/" inside a regex literal must be escaped as "\/"
        |> filter(fn: (r) => r["topic"] =~ /GERNER\/Teststation/)
        |> filter(fn: (r) => r["topic"] =~ /Zaehler/)
        |> filter(fn: (r) => r["_measurement"] == "Bezeichnung")
        |> rename(columns: {_value: "Name"})
        |> drop(columns: ["db", "_start", "_field", "_stop", "_measurement"])
        //|> yield(name: "metadata")

data =
    from(bucket: "data")
        |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
        |> filter(fn: (r) => r["topic"] =~ /GERNER\/Teststation/)
        |> filter(fn: (r) => r["topic"] =~ /Zaehler/)
        |> filter(fn: (r) => r["_measurement"] == "WH")
        |> aggregateWindow(every: v.windowPeriod, fn: last, createEmpty: false)
        |> map(fn: (r) => ({r with _value: r._value * 10.0}))
        |> difference(nonNegative: true)
        |> drop(columns: ["db", "_start", "_field", "_stop", "_measurement"])
        |> limit(n: 1000, offset: 0)
        //|> yield(name: "data")

jointable =
    j.right(
        left: data,
        right: metadata,
        on: (l, r) => l.topic == r.topic,
        as: (l, r) => ({l with Name: r["Name"]}),
    )
        |> yield(name: "Endtabelle")
```
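
For now I am considering a chunked workaround, since the join succeeds below the 1000-row boundary. This is a sketch only, not verified; it reuses data, metadata, and j from the query above, uses j.inner so that metadata rows without a match are not emitted once per chunk, and assumes at most 2000 data rows here:

```flux
// Hypothetical workaround: join at most 1000 left-side rows per call,
// then union the partial results.
joinChunk = (offset) =>
    j.inner(
        left: data |> limit(n: 1000, offset: offset),
        right: metadata,
        on: (l, r) => l.topic == r.topic,
        as: (l, r) => ({l with Name: r["Name"]}),
    )

union(tables: [joinChunk(offset: 0), joinChunk(offset: 1000)])
    |> yield(name: "Endtabelle")
```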

Hello @automationsloesung,
I’m not sure. There shouldn’t be.
I’ve created an issue for you here:

@scott have you seen this before?
Thank you!

No, I haven’t. This is new to me.

Hi, I can confirm that I see the same. It seems to be due to this, even though it is marked as fixed: if the join has too many elements, it returns the panic. I got this when updating to 2.6.1 and writing my first join.right query. In 2.1.1 (when there was only the experimental join and the base join (inner)) I had no such issue. In my case I can make the issue appear by using a small v.windowPeriod for the every parameter in my aggregation function.
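
Since the base join did not show this problem, one interim option is to fall back to the classic join() from the universe package. This is only a sketch against the data and metadata streams from the original query; it assumes an inner join on topic is acceptable and that metadata has a single row per topic, so the equality join does not fan out:

```flux
// Fallback sketch using the classic universe join() instead of the join package.
// "_time" exists on both sides, so classic join suffixes it with the table
// keys ("_d" / "_m" here); rename one copy back and drop the other.
join(tables: {d: data, m: metadata}, on: ["topic"], method: "inner")
    |> rename(columns: {_time_d: "_time"})
    |> drop(columns: ["_time_m"])
    |> yield(name: "Endtabelle")
```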

@scott this is still there with 2.7.1 and is described in greater detail in "internal error: panic: arrow/array: index out of range when joining a long table with another one" · Issue #5402 · influxdata/flux · GitHub