"internal error: panic: arrow/array: index out of range"

Hi,

We’re running Grafana with the hosted/paid version of InfluxDB. Half the time, I get this weird error when I post a query:
“internal error: panic: arrow/array: index out of range”

The strange thing is that it happens inconsistently. I hit refresh on the same query and it works just fine. The same query will only work one out of three times. Has anyone seen this before?

It’s a little frustrating, since we never had this issue when we hosted our own InfluxDB server; it only started when we switched to the hosted version.

Thanks

I’m not an InfluxDB 2 user myself (yet), but I’d try the following.
In recent versions of Grafana you can look at the query itself via Inspect -> Query on the panel.

From there you can copy the query and check whether anything looks strange (it probably will be fine). You can then run it yourself (from Chronograf, the CLI, or whatever you have available) to see whether that changes anything or whether you still get the panic consistently.
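For example, a hedged sketch of running the copied query with the influx CLI (the org name and the query here are just placeholders, and the exact flags depend on your setup):

    influx query --org my-org 'from(bucket: "my-bucket") |> range(start: -1h) |> limit(n: 10)'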

This should let you work out whether the problem is in InfluxDB 2 itself or in the Grafana data source. You might also check the InfluxDB logs for errors, as you may find something more detailed there.

If you discover anything more (enough to replicate it consistently), I suggest opening an issue on GitHub, as “panics” usually indicate a bug that has to be fixed.

Hi All,

I'm experiencing the same error, panic: arrow/array: index out of range. In my case I can reproduce the problem with the Flux query below using limit(n: 2893). Using limit(n: 2892) works fine.
Any help?
    import "join"
    import "array"
    
    left = from(bucket:"my-bucket") 
                            |> range(start: -30d, stop: now()) 
                            |> filter(fn: (r) => r._measurement == "connection")
                            |> filter(fn: (r) => r.objectId == "6336066ade7d35284cc44c71" )
                            |> limit(n:2893)
                            |> group()
                            
    right = array.from(rows:[{id:"6378003535ec1c1bf666e12c",name:"Site1"},{id:"63694a802583aa0042be47ef",name:"Site2"},{id:"6244f6cf0df74d077c21055f",name:"Site3"}])
             
    join.inner(
              left: left,
              right: right,
              on: (l, r) => (l.siteId == r.id),
              as: (l, r) => ({l with site: r.name})
          )
          |> group()

Same here. Using join.inner without limiting the rows results in the error described above.

Any update here? I still have the same problem.

Same here. If I join more than 1000 rows I get the error (verified by using a limit(n: 1000) before the join; with the limit in place it works).
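To make that concrete, here is a minimal sketch of that check (the bucket, measurement, and column names are placeholders rather than my actual query):

    import "array"
    import "join"

    // Hypothetical small lookup table; any right-hand stream shows the same behaviour
    lookup = array.from(rows: [{id: "site-1", name: "Site1"}])

    data = from(bucket: "my-bucket")
        |> range(start: -30d)
        |> filter(fn: (r) => r._measurement == "connection")
        |> limit(n: 1000)   // with n <= 1000 the join succeeds; larger values trigger the panic
        |> group()

    join.inner(
        left: data,
        right: lookup,
        on: (l, r) => l.siteId == r.id,
        as: (l, r) => ({l with site: r.name}),
    )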

That is very annoying; query results with more than 1000 rows are very common. Did you find any usable workaround or help in this topic?

In the past I used the deprecated original join() function, and that seemed to work (note that it is an inner join, so you may need some workarounds if you were trying to do a left join). I’ll try the same approach now, but going forward I’ll switch to a different time-series database.

For example, my join.left (stored in the result variable) was changed as follows (from the commented section to the uncommented one). Important: you also have to comment out the initial import "join" (I am using 2.7.1).

//result = join.left(
//    left: data,
//    right: association,
//    on: (l, r) => l.uid == r.uid,
//    as: (l, r) => ({_time: l._time, _field: l._field, _value: l._value, uid: l.uid, phasecode: r.phasecode, reportingdeviceid: r.reportingdeviceid, channelnumber: r.threephasesystemid, name: r.name, vb: r.vb}),
//)
...

result = join(tables: {t1: data, t2: association}, on: ["uid"])
    |> rename(columns: {threephasesystemid: "channelnumber"})
...
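For reference, here is a hedged sketch of how the earlier join.inner example could be rewritten with the deprecated join(). The old function matches on identically named columns, so the lookup's id column is renamed to siteId up front (the bucket, measurement, and IDs come from the example above; everything else is an assumption):

    import "array"

    left = from(bucket: "my-bucket")
        |> range(start: -30d, stop: now())
        |> filter(fn: (r) => r._measurement == "connection")
        |> filter(fn: (r) => r.objectId == "6336066ade7d35284cc44c71")
        |> group()

    // The old join() matches on column names, so the lookup column is called siteId here
    right = array.from(rows: [
        {siteId: "6378003535ec1c1bf666e12c", site: "Site1"},
        {siteId: "63694a802583aa0042be47ef", site: "Site2"},
        {siteId: "6244f6cf0df74d077c21055f", site: "Site3"},
    ])

    join(tables: {conn: left, sites: right}, on: ["siteId"])
        |> group()

Note that columns present in both streams and not listed in on get suffixed with the table keys (conn/sites), so a rename() afterwards may still be needed.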