Memory allocation limit reached: limit 10000000 bytes, allocated: xxxxxxxx, wanted: xxxxx

Steps to reproduce:

  1. Happens on any dashboard that queries a lot of data with a simple query
    from(bucket: "PLC_Router_Data")
      |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
      |> filter(fn: (r) => r["_measurement"] == "data")
      |> filter(fn: (r) => r["host"] == v.my_PLC_source)
      |> filter(fn: (r) => r["_field"] =~ /ress/ and r["_field"] !~ /Application/)
      |> group(columns: ["_field"], mode: "by")
      |> toFloat()
      |> sort(columns: ["_time"], desc: false)

Expected behavior:
I expect that performing the same query on the same dataset for the same duration either results in:

  1. The query always fails, if we are genuinely doing something that consumes massive amounts of memory (in fact, it succeeds about 50% of the time), or
  2. The query always succeeds, if it does not.

Actual behavior:
Sometimes you get data

Sometimes you get the failure

Environment info:

  • InfluxDB 2.0 Cloud

Hello @asmith,
I’m not sure. I’m passing your question along to someone who can help. Thanks for your patience.

same question here. Any updates?

Thanks for joining in @wangye_sli .
No updates. But I know that if @Anaisdg is on the case then we’re good!! 🙂

I am still getting this; it happened 2 days ago, in fact.
It is happening as I explore data with dashboards.

The impact would be much more severe if it was happening to “Tasks”.

I’m uncomfortable with the idea that InfluxDB Cloud 2.0 “sometimes” completes a query and “sometimes” doesn’t.

It doesn’t sound like the reliability we need.

Clearly the query is sometimes hitting a 10MB RAM limit.
But under what circumstances is memory limited to 10MB?
Do all queries get this 10MB limit?
Or is it applied under certain circumstances?

I think the limit applies to dashboard graphs only. I was able to work around the limit warning by using a custom duration when aggregating:

|> aggregateWindow(every: duration(v:"60m"), fn: last, createEmpty: false)
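
For context, here is a sketch of how that workaround slots into the original query from this thread (bucket, filters, and the 60m window are taken from the posts above; whether this avoids the limit in your setup is not guaranteed):

```flux
from(bucket: "PLC_Router_Data")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "data")
  |> filter(fn: (r) => r["host"] == v.my_PLC_source)
  |> filter(fn: (r) => r["_field"] =~ /ress/ and r["_field"] !~ /Application/)
  |> group(columns: ["_field"], mode: "by")
  |> toFloat()
  // Downsample to one point per window so far fewer rows are buffered in memory
  |> aggregateWindow(every: duration(v: "60m"), fn: last, createEmpty: false)
  |> sort(columns: ["_time"], desc: false)
```

The point of the downsampling step is that the query engine only has to return one row per field per window, instead of every raw point in the time range.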

Right, but

|> aggregateWindow(every: duration(v: "60m"), fn: last, createEmpty: false)

is going to massively reduce the data returned and the RAM needed.
So what you have done is change the query in a way which hides the bug/behaviour.

Clicking refresh on the same dashboard “sometimes” gets the error and sometimes doesn’t.

That’s not reliable and is certainly undesirable. I’m seeking to get it fixed.

Can you successfully reproduce the undesirable behaviour?

The RAM-exceeded alert happens when there are multiple plots on the same dashboard.