
Hi there,

I’m looking at this page: Convert results to JSON and send them to a URL | InfluxDB OSS 2.0 Documentation, where a code excerpt reads as follows:

import "http"
import "json"

from(bucket: "noaa")
  |> filter(fn: (r) => r._measurement == "average_temperature")
  |> mean()
  |> map(fn: (r) => ({ r with
    jsonStr: string(v: json.encode(v: {"location":r.location,"mean":r._value}))}))
  |> map(fn: (r) => ({r with
      status_code: http.post(
        url: "",
        headers: {x: "a", y: "b"},
        data: bytes(v: r.jsonStr),
      ),
    }))

This sends an HTTP POST request for each record in the table.

My question: is there a way to send the whole table in one request?
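For context, the closest I’ve gotten is a sketch along these lines: merge all the tables with group(), concatenate the per-row JSON strings with reduce(), and call http.post() once (the empty url is still a placeholder, and I haven’t verified this end to end):

```
import "http"
import "json"

from(bucket: "noaa")
  |> filter(fn: (r) => r._measurement == "average_temperature")
  |> mean()
  |> map(fn: (r) => ({r with jsonStr: string(v: json.encode(v: {"location": r.location, "mean": r._value}))}))
  |> group()  // merge all tables into one so reduce() sees every row
  |> reduce(
      fn: (r, accumulator) => ({
          // join the per-row JSON objects with commas
          payload: accumulator.payload + (if accumulator.payload == "" then "" else ",") + r.jsonStr,
      }),
      identity: {payload: ""},
  )
  |> map(fn: (r) => ({r with
      status_code: http.post(
          url: "",  // placeholder endpoint
          headers: {"Content-Type": "application/json"},
          data: bytes(v: "[" + r.payload + "]"),  // one JSON array for the whole table
      ),
  }))
```

That builds the JSON array by string concatenation, which feels fragile, hence the question.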


Usually, that’s an approach that doesn’t scale very well (not just in InfluxDB, but in many programming languages). At some point it’s simply too much data to handle, whether on the sender’s end, in transport, on the receiver’s end, or in the receiving application behind the endpoint.

Is there a specific problem you’re seeing with the way it’s working currently? What’s the problem you need to solve?


Hi there,

For me, I’d think sending an HTTP request for every row would be less scalable.

Do you prefer 100 requests of 1 kB each, or 1 request of 100 kB?

Yes, that’s the idea: batching up the requests to a reasonable size. That’s where you’ll need to test for your use case.

The InfluxDB docs recommend a batch size of around 5,000 points for writes, so that’s a useful starting point. Your use case may perform better slightly above or below that number.

This isn’t relevant; I need to do the batching with the http package. Can you point me to the docs that explain how to do this?

We have the same needs here.

We have Flux queries that return a small (10-100) number of rows and we would like to send them as one HTTP POST request.

A working solution would be greatly appreciated.

Currently, findColumn() can only extract an array of values from a single column.

It would be nice to have a findColumns() function that extracts an array of records, which could then be passed to json.encode() (for instance).
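In the meantime, a workaround sketch might be one findColumn() call per column, posting the result as parallel arrays instead of an array of records (the empty url is a placeholder, and this is untested):

```
import "http"
import "json"

data = from(bucket: "noaa")
    |> filter(fn: (r) => r._measurement == "average_temperature")
    |> mean()
    |> group()  // merge into a single table so findColumn() sees all rows

// One findColumn() call per column of interest
locations = data |> findColumn(fn: (key) => true, column: "location")
means = data |> findColumn(fn: (key) => true, column: "_value")

// Encode the arrays column-wise and send a single request
http.post(
    url: "",  // placeholder endpoint
    headers: {"Content-Type": "application/json"},
    data: json.encode(v: {location: locations, mean: means}),
)
```

The receiver then has to zip the arrays back into records, which is exactly the gap a findColumns() function would close.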