Processing a lot of data from InfluxDB using the Go client


Hi, I have around 2 million samples I need to process from an InfluxDB database, and I am writing a program using the Go client. I can set chunking to something reasonable, and if I run a Query it churns for a couple of minutes and then returns all the data at once, using about 3 GB of memory. Is it possible to receive the data one chunk at a time so it does not use as much memory? I can’t find any examples, and looking at the Query code I don’t see anything obvious.


Hello Cliff,

You’ve run into a real limitation of the current default Go client. While chunking helps prevent the server from running out of memory on a single query, it doesn’t let the client process the results iteratively.

While it isn’t finished, I have been working on an alternative experimental client that I’m hoping will eventually become the de facto client. Unfortunately, we don’t have a dedicated team working on it at the moment, and the API is not considered stable. The library is located here:

This library is designed to read the chunks one at a time and iterate over them. It also automatically concatenates them into a single stream, so you don’t have to deal with partial responses yourself.

If you use it, I recommend vendoring whatever code you’re using into your repository and filing issues for any usability problems or suggestions. I’m always looking for feedback on the new client, so that we can stabilize its API and potentially use it as a model for clients in other languages.

I hope this helps!


Thanks Jonathan. Looks interesting – will give it a try …