InfluxDB v2 large line protocol ingest

Hello!

I am trying to populate a v2 bucket with data as directly as possible, using a stream/file of line protocol values and the ‘influx’ CLI program. I found that I could easily ingest an 80 MB file using v1.7, but when performing the equivalent actions with v2, I start to get parsing errors not far into the file.

After inspection, the parsing errors start to occur approximately 8000 bytes into the stream/file. It is almost as if WriteParserMaxBytes or MaxBatchSizeBytes in the http/api_handler package is set to a non-zero value that truncates the bulk write.

If there is someone out there who can provide some pointers (e.g. how to verify this, or how to read/write these variables), or more directly, how to get the ‘influx’ application to write a large file (using the @filename.txt argument), I would greatly appreciate it!

How many points are in the file? The optimal batch write size is 5000 lines of line protocol.
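Splitting the input into batches of that size before writing is straightforward. A minimal sketch in Go (this is an illustration of the batching idea, not the influx CLI's actual implementation; `batchLines` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"strings"
)

// batchLines splits line-protocol input into batches of at most
// batchSize lines each, so every write stays near the suggested
// optimum of 5000 lines per request.
func batchLines(input string, batchSize int) [][]string {
	lines := strings.Split(strings.TrimRight(input, "\n"), "\n")
	var batches [][]string
	for start := 0; start < len(lines); start += batchSize {
		end := start + batchSize
		if end > len(lines) {
			end = len(lines)
		}
		batches = append(batches, lines[start:end])
	}
	return batches
}

func main() {
	// Build 12000 example points; expect batches of 5000, 5000, 2000.
	var b strings.Builder
	for i := 0; i < 12000; i++ {
		fmt.Fprintf(&b, "MEASUREMENT_NAME val=%d 1543623342354300000\n", i)
	}
	batches := batchLines(b.String(), 5000)
	fmt.Println(len(batches), len(batches[0]), len(batches[2]))
}
```

Each batch would then be sent as one write request.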

Thanks for the reply!
Typically there will be several hundred thousand lines/points per ingest, but the first fault comes only 100 or so lines into the file/stream.

The data are properly formatted as (example)
MEASUREMENT_NAME val=1.012345678E+01 1543623342354300000
etc.

Where the first parsing error occurs, it appears that the parser split one of the timestamps. More specifically, it hit some byte limit that happened to fall in the middle of a timestamp. It then continues to read and parse the input, but it is now out of sync with the line boundaries and starts to throw all sorts of errors.

As I mentioned before, under v1.7 it had no problem ingesting the entire file with the same formatting.

Looks like the master branch has the fix…
write/batcher.go line 72
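The symptom described above (a hard byte cut landing mid-timestamp) suggests the fix is to end each chunk on a line boundary rather than at a raw byte offset. A sketch of that idea in Go (an illustration only, not the actual code in write/batcher.go; `splitAtNewline` is a hypothetical helper):

```go
package main

import (
	"bytes"
	"fmt"
)

// splitAtNewline returns a chunk of at most max bytes that ends on a
// line boundary, plus the remaining bytes. Cutting at the last '\n'
// within the limit, instead of at a raw byte offset, avoids slicing a
// point in the middle of its timestamp and throwing the parser out of
// sync on everything that follows.
func splitAtNewline(buf []byte, max int) (chunk, rest []byte) {
	if len(buf) <= max {
		return buf, nil
	}
	i := bytes.LastIndexByte(buf[:max], '\n')
	if i < 0 {
		// No newline within the limit; fall back to a hard cut.
		return buf[:max], buf[max:]
	}
	return buf[:i+1], buf[i+1:]
}

func main() {
	data := []byte("m val=1 1543623342354300000\nm val=2 1543623342354300001\n")
	// A 40-byte hard cut would land inside the second timestamp;
	// splitting at the newline keeps both lines intact.
	chunk, rest := splitAtNewline(data, 40)
	fmt.Printf("%q\n%q\n", chunk, rest)
}
```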