I’ve discovered the source of the issue.
What didn’t work
Enabling more verbose logging (debug = true) and writing the data to a file (outputs.file) did not report any errors.
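For reference, this is a minimal sketch of what I had enabled (the file path is just an example):

[agent]
  ## Verbose logging
  debug = true

[[outputs.file]]
  ## Dump the metrics to a local file for inspection
  files = ["/tmp/metrics.out"]
  data_format = "influx"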
What did work
Enabling verbose logging on the Telegraf “gateway” process reported the following errors:
2020-02-17T10:49:23Z D! [inputs.influxdb_listener] Http_listener received a single line longer than the maximum of 65536 bytes
2020-02-17T10:49:23Z D! [inputs.influxdb_listener] Http_listener received a single line longer than the maximum of 65536 bytes
2020-02-17T10:49:23Z D! [outputs.influxdb] Wrote batch of 5000 metrics in 95.2228ms
2020-02-17T10:49:23Z D! [outputs.influxdb] Buffer fullness: 11833 / 50000 metrics
The Issue
The issue is that the query text is too big to be stored: strings have a limit of 64 KB (docs here).
From this old post, it looks like the limit only applies to tag values, not to field values.
@daniel can you confirm that the limit does not apply to field values?
At the moment a possible solution is the “max_line_size” parameter of the influxdb_listener input plugin used by the Telegraf “gateway”:
[[inputs.influxdb_listener]]
{...}
## Maximum line size allowed to be sent in bytes.
## 0 means to use the default of 65536 bytes (64 kibibytes)
max_line_size = 0
{...}
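Raising the limit so that the long query strings fit on a single line is just a matter of setting a bigger value; the 1 MiB below is only an example, not a recommendation:

[[inputs.influxdb_listener]]
  ## Example value: anything comfortably larger than the longest line will do
  max_line_size = 1048576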
Update:
Changing the “max_line_size” configuration helps and I get fewer errors, but another problem has appeared…
2020-02-17T11:49:00Z E! [outputs.influxdb] When writing to [http://127.0.0.1:8086]: received error partial write: __HugeLineProtocolString__ : max key length exceeded: 85962 > 65535 dropped=0; discarding points
The point key max size is 64 KB and the string alone weighs 84 KB; I think there is no workaround for this other than opening an issue to change this column from a tag to a field.
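Just to illustrate what the tag-to-field change would mean in Telegraf terms: the stock converter processor can turn a tag into a string field before it reaches InfluxDB, so it no longer counts towards the series key length (the tag name below is hypothetical, since the real column name depends on the upstream software):

[[processors.converter]]
  ## Hypothetical tag name: convert the oversized tag into a string field
  [processors.converter.tags]
    string = ["query_text"]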