Telegraf CSV files feed

Can you share your experience with feeding CSV files to a Telegraf input?
How would you organize the file flow so that new files arrive at the input periodically while processed files are deleted soon after? I see that all files matching
files = ["/data/siem/input/*.csv"]
are re-read and written to the output on every interval until they are deleted:

2019-08-23T11:16:00Z D! [outputs.influxdb] Buffer fullness: 0 / 500000 metrics
2019-08-23T11:16:10Z D! [outputs.influxdb] Wrote batch of 6204 metrics in 522.035882ms
2019-08-23T11:16:10Z D! [outputs.influxdb] Buffer fullness: 6204 / 500000 metrics
2019-08-23T11:16:20Z D! [outputs.influxdb] Wrote batch of 6204 metrics in 557.214832ms
2019-08-23T11:16:20Z D! [outputs.influxdb] Buffer fullness: 6204 / 500000 metrics

and if I keep feeding files in, I soon get an error:

2019-08-23T11:16:32Z D! [outputs.influxdb] Wrote batch of 18612 metrics in 2.144446423s
2019-08-23T11:16:32Z D! [outputs.influxdb] Buffer fullness: 31020 / 500000 metrics
2019-08-23T11:16:41Z E! [outputs.influxdb] when writing to [http://127.0.0.1:8086]: 413 Request Entity Too Large: Request Entity Too Large

So it seems that processed files should be removed from the feed directory. How can I do that automatically while being sure that the deleted files have already been processed?

Which plugin are you using?

Hi,
I use inputs.file and outputs.influxdb.
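Roughly like this (a sketch; the CSV parser options are illustrative, not my exact config):

[[inputs.file]]
  ## every matching file is parsed in full on each collection interval
  files = ["/data/siem/input/*.csv"]
  data_format = "csv"
  ## assuming a single header row per file
  csv_header_row_count = 1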

I think what you want to do would work best with the tail input. This plugin is similar to the file input, but instead of processing the entire file each interval, it watches the files for changes and only processes new lines.
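
A minimal sketch, assuming your files go through the CSV parser with a single header row (adjust the csv_* options to your data):

[[inputs.tail]]
  ## same glob, but only newly appended lines are processed
  files = ["/data/siem/input/*.csv"]
  ## also pick up content already present when Telegraf starts
  from_beginning = true
  data_format = "csv"
  csv_header_row_count = 1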

Telegraf doesn’t have a mechanism for deleting files once they are processed, so you will need to handle this externally. I’ve heard of users who run a script from cron that uses find to remove old files.
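
For example, a crontab entry along these lines (the 60-minute age threshold is a placeholder you’d tune to your mirroring schedule):

# every 5 minutes, delete CSV files not modified in the last 60 minutes
*/5 * * * * find /data/siem/input -name '*.csv' -mmin +60 -delete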

Hmm. I don’t think tail will help. There is quite a big set of files mirrored from a system that collects bulk stats from a number of gateways, so dropping whole files into the input directory suits me well.
Is there a way to raise the limit that causes this error?
2019-08-23T11:16:41Z E! [outputs.influxdb] when writing to [http://127.0.0.1:8086]: 413 Request Entity Too Large: Request Entity Too Large
In the first version of my collector I used a Perl script that parsed the CSV files and prepared a query file for Influx. The size of the query file was never a problem: I used to feed in a gigabyte of queries or even more. The issue was slow Perl, even when split across several processes.

What is your agent metric_batch_size set to? You may need to lower it in order to send smaller batches. You could instead increase max-body-size in InfluxDB’s configuration, but I recommend leaving that at its default and sending more, smaller batches.
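
For reference, the two settings involved, shown here at their defaults:

# telegraf.conf
[agent]
  ## maximum number of metrics per write request to outputs
  metric_batch_size = 1000

# influxdb.conf
[http]
  # maximum size of a client request body, in bytes; 0 disables the limit
  max-body-size = 25000000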

Thank you, Daniel. I’ll give it a try.

Hmm. The Influx config shows

  # The maximum size of a client request body, in bytes. Setting this value to 0 disables the limit.
  # max-body-size = 25000000

And the telegraf config for metric_batch_size is only 1000:

  ## Telegraf will send metrics to outputs in batches of at most
  ## metric_batch_size metrics.
  ## This controls the size of writes that Telegraf sends to output plugins.
  metric_batch_size = 1000

Such a big difference does not seem like it should be a problem here. Am I wrong?

And definitely yes, Daniel, you were absolutely right about the tail plugin. It is far closer to what I needed. But still, if I have a large set of files or a large number of metrics within them, I get errors like the ones below and can’t figure out what’s wrong…

2019-09-04T10:27:00Z E! [outputs.influxdb] when writing to [http://127.0.0.1:8086]: 413 Request Entity Too Large: Request Entity Too Large
2019-09-04T10:27:00Z D! [outputs.influxdb] Buffer fullness: 54126 / 500000 metrics
2019-09-04T10:27:00Z E! [agent] Error writing to outputs.influxdb: could not write any address

That value should give plenty of headroom, though I can’t say it will definitely fit: Telegraf batches by metric count, not by bytes, and a single metric in line protocol can be arbitrarily long.

Just as a sanity check, I’d temporarily add a file output plugin:

[[outputs.file]]
  files = ["/tmp/telegraf"]

Run Telegraf for a bit, so that you have at least 1000 lines in the file. Then check how large a random collection of 1000 lines from the file is:

shuf -n 1000 /tmp/telegraf | wc -c

Is it anywhere close to 25000000?
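
From that you can do a back-of-the-envelope check (numbers purely illustrative): if 1000 random lines come to about 300000 bytes, the average metric is roughly 300 bytes of line protocol, so a batch of 100000 metrics would be around 30 MB, well over the 25000000-byte default, while a batch of 10000 metrics would be about 3 MB, comfortably under it.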

Sorry, my fault. I had looked at the wrong telegraf.conf file. The batch size was 100000, which is definitely too large. I reduced it to 10000 and everything works fine.