Input and output capacity of Telegraf

How many metrics can Telegraf collect and process per second?

That depends on so many factors that the question as it stands cannot be
answered.

For example:

What input format/s are being used?

What output format/s are being generated?

What operating system are you running it on?

What hardware are you running it on?

What else is the machine running?

Regards,

Antony.

Input format/s - influxdb_listener

Output format/s - http

data_format: "influx"

Operating system - CentOS 7, Kubernetes environment.
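
For reference, here is roughly what the relevant part of my telegraf.conf looks like (a minimal sketch; the port and URL below are placeholders, not the real values):

[[inputs.influxdb_listener]]
  ## placeholder listen address; ":8186" is the plugin default
  service_address = ":8186"

[[outputs.http]]
  ## placeholder endpoint; the real target is an HTTP write endpoint
  url = "http://example.local:8080/write"
  data_format = "influx"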

Is there any way to find the maximum capacity of Telegraf?

What is the CPU load when Telegraf is overloaded?

What is the input and output capacity, i.e. how many metrics can it collect and process per second?

As mentioned earlier, the limit depends on which plugins you are going to use and how fast those can collect data.

I don’t see any hard limits in Telegraf as such. Custom plugins that do a lot of processing may take more CPU or time to return.

What are you planning to run through Telegraf?

Input data plugin - influxdb_listener

Output data plugin - http

data_format: "influx"

Operating system - CentOS 7, Kubernetes environment.

Usually the limiting factor I see is how fast Telegraf can write to the output plugin. The best way to gauge performance is to run Telegraf for a while with the internal plugin enabled. This creates several new measurements, including one called internal_write. Here is an example:

internal_write,output=file buffer_limit=10000i,buffer_size=39i,errors=0i,metrics_added=65i,metrics_dropped=0i,metrics_filtered=0i,metrics_written=26i,write_time_ns=3938348i 1568833200000000000
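
In case it is useful, enabling that looks roughly like this (a minimal sketch; the file output matches the example above, in practice you would keep your real http output, and the agent values shown are the Telegraf defaults):

[agent]
  ## defaults; metric_batch_size is used in the estimate below
  metric_batch_size = 1000
  metric_buffer_limit = 10000

[[inputs.internal]]
  ## emits the internal_* measurements, including internal_write
  collect_memstats = true

[[outputs.file]]
  files = ["stdout"]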

If you look at write_time_ns you can see how long each write takes, and the agent's metric_batch_size divided by that write time is the most you can send without falling behind. If you are going to be proxying a lot of metrics, I suggest keeping the internal plugin enabled and also monitoring the metrics_dropped field; if it is increasing, you have a problem.
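
To make that concrete with the example above (a rough estimate, assuming the default metric_batch_size of 1000 and that batches are written one at a time):

write_time_ns = 3938348 ns ≈ 3.9 ms per batch
1000 metrics / 0.0039 s ≈ 250,000 metrics per second

That figure is an upper bound for the write path only; real throughput will be lower once collection, serialization, and network latency to the http endpoint are included.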