Hi there, I’m working on a proof of concept for an InfluxDB project. We’re sending metrics from our 100+ application servers using collectd, and I’d like to ingest them into a centralised InfluxDB, visualise them with Chronograf, and alert with Kapacitor.
We already run 6 Linux servers in our datacenters that receive logs via SFTP and process them before forwarding them to other databases.
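For reference, each application server would ship metrics with collectd’s network plugin, roughly like this (the collector hostname is a placeholder; it would point at either a local collection server or the central server, depending on which design we pick):

```
# collectd.conf fragment on each application server -- sketch;
# the hostname is a placeholder for whichever collector we choose
LoadPlugin network
<Plugin network>
  Server "metrics-collector.example.internal" "25826"
</Plugin>
```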
Is it more efficient to have the application servers send metrics via collectd to the 6 local Linux servers running Telegraf, unpack the binary streams there, and then forward the parsed metrics (as InfluxDB line protocol) to the central InfluxDB/Chronograf server (fan-in)?
(100+) application servers (collectd) -> (6) Linux collection servers (Telegraf) -> (1) central ICK server (InfluxDB, Chronograf, Kapacitor)
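On the 6 collection servers, the Telegraf side would look roughly like this (a sketch, assuming Telegraf’s socket_listener input with the collectd data format; the port and paths are defaults and `central-tick.example.internal` is a placeholder hostname):

```toml
# telegraf.conf on each of the 6 collection servers -- sketch
[[inputs.socket_listener]]
  service_address = "udp://:25826"                    # collectd network plugin default port
  data_format = "collectd"
  collectd_typesdb = ["/usr/share/collectd/types.db"] # must match the clients' types.db
  collectd_security_level = "none"

[[outputs.influxdb]]
  urls = ["http://central-tick.example.internal:8086"] # placeholder central server
  database = "collectd"
```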
Or should we just send all 100+ collectd binary streams directly to the central server, unpack them all there, and run the full TICK stack on that one box (hub-and-spoke architecture)?
(100+) application servers (collectd) -> (1) central TICK server
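In this second design, I understand InfluxDB 1.x can also ingest collectd natively via its built-in collectd service, so Telegraf on the central box may not even be required. A sketch of that config (assuming InfluxDB 1.x; port and paths are defaults):

```toml
# influxdb.conf fragment -- sketch, assuming InfluxDB 1.x
[[collectd]]
  enabled = true
  bind-address = ":25826"                   # collectd network plugin default port
  database = "collectd"                     # metrics land in this database
  typesdb = "/usr/share/collectd/types.db"  # must match the clients' types.db
```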
Thanks!