I am designing a system that should be able to process around 1 million metrics per second. Is this within the realm of possibility with Telegraf?
I can supply adequate compute resources if that’s all it takes, but I want to know what sort of scaling limitations there may be. Will I need to scale horizontally with more telegraf processes? Can I just run one process if I feed it enough resources?
My input is OPC UA and my output is Prometheus.
I suppose that 1 million metrics per second is not coming from a single source, right?
That's too much for a single-node OPC UA server.
If it's coming from multiple sources, you could set up a clustered InfluxDB and use a load balancer for reading/writing data to InfluxDB.
You could scale horizontally (putting more instances of InfluxDB behind the load balancer) to accommodate the high ingestion rate (1M/sec).
It's not coming from a single source; it's coming from multiple sources. I am actually writing to VictoriaMetrics (a Prometheus-compatible TSDB). Our current pipeline is able to handle this many metrics, so I know the DB can ingest the amount we need.
My question is specifically about Telegraf. Can we use it to read OPC UA and write to a Prometheus database at 1 million metrics per second? Will we need to run multiple instances of Telegraf? Are there any write-ups or case studies of people doing something similar?
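For context, the pipeline I have in mind is roughly the sketch below, using Telegraf's `inputs.opcua` and `outputs.prometheus_client` plugins. The endpoint URL, node identifiers, and buffer sizes are placeholder assumptions, not a tuned config; option names should be checked against the plugin docs for your Telegraf version.

```toml
# Sketch: one Telegraf instance reading OPC UA and exposing a Prometheus
# scrape endpoint. All values below are illustrative placeholders.

[agent]
  interval = "1s"                 # OPC UA polling interval (assumption)
  metric_batch_size = 10000
  metric_buffer_limit = 1000000   # roughly 1s of backlog at 1M metrics/s

[[inputs.opcua]]
  endpoint = "opc.tcp://server.example.com:4840"  # placeholder server
  security_policy = "None"
  security_mode = "None"

  [[inputs.opcua.nodes]]
    name = "temperature"          # hypothetical tag
    namespace = "2"
    identifier_type = "i"
    identifier = "1001"

[[outputs.prometheus_client]]
  listen = ":9273"                # VictoriaMetrics scrapes this endpoint
```

If one process can't keep up, the natural way to scale horizontally would be to run several Telegraf instances with a config like this, each pointed at a subset of the OPC UA servers and each scraped separately.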
I'd say it's at least not out of range from the Telegraf side. It of course depends on whether you are doing other processing etc. in Telegraf, and whether your machine can handle the potential peaks…
I would say give it a try! Let us know how it is going and feel free to ask if you need any help!