Greetings everyone. We are using Prometheus in OpenShift to collect infrastructure and application metrics, and I understand this data can be forwarded to InfluxDB. Can I get some guidance on what this architecture would look like? I suspect there is more than one solution to this problem. The Prometheus documentation
indicates that Prometheus should be configured to use the remote read and write API by updating the Prometheus configuration file. How would this be implemented in an OpenShift cluster where Prometheus runs as a pod? Would I be able to continue using the existing image? Where would the change need to be made: in the Operator, a DaemonSet, or a ConfigMap?
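For reference, on a plain (non-Operator) Prometheus this is just a `remote_write`/`remote_read` section in `prometheus.yml`; on OpenShift the same settings would normally be supplied through the cluster monitoring ConfigMap rather than by editing the file inside the image. A minimal sketch, where the InfluxDB hostname and database name are placeholders (InfluxDB 1.x exposes Prometheus-compatible endpoints under `/api/v1/prom/...`):

```yaml
# prometheus.yml fragment -- hostname and db are hypothetical
remote_write:
  - url: "http://influxdb.example.com:8086/api/v1/prom/write?db=prometheus"
remote_read:
  - url: "http://influxdb.example.com:8086/api/v1/prom/read?db=prometheus"
```

With the Operator-managed setup, the key question is whether your Prometheus custom resource (or the `cluster-monitoring-config` ConfigMap) exposes a remote-write field you can set, since the Operator regenerates the config file and will overwrite manual edits.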
Another solution might be to use Telegraf. Its config file has this section:
```toml
# Read metrics from one or many prometheus clients
[[inputs.prometheus]]
```
This also has its drawbacks, because I would still have to maintain an additional process and its configuration.
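For completeness, the Telegraf route would pair that input plugin with an InfluxDB output. A rough sketch, assuming Telegraf can reach the existing Prometheus's HTTP endpoint (the URLs and database name below are placeholders):

```toml
# telegraf.conf sketch -- URLs and database name are hypothetical
[[inputs.prometheus]]
  ## Scrape the existing Prometheus server's own metrics endpoint,
  ## or a /federate endpoint exposing the metrics it has collected
  urls = ["http://prometheus.example:9090/metrics"]

[[outputs.influxdb]]
  urls = ["http://influxdb.example.com:8086"]
  database = "prometheus"
```

One caveat: `inputs.prometheus` scrapes exporters directly, so pointing it at the Prometheus server itself only captures that server's self-metrics unless you go through federation.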
Still another solution might be to install a second Prometheus instance on a server, have it scrape (federate) the data from my existing Prometheus, and forward it on to InfluxDB. I'd like to avoid standing up another Prometheus instance if I can; this approach feels kludgy.
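If that route were taken anyway, the second Prometheus would typically use the `/federate` endpoint of the existing one plus remote write. A sketch with placeholder hostnames and a deliberately broad match expression:

```yaml
# Config for the second Prometheus -- hostnames are hypothetical
scrape_configs:
  - job_name: "federate"
    honor_labels: true
    metrics_path: "/federate"
    params:
      "match[]":
        - '{job!=""}'    # pull every series that has a job label
    static_configs:
      - targets: ["existing-prometheus.example:9090"]

remote_write:
  - url: "http://influxdb.example.com:8086/api/v1/prom/write?db=prometheus"
```

The advantage is that the OpenShift-managed Prometheus stays untouched; the cost is operating one more Prometheus whose only job is relaying data.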
Any guidance or documentation would be greatly appreciated.