I’m writing a Python script to collect a lot of data from an Oracle database and load it into InfluxDB on a remote server to build dashboards in Grafana.
My code works nicely when the script loads each collected metric into InfluxDB individually. However, with this approach, collecting 10 metrics means 10 database calls.
I would like to avoid that and load all 10 metrics in a single database call, similar to what “influx -import” does, but from Python and writing to a remote database.
The code I’m using is:
from influxdb import InfluxDBClient  # influxdb-python (the 1.x client)

# client, env, v_curtime and v_mempct are defined earlier in the script
v_payload = [
    {
        "measurement": "SRV_HEATH",
        "tags": {
            "CUSTOMER": env.CUSTOMER,
            "HOSTNAME": env.HOSTNAME,
            "RESOURCE": "MEM"
        },
        "time": v_curtime,
        "fields": {
            "PCT_USED": v_mempct
        }
    }
]
client.write_points(v_payload)
I have already tried storing all the metrics in a list and then calling client.write_points on the whole list, among other attempts.
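Roughly, that attempt looked like the sketch below; the loop and the second metric (v_cpupct) are only illustrative placeholders for however the values are actually collected, not my real collection code.

# Build one list of points and make a single write_points() call.
# v_cpupct and the ("CPU", ...) entry are hypothetical placeholders.
v_payload = []
for v_resource, v_value in [("MEM", v_mempct), ("CPU", v_cpupct)]:
    v_payload.append({
        "measurement": "SRV_HEATH",
        "tags": {
            "CUSTOMER": env.CUSTOMER,
            "HOSTNAME": env.HOSTNAME,
            "RESOURCE": v_resource
        },
        "time": v_curtime,
        "fields": {
            "PCT_USED": v_value
        }
    })

# one database call for all collected points
client.write_points(v_payload)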
Does anyone have an example of how to achieve what I’m trying to do?
Here’s another example of working with pandas and InfluxDB. In this blog, I take advantage of the pivot() function to easily create a DataFrame when I have multiple tags and fields.
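The rough idea, with made-up column names (the blog post has the complete, working example):

import pandas as pd

# Long-form records, one row per (time, field) pair,
# similar in shape to the results of an InfluxDB query.
df = pd.DataFrame({
    "time":  ["t1", "t1", "t2", "t2"],
    "field": ["PCT_USED", "PCT_FREE", "PCT_USED", "PCT_FREE"],
    "value": [70.0, 30.0, 72.0, 28.0],
})

# pivot() reshapes the long rows into one column per field, indexed by time.
wide = df.pivot(index="time", columns="field", values="value")
print(wide)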
Actually, I just realized that I’m using influxdb-python, not influxdb-client-python.
I just installed influxdb-client-python but now I’m facing some difficulties.
What exactly is the organization, and how does “my-token” work?
I’m using InfluxDB 1.8.
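From what I can tell from the influxdb-client-python README, when talking to InfluxDB 1.8 the token is supposed to be “username:password”, the org is ignored (any value such as “-”), and the bucket is “database/retention_policy”. This is roughly what I’m attempting; the URL, credentials, database and metric values below are placeholders, not my real configuration:

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholders: host, credentials and database are not my real values.
# Per the client docs, against InfluxDB 1.8 the token is "username:password",
# org is ignored (any value such as "-"), and bucket is "database/retention_policy".
client = InfluxDBClient(url="http://remote-host:8086", token="user:password", org="-")
write_api = client.write_api(write_options=SYNCHRONOUS)

points = [
    Point("SRV_HEATH").tag("RESOURCE", "MEM").field("PCT_USED", 70.0),
    Point("SRV_HEATH").tag("RESOURCE", "CPU").field("PCT_USED", 15.0),
]

# One call writes the whole batch to database "mydb", retention policy "autogen".
write_api.write(bucket="mydb/autogen", record=points)

client.close()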