when running the Source - Python example during initial setup I am struggling with the below command:
write_api = client.write_api(write_options=SYNCHRONOUS)
Error I receive:
NameError: name 'client' is not defined
Any idea how to fix this?
Hello @github-ps1304,
Can you please share your entire python script?
It seems like the variable client hasn't been initialized yet when you're trying to use it in the line write_api = client.write_api(write_options=SYNCHRONOUS).
Here's a step-by-step way to initialize the client and subsequently the write_api for InfluxDB using the Python client:
1. Installing the library: Make sure you have the influxdb-client library installed. You can do that using pip:
pip install influxdb-client
2. Initializing the client: Before you can use client, you need to initialize it. Here's how you can do that:
from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

# Initialize the client
client = InfluxDBClient(url="your_influxdb_url", token="your_token")
Make sure to replace "your_influxdb_url" with the actual URL of your InfluxDB instance and "your_token" with your authentication token.
3. Using the client: Now you can use this client to get a write_api:
write_api = client.write_api(write_options=SYNCHRONOUS)
4. Closing the client: When you're done, close the client:
client.close()
Hi @Anaisdg - tx for supporting me!!
Let me quickly explain what I am trying to achieve:
We have a Pi 4 running that collects sensor data from 142 sensors in total (temperature and sensor ID). The data is in a csv file (including a timestamp). We want to inject this data from the csv file into InfluxDB, with "Sensors" as the measurement, "temperature" as the field value, and "sensor ID" as a tag.
Below is code that is working so far (but it's injecting the points directly instead of reading them from the csv):
from influxdb_client import InfluxDBClient, Point, Dialect
from influxdb_client.client.write_api import SYNCHRONOUS
client = InfluxDBClient(url="http://192.168.2.52:8086", token="_NUa3ESff_5kbsJHDIjmFNiqb88bGYAAlVWgVUu3EBfTvrUGEVVIhrXpW-F_Csgsr8pu0RKa4Gmr9_R8ZiImKQ==", org="pshome")
write_api = client.write_api(write_options=SYNCHRONOUS)
query_api = client.query_api()
"""
Prepare data
"""
_point1 = Point("Sensors").tag("sensorID", "100").field("temperature", 27.3)
_point2 = Point("Sensors").tag("sensorID", "101").field("temperature", 26.3)
_point3 = Point("Sensors").tag("sensorID", "201").field("temperature", 28.3)
_point4 = Point("Sensors").tag("sensorID", "301").field("temperature", 29.3)
write_api.write(bucket="panda", record=[_point1, _point2, _point3, _point4])
"""
Query: using Pandas DataFrame
"""
data_frame = query_api.query_data_frame('from(bucket:"panda") '
'|> range(start: -10m) '
'|> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") '
'|> keep(columns: ["sensorID", "temperature"])')
print(data_frame.to_string())
"""
Close client
"""
client.close()
The outcome is exactly what I am looking for (only the timestamp is missing in the script, so InfluxDB adds the time of injection instead):
result table sensorID temperature
0 _result 0 100 25.3
1 _result 0 100 27.3
2 _result 1 101 24.3
Would be great to have the same outcome with reading a csv file and injecting it that way to InfluxDB.
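Not code from this thread, but a minimal sketch of that csv-reading step, assuming the csv rows look like 1598918400,,30.50,101 (timestamp in seconds, an empty column, temperature, sensor ID) and building line protocol strings with sensorID in the tag set:

```python
import csv
import io

def row_to_line(row, measurement="Sensors"):
    """Turn one csv row into an InfluxDB line protocol string.

    The sensor ID goes into the tag set (before the space), the
    temperature into the field set, and the csv timestamp (seconds)
    is kept as the point's own time.
    """
    ts, _, temperature, sensor_id = row
    return f"{measurement},sensorID={sensor_id} temperature={temperature} {ts}"

# Stand-in for open('data_ps.csv'); substitute your real file path.
sample = io.StringIO("1598918400,,30.50,101\n1598918500,,31.50,201\n")
lines = [row_to_line(row) for row in csv.reader(sample)]
print("\n".join(lines))
# Sensors,sensorID=101 temperature=30.50 1598918400
# Sensors,sensorID=201 temperature=31.50 1598918500
```

These strings can then be handed to the client from your first script, e.g. write_api.write(bucket="panda", record=lines, write_precision=WritePrecision.S) (precision S because the timestamps are in seconds). If you prefer the Point API instead, I believe the equivalent per row would be Point("Sensors").tag("sensorID", row[3]).field("temperature", float(row[2])).time(int(row[0]), WritePrecision.S).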
Hi @Anaisdg - another reply
The below code is also working, but not with the desired outcome (the sensor ID should be a tag):
#!/usr/bin/python
import requests
import uuid
import random
import time
import sys
import csv
import json
INFLUX_TOKEN='_NUa3ESff_5kbsJHDIjmFNiqb88bGYAAlVWgVUu3EBfTvrUGEVVIhrXpW-F_Csgsr8pu0RKa4Gmr9_R8ZiImKQ=='
ORG="pshome"
INFLUX_CLOUD_URL='192.168.2.52'
BUCKET_NAME='python2'
# Be sure to set precision to ms, not s
QUERY_URI='http://{}:8086/api/v2/write?org={}&bucket={}&precision=ms'.format(INFLUX_CLOUD_URL,ORG,BUCKET_NAME)
headers = {}
headers['Authorization'] = 'Token {}'.format(INFLUX_TOKEN)
measurement_name = 'Sensoren'
# Increase the points, 2, 10 etc.
number_of_points = 1000
batch_size = 1000
data_end_time = int(time.time() * 1) #milliseconds
id_tags = []
for i in range(100):
    id_tags.append(str(uuid.uuid4()))
data = []
current_point_time = data_end_time
with open('/Users/peterschlafmann/Z-ProjektDashboard/csv-import/data_ps.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    print('Processed')
    _data_end_time = int(time.time() * 1000) - (100 * 1 * 1000)
    for row in csv_reader:
        _row = 0
        current_point_time = current_point_time - 1000
        _data_end_time = _data_end_time + (1 * 1000)
        if row[0] == "TIMESTAMP":
            pass
        else:
            _add = int(time.time()) - int(row[0])
            #_row = int((int(row[0]) + 5847435 + 952068) * 1000)
            _row = int((int(row[0])) * 1000)
            print(_add)
            print(_data_end_time, row[0], _row, '\n')
            data.append("{measurement},location={location} Temperature={Power_A},Sensor-ID={Power_B} {timestamp}"
                        .format(measurement=measurement_name, location="IBM-Rack", Power_A=row[2], Power_B=row[3], timestamp=_row))  # timestamp=row[0]
count = 0
if __name__ == '__main__':
    # Check to see if number of points factors into batch size
    count = 0
    if (number_of_points % batch_size != 0):
        raise SystemExit('Number of points must be divisible by batch size')
    # Newline delimit the data
    for batch in range(0, len(data), batch_size):
        time.sleep(10)
        current_batch = '\n'.join(data[batch:batch + batch_size])
        print(current_batch)
        r = requests.post(QUERY_URI, data=current_batch, headers=headers)
        count = count + 1
        print(r.status_code, count, data[count])
How do I inject the sensor ID as a tag from the csv data (example rows below)?
1598918400,,30.50,101
1598918500,,31.50,201
which is: timestamp, (empty column), temperature, sensorID
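In line protocol, everything before the first space is measurement plus tags, and everything after it is fields, so Sensor-ID in the script above currently lands in the field set. A hedged sketch of the corrected format string (measurement name and csv column positions taken from the script; nothing else changes):

```python
measurement_name = "Sensoren"

def make_line(row):
    """Build a line protocol string with the sensor ID in the tag set.

    row layout (from the csv sample): timestamp, empty, temperature, sensorID.
    """
    timestamp_ms = int(row[0]) * 1000  # csv timestamps are seconds; the URI asks for ms
    return "{measurement},sensorID={sensor} temperature={temp} {ts}".format(
        measurement=measurement_name, sensor=row[3], temp=row[2], ts=timestamp_ms)

line = make_line(["1598918400", "", "30.50", "101"])
print(line)
# Sensoren,sensorID=101 temperature=30.50 1598918400000
```

The data.append(...) call in the script would then use this format string; the batching and requests.post logic can stay exactly as it is.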