ValueError: cannot convert float NaN to integer

Hi there,

When querying data with the Python InfluxDB client into a pandas dataframe, I get the error message `ValueError: cannot convert float NaN to integer`:

```
.venv\Lib\site-packages\influxdb_client\client\flux_csv_parser.py", line 288, in _to_value
    return int(str_val)
           ^^^^^^^^^^^^
ValueError: cannot convert float NaN to integer
```
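For reference, the failing conversion is easy to reproduce in plain Python, independent of the client (a minimal sketch; the parser line above converts a value it believes belongs to an integer column):

```python
import math

# Minimal reproduction of the error text: the parser expects an integer
# column, but the value it receives is NaN, and converting NaN to int
# raises ValueError.
value = float("nan")
assert math.isnan(value)

try:
    int(value)
except ValueError as exc:
    print(exc)  # -> cannot convert float NaN to integer
```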

Is there a solution or workaround for this problem?

Best regards,
Jan

Hello @JanD,
Can you share your python script? (including the flux query?)
As well as the python client version you’re using?
Thank you!

Hello Anaisdg,

Thanks for your interest. Attached you find the python script and below the query.
I use the latest release of the python influxdb_client.

Best regards,

The query I run is:

```flux
import "experimental/date/boundaries"
import "influxdata/influxdb/schema"

lastWeek = boundaries.week(week_offset: -1)

from(bucket: "claire-test-data")
```

(Attachment download_data.py is missing)

Hello Anaisdg,

Thanks for your interest. Below you find the python script, followed by the query.
I use the latest release of the python influxdb_client.

Best regards,
Jan

The python script:

```python
import os
import time
from dotenv import load_dotenv
from influxdb_client import InfluxDBClient
import pandas as pd
import matplotlib
import certifi
import re
```

For Windows SSL certificate verification:

```python
fh = open(certifi.where(), "r")
cert = fh.read()
fh.close()
```

Load secret data from the .env file:

```python
load_dotenv()
token = os.getenv('TOKEN')
org = os.getenv('ORG')
bucket = os.getenv('BUCKET')
host = os.getenv('INFLUXDB_HOST')
```

Init influxDB client:

```python
client = InfluxDBClient(url=host, org=org, token=token)
query_api = client.query_api()
```

Read the Excel file with MAC addresses into a pandas dataframe:

```python
location_data = pd.read_excel('./sensor_locations.xlsx')
mac_adresses_to_filter = ''
for i in range(1, location_data.shape[0]):
    if location_data.Active[i] == True:
        mac_adresses_to_filter += f'|{location_data.MAC_adresses[i]}'

print(f'MACS {mac_adresses_to_filter}')
```
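One thing worth double-checking in the loop above: because `+=` always prepends a `|`, the resulting string starts with a leading `|`, so the later regex `/^(|MAC1|MAC2)$/` also matches the empty string. A sketch of an alternative using `'|'.join` (the MAC values here are hypothetical, purely illustrative):

```python
# '|'.join puts separators only between items, avoiding the leading '|'
macs = ["AA:BB:CC:01", "AA:BB:CC:02"]  # hypothetical values
joined = "|".join(macs)
print(joined)  # -> AA:BB:CC:01|AA:BB:CC:02
```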

Query in flux language:

```python
start_time = '2023-12-20T00:00:00'
stop_time = '2023-12-20T23:59:59'

filter_string = "AM232X_RH|AM232X_TEMP|ENS210_RH|ENS210_TEMP|NDIR_CO2|NUMPM0|NUMPM1|NUMPM2|NUMPM4|NUMPM10|PM1|PM2.5|PM4|PM10|IAQ|TVOC"
keep_column_string = '["_time", "MAC", "' + re.sub('[|]', '", "', filter_string) + '"]'

query = f'''
// Flux query to download data from last week for sensor with MAC adress … /
from(bucket: "claire-test-data")
  |> range(start: {start_time}Z, stop: {stop_time}Z)
  |> filter(fn: (r) => r._measurement == "environment")
  |> filter(fn: (r) => r.MAC =~ /^({mac_adresses_to_filter})$/)
  |> filter(fn: (r) => r._field =~ /^({filter_string})$/)
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
  |> group()
  |> keep(columns: {keep_column_string})
'''
print(query)
```
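The `keep_column_string` construction can be checked in isolation (shortened `filter_string` here, purely illustrative). As an aside, the dot in `PM2.5` acts as a regex wildcard inside the `_field` filter, which is usually harmless here but worth knowing:

```python
import re

filter_string = "AM232X_RH|AM232X_TEMP|NDIR_CO2"  # shortened example
# Replace each '|' with '", "' to turn the alternation into a Flux list
keep_column_string = '["_time", "MAC", "' + re.sub('[|]', '", "', filter_string) + '"]'
print(keep_column_string)
# -> ["_time", "MAC", "AM232X_RH", "AM232X_TEMP", "NDIR_CO2"]
```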

Run query and return data as a pandas data frame:

```python
data = query_api.query_data_frame(query=query)

print(data.head())
```
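Regarding the ValueError itself: one workaround often suggested for this symptom (an assumption on my part: the NaN comes from rows where an integer-typed field has no value) is to drop null points in Flux before the pivot with `exists r._value`. A sketch of what that variant could look like, kept as a Python string as in the script:

```python
# Hypothetical variant of the query: the extra `exists r._value` filter
# drops null points before the pivot, so the client never has to coerce
# NaN into an integer-typed column.
flux_sketch = '''
from(bucket: "claire-test-data")
  |> range(start: 2023-12-20T00:00:00Z, stop: 2023-12-20T23:59:59Z)
  |> filter(fn: (r) => r._measurement == "environment")
  |> filter(fn: (r) => exists r._value)
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
'''
print("exists r._value" in flux_sketch)  # -> True
```

Alternatively, the missing values can be handled on the pandas side after the fact, e.g. with a nullable integer dtype (`astype("Int64")`), if the values are allowed to stay missing.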

Close the client and build the output file names:

```python
client.close()

csv_filename = f'AC_data_{start_time}_{stop_time}.csv'
pk1_filename = f'…/Data_archive/AC_data_{start_time}_{stop_time}.pk1.zip'.replace(':', '')
zip_archive_filename = f'…/Data_archive/AC_data_{start_time}_{stop_time}_all.zip'.replace(':', '')

print(zip_archive_filename)
print(csv_filename)
```

Save data as compressed csv:

```python
data.to_csv(zip_archive_filename, sep=',', compression={'method': 'zip', 'archive_name': f'{csv_filename}'}, mode='a')
```

Save data in pickle file format:

```python
data.to_pickle(pk1_filename, compression='zip')
```

The query I run is:

```flux
import "experimental/date/boundaries"
import "influxdata/influxdb/schema"

lastWeek = boundaries.week(week_offset: -1)

from(bucket: "claire-test-data")
```