Python pandas influxdb 2.0 error

I am only now testing the free cloud tier of 2.0, and I am adjusting my python script.

My current scripts and dataframe formats work just fine writing to InfluxDB 1.8, pretty much out of the box, with no data-type conflicts or other issues.

Using the new Python client and 2.0, data seems to be transferred, but right at the end I get this:

The batch item wasn't processed successfully because: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Date': 'Tue, 11 Aug 2020 09:30:37 GMT', 'Content-Type': 'application/json; charset=utf-8', 'Transfer-Encoding': 'chunked', 'Connection': 'keep-alive', 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains', 'x-platform-error-code': 'invalid'})

I get the same error for most, if not all, dataframe columns. Some values do get written, but with considerable errors in scale or other distortions. My Python script also does not finish on its own; I have to force-quit it out of the error above.

Are there changes I need to make in how a dataframe is passed, or is something else going on?

Thanks in advance!

Hello @rjvt,
Please make sure that your timestamp column contains datetime objects and that you set that column to be the index. Can you please share your python script?
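A minimal sketch of that conversion (the column name "time" and the values are just illustrations):

```python
import pandas as pd

# Illustrative dataframe with timestamps stored as plain strings
# (column and field names here are made up for the example).
df = pd.DataFrame({
    "time": ["2020-08-11 09:00:00", "2020-08-11 09:01:00"],
    "value": [1.0, 2.0],
})

# Parse the strings into datetime objects and promote the column
# to the index, which is what the dataframe writer expects.
df["time"] = pd.to_datetime(df["time"])
df = df.set_index("time")

print(df.index.dtype)  # datetime64[ns]
```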

My script does nothing too fancy: It creates a dict of dataframes, and a for loop writes one column at a time. This way I can see the progress. Each dataframe looks like this:

[screenshot of an example dataframe]

The current influx 1.8 script is:

client.write_points(df, measurement, protocol='line', batch_size=10000)

For influx 2.0:

write_client.write(db, record=df, data_frame_measurement_name=measurement)

Are you able to query your instance successfully?
Are you able to write just one dataframe?
Can you please set debug=True when you instantiate the client?

  1. I can query, only just a few fields of data (the ones that were transferred successfully)
  2. I am able to write some data: some fields go through and some fail. I just tested again with debug on, and the extra information I see is "invalid field format". But all values are numeric; I checked, and I even tested forcing numeric conversion on all dataframes before they are sent. Some of them have a large number of NaNs, but that is normal for my data.
  3. Just tested, and here is maybe a longer error code:

08-12 06:53:19,329 [8696] ERROR influxdb_client.client.write_api:367: [JupyterRequire] The batch item wasn't processed successfully because: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Date': 'Wed, 12 Aug 2020 06:53:19 GMT', 'Content-Type': 'application/json; charset=utf-8', 'Content-Length': '104', 'Connection': 'keep-alive', 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains', 'x-platform-error-code': 'invalid'})
HTTP response body: {"code":"invalid","message":"unable to parse 'cedrobclone3 1597103940000000000': invalid field format"}

The timestamp index for all dataframes is exactly the same, if this helps. And again: Writing this data with the previous influxdb library into influx 1.8 works 100%.
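One detail worth noting about that error body: the rejected line, 'cedrobclone3 1597103940000000000', contains a measurement name and a timestamp but no field at all, which is what line protocol looks like when the only field in a row is NaN. A hedged sketch of filtering such rows out before the write (names and values below are made up):

```python
import numpy as np
import pandas as pd

# Illustrative column with NaN gaps, as described above.
idx = pd.date_range("2020-08-11 09:00:00", periods=4, freq="1min")
df = pd.DataFrame({"value": [1.5, np.nan, np.nan, 2.5]}, index=idx)

# A row whose fields are all NaN can serialize to a line-protocol line
# with a measurement and timestamp but no fields, which the server
# rejects as "invalid field format". Drop such rows before writing.
clean = df.dropna(how="all")

print(len(clean))  # 2
```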

Thanks!

@rjvt did you solve your problem? I’m having exactly the same issue now in 2022.