Python Influx Client Errors

So a need has arisen to add tags to our data to better filter and sort for some additional use cases (and display in a table). Having run into InfluxDB tag queries failing · Issue #9742 · influxdata/influxdb · GitHub and [Feature request] Drop field · Issue #6150 · influxdata/influxdb · GitHub and Adding new tag to existing series, I embarked upon writing a Python script to pull all the points, sort them properly, and write them with the additional tags into a new measurement.
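The retag step described above can be sketched roughly as below. This is a minimal sketch, not the actual script: `derive_tags()` and the tag rules are hypothetical placeholders, and the influxdb-python client calls are shown only in comments.

```python
def derive_tags(point):
    """Hypothetical: work out the extra tags for one point.

    Here we just tag by hostname prefix as an illustration; the real
    rules would depend on your schema.
    """
    host = point.get("host", "")
    return {"role": "db" if host.startswith("db") else "app"}


def retag(points, measurement):
    """Attach derived tags to each raw point and shape it for write_points()."""
    out = []
    for p in points:
        tags = {"host": p.get("host", "")}
        tags.update(derive_tags(p))
        out.append({
            "measurement": measurement,
            "time": p["time"],
            "tags": tags,
            "fields": {"value": p["value"]},
        })
    return out


# With influxdb-python, usage would look something like:
#   from influxdb import InfluxDBClient
#   client = InfluxDBClient(host="127.0.0.1", port=8086, database="telegraf")
#   raw = client.query("SELECT * FROM cpu", epoch="h").get_points()
#   client.write_points(retag(raw, "cpu_retagged"), "h", "telegraf", "coarse")
```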

However, across the past 12 run attempts (each taking 4+ hours), the script has failed every time before completing, and since I currently have no way to resume, I have to start over from scratch each attempt. I’ve run it on multiple different machines, including the VM itself…
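Since each failure forces a full restart, one workaround would be a small checkpoint file recording the last timestamp successfully written, so the script can pick up where it left off. A minimal sketch, assuming points are processed in time order; the file name and helper names are my own, not from the original script:

```python
import json
import os

CHECKPOINT = "retag_checkpoint.json"  # assumed file name


def load_checkpoint():
    """Return the last timestamp we finished writing, or None on a fresh run."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["last_time"]
    return None


def save_checkpoint(last_time):
    """Record progress after each successful batch write."""
    with open(CHECKPOINT, "w") as f:
        json.dump({"last_time": last_time}, f)


# In the main loop, the query would be restricted to points newer than
# the checkpoint, e.g.:
#   since = load_checkpoint()
#   q = "SELECT * FROM cpu" + (" WHERE time > %d" % since if since else "")
# and save_checkpoint(batch[-1]["time"]) called after each write succeeds.
```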

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 141, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/usr/lib/python3.6/site-packages/urllib3/util/connection.py", line 83, in create_connection
    raise err
  File "/usr/lib/python3.6/site-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 601, in urlopen
    chunked=chunked)
  File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 357, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib64/python3.6/http/client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.6/http/client.py", line 964, in send
    self.connect()
  File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 166, in connect
    conn = self._new_conn()
  File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 150, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f17428024e0>: Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/requests/adapters.py", line 440, in send
    timeout=timeout
  File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 639, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/lib/python3.6/site-packages/urllib3/util/retry.py", line 388, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=8086): Max retries exceeded with url: /write?db=telegraf&precision=h&rp=coarse (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f17428024e0>: Failed to establish a new connection: [Errno 111] Connection refused',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./influxdbretag.py", line 94, in <module>
    client.write_points(write_data, 'h', 'telegraf', 'coarse')
  File "/usr/lib/python3.6/site-packages/influxdb/client.py", line 468, in write_points
    tags=tags, protocol=protocol)
  File "/usr/lib/python3.6/site-packages/influxdb/client.py", line 532, in _write_points
    protocol=protocol
  File "/usr/lib/python3.6/site-packages/influxdb/client.py", line 312, in write
    headers=headers
  File "/usr/lib/python3.6/site-packages/influxdb/client.py", line 252, in request
    timeout=self._timeout
  File "/usr/lib/python3.6/site-packages/requests/sessions.py", line 508, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3.6/site-packages/requests/sessions.py", line 618, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3.6/site-packages/requests/adapters.py", line 508, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=8086): Max retries exceeded with url: /write?db=telegraf&precision=h&rp=coarse (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f17428024e0>: Failed to establish a new connection: [Errno 111] Connection refused',))
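Errno 111 ("Connection refused") partway through a long run means nothing was listening on 127.0.0.1:8086 at that moment, i.e. influxd itself stopped accepting connections (restarted, crashed, or was killed). One mitigation on the client side is to wrap each write in a retry loop with exponential backoff so brief outages don't kill the whole run. A minimal sketch; the function and parameter names are assumptions, not part of the original script:

```python
import time


def write_with_retry(write_fn, batch, retries=5, backoff=2.0):
    """Call write_fn(batch), retrying with exponential backoff on failure.

    write_fn would be something like:
        lambda b: client.write_points(b, 'h', 'telegraf', 'coarse')
    """
    for attempt in range(retries):
        try:
            return write_fn(batch)
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the original error
            time.sleep(backoff * (2 ** attempt))
```

This only papers over transient outages, of course; if the server is being OOM-killed or crashing under the write load, smaller batches or fixing the server are the real answer.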

I’m shocked at how much time I’ve wasted trying to perform what should be a simple operation: carrying forward the past 10 months of data we have. At this point I’m having to resort to what feel like ridiculous hacks simply to squash some series together with additional tags.

The VM has plenty of horsepower (8 vCPU Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz, 16GB of memory), and the DB has around 67,710 series (which will squash down to roughly 22k series).

I’m at a loss for how to proceed with this.