InfluxDB Error - http: TLS handshake error from x.x.x.x: EOF

Hi,
I keep getting the following errors from the InfluxDB Docker container:

2021/01/21 04:33:06 http: TLS handshake error from 172.22.0.14:47574: EOF
2021/01/21 04:33:37 http: TLS handshake error from 172.22.0.14:47884: EOF
2021/01/21 04:34:18 http: TLS handshake error from 172.22.0.14:48272: EOF
2021/01/21 04:35:32 http: TLS handshake error from 172.22.0.14:48922: EOF
2021/01/21 04:36:38 http: TLS handshake error from 172.22.0.14:49566: EOF
2021/01/21 04:37:53 http: TLS handshake error from 172.22.0.14:50240: EOF
2021/01/21 04:39:00 http: TLS handshake error from 172.22.0.14:50740: EOF
2021/01/21 04:40:05 http: TLS handshake error from 172.22.0.14:51336: EOF

I’d like to know the following:

  1. What causes these errors?
  2. How can I prevent them?

Some points to note:

  1. InfluxDB is behind an nginx proxy.
  2. The connections to InfluxDB originate from devices on the internet.

Hello @prashanthjbabu ,
Can you please share your Docker Compose file or configuration?
Thank you

Hello @Anaisdg ,
Thanks for your reply. Here’s the config:

  influxdb:
    image: ${DOCKER_REGISTRY}/influxdb:${DOCKER_LABEL}
    extends:
      file: common.yml
      service: env-base
    restart: always
    volumes:
      - influxdata:/var/lib/influxdb
      - ${INTERNAL_CERT}:/etc/ssl/certs/server.crt
      - ${INTERNAL_KEY}:/etc/ssl/private/server.key
    ports:
      - "${TMDB_PORT}:8086"
    environment:
      - INFLUXDB_HTTP_HTTPS_CERTIFICATE=/etc/ssl/certs/server.crt
      - INFLUXDB_HTTP_HTTPS_PRIVATE_KEY=/etc/ssl/private/server.key

Is there any particular config you’re looking for?

Just to add on: things are working fine behind the nginx proxy, and I’m able to access everything. It’s just that once in a while these errors pop up.

@prashanthjbabu,
Ah okay. Hmm, I’m not sure then. Let me share this with the team. Heads up: it might take a little while for me or someone to get back to you. Thanks for your patience.

@Anaisdg Sure, will wait for your response.

Hi @prashanthjbabu, do you see any client-side failures that correspond to the TLS errors in the server? My quick initial search shows that the handshake errors could be caused by a wide range of problems, so it’d be good to narrow down the possibilities.

I’d also say: when I’ve deployed setups like yours in the past, I’ve typically let the nginx proxy handle everything related to TLS, so the server running behind it can use plain HTTP and avoid all the edge-cases. If your security requirements allow that kind of setup, you might want to explore the possibility. I believe the relevant nginx documentation is here.
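
As a very rough sketch of what that could look like on the nginx side (the server name, upstream address, and cert paths below are placeholders, not taken from your setup), TLS would terminate at nginx and InfluxDB would listen on plain HTTP behind it:

# Sketch only: terminate TLS at nginx and proxy plain HTTP to InfluxDB.
server {
    listen 443 ssl;
    server_name influx.example.com;

    ssl_certificate     /etc/ssl/certs/server.crt;
    ssl_certificate_key /etc/ssl/private/server.key;

    location / {
        # InfluxDB listening on plain HTTP inside the Docker network
        proxy_pass http://influxdb:8086;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}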

Hi @dan-moran

Thanks for your response.

Unfortunately, there are too many clients and they’re geographically distributed. Also, these errors occur on very few clients, so it’s hard to pinpoint which client is failing and when.

I could probably try plain HTTP behind nginx instead of HTTPS.
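
Something along these lines, I guess, based on the compose snippet above, with the cert mounts and the HTTPS environment variables dropped so InfluxDB serves plain HTTP and nginx handles TLS in front of it (just a sketch, not tested):

  influxdb:
    image: ${DOCKER_REGISTRY}/influxdb:${DOCKER_LABEL}
    extends:
      file: common.yml
      service: env-base
    restart: always
    volumes:
      - influxdata:/var/lib/influxdb
    ports:
      - "${TMDB_PORT}:8086"
    # no INFLUXDB_HTTP_HTTPS_* variables, so the HTTP endpoint stays plain HTTP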

Do you think these errors are client-side or server-side?

@prashanthjbabu it could be either (or both):

  • One result suggests there could be a few clients out there using unsupported protocols.
  • Another suggests that some other piece of your infra (maybe nginx, or some part of the platform where you’re running the system) might be killing long-lived TLS connections (see the sketch at the end of this post).

If your InfluxDB is open to the internet, I wouldn’t be surprised to see some misbehaving clients that can’t properly complete the TLS handshake the server requires.
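
On the second point, purely as an illustration (and assuming nginx is passing the TLS connections straight through via the stream module, which would match the container-network source address in your logs), these are the kinds of timeouts that can silently close long-lived or idle connections:

stream {
    server {
        listen 8086;
        proxy_connect_timeout 5s;
        proxy_timeout 10m;          # idle connections are closed after this
        proxy_pass influxdb:8086;   # placeholder upstream name
    }
}

If you’re instead using proxy_pass at the http level, the analogous settings are proxy_read_timeout and proxy_send_timeout.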

@dan-moran thanks for your response! If that’s the case, wouldn’t it make more sense to lower such errors to WARN in InfluxDB? Currently I’ve set the log level to ERROR and still see these messages periodically. It would be better if these moved to WARN, so that the ERROR level is reserved for more critical errors.

@prashanthjbabu the logs are being generated by the standard library, so I’m not sure how much flexibility we’ll have to modify the behavior. I created an issue to start the process of replacing the server’s default error logger with the rest of our logging infrastructure.
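
For context, a minimal sketch of the mechanism (not InfluxDB’s actual code): Go’s net/http server reports TLS handshake failures through its ErrorLog field, and replacing that logger is what would let us route or filter these messages. The cert paths here just reuse the ones from your compose file.

package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	// net/http writes "http: TLS handshake error from ..." messages through
	// Server.ErrorLog; pointing that field at a custom logger redirects them
	// away from the default standard-library log output.
	srv := &http.Server{
		Addr:     ":8086",
		ErrorLog: log.New(os.Stdout, "http-server ", log.LstdFlags),
	}
	log.Fatal(srv.ListenAndServeTLS("/etc/ssl/certs/server.crt", "/etc/ssl/private/server.key"))
}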

@dan-moran I understand. Thanks!

I have the same issue when running from a Docker container; I think it might be related to the default scraper. I am unable to scrape metrics over an HTTPS connection even though the cert is signed and shows as valid when accessed from a browser.

msg="http: TLS handshake error from 172.19.3.4:51466: EOF" log_id=0VCQY49l000 service=http
msg="http: TLS handshake error from 172.19.3.35:21999: remote error: tls: bad certificate" log_id=0VCQY49l000 service=http
msg="Unable to gather" log_id=0VCQY49l000 error="Get \"https://xxxx:8086/metrics\": x509: certificate signed by unknown authority"