Port 8086 HTTP connection refused

I’m running InfluxDB 1.4.2 on Ubuntu 18.04. Everything was going well until I started injecting a heavy data stream via NiFi and saw weird things happen in the InfluxDB REPL. After that, I couldn’t log in to the REPL anymore. I found some posts online about port 8086 and “connection refused”, but they didn’t help me. Can you help me solve this problem? Here is some more information:

Hi @ddebeul, welcome,

Is it possible that enabled is set to false in the [http] section of /etc/influxdb/influxdb.conf?

[http]
enabled = false

It should be set to true …

[http]
enabled = true
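
If it’s already set to true, it’s worth confirming that the HTTP service is actually up and listening. This is just a quick sketch, assuming the stock Ubuntu package (systemd service name influxdb) and the default port 8086:

# Is the service running?
sudo systemctl status influxdb

# Is anything listening on port 8086?
sudo ss -ltnp | grep 8086

# /ping returns 204 No Content when the HTTP endpoint is healthy
curl -i http://localhost:8086/ping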

Thanks MarcV, it’s set to true.

What kind of weird things?

Based on your screenshot it looks like Influx failed under the heavy load and hit an OOM, then restarted and is now processing and opening your data. Until that is finished you won’t be able to connect to the influx CLI.
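
If you want to confirm it really was an OOM and follow the recovery, the kernel log and the service journal are the places to look. A rough sketch, assuming the standard Ubuntu/systemd packaging (service name influxdb):

# Did the kernel OOM killer take influxd down?
dmesg -T | grep -iE "out of memory|killed process"

# Follow InfluxDB's startup until it has finished opening its shards
journalctl -u influxdb -f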

Weird things: I couldn’t access the REPL anymore, a lot of output passed through the REPL, and only after at least 5 minutes could I get back to the console; the REPL had ejected me.

Yes, I don’t have access to the REPL.

I inadvertently reproduced my problem with InfluxDB 1.7.5 on a VM (Ubuntu 18.04), using Apache NiFi to inject the data into Influx. This is the message I got from the VM:

I had an error with NiFi:


I don’t know if the error is the consequence or the cause of the InfluxDB OOM.

Hi @ddebeul

Sorry for the delay. That first message with the OOM is the issue as far as I can see. Influx can’t cope with the amount of data you’re trying to write and runs out of memory. When this happens the service will restart, but you will need to wait for it to go through the whole startup process, which can take a while. Once Influx has caught up you should be able to connect to the CLI as normal. If Influx carries on trying to insert the data once back online, it will only OOM again.

I think the nifi errors are related to the OOM. If the influx service is still loading up then you won’t be able to send data to it.

I’d suggest:

  1. Switching the index version to TSI1 - this should alleviate some of the memory issues (if you’re using an SSD). See the config sketch after this list.
  2. Breaking down the data you want to import into smaller batches. There is an Influx benchmarking tool that could help, InfluxDB INCH - it might be worth testing inserting your data with that to get a good idea of the maximum you could send into your Influx instance. A batched-write example is below.
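
For point 1, the index version is set in the [data] section of /etc/influxdb/influxdb.conf. A rough sketch with the default Ubuntu paths - existing shards also need converting with influx_inspect buildtsi while the service is stopped, so check the docs for your exact version first:

[data]
  index-version = "tsi1"

# Convert existing shards while influxd is stopped (paths assume a default install)
sudo systemctl stop influxdb
sudo -u influxdb influx_inspect buildtsi -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal
sudo systemctl start influxdb

For point 2, the idea is simply to keep each request to the /write endpoint down to a few thousand points of line protocol instead of one huge payload. A hypothetical example against a database called mydb:

# batch.txt holds a few thousand lines of line protocol, one point per line
curl -i -XPOST "http://localhost:8086/write?db=mydb" --data-binary @batch.txt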

Sorry if you’ve already read these, but this should help
Schema and Design
Series Cardinality

Hope that helps in some way.


Thanks for the information. It took so long that I reinstalled InfluxDB from scratch, as it is a test environment. I can’t imagine this situation in a production environment.

The TICK stack works in production, but you need to scale it to your needs. You’ll need to size your hardware to suit the amount of data you’re sending and figure out what you want to keep as a tag or field.
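
To illustrate the tag/field split (the measurement and names here are made up): tags are indexed and every distinct tag combination creates a new series, while fields just hold the values. In line protocol that looks like:

# measurement,tag set   field set   timestamp
cpu,host=server01,region=eu-west usage_user=0.64,usage_system=0.12 1556813561098000000

High-cardinality values (unique IDs, for example) are usually better off as fields so they don’t blow up the series count.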

I think with the recommended sizing for large workloads (more than 750,000 writes per second) you’re looking at 8 cores and 32 GB of RAM. Beyond that you’re probably looking at an enterprise solution with multiple nodes.