I have an instance of InfluxDB 2.0 running in a Docker container, receiving data from a Node.js app running in a separate Docker container and writing to the Influx instance through the npm influx client. After about an hour of writing metrics to Influx, I start receiving this message fairly frequently:
We are running into a similar error with the JS client. Our environment is a Node.js 12 Google Cloud function called from a mobile app, and we are using Influx 2 Cloud. This causes us to drop data so we are trying to determine what we’re doing to cause the request timeout.
The error occurs more frequently as our request volume increases, so I wonder whether it may be related to how our InfluxDB Cloud instance (hosted on GCP) is configured.
@diglez @mhall119 do you have any recommendations for preventing these timeouts, or for where to look when troubleshooting this type of read or write timeout with the JavaScript client?
Below is our error log:
Unhandled error y [RequestTimedOutError]: Request timed out
at ClientRequest.<anonymous> (/workspace/node_modules/@influxdata/influxdb-client/dist/index.js:16:9601)
at ClientRequest.emit (events.js:314:20)
at ClientRequest.EventEmitter.emit (domain.js:506:15)
at TLSSocket.emitRequestTimeout (_http_client.js:715:9)
at Object.onceWrapper (events.js:420:28)
at TLSSocket.emit (events.js:326:22)
at TLSSocket.EventEmitter.emit (domain.js:506:15)
at TLSSocket.Socket._onTimeout (net.js:483:8)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7)
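In the meantime we are considering wrapping our writes in a small retry helper so a transient timeout doesn't drop data outright. This is only a sketch of the idea; `withRetry` and the stand-in write function below are ours, not part of the InfluxDB client:

```javascript
// Sketch: generic retry-with-exponential-backoff wrapper for flaky writes.
// The async function passed in stands in for whatever client call times out.
async function withRetry(fn, {retries = 3, initialDelayMs = 300} = {}) {
  let delay = initialDelayMs
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (attempt >= retries) throw err // out of attempts, surface the error
      await new Promise(resolve => setTimeout(resolve, delay))
      delay *= 2 // double the wait between attempts
    }
  }
}

// Usage with a stand-in writer that fails twice, then succeeds:
let calls = 0
withRetry(async () => {
  calls++
  if (calls < 3) throw new Error('Request timed out')
  return 'ok'
}, {initialDelayMs: 10}).then(result => console.log(result, calls)) // prints: ok 3
```

This obviously doesn't fix the underlying timeout, but it would at least smooth over the intermittent failures.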
Hey there - I’m seeing the same trace, but I might have found a solution. I’ve got an AWS Lambda running influx-client-js. My success rate is around 95%, at about 60 executions a minute. I’ve been trying to track down the remaining 5%, and as near as I can tell other people are hitting the same issue, like yourself.
I’m getting a RequestTimedOutError, but the service is generally available and working well; I think under load it just can’t respond in time. My Lambda timeout is 15 seconds, so there’s plenty of room to run. Setting an option called ‘timeout’ didn’t seem to have an effect, but changing the value below took me to 100%. Most requests now complete in a few ms, with the max spiking up to 100-300 ms.
Root problem: as near as I can tell, passing a timeout in the options doesn’t take effect; it looks like it’s hardcoded to 30 * 1000, so I just changed this value:
class Pool {
  /**
   * Creates a new Pool instance.
   * @param {IPoolOptions} options
   */
  constructor(options) {
    this._options = Object.assign({
      backoff: new exponential_1.ExponentialBackoff({
        initial: 300,
        max: 10 * 1000,
        random: 1,
      }),
      maxRetries: 2,
      requestTimeout: 30 * 1000,
    }, options);
    this._index = 0;
    this._hostsAvailable = new Set();
    this._hostsDisabled = new Set();
    this._timeout = this._options.requestTimeout;
  }
}
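That said, looking at that constructor, `Object.assign({ ...defaults }, options)` applies the caller's `options` last, so a `requestTimeout` passed through the pool options should in principle override the 30 * 1000 default rather than needing a dist edit. A minimal stand-in demonstrating the override behavior (this `Pool` is a simplified sketch, not the library's class):

```javascript
// Simplified stand-in for the library's Pool, keeping only the
// Object.assign pattern from the constructor above.
class Pool {
  constructor(options) {
    // Defaults first, caller's options last: caller values win.
    this._options = Object.assign(
      {maxRetries: 2, requestTimeout: 30 * 1000},
      options
    )
    this._timeout = this._options.requestTimeout
  }
}

console.log(new Pool({})._timeout)                       // prints: 30000
console.log(new Pool({requestTimeout: 60 * 1000})._timeout) // prints: 60000
```

So it may be worth checking whether the ‘timeout’ option you set is actually being forwarded to the pool as `requestTimeout`; if it isn’t, that would explain why setting it had no effect.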