Influxdb-relay problem

Hi, we have recently been having problems with our influxdb-relay. Occasionally RSS and VMS memory usage increases, the influxdb-relay's buffer fills up, and we lose data as a consequence.

We have also seen some erroneous HTTP requests from clients.

Could this volume of requests be filling the buffer?

Influx-relay configuration:

```toml
name = "influx-http"
bind-addr = ""
output = [
    { name = "influx1", location = "", buffer-size-mb = 4000, max-batch-kb = 100, max-delay-interval = "5s" },
    { name = "influx2", location = "", buffer-size-mb = 4000, max-batch-kb = 100, max-delay-interval = "5s" },
]
```

Hello @Jorge_Valles_Sanz,
Can you please share information about your cardinality, ingest rate, and schema?

Hi @Anaisdg

Regarding cardinality: over the last 24 hours, for example, we have had 21105250591 unique strings on one of the nodes.
The ingest rate, judging by the tsm1_filestore (diskwrites) metric, is about 10 MB/s.

We have 146 databases, all with the same retention policies:

  • weekly: 7d duration, replication 1, shard duration 1d
  • monthly: 30d duration, replication 1, shard duration 1d
  • annual: 365d duration, replication 1, shard duration 1d

Most databases have continuous queries between retention policies.
Is there any other information I should share with you?
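
For reference, the retention-policy layout described above can be expressed in InfluxQL roughly as follows. This is only a sketch: the database name, measurement, and field names are hypothetical, not values from this thread; the durations match the list above.

```sql
-- Hypothetical database "mydb"; durations follow the policies listed above.
CREATE RETENTION POLICY "weekly"  ON "mydb" DURATION 7d   REPLICATION 1 SHARD DURATION 1d
CREATE RETENTION POLICY "monthly" ON "mydb" DURATION 30d  REPLICATION 1 SHARD DURATION 1d
CREATE RETENTION POLICY "annual"  ON "mydb" DURATION 365d REPLICATION 1 SHARD DURATION 1d

-- Example continuous query downsampling from the weekly policy into the
-- monthly one (measurement "cpu" and field "value" are placeholders).
CREATE CONTINUOUS QUERY "cq_weekly_to_monthly" ON "mydb"
BEGIN
  SELECT mean("value") AS "value"
  INTO "monthly"."cpu"
  FROM "weekly"."cpu"
  GROUP BY time(1h), *
END
```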


If you are using GitHub - influxdata/influxdb-relay (Service to replicate InfluxDB data for high availability), you might want to look into whether something like Telegraf would work for what you are doing. influxdb-relay has not been updated in a long time, and I don't think it is receiving any kind of ongoing support.
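
As a sketch of that suggestion: Telegraf can fan writes out to multiple InfluxDB backends by declaring several `[[outputs.influxdb]]` sections, with `inputs.influxdb_listener` accepting line-protocol writes like the relay does. The URLs, port, database name, and buffer limit below are placeholders, not values from this thread.

```toml
# telegraf.conf (fragment) -- accept InfluxDB /write requests and
# replicate them to two backends. All values here are placeholders.
[agent]
  metric_buffer_limit = 100000   # points buffered per output while a backend is down

[[inputs.influxdb_listener]]
  service_address = ":8186"      # clients point their writes here

[[outputs.influxdb]]
  urls = ["http://influx1:8086"]
  database = "mydb"

[[outputs.influxdb]]
  urls = ["http://influx2:8086"]
  database = "mydb"
```

Unlike influxdb-relay's `buffer-size-mb` (sized in megabytes), Telegraf's buffer is counted in metrics, so memory use under backend outages is bounded differently.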