Influx replication not working - queue just backs up

Hello! I am starting to work with the data replication API, but I’m not having much luck. After creating the connection and the replication, the data just stays in the replication queue; it never gets replicated.

I’m currently writing from my PC to an Influx database hosted on VM 1 (10.102.11.86).
I’m trying to replicate data from VM 1 to VM 2 (10.102.11.85).

Here’s the command to create the connection:
influx remote create --host http://localhost:8086 --org MYORG --token --name secondary --remote-url http//10.102.11.95:8086 --remote-api-token --remote-org-id 77d13f6a08470322

output:
ID Name Org ID Remote URL Remote Org ID Allow Insecure TLS
09e44866f9c33000 secondary 3977f266669f66c7 http//10.102.11.95:8086 77d13f6a08470322 false

Here’s the command to create the replication:

influx replication create --host http://localhost:8086 --org MYORG --token --name primary_to_secondary_events_replication --remote-id 09e44866f9c33000 --local-bucket-id 5600814ea57ce942 --remote-bucket-id 7becfe4fcc63fd34

output:
ID Name Org ID Remote ID Local Bucket ID Remote Bucket ID Current Queue Bytes Max Queue Bytes Latest Status Code Drop Non-Retryable Data
09e44bb51f91d000 primary_to_secondary_events_replication 3977f266669f66c7 09e44866f9c33000 5600814ea57ce942 7becfe4fcc63fd34 0 67108860 0 false

However, the data just piles up in the queue and doesn’t replicate.

Here’s the output of the replication list command:
ID Name Org ID Remote ID Local Bucket ID Remote Bucket ID Current Queue Bytes Max Queue Bytes Latest Status Code Drop Non-Retryable Data
09e44bb51f91d000 primary_to_secondary_events_replication 3977f266669f66c7 09e44866f9c33000 5600814ea57ce942 7becfe4fcc63fd34 32519 67108860 0

The Current Queue Bytes value (32519 above) just keeps growing, and no data is replicated to the InfluxDB instance on VM 2.
I know it’s not a firewall issue because the firewalls are down on both servers. I can ping each machine from the other. I can also log into the Influx UI on both systems.

Any ideas why the replication isn’t working?

Hello @Jack999,
Replication is only an OSS → Cloud feature, not an OSS → OSS feature.
You can use the to() function in a task to replicate some data. It offers an easy way to write data from an edge device to the Cloud, but it comes with several limitations:

  • Sent over HTTP
  • No built-in functionality for failure handling
  • No built-in functionality for batching, retries, or parallelization

The to() function should only be used to consolidate data from OSS to Cloud if you meet the following conditions:

  • You intend to downsample your data before writing it to Cloud, to limit the size of each request.
  • You have a small number of devices or are writing a relatively small amount of data this way. The to() function isn’t suitable for large workloads: the data volume can cause network failures, and there is no built-in failure handling.
  • You aren’t trying to work around a large write volume by micro-batching, which generates a very high request count.

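As a rough sketch, a downsampling task that forwards aggregated data to Cloud with to() could look like the following. The bucket names, measurement filter, Cloud URL, and token are placeholders, not values from this thread:

```flux
option task = {name: "downsample_to_cloud", every: 1h}

from(bucket: "events")
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "sensor")
    // Downsample first so the request to Cloud stays small
    |> aggregateWindow(every: 5m, fn: mean)
    |> to(
        bucket: "events_downsampled",
        host: "https://us-east-1-1.aws.cloud2.influxdata.com",
        org: "MYORG",
        token: "<CLOUD_API_TOKEN>",
    )
```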
If you want to handle larger workloads, another option for writing IoT data from OSS to InfluxDB Cloud is the mqtt.to() function, which writes the data to an MQTT broker first.
These are exactly the limitations the OSS replication feature was built to solve, but again, only for OSS → Cloud. You can, however, sign up for a free Cloud trial.
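For illustration, an mqtt.to() task might look roughly like this; the broker address, topic, and bucket name are placeholder assumptions:

```flux
import "experimental/mqtt"

option task = {name: "events_to_mqtt", every: 1m}

from(bucket: "events")
    |> range(start: -task.every)
    // Publish each record to the broker; a downstream consumer
    // (e.g. Telegraf's mqtt_consumer) can then write it onward
    |> mqtt.to(
        broker: "tcp://broker.example.com:1883",
        topic: "influx/events",
        clientid: "influx-oss-task",
        qos: 0,
    )
```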

Ana - thanks for the clarification. That will save me a lot of fruitless debugging work.

As we are looking to replicate the full data set to another OSS instance, the to() function may not be the solution.

The mqtt.to() function looks promising. Is this viable for an OSS → OSS scenario?

Or does the consumer have to be Influx Cloud?

Unfortunately, Influx Cloud isn’t viable for most of our customers due to corporate firewall issues and other reasons.

Barring that, is it possible for us to do a “roll-your-own” solution, where one client periodically reads the Influx DB and writes to a topic, and another client periodically reads that topic and writes to the target DB?

Obviously, we’d like to avoid this scenario if at all possible, but is it do-able in your opinion?
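For reference, the simplest “roll-your-own” shape I can see, skipping the intermediate topic entirely, would be a plain Flux task on VM 1 that writes straight to VM 2. This assumes to()’s host/org/token parameters accept another OSS endpoint (an assumption on my part); the bucket name and token below are placeholders:

```flux
option task = {name: "oss_to_oss_copy", every: 1m}

from(bucket: "events")
    |> range(start: -task.every)
    // Write the last interval of data straight to the second OSS instance
    |> to(
        bucket: "events",
        host: "http://10.102.11.85:8086",
        org: "MYORG",
        token: "<VM2_API_TOKEN>",
    )
```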