Delivering data from a local InfluxDB to another InfluxDB in the cloud

I'm trying to collect data from our remote measurement devices into an InfluxDB instance in my private cloud. The locations where the devices are installed have very limited Internet access that is regularly interrupted. I put a mini-PC next to each device, installed InfluxDB on it, and set up streaming of the metrics into it. Now I want to gather the data from all devices at the different locations into one database in the cloud. I wrote a small TICKscript that sends data from the local database to the cloud one, but the Internet connection is unstable and some of the data is lost.

Here is the relevant part of our configuration:
kapacitor.conf

...
[[influxdb]]
  enabled = true
  name = "default"
  default = true
  urls = ["http://influxdb.tick-stack.local:8086"]
  username = "username"
  password = "password"
  ...

[[influxdb]]
  enabled = true
  name = "InfluxdbInTheCloud"
  default = false
  urls = ["https://my.cloud.com"]
  username = "username"
  password = "password"
  ...
...

TICKscript

var name = 'local.to.remote'

// Read all points from the local database and retention policy
var stream_db = stream
    |from()
        .database('local_measurements')
        .retentionPolicy('day')

// Forward every point to the InfluxDB instance named "InfluxdbInTheCloud"
// in kapacitor.conf, creating the database/retention policy if needed
stream_db
    |influxDBOut()
        .cluster('InfluxdbInTheCloud')
        .database('all_measurements')
        .retentionPolicy('_90d')
        .create()
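
For completeness, I define and enable the task roughly like this (the task name and the .tick file name here are just placeholders for what I actually use):

kapacitor define local_to_remote \
    -type stream \
    -tick local_to_remote.tick \
    -dbrp local_measurements.day
kapacitor enable local_to_remote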

So my questions are:

1. Am I using Kapacitor correctly here, or does this somehow break its philosophy? Which tool should I use to deliver data from one InfluxDB to another InfluxDB in the cloud?
2. Does Kapacitor guarantee delivery of the data to the remote InfluxDB?
3. If it does, which settings are useful for this, e.g. retry count, buffer size, etc.? (Is tuning the buffering on influxDBOut, as sketched below, the intended approach?)
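
To make the last question concrete: if I read the Kapacitor docs correctly, the influxDBOut node has buffer and flushInterval properties, so I was wondering whether something like the following (the values are just guesses) is the intended way to make the output more robust:

stream_db
    |influxDBOut()
        .cluster('InfluxdbInTheCloud')
        .database('all_measurements')
        .retentionPolicy('_90d')
        .create()
        // buffer up to 10000 points before writing to the remote instance
        .buffer(10000)
        // flush the buffer at least every 10s even if it is not full
        .flushInterval(10s)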
Thank you for your kind help!