Replication is slow relative to the line throughput

We have a DB in the cloud (a Docker container on an EC2 instance) to which a number of edge DBs replicate their data. At the edge, the normal load is 150 parameters per second, rising to 1500 during an event (which happens about once a day). Each replication stream targets its own dedicated bucket.
Unfortunately, each such “burst” opens a gap of 5-10 minutes between the edge and cloud copies. I also see somewhat odd numbers in the replication logs, e.g. 80 KB “remaining to be synced” that takes 30 minutes to eliminate. On the other hand, dividing the http_write_request_bytes metric by http_write_request_count gives an average request size of about 200 bytes…
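For clarity, here is the back-of-the-envelope arithmetic behind those numbers. The backlog and drain time are from my logs; the two counter values in the second calculation are hypothetical samples, chosen only to illustrate how the ~200-byte average falls out of the metrics:

```python
# Figures from the replication logs: 80 KB of backlog drained in ~30 minutes.
backlog_bytes = 80_000
drain_seconds = 30 * 60

effective_throughput = backlog_bytes / drain_seconds
print(f"effective sync throughput: {effective_throughput:.1f} B/s")  # ~44.4 B/s

# Average write-request size derived from the two counters
# (hypothetical sample values, not real readings):
http_write_request_bytes = 2_000_000
http_write_request_count = 10_000
avg_request = http_write_request_bytes / http_write_request_count
print(f"avg request size: {avg_request:.0f} bytes")  # 200 bytes
```

So the replicated stream is moving at roughly 44 B/s, i.e. a fraction of a single average-sized write request per second, which is what makes the 30-minute drain look so disproportionate.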

There are no logs or metrics on either side indicating any issues with HTTP requests or disk writes.

Is there a way to increase the buffer size or the frequency of the replication requests?

Thanks!