I have a large dataset that I want to transform and save, but it ends up using all the memory and crashing. Is there an optimal way to do something like this in Flux?
Two months of 1 Hz data on an AWS r5.xlarge instance, transferred in 1-day chunks. I ended up exporting it to line protocol files, replacing the bucket name, and importing it.
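For anyone hitting the same memory ceiling, here is a minimal sketch of that chunked approach, driven from the shell with the influx CLI. The bucket names, org, date range, and the `map()` transform are all placeholders, not tintin's actual setup; the point is just that each `to()` write is bounded to one day of data:

```bash
#!/usr/bin/env bash
# Copy two months of data one day at a time, so no single Flux
# run has to hold more than one day in memory.
# Placeholders: SRC_BUCKET, DEST_BUCKET, my-org, $INFLUX_TOKEN,
# and the map() transform.
set -euo pipefail

start_epoch=$(date -u -d "2023-01-01T00:00:00Z" +%s)
days=61   # roughly two months

for ((i = 0; i < days; i++)); do
  from=$(date -u -d "@$((start_epoch + i * 86400))" +%Y-%m-%dT%H:%M:%SZ)
  stop=$(date -u -d "@$((start_epoch + (i + 1) * 86400))" +%Y-%m-%dT%H:%M:%SZ)

  influx query --org my-org --token "${INFLUX_TOKEN}" "
    from(bucket: \"SRC_BUCKET\")
      |> range(start: ${from}, stop: ${stop})
      |> map(fn: (r) => ({ r with _value: r._value * 2.0 }))  // example transform
      |> to(bucket: \"DEST_BUCKET\")
  "
done
```

Chunking by day keeps the working set bounded; if a single day of data is still too large, the same loop works with a smaller window.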
Hi tintin,
How did you import the line protocol file?
I tried using the `influx write` command, but it was rejected with the following error: `Error: failed to write data: 401 Unauthorized: unauthorized access`.
Any suggestions?
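For reference, `influx write` needs an org and an API token with write permission on the target bucket, and a 401 typically means the token is missing, expired, or not scoped to that bucket. A sketch of a typical invocation, with placeholder org, bucket, file, and token values:

```bash
# Import a line protocol file into InfluxDB 2.x.
# Placeholders: my-org, my-bucket, data.lp, and $INFLUX_TOKEN
# (an all-access or bucket-scoped write token).
influx write \
  --org my-org \
  --bucket my-bucket \
  --token "${INFLUX_TOKEN}" \
  --file data.lp
```

If the token and bucket look right, it is worth checking that the CLI is pointed at the intended host and that the active `influx config` profile is not supplying stale credentials.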