Exported the full db first !
influx_inspect export -compress="true" -database indicators -datadir /var/opt/influxdb/data -retention autogen -waldir /var/opt/influxdb/wal -out /var/opt/influxdb/backups/indicators.dmp
Now, this took super long for a 28 GB database ( size per du -sh * output ).
It seems it created a gzipped exact copy of about 26 GB ! Interesting !
Next I extracted the data for the relevant measurement – this was equally slow but acceptable !
time zcat indicators.dmp | grep "^BB3_MA" > bb3_ma.txt
I will update whether this file loads as is ( looks promising ! )
update:
I changed the measurement name to a _test one to see what was going on, using sed on the extracted file.
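Roughly, the sed looked like this ( BB3_MA / BB3_MA_test are just the names from my case; the pattern assumes the measurement name starts each line, followed by a comma or a space ):
sed 's/^BB3_MA\([, ]\)/BB3_MA_test\1/' bb3_ma.txt > bb3_ma_test.txt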
So I was able to load the data in batches of 4k rows ( 5k is the max allowed ).
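For the batching, something like split does the job ( file names here are just from my run; each chunk can then be written separately ):
split -l 4000 bb3_ma_test.txt bb3_ma_chunk_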
Interestingly, it created a _temp measurement along with it, and I believe it skipped the part of the data that was past the retention time of 1w.
I wish someone from Engineering could help verify the data in the temp measurements !
The best way to recover online ( without any server outage ) for a single measurement, which works for me now, is:
Get the measurement output as a CSV file using select *
Use a perl script to transform it into a line protocol file ( change any measurement name etc.; rough commands are sketched below )
( I have not tried inserting directly from a perl line protocol InfluxDB library yet )
Insert using "influx import" into the specific database as appropriate. ( fastest )
Be careful: as I understand it, if you are importing back into the same database and measurement, it may overwrite rows with the same timestamps.
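In case it helps, a rough sketch of those commands ( the measurement BB3_MA, the database indicators, a single value field with no tags, and the default ns epoch timestamps from the CLI are all assumptions from my setup; a measurement with tags needs a smarter perl transform ):
influx -database indicators -execute 'SELECT * FROM "BB3_MA"' -format csv > bb3_ma.csv
perl -F',' -lane 'next if $. == 1; print "BB3_MA_test value=$F[2] $F[1]"' bb3_ma.csv > bb3_ma_test.txt
( echo '# DML'; echo '# CONTEXT-DATABASE: indicators'; cat bb3_ma_test.txt ) > bb3_ma_import.txt
influx -import -path=bb3_ma_import.txt -precision=ns
The import goes through the normal write path, so the same-timestamp overwrite caveat above still applies.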