I’ve got a problem with influxd backup and restore. I don’t know whether it’s documented behaviour, a bug, or my fault.
The scenario looks like this.
Setup:
Server “A” (v1.2.0): has databases “dbA” and “_internal”
Server “B” (v1.1.0): has databases “dbB” and some others
"dbA" != “dbB”
On server “A” I create two backup filesets (commands sketched below):
a remote backup of database “dbB” from server “B”
a backup of the local database “dbA”
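Roughly, the commands look like this; the target directories and the hostname serverB are placeholders I’ve put in for illustration, not my exact setup:

    # remote backup of "dbB" over the legacy backup port (8088) of server B
    influxd backup -database dbB -host serverB:8088 /tmp/backup_dbB

    # local backup of "dbA" on server A
    influxd backup -database dbA /tmp/backup_dbA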
If I restore database “dbB” on server “A”, its metadata overwrites the existing /var/lib/influxdb/meta/meta.db, so I no longer see database “dbA” in the SHOW DATABASES output (the tsm data files for “dbA” are not lost, just the meta info).
Instead I see a list of all databases present on server “B”.
If I now restore “dbA” from its backup I get the reverse situation: database “dbA” is shown by SHOW DATABASES, but there is no sign of “dbB”; again, meta.db is replaced completely.
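For reference, the restore invocations are along these lines (default directories assumed; influxd has to be stopped while the legacy restore runs):

    # restoring the metastore from the dbB fileset replaces meta.db wholesale
    influxd restore -metadir /var/lib/influxdb/meta /tmp/backup_dbB

    # restoring the data files for that one database
    influxd restore -database dbB -datadir /var/lib/influxdb/data /tmp/backup_dbB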
(As an experiment I tried to recover the missing metadata by re-creating the databases manually, in the hope that InfluxDB would then “find” the existing tsm files, but it doesn’t work that way.)
Doing a full backup of the metadata is not a problem, but the restore must be selective.
The expected behaviour of the restore command would be to extract from the backup fileset only the metadata for the database being restored and MERGE it with the existing /var/lib/influxdb/meta/meta.db.
I also ran into this when migrating to a new server. I was using it as a staging area for merging some databases, and when I came to refresh the data from the original server I lost the merged databases from the meta store.
We are struggling with the same issue here. We have two use cases:
Case 1: For analysis purposes we receive a .tsm file from a customer site that we want to “import” locally into our system.
Every customer has their own InfluxDB instance running with one database named DbCustomerxx.
The central system has multiple databases (DbCustomer01, 02, 03, …) to which no data is written directly, but occasionally we want to add some “external” data to our central system.
Case 2: After our database was corrupted, we had to start over again from scratch. Now we want to add the old data from the recovered .tsm files to the new database.
We experimented with the INSERT command, but it is very time- and memory-consuming.
The same problem applies to the influx_inspect export command: it works flawlessly for small amounts of data, but with millions of data points it again consumes a lot of time and disk space.
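In case it helps to compare, this is roughly how we export and re-import; the directories, database name, and output path are examples, not our real ones:

    # export the recovered TSM/WAL data to line protocol (gzipped)
    influx_inspect export \
      -datadir /var/lib/influxdb/data \
      -waldir /var/lib/influxdb/wal \
      -database DbCustomer01 \
      -compress \
      -out /tmp/DbCustomer01_export.gz

    # load the export into the central instance
    influx -import -path=/tmp/DbCustomer01_export.gz -compressed -precision=ns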
Any suggestions on how we can better manage these things?