"Shard XX removed during backup"

Hi, I am running InfluxDB 2.1 on a Raspberry Pi 4 in Docker and am trying to create a backup via the command line.
The CLI reports: Influx CLI 2.2.1 (git: 31ac783) build_date: 2021-11-09T21:24:22Z
The command is:

docker exec influx influx backup \
[path here] \
-t [token here] 

The backup then starts, but 18 shards are reported as “removed during backup” and are missing from the resulting backup:

2022/01/02 18:47:09 INFO: Backing up TSM for shard 1
2022/01/02 18:47:59 INFO: Backing up TSM for shard 3
2022/01/02 18:49:17 INFO: Backing up TSM for shard 10
2022/01/02 18:50:25 INFO: Backing up TSM for shard 19
2022/01/02 18:52:18 INFO: Backing up TSM for shard 28
2022/01/02 18:54:09 INFO: Backing up TSM for shard 37
2022/01/02 18:55:47 INFO: Backing up TSM for shard 46
2022/01/02 18:56:41 INFO: Backing up TSM for shard 26
2022/01/02 18:56:41 WARN: Shard 26 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 27
2022/01/02 18:56:41 WARN: Shard 27 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 30
2022/01/02 18:56:41 WARN: Shard 30 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 31
2022/01/02 18:56:41 WARN: Shard 31 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 32
2022/01/02 18:56:41 WARN: Shard 32 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 33
2022/01/02 18:56:41 WARN: Shard 33 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 34
2022/01/02 18:56:41 WARN: Shard 34 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 35
2022/01/02 18:56:41 WARN: Shard 35 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 36
2022/01/02 18:56:41 WARN: Shard 36 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 39
2022/01/02 18:56:41 WARN: Shard 39 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 40
2022/01/02 18:56:41 WARN: Shard 40 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 41
2022/01/02 18:56:41 WARN: Shard 41 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 42
2022/01/02 18:56:41 WARN: Shard 42 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 43
2022/01/02 18:56:41 WARN: Shard 43 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 44
2022/01/02 18:56:41 WARN: Shard 44 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 45
2022/01/02 18:56:41 WARN: Shard 45 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 48
2022/01/02 18:56:41 INFO: Backing up TSM for shard 49
2022/01/02 18:56:41 WARN: Shard 49 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 50
2022/01/02 18:56:41 INFO: Backing up TSM for shard 51
2022/01/02 18:56:41 WARN: Shard 51 removed during backup
2022/01/02 18:56:41 INFO: Backing up TSM for shard 2
2022/01/02 18:56:41 INFO: Backing up TSM for shard 4
2022/01/02 18:56:42 INFO: Backing up TSM for shard 11
2022/01/02 18:56:42 INFO: Backing up TSM for shard 20
2022/01/02 18:56:42 INFO: Backing up TSM for shard 29
2022/01/02 18:56:42 INFO: Backing up TSM for shard 38
2022/01/02 18:56:42 INFO: Backing up TSM for shard 47

Is that intended behaviour? How can I fix it? I have tried this over several days, after rebooting, and with the other Docker containers that access the database stopped; it is always the (same?) 18 shards that fail.
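To check whether it really is the same shard IDs on every run, I have started capturing the full backup output and pulling out the warned shard IDs, roughly like this (just a sketch; BACKUP_DIR, INFLUX_TOKEN and backup.log are placeholders for my real path, token and log file):

# Run the backup, keep the full log, then list the shard IDs that were
# reported as "removed during backup" so two runs can be compared.
BACKUP_DIR=/backups/$(date +%Y-%m-%d)    # example path inside the container
docker exec influx influx backup "$BACKUP_DIR" -t "$INFLUX_TOKEN" 2>&1 | tee backup.log
grep 'removed during backup' backup.log | awk '{print $5}' | sort -n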

Any help is appreciated. Thank you!
Swantu

If anybody has a clue about this, I would still like to hear it. I now have 78 shards and still get the same error :frowning:

Restoring the data failed as well :frowning: Here is the influxd log from the restore attempt:

ts=2022-01-23T19:42:20.649119Z lvl=info msg="Resources opened" log_id=0ZDuJhHl000 service=kvstore-bolt service="restored bolt" path=/var/lib/influxdb2/influxd.bolt.tmp
ts=2022-01-23T19:42:21.118195Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.121517Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=33
ts=2022-01-23T19:42:21.121633Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=33
ts=2022-01-23T19:42:21.143420Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.145074Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=34
ts=2022-01-23T19:42:21.145134Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=34
ts=2022-01-23T19:42:21.163222Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.164862Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=35
ts=2022-01-23T19:42:21.164922Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=35
ts=2022-01-23T19:42:21.181618Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.183258Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=36
ts=2022-01-23T19:42:21.183297Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=36
ts=2022-01-23T19:42:21.199223Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.200759Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=39
ts=2022-01-23T19:42:21.200883Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=39
ts=2022-01-23T19:42:21.216905Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.218461Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=40
ts=2022-01-23T19:42:21.218511Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=40
ts=2022-01-23T19:42:21.235088Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.236835Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=41
ts=2022-01-23T19:42:21.236880Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=41
ts=2022-01-23T19:42:21.253986Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.255527Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=42
ts=2022-01-23T19:42:21.255571Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=42
ts=2022-01-23T19:42:21.271785Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.273493Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=1
ts=2022-01-23T19:42:21.273540Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=1
ts=2022-01-23T19:42:21.291660Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.293522Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=3
ts=2022-01-23T19:42:21.293609Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=3
ts=2022-01-23T19:42:21.314151Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.315642Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=10
ts=2022-01-23T19:42:21.315685Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=10
ts=2022-01-23T19:42:21.338008Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.339699Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=19
ts=2022-01-23T19:42:21.339746Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=19
ts=2022-01-23T19:42:21.362749Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.364523Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=28
ts=2022-01-23T19:42:21.364569Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=28
ts=2022-01-23T19:42:21.396232Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.400473Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=37
ts=2022-01-23T19:42:21.400666Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=37
ts=2022-01-23T19:42:21.417995Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.419587Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=2
ts=2022-01-23T19:42:21.419629Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=2
ts=2022-01-23T19:42:21.435326Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.436835Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=4
ts=2022-01-23T19:42:21.436879Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=4
ts=2022-01-23T19:42:21.473291Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.475399Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=11
ts=2022-01-23T19:42:21.475771Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=11
ts=2022-01-23T19:42:21.500789Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.502723Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=20
ts=2022-01-23T19:42:21.503062Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=20
ts=2022-01-23T19:42:21.519815Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.521557Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=29
ts=2022-01-23T19:42:21.521726Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=29
ts=2022-01-23T19:42:21.537723Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:21.539479Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=38
ts=2022-01-23T19:42:21.539522Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=38
ts=2022-01-23T19:42:21.549683Z lvl=info msg="Resources opened" log_id=0ZDuJhHl000 service=sqlite service="restored sqlite" path=/tmp/1319518480/influxd.sqlite
ts=2022-01-23T19:42:21.551836Z lvl=warn msg="internal error not returned to client" log_id=0ZDuJhHl000 handler=error_logger error="rename /tmp/1319518480/influxd.sqlite /var/lib/influxdb2/influxd.sqlite: invalid cross-device link"
ts=2022-01-23T19:42:23.329827Z lvl=info msg="index opened with 8 partitions" log_id=0ZDuJhHl000 service=storage-engine index=tsi
ts=2022-01-23T19:42:23.331397Z lvl=info msg="Reindexing TSM data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=43
ts=2022-01-23T19:42:23.331438Z lvl=info msg="Reindexing WAL data" log_id=0ZDuJhHl000 service=storage-engine engine=tsm1 db_shard_id=43
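The restore seems to fail at the "invalid cross-device link" line: the restored SQLite file is written to /tmp inside the container and then renamed into /var/lib/influxdb2, which is a mounted volume on a different filesystem, and rename() cannot cross filesystems. What I want to try next is pointing the temp directory at the data volume. This is only a sketch: it assumes influxd honours the standard TMPDIR variable (Go's os.TempDir does on Linux), and the volume name "influxdb2" and image tag "influxdb:2.1" are just examples from my setup.

# Create a tmp dir inside the data volume first (throwaway container), then
# recreate the influx container with TMPDIR pointing at it, so the restore's
# final rename does not cross a mount boundary.
docker run --rm -v influxdb2:/var/lib/influxdb2 alpine mkdir -p /var/lib/influxdb2/tmp
docker run -d --name influx \
  -p 8086:8086 \
  -v influxdb2:/var/lib/influxdb2 \
  -e TMPDIR=/var/lib/influxdb2/tmp \
  influxdb:2.1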

@Swantu Did you figure out how to resolve this issue? I ran into the same thing, see this thread: Shard X removed during backup - #5 by cguy

I got the same today (using 2.1.1). Is it possible that the shards being removed are not supposed to be there in the first place? I mean, that those shards should already have been deleted because, for example, they fall outside the retention policy? Has anyone checked this?
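One way to check might be to compare the bucket retention periods with the shard directories that are actually on disk, something like this (just a sketch; it assumes the default /var/lib/influxdb2 engine layout, a container named influx, and INFLUX_TOKEN as a placeholder for an admin token):

# Buckets and their retention periods
docker exec influx influx bucket list -t "$INFLUX_TOKEN"
# Shard directories, laid out as engine/data/<bucket-id>/<retention-policy>/<shard-id>
docker exec influx find /var/lib/influxdb2/engine/data -mindepth 3 -maxdepth 3 -type d

A shard ID that shows up in the "removed during backup" warnings but has no directory here, or that belongs to a bucket whose retention period it has already aged out of, would support that theory.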