
I have duplicity running daily incremental backups to S3. About 37 GiB.

For the first month or so everything went well; it finished in about an hour. Then it started taking far too long. Right now, as I type this, the daily backup that started 7 hours ago is still running.

I am running two commands, first the backup and then the cleanup:

duplicity --full-if-older-than 1M LOCAL.SOURCE S3.DEST --volsize 666 --verbosity 8
duplicity remove-older-than 2M S3.DEST
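
For context, the question does not show how credentials reach duplicity; a typical script exports them before the two calls above, roughly as in this sketch (the use of PASSPHRASE and the AWS_* variables is an assumption, not something stated in the question):

export PASSPHRASE="..."              # passphrase duplicity uses for GPG encryption (assumed)
export AWS_ACCESS_KEY_ID="..."       # S3 credentials read by duplicity's S3/boto backend (assumed)
export AWS_SECRET_ACCESS_KEY="..."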

The log:

Temp has 54774476800 available, backup will use approx 907857100.

So temp has enough space, good. Then it starts with this...

Copying duplicity-full-signatures.20161107T090303Z.sigtar.gpg to local cache.
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-13tylb-2
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-NanCxQ-3
[...]
Copying duplicity-inc.20161110T095624Z.to.20161111T103456Z.manifest.gpg to local cache.
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-VQU2zx-30
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-4Idklo-31
[...]

This goes on until today's date, each file taking a long time. And it continues with this...

Ignoring incremental Backupset (start_time: Thu Nov 10 09:56:24 2016; needed: Mon Nov  7 09:03:03 2016)
Ignoring incremental Backupset (start_time: Thu Nov 10 09:56:24 2016; needed: Wed Nov  9 18:09:07 2016)
Added incremental Backupset (start_time: Thu Nov 10 09:56:24 2016 / end_time: Fri Nov 11 10:34:56 2016)

After a long while...

Warning, found incomplete backup sets, probably left from aborted session
Last full backup date: Sun Mar 12 09:54:00 2017
Collection Status
-----------------
Connecting with backend: BackendWrapper
Archive dir: /home/user/.cache/duplicity/700b5f90ee4a620e649334f96747bd08

Found 6 secondary backup chains.
Secondary chain 1 of 6:
-------------------------
Chain start time: Mon Nov  7 09:03:03 2016
Chain end time: Mon Nov  7 09:03:03 2016
Number of contained backup sets: 1
Total number of contained volumes: 2
Type of backup set:                            Time:      Num volumes:
               Full         Mon Nov  7 09:03:03 2016                 2
-------------------------
Secondary chain 2 of 6:
-------------------------
Chain start time: Wed Nov  9 18:09:07 2016
Chain end time: Wed Nov  9 18:09:07 2016
Number of contained backup sets: 1
Total number of contained volumes: 11
Type of backup set:                            Time:      Num volumes:
               Full         Wed Nov  9 18:09:07 2016                11
-------------------------

Secondary chain 3 of 6:
-------------------------
Chain start time: Thu Nov 10 09:56:24 2016
Chain end time: Sat Dec 10 09:44:31 2016
Number of contained backup sets: 31
Total number of contained volumes: 41
Type of backup set:                            Time:      Num volumes:
               Full         Thu Nov 10 09:56:24 2016                11
        Incremental         Fri Nov 11 10:34:56 2016                 1
        Incremental         Sat Nov 12 09:59:47 2016                 1
        Incremental         Sun Nov 13 09:57:15 2016                 1
        Incremental         Mon Nov 14 09:48:31 2016                 1
        [...]

After listing all the chains:

Also found 0 backup sets not part of any chain, and 1 incomplete backup set.
These may be deleted by running duplicity with the "cleanup" command.

That was just the backup part. It takes hours to do this, yet uploading the 37 GiB to S3 takes only about 10 minutes.

ElapsedTime 639.59 (10 minutes 39.59 seconds)
SourceFiles 288
SourceFileSize 40370795351 (37.6 GB)
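
As a quick sanity check on those numbers (a back-of-the-envelope calculation, not something in the log itself), the reported elapsed time over the reported source size works out to roughly 60 MiB/s, so the transfer itself is clearly not where the hours are going:

awk 'BEGIN { printf "%.1f MiB/s\n", 40370795351 / 639.59 / 1048576 }'   # prints 60.2 MiB/s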

Then comes the cleanup, which gives me this:

Cleaning up
Local and Remote metadata are synchronized, no sync needed.
Warning, found incomplete backup sets, probably left from aborted session
Last full backup date: Sun Mar 12 09:54:00 2017
There are backup set(s) at time(s):
Tue Jan 10 09:58:05 2017
Wed Jan 11 09:54:03 2017
Thu Jan 12 09:56:42 2017
Fri Jan 13 10:05:05 2017
Sat Jan 14 10:24:54 2017
Sun Jan 15 09:49:31 2017
Mon Jan 16 09:39:41 2017
Tue Jan 17 09:59:05 2017
Wed Jan 18 09:59:56 2017
Thu Jan 19 10:01:51 2017
Fri Jan 20 09:35:30 2017
Sat Jan 21 09:53:26 2017
Sun Jan 22 09:48:57 2017
Mon Jan 23 09:38:45 2017
Tue Jan 24 09:54:29 2017
Which can't be deleted because newer sets depend on them.
Found old backup chains at the following times:
Mon Nov  7 09:03:03 2016
Wed Nov  9 18:09:07 2016
Sat Dec 10 09:44:31 2016
Mon Jan  9 10:04:51 2017
Rerun command with --force option to actually delete.
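
The last line of that output is duplicity's own hint: remove-older-than only reports what it would delete unless --force is added, so the old chains listed above are never actually removed. The deleting form of that cleanup call would be:

duplicity remove-older-than 2M --force S3.DEST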

1 Answer


I found the problem. Because of an earlier issue I had followed an answer and added this code to my script:

rm -rf ~/.cache/deja-dup/*
rm -rf ~/.cache/duplicity/*

That was meant as a one-off fix for duplicity throwing random errors, but the answer did not mention that. So the script was deleting the cache every day, right after it had been synced, and the next day duplicity had to download the whole thing from S3 again.
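
A minimal sketch of the corrected daily script with the one-off cache wipe taken out; the two duplicity calls are the ones from the question, with --force added to the cleanup as duplicity's own log suggested, and everything else here is an assumption:

#!/bin/bash
# Daily backup script with the cache wipe removed: the local cache under
# ~/.cache/duplicity is what lets duplicity skip re-downloading all the
# remote metadata, so it has to survive between runs.

duplicity --full-if-older-than 1M LOCAL.SOURCE S3.DEST --volsize 666 --verbosity 8
duplicity remove-older-than 2M --force S3.DEST

# Do NOT run these daily; they were only ever meant as a one-time workaround:
# rm -rf ~/.cache/deja-dup/*
# rm -rf ~/.cache/duplicity/*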

Answered 2017-03-29T10:59:47.370