Compare commits


420 Commits

Author SHA1 Message Date
Alexander Neumann
40791fff64 Add version for 0.13.0 2022-03-26 20:09:59 +01:00
Alexander Neumann
a53a4a23fd Update manpages and auto-completion 2022-03-26 20:09:59 +01:00
Alexander Neumann
b567c08e80 Generate CHANGELOG.md for 0.13.0 2022-03-26 20:09:40 +01:00
Alexander Neumann
0ca89b6fec Prepare changelog for 0.13.0 2022-03-26 20:09:39 +01:00
Alexander Neumann
d7e46c187a Merge pull request #3678 from restic/changelogs
Review, reword and polish unreleased changelog entries
2022-03-26 19:44:30 +01:00
Alexander Neumann
6aefe3e111 Merge pull request #3677 from restic/3490-polish
check: Adjust help and documentation for check --read-data-subset
2022-03-26 19:34:51 +01:00
Leo R. Lundgren
03137a34db Review, reword and polish unreleased changelog entries 2022-03-26 13:01:53 +01:00
Leo R. Lundgren
c7d637ec39 check: Adjust help and documentation for check --read-data-subset 2022-03-26 00:11:04 +01:00
Alexander Neumann
6087c4ad75 Merge pull request #3656 from lgommans/forget-security
forget: Update docs for readability and append-only considerations
2022-03-24 21:36:19 +01:00
Leo R. Lundgren
cdf478c8f4 doc: More updates to forget documentation and security considerations 2022-03-23 23:12:19 +01:00
Luc Gommans
80969a6347 Update docs according to comments from MichaelEischer in PR #3656 2022-03-23 23:12:19 +01:00
Leo R. Lundgren
676d5d498c doc: Update forget security considerations and threat model 2022-03-23 23:12:19 +01:00
Luc Gommans
9c1d49e312 Document "forget" security considerations and add references
Removing data based on a policy when the attacker had the opportunity to
add data to your repository comes with some considerations. This is
added to the 060_forget.rst documentation.

That document is also updated to reflect that restic now considers
the current system time while running "forget".

References to the security considerations section are added:
- In `restic forget --help`
- In the threat model (design.rst)
- In the (030) setup section where an append-only setup is referenced

A reference is also to be added to the `rest-server` readme's
append-only paragraph (see my fork).

This commit also resolves a typo (amount->number for countable noun),
changes a password length recommendation into the metric that
actually matters when creating passwords (entropy) since I was editing
these doc files anyway, and updates the outdated copyright year in
`conf.py`.

Some wording in 060_forget (line 21..22) was changed to clarify what
"forget" and "prune" do, to try and avoid the apparent misconception
that "forget" does not remove any data.
2022-03-23 23:12:19 +01:00
Alexander Neumann
ca1e2316cf Merge pull request #3665 from MichaelEischer/sane-list-locks
list: Never lock the repository when listing lock files
2022-03-21 11:14:44 +01:00
Alexander Neumann
0b8b524f12 Merge pull request #3512 from MichaelEischer/cleaner-lock-refresh
Prevent lock refresh from leaving behind lots of stale locks
2022-03-21 11:10:37 +01:00
Alexander Neumann
a350625554 Merge pull request #3524 from MichaelEischer/atomic-sftp
sftp: Implement atomic uploads
2022-03-21 11:08:22 +01:00
Alexander Neumann
32e61f2620 Update changelog/unreleased/issue-1106
Co-authored-by: greatroar <61184462+greatroar@users.noreply.github.com>
2022-03-21 11:04:04 +01:00
Alexander Neumann
8388f66c4c Merge pull request #3668 from greatroar/symlink-size
Report symlink sizes from FUSE mount
2022-03-21 11:02:32 +01:00
Alexander Neumann
0937008648 Merge pull request #3654 from MichaelEischer/limit-huge-tree-streams
Limit number of large tree blobs loaded in parallel by StreamTrees
2022-03-21 11:01:04 +01:00
Alexander Neumann
3a285f91bc Merge pull request #2311 from vincentbernat/feature/negative-pattern
filter: ability to use negative patterns
2022-03-20 14:02:30 +01:00
Alexander Neumann
29a5778626 Improve wording 2022-03-20 13:46:16 +01:00
Michael Eischer
53656f019a filter: address review comments 2022-03-20 13:33:08 +01:00
Michael Eischer
cd190bee14 filter: short circuit if no negative patterns 2022-03-20 13:33:08 +01:00
Vincent Bernat
2ee07ded2b filter: ability to use negative patterns
This is quite similar to gitignore. If a pattern is prefixed with an
exclamation mark and matches a file that was previously matched by a
regular pattern, the match is cancelled. Notably, this can be used
with `--exclude-file` to cancel the exclusion of some files.

Like for gitignore, once a directory is excluded, it is not possible
to include files inside the directory. For example, a user wanting to
only keep `*.c` in some directory should not use:

    ~/work
    !~/work/*.c

But:

    ~/work/*
    !~/work/*.c

I didn't write documentation or changelog entry. I would like to get
feedback if this is the right approach for excluding/including files
at will for backups. I use something like this as an exclude file to
backup my home:

    $HOME/**/*
    !$HOME/Documents
    !$HOME/code
    !$HOME/.emacs.d
    !$HOME/games
    # [...]
    node_modules
    *~
    *.o
    *.lo
    *.pyc
    # [...]
    $HOME/code/linux/*
    !$HOME/code/linux/.git
    # [...]

There are some limitations for this change:

 - Patterns are not mixed across methods: patterns from files are
   handled first, and if a file is excluded with this method, it's not
   possible to reinclude it with `--exclude !something`.

 - Patterns starting with `!` are now interpreted as negative
   patterns. I don't think anyone was relying on that.

 - The whole list of patterns is walked for each match. We may
   optimize later by exiting early if we know no pattern is starting
   with `!`.

Fix #233
2022-03-20 13:33:08 +01:00
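
As a rough illustration of the matching rule described above (this is not restic's actual filter code; the `excluded` helper and the use of `filepath.Match` are simplifications for this sketch): patterns are evaluated in order, and a `!`-prefixed pattern cancels a match made by an earlier pattern.

    package main

    import (
        "fmt"
        "path/filepath"
        "strings"
    )

    // excluded reports whether name is excluded by the ordered pattern list.
    // A pattern prefixed with "!" cancels a match made by an earlier pattern.
    func excluded(patterns []string, name string) bool {
        result := false
        for _, p := range patterns {
            negate := strings.HasPrefix(p, "!")
            p = strings.TrimPrefix(p, "!")
            match, err := filepath.Match(p, name)
            if err != nil || !match {
                continue
            }
            result = !negate
        }
        return result
    }

    func main() {
        patterns := []string{"work/*", "!work/*.c"}
        fmt.Println(excluded(patterns, "work/notes.txt")) // true: excluded
        fmt.Println(excluded(patterns, "work/main.c"))    // false: re-included
    }
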
Michael Eischer
12606b575f filter: Cleanup variable naming 2022-03-20 13:33:08 +01:00
Michael Eischer
5f145f0c7e filter: introduce pattern struct 2022-03-20 13:33:08 +01:00
Vincent Bernat
13c40d4199 filter: additional tests for filter.List() 2022-03-20 13:33:08 +01:00
Alexander Neumann
13aae82635 Merge pull request #3673 from restic/update-go
Update go
2022-03-20 12:02:32 +01:00
Alexander Neumann
b85d035956 Fix calens install step 2022-03-20 11:36:45 +01:00
Alexander Neumann
47aa4613f7 Force Go to use Module Mode 2022-03-20 11:30:01 +01:00
Alexander Neumann
a9a5acb8ce Update golangci-lint 2022-03-20 11:26:56 +01:00
Alexander Neumann
6dee59b789 Install gox before checking out code
Otherwise newer Go versions complain that the hash for the installed
version of gox is not in go.mod, which we don't want anyway because
the tests should use the latest version of gox.
2022-03-20 11:26:56 +01:00
Alexander Neumann
2e19d19216 Use latest Go version for cross-compile and lint 2022-03-20 11:26:56 +01:00
Alexander Neumann
18a1de0de1 Use "go get" or "go install" selectively
Go 1.18 dropped support for installing binaries via "go get", while
Go <= 1.16 does not support it. So we need to use the right verb
depending on the Go version.
2022-03-20 11:26:56 +01:00
Alexander Neumann
9b57fcc6b0 Fix build.go, minimum Go version is 1.14 2022-03-20 10:54:33 +01:00
Alexander Neumann
17878036d8 Update tests to Go 1.18 2022-03-20 10:54:24 +01:00
Jason Lenz
2b1932a258 Report symlink sizes from FUSE mount for snapshot dir
Fixes #3667.
2022-03-17 22:21:47 -05:00
greatroar
fdc738fb70 Report symlink sizes from FUSE mount
Fixes #3667.
2022-03-13 16:48:35 +01:00
MichaelEischer
daea461f15 Merge pull request #3663 from jimt/find-msgs
Remove period from find messages
2022-03-07 22:23:49 +01:00
Jim Tittsler
a3d99217a4 Remove period from find messages
Simplifies cut-and-paste of IDs (and makes them stylistically
consistent with other messages). #3659
2022-03-07 11:16:04 +09:00
MichaelEischer
e0ab689ccd Merge pull request #3664 from DanielG/fix-doc-warning
doc: Fix block quote warning
2022-03-06 21:56:26 +01:00
Michael Eischer
7af69fd7b9 list: Never lock the repository when listing lock files
There's no point in locking the repository just to list the currently
existing lock files. This won't work for an exclusively locked
repository and is also confusing to users.
2022-03-06 21:44:51 +01:00
Daniel Gröber
49b67c8aaa doc: Fix block quote warning 2022-03-06 18:15:55 +01:00
rawtaz
44d543ede3 Merge pull request #3653 from MichaelEischer/fix-ls-option-help
ls: Fix description for --host, --tag and --path options
2022-02-19 23:06:24 +01:00
Michael Eischer
5ef4ee7760 ls: Fix description for --host, --tag and --path options 2022-02-19 22:36:02 +01:00
Michael Eischer
254c8743fc Limit number of large tree blobs loaded in parallel by StreamTrees
Load tree blobs larger than 50 MB from only a single goroutine. Very
large tree blobs of, for example, 400 MB can otherwise require
roughly 1 GB * streamTreeParallelism of memory.
2022-02-19 12:26:09 +01:00
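
The idea behind the change can be sketched as follows (hypothetical names, not the StreamTrees implementation): tree blobs above the 50 MB threshold take an extra mutex, so at most one of them is loaded at a time, while smaller blobs keep the full parallelism.

    package main

    import (
        "fmt"
        "sync"
    )

    // 50 MB threshold mentioned in the commit message.
    const largeTreeThreshold = 50 * 1024 * 1024

    // largeTreeMutex ensures that at most one very large tree blob is being
    // loaded (and thus held in memory) at any given time.
    var largeTreeMutex sync.Mutex

    func loadTreeBlob(size int, load func()) {
        if size > largeTreeThreshold {
            largeTreeMutex.Lock()
            defer largeTreeMutex.Unlock()
        }
        load()
    }

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                // 400 MB blobs are serialized; small blobs would run in parallel.
                loadTreeBlob(400*1024*1024, func() { fmt.Println("loaded large tree", n) })
            }(i)
        }
        wg.Wait()
    }
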
MichaelEischer
ad4f4dbc7a Merge pull request #3645 from greatroar/stdin-parent
Don't set a parent for --stdin backups
2022-02-19 11:36:51 +01:00
greatroar
63f6a9b085 Don't set a parent for --stdin backups
Loading any parent tree for these only wastes time and memory.
Fixes #3641, where it was shown that the most recent tree will get
picked.

--parent is now implicitly ignored when --stdin is given.
2022-02-19 10:41:33 +01:00
MichaelEischer
4a2d5a146d Merge pull request #3507 from ahmgithubahm/document-AWS_PROFILE-support
Document AWS_PROFILE support
2022-02-18 23:40:25 +01:00
Michael Eischer
1efc26899d Update docs for AWS_PROFILE and AWS_SHARED_CREDENTIALS_FILE 2022-02-18 23:31:10 +01:00
MichaelEischer
8df246d0f3 Merge pull request #3628 from gum3ng/issue_3127
[#issue 3127] Add xattr support for Solaris
2022-02-17 22:07:39 +01:00
gum3ng
dd30083c2b [#issue 3127] Add xattr support for Solaris 2022-02-13 14:24:37 +05:30
MichaelEischer
fb4c5af5c4 Merge pull request #3642 from gco/master
Fix test failures on Solaris
2022-02-12 22:07:14 +01:00
MichaelEischer
18ec49ddfa Merge pull request #3644 from duritong/centos-epel-repo
add a note about installation via epel for RHEL / CentOS Stream 8 & 9
2022-02-12 21:41:46 +01:00
Michael Eischer
5ec312ca06 sftp: Implement atomic uploads
Create a temporary file with a sufficiently random name to essentially
avoid any chance of conflicts. Once the upload has finished, remove the
temporary suffix. Interrupted uploads will thus be ignored by restic.
2022-02-12 20:30:49 +01:00
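
A minimal sketch of the atomic-upload pattern described above, using the local filesystem instead of the sftp backend (the `saveAtomically` helper is illustrative only):

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
        "os"
    )

    // saveAtomically writes data to a randomly named temporary file and renames
    // it into place only after the write finished. An interrupted upload leaves
    // only a temporary file behind and never appears under the final name.
    func saveAtomically(name string, data []byte) error {
        suffix := make([]byte, 8)
        if _, err := rand.Read(suffix); err != nil {
            return err
        }
        tmp := name + "-tmp-" + hex.EncodeToString(suffix)
        if err := os.WriteFile(tmp, data, 0o600); err != nil {
            return err
        }
        return os.Rename(tmp, name) // the file only now appears under its final name
    }

    func main() {
        if err := saveAtomically("example.bin", []byte("payload")); err != nil {
            fmt.Println("save failed:", err)
        }
    }
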
Michael Eischer
aebd24e414 Add changelog 2022-02-12 20:21:58 +01:00
Michael Eischer
d72181c8c1 Ensure that the lock cleanup handler is run after the global one
Cleanup handlers run in the order in which they are added. As Go calls
init() functions in lexical order, the cleanup handler from global.go
was registered before that from lock.go, which is the correct order.

Make this order explicit to ensure that this won't break accidentally.
2022-02-12 20:21:58 +01:00
Michael Eischer
c6fd13425b remember the refreshed lock file even if removal failed
This ensures that restic won't create lots of new lock files without
deleting them later on.

In some cases a Delete operation on a backend can return a "File does
not exist" error even though the Delete operation succeeded. This can
for example be caused by request retries. This caused restic to forget
about the new lock file and continue trying to remove the old (already
deleted) lock file.
2022-02-12 20:21:58 +01:00
MichaelEischer
cc90f2ba6b Merge pull request #2816 from greatroar/noatime
Set O_NOATIME flag on Linux
2022-02-07 21:38:31 +01:00
MichaelEischer
d8f58fb7bf Merge pull request #3592 from vgt/jsondiff
Add json output for diff command
2022-02-07 21:33:30 +01:00
duritong
a4786dda5a Update doc/020_installation.rst
Co-authored-by: greatroar <61184462+greatroar@users.noreply.github.com>
2022-02-06 21:31:18 +01:00
Michael Eischer
aaa7f94139 Add changelog for O_NOATIME 2022-02-06 15:00:37 +01:00
Michael Eischer
6b17a7110c backup: Set O_NOATIME in the right place
The archiver uses FS.OpenFile, where FS is an instance of the FS
interface. This is different from fs.OpenFile, which uses the OpenFile
method provided by the fs package.
2022-02-06 15:00:37 +01:00
greatroar
7080fed7ae Set O_NOATIME flag on Linux
Citing Kerrisk, The Linux Programming Interface:

    The O_NOATIME flag is intended for use by indexing and backup
    programs. Its use can significantly reduce the amount of disk
    activity, because repeated disk seeks back and forth across the
    disk are not required to read the contents of a file and to update
    the last access time in the file’s i-node[.]

restic used to do this, but the functionality was removed along with the
fadvise call in #670.
2022-02-06 15:00:34 +01:00
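
A hedged sketch of opening a file with O_NOATIME on Linux (an illustration, not the archiver's actual FS.OpenFile wrapper; the fallback handles the EPERM case for files the caller does not own):

    //go:build linux

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // openNoatime opens path for reading with O_NOATIME. The flag may be refused
    // for files the caller does not own, so it falls back to a plain open.
    func openNoatime(path string) (*os.File, error) {
        f, err := os.OpenFile(path, os.O_RDONLY|syscall.O_NOATIME, 0)
        if err != nil {
            f, err = os.OpenFile(path, os.O_RDONLY, 0)
        }
        return f, err
    }

    func main() {
        f, err := openNoatime("/etc/hostname")
        if err != nil {
            fmt.Println("open failed:", err)
            return
        }
        defer f.Close()
        fmt.Println("opened", f.Name(), "without updating its access time")
    }
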
Michael Eischer
74f29ad09b diff: add basic test for json output format 2022-02-06 11:44:15 +01:00
Michael Eischer
5f34ad523f diff: fix test failure and add remark on quiet to changelog 2022-02-05 23:02:07 +01:00
MichaelEischer
58236ead12 Merge pull request #3619 from aneeshusa/avoid-time-travel-paradoxes-when-finding-parents
Avoid choosing parent snapshot newer than time of current snapshot
2022-02-05 22:51:24 +01:00
Michael Eischer
8ae4d86a84 rename snapshot timestamp filter variable 2022-02-05 22:42:38 +01:00
mh
3f0184ba2a add a note about installation via epel 2022-02-05 22:18:33 +01:00
rawtaz
90473ea9ff Merge pull request #3638 from leogott/patch-1
Documentation: Update msys2 wiki url after move
2022-01-27 17:07:45 +01:00
Leona 'leo' Gottfried
4e84e8ab3f Update msys2 wiki url after move
https://github.com/msys2/msys2/wiki/Porting was permanently moved and redirects to https://www.msys2.org/
I substituted the new location of the /wiki/Porting page in the docs
2022-01-27 11:09:50 +01:00
Greg
2e9180638e Fix test failures on Solaris
Add exceptions for symlinks, sticky bits, and device nodes in the same places where the BSDs and/or Darwin have them.
2022-01-25 18:05:56 -08:00
Aneesh Agrawal
058dfc20da Avoid choosing parent snapshot newer than time of current snapshot
Currently, `restic backup` (if a `--parent` is not provided)
will choose the most recent matching snapshot as the parent snapshot.
This makes sense in the usual case,
where we tag the snapshot-being-created with the current time.

However, this doesn't make sense if the user has passed `--time`
and is currently creating a snapshot older than the latest snapshot.
Instead, choose the most recent snapshot
which is not newer than the snapshot-being-created's timestamp,
to avoid any time travel.

Impetus for this change:
I'm using restic for the first time!
I have a number of existing BTRFS snapshots
I am backing up via restic to serve as my initial set of backups.
I initially `restic backup`'d the most recent snapshot to test,
then started backing up each of the other snapshots.
I noticed in `restic cat snapshot <id>` output
that all the remaining snapshots have the most recent as the parent.
2022-01-23 23:55:00 -05:00
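
A small sketch of the parent-selection rule described above, with hypothetical types rather than restic's internal snapshot API: pick the most recent snapshot whose timestamp is not after the timestamp of the snapshot being created.

    package main

    import (
        "fmt"
        "time"
    )

    type snapshot struct {
        ID   string
        Time time.Time
    }

    // chooseParent returns the most recent snapshot that is not newer than the
    // timestamp of the snapshot being created (e.g. one set via --time).
    func chooseParent(snapshots []snapshot, current time.Time) *snapshot {
        var parent *snapshot
        for i := range snapshots {
            sn := &snapshots[i]
            if sn.Time.After(current) {
                continue // ignore snapshots "from the future" relative to --time
            }
            if parent == nil || sn.Time.After(parent.Time) {
                parent = sn
            }
        }
        return parent
    }

    func main() {
        now := time.Now()
        snaps := []snapshot{
            {"old", now.Add(-48 * time.Hour)},
            {"newer", now.Add(24 * time.Hour)}, // newer than the backup's --time
        }
        if p := chooseParent(snaps, now); p != nil {
            fmt.Println("parent:", p.ID)
        }
    }
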
Aneesh Agrawal
502fc3281c Add CONTRIBUTING.md docs to not edit man pages
Document this code review feedback I got for other contributors.
2022-01-23 23:54:24 -05:00
David Vogt
77c850148a Add json output for diff command 2022-01-23 23:22:26 +01:00
MichaelEischer
df89aa0087 Merge pull request #3623 from invine/issue-3464
Skip lock file creation on forget with --no-lock and --dry-run
2022-01-23 18:12:34 +01:00
Pavel Frolov
792523b28b [issue 3464] skip lock creation in case of dry-run 2022-01-23 06:44:41 +03:00
MichaelEischer
f0a8182493 Merge pull request #3626 from fkusche/issue-3620-doc-unreferenced-packs
Update documentation regarding unreferenced packs
2022-01-22 15:15:25 +01:00
Florian Kusche
6183d0be53 Update output of restic check 2022-01-16 12:34:20 +01:00
rawtaz
7f6fc78f95 Merge pull request #3544 from restic/fix-b2-delete-retry
b2: Successful delete if file does not exist
2022-01-13 23:28:30 +01:00
rawtaz
abfbacf3d3 Merge pull request #3591 from MichaelEischer/prune-fix-max-repack
prune: Handle --max-repack-size=0 as expected
2022-01-13 03:53:20 +01:00
Florian Kusche
b0c1d0f9cd Update documentation regarding unreferenced packs
Also removes an unnecessary space at the end of the last line.
2022-01-09 11:45:03 +01:00
rawtaz
8b6fe845d4 Merge pull request #3618 from mattxtaz/master
Add missing colon in prune stats output and realign the fields
2022-01-06 23:07:56 +01:00
mattxtaz
6ff32ee4d3 Add missing colon in prune stats output and change padding to 14 chars to align the fields 2022-01-06 21:15:15 +00:00
rawtaz
2ff3b7d69c Merge pull request #3615 from gum3ng/issue_3558
doc: Add a FAQ section for invalid Windows filenames
2022-01-02 18:06:02 +01:00
gum3ng
9589de16db [issue 3558]: Add a FAQ section for invalid Windows filenames 2022-01-02 22:24:00 +05:30
MichaelEischer
2c3e5d943d Merge pull request #3593 from DarkKirb/parallelize-restic-copy
Parallelize blob upload/download for restic copy
2021-12-29 22:31:54 +01:00
Charlotte 🦝 Delenk
e2bb384a60 Parallelize blob upload/download for restic copy
Currently restic copy will copy each blob from every snapshot serially,
which has performance implications on high-latency backends such as b2.

This commit introduces 8x parallelism for blob downloads/uploads which
can improve restic copy operations up to 8x for repositories with many
small blobs on b2.

This commit also addresses the TODO comment in the copyTree function.

Related work:

A more thorough improvement of the restic copy performance can be found
in PR #3513
2021-12-29 18:59:09 +01:00
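
A generic sketch of the fixed worker-pool pattern the commit describes (illustrative only; restic's copy implementation differs in detail): blob IDs are fed through a channel and processed by eight workers.

    package main

    import (
        "fmt"
        "sync"
    )

    const copyWorkers = 8 // the 8x parallelism mentioned above

    func copyBlobs(ids []string, copyOne func(id string)) {
        ch := make(chan string)
        var wg sync.WaitGroup
        for i := 0; i < copyWorkers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for id := range ch {
                    copyOne(id) // download from source, upload to destination
                }
            }()
        }
        for _, id := range ids {
            ch <- id
        }
        close(ch)
        wg.Wait()
    }

    func main() {
        copyBlobs([]string{"a", "b", "c"}, func(id string) { fmt.Println("copied", id) })
    }
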
MichaelEischer
e5985e0d63 Merge pull request #3602 from cqjjjzr/fix-rclone-sigint
fix: rclone receiving SIGINT prematurely on Windows causing restic to hang forever (#3601)
2021-12-29 16:56:37 +01:00
MichaelEischer
8832837a8a Merge pull request #3607 from jtagcat/quieter_cleanup
logging: quiet 'removing n old cache dirs'
2021-12-29 16:06:35 +01:00
jtagcat
f92130d878 logging: quiet 'removing n old cache dirs'
Closes #3595

Choosing to include `stdoutIsTerminal()` because:
 - all other instances with `!opts.JSON` do so
 - this likely will not affect anything, especially when run automatically
 - this does not seem to be a meaningful enough summary
     to include in auto-backup reports

JSON output is still likely not guaranteed to work, and this is a suboptimal
  solution. Ideally, #1804 should refactor all print statements,
  and define+document(+handle) when stdoutIsTerminal() should be used.
  Otherwise, it may end up more inconsistent and bulky
  (duplicate lines, longer files).
2021-12-29 01:08:29 +02:00
Charlie Jiang
a5b0e0bef4 fix: rclone receiving SIGINT prematurely on Windows causing restic to hang forever
Co-authored-by: greatroar <61184462+greatroar@users.noreply.github.com>
2021-12-28 13:14:46 +08:00
rawtaz
e6e51b84ac Merge pull request #3605 from gum3ng/issue_3463
Improve clarity of error message in restic ls command
2021-12-27 21:45:23 +01:00
MichaelEischer
c5c3dfe10f Merge pull request #3590 from metalsp0rk/documentation-enhance
Document Safe Passwords and Clarify B2 App Key information
2021-12-27 20:21:43 +01:00
Kyle Brennan
19ec4d8f17 Document safe passwords. Fix #2238 2021-12-27 10:43:18 -08:00
Kyle Brennan
47ecd950b8 Enhance details about user application keys. Fix #2672 2021-12-27 10:43:15 -08:00
MichaelEischer
051cc7ce71 Merge pull request #3589 from metalsp0rk/copy-no-lock
Make Copy respect no lock
2021-12-27 19:11:02 +01:00
Michael Eischer
64e733f3d6 tweak copy --no-lock changelog 2021-12-27 18:22:25 +01:00
Gautam Menghani
017614c41a [#issue 3463] Improve clarity of error message in restic ls command 2021-12-27 22:42:27 +05:30
Michael Eischer
0cfdb82ea4 prune: Handle --max-repack-size=0 as expected
Previously the flag was ignored and `--max-repack-size=1` had to be
used.
2021-12-27 15:48:56 +01:00
Kyle Brennan
d5ed5da85c update changelog for copy --no-lock 2021-12-03 12:16:40 -08:00
Kyle Brennan
8eb83029a8 Make copy honor --no-lock 2021-12-03 09:50:28 -08:00
MichaelEischer
882d58abce Merge pull request #3163 from palbr/patch-1
Add PGP fingerprint to 020_installation.rst
2021-11-19 23:58:50 +01:00
Peter Albrecht
8de4401bb5 Changed URL for key-file
The keyfile provided by restic's own webserver (https://restic.net) should be
more stable than relying on public keyservers. So I changed the URL to the
GPG keyfile, as recommended by MichaelEischer.
2021-11-19 15:47:59 +01:00
MichaelEischer
f7a9b90eb9 Merge pull request #3573 from magandrez/feat/ls-report-symbolic-notation
Provide mode in symbolic notation for `restic ls --json`
2021-11-18 21:29:04 +01:00
MichaelEischer
aa214f99b4 Merge pull request #3565 from mathstuf/doc-missing-metadata
doc: mention metadata that is not backed up currently
2021-11-18 21:28:55 +01:00
MichaelEischer
4a25bbaed3 Merge pull request #3578 from adsultana/patch-1
docs: Fix link to "help wanted" issues
2021-11-18 21:20:30 +01:00
Michael Eischer
583edc39b8 doc: reorder backup metadata exceptions 2021-11-18 21:17:38 +01:00
Manuel González
212b2f651f Add file mode in symbolic notation to ls --json
This aligns `restic ls --json` with `restic find --json`, utilizing the same
naming.
2021-11-16 19:45:50 +02:00
Andrew Sultana
15ab96ecd6 docs: Fix link to "help wanted" issues 2021-11-16 00:03:34 +00:00
Ben Boeckel
d71afb3d32 doc: mention metadata that is not backed up currently
See: #3497
See: #1622
See: #2075
2021-11-13 18:40:26 -05:00
MichaelEischer
4bf05d91a1 Merge pull request #3571 from garrmcnu/blazer-unknown-authority
Update Backblaze B2 blazer module
2021-11-13 22:46:49 +01:00
MichaelEischer
de3afc1005 Merge pull request #3574 from gurjeet/rename_aws_s3_to_amazon_s3
Use S3's proper product name, Amazon S3
2021-11-13 22:30:32 +01:00
Michael Eischer
2ea998f70e Add PR link to changelog 2021-11-13 22:25:54 +01:00
Garry McNulty
e8fa3855e7 Update blazer
If a request fails with "x509: certificate signed by unknown authority",
the B2 backend now returns the error without retrying the request.

Closes #3556
Closes #2355
2021-11-13 22:25:54 +01:00
Gurjeet Singh
34a6a24544 Use S3's proper product name, Amazon S3
Per Amazon's product page [1], S3 is officially called "Amazon S3". The
restic project uses the phrase "AWS S3" in some places. This patch
corrects the product name.

[1]: https://aws.amazon.com/s3/
2021-11-13 22:21:06 +01:00
MichaelEischer
1d8a0b06cb Merge pull request #3575 from MichaelEischer/adjust-http2-canary-test
rest: Adjust http2-missing-eof test to golang >= 1.17.3
2021-11-13 22:13:24 +01:00
Michael Eischer
50053a85d3 rest: Adjust http2 missing eof test to golang >= 1.17.3, >= 1.16.10
The missing EOF with HTTP2, when a response includes a Content-Length
header but no data, has been fixed in Go 1.17.3/1.16.10. Therefore
just drop the canary test and schedule it for removal once Go 1.18 is
required as the minimum version by restic.
2021-11-13 21:57:30 +01:00
MichaelEischer
f1cfb97237 Merge pull request #3514 from phcreery/rclone_timeout
rclone: extend timeout from 60s to 240s
2021-11-07 18:10:34 +01:00
MichaelEischer
cb81ee9396 Merge pull request #3474 from kitone/fix-issue-3382
Honor RESTIC_CACHE_DIR environment variable
2021-11-07 17:57:54 +01:00
Michael Eischer
b0e64deb27 rclone: Fix timeout calculation 2021-11-07 17:49:33 +01:00
phcreery
43d173b042 rclone: add timeout option and documentation 2021-11-07 17:49:21 +01:00
rawtaz
1b152a2c4d Merge pull request #3568 from MichaelEischer/fix-local-fd-leak
local: Fix fd leak when encountering files directly inside data/
2021-11-07 01:36:48 +01:00
MichaelEischer
15cc3c0e23 Merge pull request #3566 from mathstuf/check-progress-output-stomping
check: wait for progress bar output
2021-11-06 23:34:49 +01:00
kitone
5904f80cfa restic cache should display the name of the cache without shortening it in the case of restic check 2021-11-06 20:18:51 +01:00
Ben Boeckel
4d579c4387 check: wait for progress bar output
Further code will also output to the terminal and the bar's cursor
positioning causes its output to overlap with the remaining output in a
racy way.

Fixes: #3344
2021-11-06 15:05:09 -04:00
Michael Eischer
15d7313387 local: Fix fd leak when encountering files directly inside data/ 2021-11-06 19:44:57 +01:00
MichaelEischer
6c84ea1412 Merge pull request #3548 from gum3ng/issue_3490
Support for specifying amount of data in read-data-subset
2021-11-05 23:28:11 +01:00
MichaelEischer
78c7dd53ef Merge pull request #3526 from greatroar/dump-refactor
Refactor internal/dump + concurrent load/write
2021-11-05 22:38:39 +01:00
MichaelEischer
a34bfa8269 Merge pull request #3562 from restic/rawtaz-mount-usage
mount: Improve usage information when mounted
2021-11-05 22:10:32 +01:00
kitone
0425a30420 add changelog issue-3382 2021-11-04 15:13:48 +01:00
kitone
1b23675f21 cache --cleanup should handle directories created by restic check.
Because there is no guarantee that a cleanup of these directories will occur
after the "restic check", we extend the behavior to detect and manage these
specific cache directories and allow their cleanup too.
2021-11-04 15:10:38 +01:00
Gautam Menghani
836fbb9133 [#issue 3490] Support for specifying file size in read-data-subset 2021-11-02 15:25:46 +05:30
greatroar
c71729dfc4 Refactor internal/dump + concurrent load/write
Package internal/dump has been reworked so its API consists of a single
type Dumper that handles tar and zip formats. Tree loading and node
writing happen concurrently.
2021-11-01 23:01:55 +01:00
Leo R. Lundgren
711ceb0109 mount: Improve usage information when mounted 2021-11-01 20:59:20 +01:00
MichaelEischer
829c0a67af Merge pull request #3520 from nxt-engineering/Docker
Docker Multistage build
2021-10-24 17:37:54 +02:00
Alexander Neumann
fb5d9345a7 Merge pull request #3510 from MichaelEischer/fix-archiver-early-on-abort
archiver: Fix TestArchiverAbortEarlyOnError test
2021-10-16 15:37:41 +02:00
kitone
95eb859b54 Honor RESTIC_CACHE_DIR environment variable
Fix #3382: restic check doesn't obey the RESTIC_CACHE_DIR environment variable
2021-10-10 16:00:02 +02:00
Michael Eischer
257740b0cc b2: Successful delete if file does not exist
When deleting a file, B2 sometimes returns a "500 Service Unavailable"
error but nevertheless correctly deletes the file. Due to retries in
the B2 library blazer, we sometimes also see a "400 File not present"
error. The retries of restic for the delete request then fail with
"404 File with such name does not exist.".

As we have to rely on request retries in a distributed system to handle
temporary errors, also consider a delete request to be successful if the
file is reported as not existing. This should be safe as B2 claims to
provide a strongly consistent bucket listing and thus a missing file
shouldn't mysteriously show up again later on.
2021-10-09 23:51:12 +02:00
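
Schematically, the tolerant delete described above can look like this (hypothetical helper and error value, not the blazer API):

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotExist stands in for the backend's "file does not exist" error.
    var errNotExist = errors.New("file does not exist")

    // remove treats a "file does not exist" answer as success, since a retried
    // request may already have deleted the file on an earlier attempt.
    func remove(del func() error) error {
        if err := del(); err != nil && !errors.Is(err, errNotExist) {
            return err
        }
        return nil
    }

    func main() {
        fmt.Println(remove(func() error { return errNotExist })) // <nil>
    }
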
MichaelEischer
46d08d9404 Merge pull request #3535 from jtagcat/writingverbose
Change: selfupdate: 'writing restic to' as verbose
2021-10-09 22:41:51 +02:00
jtagcat
a7853057ab Change: selfupdate: 'writing restic to' as verbose
Running restic self-update --quiet no longer
prints "writing restic to /usr/local/bin/restic".

The only output printed with -q is failures or
"successfully updated restic to version 0.12.1"

https://github.com/restic/restic/pull/3535

fix test fail: changelog title can't end with `.`

shorten changelog title
2021-10-09 23:24:18 +03:00
MichaelEischer
eb282532dc Merge pull request #3534 from jtagcat/patch-1
docs/example: useradd: restic now system account
2021-10-09 20:42:43 +02:00
MichaelEischer
f2a3b3b4a1 Merge pull request #3537 from greatroar/dump-lru
Check cap instead of len in bloblru
2021-10-09 20:00:27 +02:00
MichaelEischer
58e8b34633 Merge pull request #3532 from garrmcnu/s3-credentials-config
s3: Add warning if key ID or secret is empty
2021-10-09 19:32:32 +02:00
MichaelEischer
a02cea6e83 Merge pull request #3539 from jnoxon/fix-ec2-metadata-v2
get iam roles working with ec2 metadata v2
2021-10-07 23:00:27 +02:00
Garry McNulty
708d7a2574 s3: Add warning if key ID or secret is empty
Also add debug message if no credential types are available.

Closes #2388
2021-10-06 23:13:40 +01:00
Jeff Noxon
6f4b5ab8d1 get iam roles working with ec2 metadata v2 2021-10-03 20:06:55 -05:00
greatroar
634a9c162d Check cap instead of len in bloblru
restic dump uses bloblru.Cache to keep buffers alive, but also reuses
evicted buffers. That means large buffers may be used to store small
blobs, causing the cache to think it's using less memory than it
actually does.
2021-10-03 09:34:17 +02:00
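
A tiny illustration of the cap-versus-len point made above: when a large buffer is reused for a smaller blob, cap() reflects the memory actually held, len() does not.

    package main

    import "fmt"

    func main() {
        buf := make([]byte, 4*1024*1024) // a reused 4 MiB buffer
        blob := buf[:100]                // now stores a 100-byte blob

        fmt.Println("len:", len(blob)) // 100: underestimates the memory held
        fmt.Println("cap:", cap(blob)) // 4194304: what the cache should account for
    }
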
jtagcat
632ca2ef52 docs/example: useradd: restic now system account
and: use /sbin/nologin
and: use long flags in the useradd command
and: download v0.12.1 instead of v0.9.6
2021-10-03 00:51:06 +03:00
MichaelEischer
24088f8307 Merge pull request #3528 from greatroar/tree-insert
internal/restic: Don't allocate in Tree.Insert
2021-09-27 22:22:03 +02:00
greatroar
c892c0bab9 internal/restic: Don't allocate in Tree.Insert
name         old time/op    new time/op    delta
BuildTree-8    34.6µs ± 4%     7.0µs ± 3%  -79.68%  (p=0.000 n=18+19)

name         old alloc/op   new alloc/op   delta
BuildTree-8    34.0kB ± 0%     0.9kB ± 0%  -97.37%  (p=0.000 n=20+20)

name         old allocs/op  new allocs/op  delta
BuildTree-8       108 ± 0%         1 ± 0%  -99.07%  (p=0.000 n=20+20)
2021-09-26 18:08:48 +02:00
MichaelEischer
78dac2fd48 Merge pull request #3523 from greatroar/swift-v2
Upgrade ncw/swift to v2
2021-09-24 21:41:36 +02:00
Michael Eischer
5ea8bba1a1 swift: restore context err check for list() 2021-09-24 21:19:46 +02:00
MichaelEischer
a5e103a212 Merge pull request #3522 from greatroar/dump-lru
Use LRU cache in restic dump
2021-09-24 20:33:58 +02:00
greatroar
e7ec0453b1 Upgrade ncw/swift to v2 2021-09-24 19:08:37 +02:00
Uli Martens
1ebcb1d097 Add changelog entry for PR #3508 2021-09-24 15:45:09 +02:00
greatroar
fe04d024c7 Use LRU cache in restic dump 2021-09-24 15:45:08 +02:00
Uli Martens
718966a81a Move Blobcache into dedicated internal package 2021-09-24 15:45:00 +02:00
greatroar
4f33eca634 Remove unused Writer arg to internal/dump.writeDump 2021-09-24 15:40:42 +02:00
MichaelEischer
cc110c42e6 Merge pull request #2657 from mansam/add-skip-tls-verification-flag
Add --insecure-tls flag to disable SSL cert verification
2021-09-22 21:40:01 +02:00
Sam Lucidi
897d8e662c Add --insecure-tls flag to disable SSL cert verification
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
2021-09-21 10:52:40 -04:00
cimnine
4a95af5290 GitHub action for Docker build 2021-09-21 08:23:33 +02:00
cimnine
f28c8bc1c2 Multistage Docker build 2021-09-21 08:23:33 +02:00
MichaelEischer
1827b16ade Merge pull request #3519 from greatroar/maphash
Replace siphash by hash/maphash
2021-09-19 19:46:03 +02:00
greatroar
8b758c78a3 Require Go 1.14 to build 2021-09-19 16:18:19 +02:00
greatroar
8d2996eaaa Replace siphash by hash/maphash
In Go 1.17.1, maphash has become quite a bit faster than siphash, so we
can drop one third-party dependency. maphash is just an interface to the
standard Go map's hash function, which we already trust for other use
cases.

Benchmark results on linux/amd64, -benchtime=3s:

name                                             old time/op    new time/op    delta
IndexHasUnknown-8                                  50.6ns ±10%    41.0ns ±19%  -18.92%  (p=0.000 n=9+10)
IndexHasKnown-8                                    52.6ns ±12%    41.5ns ±12%  -21.13%  (p=0.000 n=9+10)
IndexMapHash-8                                     3.64µs ± 1%    2.00µs ± 0%  -45.09%  (p=0.000 n=10+9)
IndexAlloc-8                                        700ms ± 1%     601ms ± 6%  -14.18%  (p=0.000 n=8+10)
IndexAllocParallel-8                                205ms ± 5%     192ms ± 8%   -6.18%  (p=0.043 n=10+10)
MasterIndexAlloc-8                                  319ms ± 1%     279ms ± 5%  -12.58%  (p=0.000 n=10+10)
MasterIndexLookupSingleIndex-8                      156ns ± 8%     147ns ± 6%   -5.46%  (p=0.023 n=10+10)
MasterIndexLookupMultipleIndex-8                    150ns ± 7%     142ns ± 8%   -5.69%  (p=0.007 n=10+10)
MasterIndexLookupSingleIndexUnknown-8              74.4ns ± 6%    72.0ns ± 9%     ~     (p=0.175 n=10+9)
MasterIndexLookupMultipleIndexUnknown-8            67.4ns ± 9%    65.5ns ± 7%     ~     (p=0.340 n=9+9)
MasterIndexLookupParallel/known,indices=25-8        461ns ± 2%     445ns ± 2%   -3.49%  (p=0.000 n=10+10)
MasterIndexLookupParallel/unknown,indices=25-8      408ns ±11%     378ns ± 5%   -7.22%  (p=0.035 n=10+9)
MasterIndexLookupParallel/known,indices=50-8        479ns ± 1%     437ns ± 4%   -8.82%  (p=0.000 n=10+10)
MasterIndexLookupParallel/unknown,indices=50-8      406ns ± 8%     343ns ±15%  -15.44%  (p=0.001 n=10+10)
MasterIndexLookupParallel/known,indices=100-8       480ns ± 1%     455ns ± 5%   -5.15%  (p=0.000 n=8+10)
MasterIndexLookupParallel/unknown,indices=100-8     391ns ±18%     382ns ± 8%     ~     (p=0.315 n=10+10)
MasterIndexLookupBlobSize-8                        71.0ns ± 8%    57.2ns ±11%  -19.36%  (p=0.000 n=9+10)
PackerManager-8                                     254ms ± 1%     254ms ± 1%     ~     (p=0.285 n=15+15)

name                                             old speed      new speed      delta
IndexMapHash-8                                   1.12GB/s ± 1%  2.05GB/s ± 0%  +82.13%  (p=0.000 n=10+9)
PackerManager-8                                   208MB/s ± 1%   207MB/s ± 1%     ~     (p=0.281 n=15+15)

name                                             old alloc/op   new alloc/op   delta
IndexMapHash-8                                      0.00B          0.00B          ~     (all equal)
IndexAlloc-8                                        400MB ± 0%     400MB ± 0%     ~     (p=1.000 n=9+10)
IndexAllocParallel-8                                401MB ± 0%     401MB ± 0%   +0.00%  (p=0.000 n=10+10)
MasterIndexAlloc-8                                  258MB ± 0%     262MB ± 0%   +1.42%  (p=0.000 n=9+10)
PackerManager-8                                    73.1kB ± 0%    73.1kB ± 0%     ~     (p=0.382 n=13+13)

name                                             old allocs/op  new allocs/op  delta
IndexMapHash-8                                       0.00           0.00          ~     (all equal)
IndexAlloc-8                                         907k ± 0%      907k ± 0%   -0.00%  (p=0.000 n=10+10)
IndexAllocParallel-8                                 907k ± 0%      907k ± 0%   +0.00%  (p=0.009 n=10+10)
MasterIndexAlloc-8                                   327k ± 0%      317k ± 0%   -3.06%  (p=0.000 n=10+10)
PackerManager-8                                       744 ± 0%       744 ± 0%     ~     (all equal)
2021-09-19 16:05:18 +02:00
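
A minimal sketch of hashing a string with the standard library's hash/maphash, the package this commit switches to (illustrative usage only, not restic's index code):

    package main

    import (
        "fmt"
        "hash/maphash"
    )

    func main() {
        seed := maphash.MakeSeed() // per-process random seed, like Go's own map hash

        var h maphash.Hash
        h.SetSeed(seed)
        h.WriteString("0123456789abcdef") // e.g. part of a blob ID
        fmt.Printf("%#x\n", h.Sum64())
    }
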
MichaelEischer
58efe21eca Merge pull request #3264 from amozoss/upstream-master
Refactor backup progress
2021-09-19 14:54:42 +02:00
MichaelEischer
71fcf48533 Merge pull request #2594 from greatroar/concurrent-restore-verify
Concurrent restore --verify
2021-09-19 14:52:31 +02:00
Michael Eischer
921e328b56 restore: Fix linting error 2021-09-19 14:41:07 +02:00
Michael Eischer
e62d4f622f Reword changelog 2021-09-19 14:41:07 +02:00
Michael Eischer
2cdc0719af restorer: Sanitize verify errors 2021-09-19 14:01:26 +02:00
greatroar
bdcdfaf6b4 restore --verify: buffer reuse consistency and comment 2021-09-19 13:11:27 +02:00
greatroar
2b94742ca5 Replace no-op closures in restorer by nil check 2021-09-19 13:11:16 +02:00
greatroar
d357744104 Handle canceled context in restore --verify properly 2021-09-19 13:11:05 +02:00
greatroar
d4225ec803 Simplify buffer growing in Restorer.verifyFile
Suggested-by: Igor Fedorenko <igor@ifedorenko.com>
2021-09-19 13:10:55 +02:00
greatroar
de8521ae56 Report number of successes from Restorer.VerifyFiles
Previously, the number of attempts was reported.
2021-09-19 13:10:44 +02:00
greatroar
bb066cf7d3 Concurrent Restorer.VerifyFiles
Time to verify a 2GB snapshot down from 9.726s to 4.645s (-52%).
2021-09-19 13:10:41 +02:00
greatroar
556424d61b Reuse buffer in Restorer.VerifyFiles
Time to verify a 2GB snapshot down from 11.568s to 9.726s (-16%).
2021-09-19 12:55:31 +02:00
greatroar
92ae951ffa Report timing from restic restore --verify 2021-09-19 12:53:09 +02:00
greatroar
973fa921cb Test and document Restorer.VerifyFiles 2021-09-19 12:52:11 +02:00
Michael Eischer
e0d615c264 archiver: Fix TestArchiverAbortEarlyOnError test
This can be caused when the test has uploaded four blobs, then queues
two blobs for upload which are delayed. Then a seventh file can be
opened which lead to a test failure.
2021-09-12 22:17:17 +02:00
MichaelEischer
ef5672a902 Merge pull request #3509 from ajaspers/patch-1
Update instructions for reproducing build
2021-09-12 21:40:37 +02:00
ajaspers
c0eddc9969 Update instructions for reproducing build
Dependencies are fetched at build time and stored in the GOPATH. These paths end up being in the final binary.

Bump the restic version to the latest and the Go version to 1.16.6, which was used to build restic 0.12.1.
2021-09-12 09:02:57 -07:00
Michael Eischer
fbb0e6499a ui: consolidate backup ui in ui/backup package 2021-09-12 16:20:15 +02:00
Michael Eischer
503d4c3e2f Add changelog 2021-09-12 16:00:49 +02:00
Michael Eischer
cccb0d4064 ui: assert that ProgressPrinter interface is implemented 2021-09-12 15:26:40 +02:00
Michael Eischer
a144c986f2 backup: Reenable JSON status updates with redirected output
After the refactoring, status updates were no longer printed in quiet
mode or when the output is not an interactive terminal. However, the
JSON output is often piped to e.g. another program. Thus, don't set the
update frequency to 0 in that case. The status updates are still
disabled for backup --quiet.

This also reduces the status update frequency to 60fps compared to a
potentially much higher value before the refactoring.
2021-09-12 15:26:40 +02:00
Michael Eischer
d62bfed65d ui: move SetDryRun to ProgressReporter 2021-09-12 15:25:58 +02:00
Michael Eischer
77b129ec74 ui: remove unused Summary field 2021-09-12 15:25:58 +02:00
Michael Eischer
3024239e40 ui/json: restore summary output 2021-09-12 15:25:58 +02:00
Michael Eischer
5ccf583b8a ui: restore a few comments 2021-09-12 15:25:58 +02:00
Michael Eischer
80cbaf6d38 ui: Simplify ReportTotal methods 2021-09-12 15:25:58 +02:00
Dan Willoughby
448419990c Refactor backup progress
Move the shared logic into the progress

Allows logic to be shared with the forthcoming restore status
2021-09-12 15:25:58 +02:00
Andy
7baa9a570d document AWS_PROFILE support
Since restic (or whatever library it is using) seems to respect/use AWS_PROFILE, it's worth documenting this.
2021-09-10 14:06:46 +01:00
Alexander Neumann
bf9c8771a4 Merge pull request #3482 from MichaelEischer/changelog-3429
Add changelog for #3246 and #3429
2021-09-08 09:15:40 +02:00
Michael Eischer
5e84f38f31 Add changelog for #3246 2021-09-07 21:18:11 +02:00
Alexander Neumann
8fe122d675 Merge pull request #3488 from MichaelEischer/rebuild-broken-index
Fix `rebuild-index` for damaged index
2021-09-07 17:00:23 +02:00
Alexander Neumann
74c47f1f12 Merge pull request #3502 from restic/rawtaz-issue-519
doc: Add note about scheduling to backup section
2021-09-07 16:58:17 +02:00
MichaelEischer
fa5ca8af81 Merge pull request #2856 from aawsome/remove-readahead
Simplify cache logic
2021-09-04 20:25:49 +02:00
MichaelEischer
b45d88e124 Merge pull request #3496 from juergenhoetzel/imporove-snapshot-filter-warning-message
Improve snapshot filter warning message
2021-09-03 23:42:27 +02:00
MichaelEischer
bc4cbd775b Merge pull request #2880 from aawsome/enhance-recover
Improve recover command
2021-09-03 23:40:43 +02:00
MichaelEischer
a29777f467 Merge pull request #3501 from greatroar/printprogress
Streamline progress printing in cmd/restic
2021-09-03 23:34:36 +02:00
Alexander Weiss
bce87922c0 Improve recover
- only save directories not referenced by any snapshot
- don't write an empty snapshot
- use a progress bar
2021-09-03 21:36:57 +02:00
Alexander Weiss
81876d5c1b Simplify cache logic 2021-09-03 21:01:00 +02:00
greatroar
7f0aa49f45 cmd/restic: Streamline progress printing
* PrintProgress no longer does unnecessary Sprintf calls, and performs
  fewer allocations in general
* newProgressMax's callback checks whether the terminal supports
  line updates once instead of once per call
* the callback looks up the terminal width once per call instead of
  twice (on Windows)
* the status shortening now uses the Unicode-aware version from
  internal/ui/termstatus (future-proofing)
2021-09-03 11:48:22 +02:00
greatroar
5aaa3e93c1 internal/ui/termstatus: Optimize and publish Truncate
name               old time/op  new time/op  delta
TruncateASCII-8     347ns ± 1%    69ns ± 1%  -80.02%  (p=0.000 n=9+10)
TruncateUnicode-8   447ns ± 3%   348ns ± 1%  -22.04%  (p=0.000 n=10+10)
2021-09-03 11:48:22 +02:00
MichaelEischer
ec2e3b260e Merge pull request #3499 from greatroar/wrappedconn-pointer
rclone: Return one fewer value from run
2021-08-31 21:48:36 +02:00
rawtaz
26914abe62 doc: Add note about scheduling to backup section
Explains that restic doesn't have built-in scheduling
and mentions a few keywords one can search for.
2021-08-29 22:03:22 +02:00
greatroar
950b818274 rclone: Return one fewer value from run 2021-08-26 18:12:08 +02:00
Juergen Hoetzel
defe19fdf6 Quote snapshot prefix in error string 2021-08-25 16:11:28 +02:00
Juergen Hoetzel
409e4936af Improve snapshot filter warning message
Include the root cause of why the snapshot prefix is ignored.
2021-08-25 15:46:21 +02:00
MichaelEischer
10b39d7591 Merge pull request #3487 from andreaso/rest-proto-readme-link
Fix REST protocol link in README file
2021-08-22 18:47:32 +02:00
Michael Eischer
194ed19557 Add changelog 2021-08-22 18:29:58 +02:00
Michael Eischer
877fc9f352 rebuild-index: test that invalid indexes are skipped and removed 2021-08-22 18:24:19 +02:00
Michael Eischer
64258a2c2a rebuild-index: Ignore broken index files
Previously, a truncated index, for example, could prevent rebuild-index
from working.
2021-08-22 18:23:47 +02:00
Andreas Olsson
c520672982 Fix REST protocol link in README file
1. All other document links go to the _Read the Docs_ site
2. The GitHub reStructuredText renderer doesn't appear to do includes,
   making for a rather empty read.
2021-08-22 13:16:51 +02:00
Alexander Neumann
9374c3ce81 Merge pull request #3485 from lbausch/fix_typo
Fix typo in changelog
2021-08-21 13:16:20 +02:00
Lorenz Bausch
4d56b34096 Fix typo in changelog 2021-08-21 12:16:04 +02:00
Alexander Neumann
66382b2861 Update Go to 1.17 2021-08-17 21:38:34 +02:00
Michael Eischer
1fab5892b5 Add changelog for #3429 2021-08-16 17:42:16 +02:00
MichaelEischer
c898f7a6bf Merge pull request #3479 from BUFU1610/patch-1
fixed --keep-within-X options in example
2021-08-15 20:06:26 +02:00
BUFU
7659790923 fixed --keep-within-X options in example
The order of the words was wrong in the example; it is now fixed to --keep-within-hourly instead of --keep-hourly-within.
2021-08-15 19:56:22 +02:00
MichaelEischer
ecf34783ef Merge pull request #3480 from MichaelEischer/fix-rest-tests
Fix rest backend tests
2021-08-15 18:59:02 +02:00
Michael Eischer
68370feeee backends: Remove TestSaveFilenames test
Filenames are expected to match the sha256 sum of the file content. This
rule is now enforced by the rest server, thus making this test useless.
2021-08-15 18:24:16 +02:00
Michael Eischer
574c83e47f rest: Fix test to use paths which are the sha256 sum of the data 2021-08-15 18:19:43 +02:00
Michael Eischer
e6a5801155 rest: Fix test backend url
The rest config normally uses prepareURL to sanitize URLs and ensures
that the URL ends with a slash. However, the test used a URL without a
trailing slash, which, after the rest server changes, causes test
failures.
2021-08-15 18:16:17 +02:00
Alexander Neumann
d90efd7704 Fix test 2021-08-09 10:30:10 +02:00
Alexander Neumann
9fe5a87785 Merge pull request #3429 from MichaelEischer/safe-key-change
key: Check that new key works before deleting the old one
2021-08-09 10:07:15 +02:00
Alexander Neumann
7f1608dc77 Merge pull request #3246 from restic/content-hash-for-upload
Calculate content hashes during upload
2021-08-08 17:24:08 +02:00
Michael Eischer
f4c5dec05d backend: test that a wrong hash fails an upload 2021-08-04 22:17:46 +02:00
Michael Eischer
7c1903e1ee panic if hash returns an error
Add a sanity check that the interface contract is honoured.
2021-08-04 22:17:46 +02:00
Michael Eischer
51b7e3119b mem: calculate md5 content hash for uploads
The mem backend is primarily used for testing. This ensures that the
upload hash calculation gets appropriate test coverage.
2021-08-04 22:17:46 +02:00
Michael Eischer
a009b39e4c gs/swift: calculate md5 content hash for upload 2021-08-04 22:17:46 +02:00
Michael Eischer
1d3e99f475 azure: check upload using md5 content hash
For files below 256MB this uses the md5 hash calculated while assembling
the pack file. For larger files the hash for each 100MB part is
calculated on the fly. That hash is also reused as temporary filename.
As restic only uploads encrypted data which includes among others a
random initialization vector, the file hash shouldn't be susceptible to
md5 collision attacks (even though the algorithm is broken).
2021-08-04 22:17:46 +02:00
Michael Eischer
9aa2eff384 Add plumbing to calculate backend specific file hash for upload
This enables the backends to request the calculation of a
backend-specific hash. For the currently supported backends this will
always be MD5. The hash calculation happens as early as possible, for
pack files this is during assembly of the pack file. That way the hash
would even capture corruptions of the temporary pack file on disk.
2021-08-04 22:17:46 +02:00
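
A hedged sketch of the kind of plumbing described above, with hypothetical interface names rather than restic's actual backend API: a backend announces the hash it needs, and the uploader computes it while streaming the data.

    package main

    import (
        "crypto/md5"
        "fmt"
        "hash"
        "io"
        "strings"
    )

    // hasher is implemented by backends that verify uploads with a content hash.
    type hasher interface {
        Hasher() hash.Hash // nil means the backend needs no upload hash
    }

    type s3like struct{}

    func (s3like) Hasher() hash.Hash { return md5.New() }

    // upload streams rd to the backend (a discard writer stands in for it here)
    // and computes the backend-specific hash on the fly while reading.
    func upload(be hasher, rd io.Reader) ([]byte, error) {
        h := be.Hasher()
        if h != nil {
            rd = io.TeeReader(rd, h)
        }
        if _, err := io.Copy(io.Discard, rd); err != nil {
            return nil, err
        }
        if h == nil {
            return nil, nil
        }
        return h.Sum(nil), nil
    }

    func main() {
        sum, err := upload(s3like{}, strings.NewReader("pack file contents"))
        if err != nil {
            panic(err)
        }
        fmt.Printf("md5: %x\n", sum)
    }
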
Michael Eischer
ee2f14eaf0 s3: enable content hash calculation for uploads 2021-08-04 22:12:12 +02:00
MichaelEischer
553ea36ca6 Merge pull request #2838 from greatroar/cache-conflicts
Make cache concurrency-safe
2021-08-04 22:11:50 +02:00
greatroar
6586e90acf Modernize internal/cache error handling 2021-08-04 22:02:42 +02:00
greatroar
ea04f40eb3 Save cached files to a temporary location first 2021-08-04 22:02:42 +02:00
greatroar
f9b6f8fd45 Replace duplicate type checking in cache with a function 2021-08-04 22:02:42 +02:00
MichaelEischer
1b1a2115fa Merge pull request #3436 from greatroar/local-save-tmp
Save files under temporary name in local backend
2021-08-04 22:01:53 +02:00
Michael Eischer
65908647e3 Add changelog 2021-08-04 21:51:53 +02:00
greatroar
81e2499d19 Sync directory to get durable writes in local backend 2021-08-04 21:51:53 +02:00
greatroar
195a5cf996 Save files under temporary name in local backend
Fixes #3435.
2021-08-04 21:51:53 +02:00
MichaelEischer
bc97a3d1f9 Merge pull request #3300 from aawsome/backup-dryrun
backup: add --dry-run/-n flag
2021-08-04 21:50:38 +02:00
Michael Eischer
702cff636f Add use case to changelog 2021-08-04 21:19:29 +02:00
Alexander Weiss
780e11b7e2 Adapt changelog 2021-08-04 21:19:29 +02:00
erin
4126435663 resolve rawtaz's review comments
apply the majority of suggestions from the review by @rawtaz verbatim, with one clarification on my part in the changelog
2021-08-04 21:19:29 +02:00
Alexander Weiss
d107a2cfdf Separate dry run tests 2021-08-04 21:19:29 +02:00
Alexander Weiss
38a8a48a25 Simplify dry run backend 2021-08-04 21:19:29 +02:00
Ryan Hitchman
77bf148460 backup: add --dry-run/-n flag to show what would happen.
This can be used to check how large a backup is or validate exclusions.
It does not actually write any data to the underlying backend. This is
implemented as a simple overlay backend that accepts writes without
forwarding them, passes through reads, and generally does the minimum
necessary to pretend that progress is actually happening.

Fixes #1542

Example usage:

$ restic -vv --dry-run . | grep add
new       /changelog/unreleased/issue-1542, saved in 0.000s (350 B added)
modified  /cmd/restic/cmd_backup.go, saved in 0.000s (16.543 KiB added)
modified  /cmd/restic/global.go, saved in 0.000s (0 B added)
new       /internal/backend/dry/dry_backend_test.go, saved in 0.000s (3.866 KiB added)
new       /internal/backend/dry/dry_backend.go, saved in 0.000s (3.744 KiB added)
modified  /internal/backend/test/tests.go, saved in 0.000s (0 B added)
modified  /internal/repository/repository.go, saved in 0.000s (20.707 KiB added)
modified  /internal/ui/backup.go, saved in 0.000s (9.110 KiB added)
modified  /internal/ui/jsonstatus/status.go, saved in 0.001s (11.055 KiB added)
modified  /restic, saved in 0.131s (25.542 MiB added)
Would add to the repo: 25.892 MiB
2021-08-04 21:19:29 +02:00
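
The overlay idea can be sketched as follows (hypothetical interface, not restic's internal backend API): writes are accepted and dropped, reads are passed through to the real backend.

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    type backend interface {
        Save(name string, rd io.Reader) error
        Load(name string) (io.ReadCloser, error)
    }

    // dryBackend accepts writes without forwarding them and passes reads through.
    type dryBackend struct{ b backend }

    func (d dryBackend) Save(name string, rd io.Reader) error {
        _, err := io.Copy(io.Discard, rd) // consume the data, but write nothing
        return err
    }

    func (d dryBackend) Load(name string) (io.ReadCloser, error) {
        return d.b.Load(name)
    }

    // memBackend is a trivial in-memory stand-in for a real backend.
    type memBackend map[string]string

    func (m memBackend) Save(name string, rd io.Reader) error { return nil }
    func (m memBackend) Load(name string) (io.ReadCloser, error) {
        return io.NopCloser(strings.NewReader(m[name])), nil
    }

    func main() {
        var be backend = dryBackend{b: memBackend{"config": "repository config"}}

        _ = be.Save("data/abc", strings.NewReader("new pack file")) // dropped
        rc, _ := be.Load("config")                                  // passed through
        data, _ := io.ReadAll(rc)
        fmt.Println(string(data))
    }
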
MichaelEischer
533ac4fd95 Merge pull request #3467 from greatroar/wrappedconn-pointer
Use rclone.wrappedConn by pointer
2021-08-04 21:18:53 +02:00
Alexander Neumann
7049f1cbfc Set development version for 0.12.1 2021-08-03 11:45:39 +02:00
Alexander Neumann
dc7a8aab24 Add version for 0.12.1 2021-08-03 11:45:36 +02:00
Alexander Neumann
94983a1f36 Update manpages and auto-completion 2021-08-03 11:45:36 +02:00
Alexander Neumann
a92faca10e Generate CHANGELOG.md for 0.12.1 2021-08-03 11:45:19 +02:00
Alexander Neumann
b19cd8c50f Prepare changelog for 0.12.1 2021-08-03 11:45:19 +02:00
Alexander Neumann
b862732318 Merge pull request #3468 from restic/rework-changelog
Reword changelog entries
2021-08-03 10:06:56 +02:00
Leo R. Lundgren
cb844e7136 Polish changelog entries 2021-08-03 00:01:09 +02:00
Alexander Neumann
b7fe1fe6b4 Reword changelog entries 2021-08-03 00:01:09 +02:00
MichaelEischer
c98bbdcdbe Merge pull request #3457 from systemmonkey42/feature/untagged
Feature to match untagged snapshots only when listing or forgetting
2021-08-02 23:19:39 +02:00
David le Blanc
326fefcd80 Allow --tag and --keep-tag to match untagged snapshots 2021-08-02 23:06:20 +02:00
greatroar
fa3eed1998 Use rclone.wrappedConn by pointer
This shaves a kilobyte off the Linux binary by not generating a
non-pointer interface implementation.
2021-08-01 09:11:50 +02:00
MichaelEischer
5571c3f7fd Merge pull request #3453 from MichaelEischer/http2-zero-length-workaround
rest: Workaround Http2 zero-length reply bug
2021-07-31 20:30:06 +02:00
Alexander Neumann
d8ea10db8c rest: Rework handling HTTP2 zero-length replies bug
Add comment that the check is based on the stdlib HTTP2 client. Refactor
the checks into a function. Return an error if the value in the
Content-Length header cannot be parsed.
2021-07-31 17:12:24 +02:00
Alexander Neumann
77551597b2 Merge pull request #3416 from torfason/keep-hourly-within
Keep hourly within
2021-07-30 10:36:41 +02:00
Alexander Neumann
92f293cd0b Merge pull request #3427 from MichaelEischer/find-packs-from-index
find: List missing pack files based on the index
2021-07-30 10:33:02 +02:00
Magnus Thor Torfason
2081bd12fb forget: Ensure future snapshots do not affect --keep-within-*
Ensure that only snapshots made in the past are taken into account when running restic forget with the within switches (--keep-within, --keep-within-hourly, and friends).
2021-07-24 16:14:43 +00:00
Magnus Thor Torfason
74ebc650ab forget: Add --keep-within-hourly (and friends)
Allow keeping hourly/daily/weekly/monthly/yearly snapshots for a given time period.

This adds the following flags/parameters to restic forget:
  --keep-within-hourly duration
  --keep-within-daily duration
  --keep-within-weekly duration
  --keep-within-monthly duration
  --keep-within-yearly duration

Includes following changes:
  - Add tests for --keep-within-hourly (and friends)
  - Add documentation for --keep-within-hourly (and friends)
  - Add changelog for --keep-within-hourly (and friends)
2021-07-24 16:14:43 +00:00
Alexander Neumann
c707d71b72 Merge pull request #3401 from MichaelEischer/goroutine-shutdown-cleanups
Goroutine shutdown cleanups
2021-07-11 16:32:28 +02:00
Alexander Neumann
691866ce43 Merge pull request #3452 from MichaelEischer/add-s390x-releases
Add release binaries for linux/s390x
2021-07-11 16:30:25 +02:00
Alexander Neumann
efd918c59e Merge pull request #3454 from MichaelEischer/update-dependencies
Update dependencies
2021-07-11 16:29:31 +02:00
Michael Eischer
7d28006e2e Add changelog 2021-07-10 22:39:01 +02:00
Michael Eischer
0880afe67b Use our generate command instead of cobra's complete command 2021-07-10 19:44:18 +02:00
Michael Eischer
100baf74c0 Update cobra
The most noteworthy change seems to be

https://github.com/spf13/cobra/pull/1192
Have Cobra create 'completion' command automatically
2021-07-10 19:43:14 +02:00
Michael Eischer
c733ae6b16 Readd and update indirect dependencies
The azure-sdk-for-go is the only remaining module without a go.mod file.
Thus we only need indirect dependencies for it. Remove all other
indirect dependencies.
2021-07-10 19:21:43 +02:00
Michael Eischer
989b398fee Misc library updates
The most noteworthy change is the xattr update, which includes
https://github.com/pkg/xattr/pull/54, adding xattr support for Solaris
and illumos.
2021-07-10 19:04:47 +02:00
Michael Eischer
bbc8146934 Update minio/sha256-simd
Apparently the standard Go sha256 implementation is now faster than the
assembly implementation. The library now only adds support for SHA
extensions available in some processors.

See https://github.com/minio/sha256-simd/pull/57 for more details.
2021-07-10 18:34:16 +02:00
Michael Eischer
aa22ebac69 Update backend dependencies
Possibly interesting changes:

https://github.com/Azure/azure-sdk-for-go/pull/14521
Retry on HTTP client errors

https://github.com/minio/minio-go/pull/1452
fix: make sure getObject returns error on truncated responses

f5854403a9
http2: close Transport connection on write errors
2021-07-10 18:27:13 +02:00
Michael Eischer
097ed659b2 rest: test that zero-length replies over HTTP2 work correctly
The first test function ensures that the workaround works as expected.
The second test function is intended to fail as soon as the issue
has been fixed in golang to allow us to eventually remove the
workaround.
2021-07-10 17:22:42 +02:00
Michael Eischer
185a55026b rest: workaround for HTTP2 zero-length replies bug
The golang http client does not return an error when an HTTP2 reply
includes a non-zero content length but does not return any data at all.
This scenario can occur, e.g. when using rclone, if a file stored in a
backend seems to be accessible but then fails to download.
2021-07-10 16:59:01 +02:00
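
A rough sketch of the kind of check the workaround performs (illustrative only, not the rest backend's code): a response that announces a non-zero Content-Length but delivers an empty body is turned into an error.

    package main

    import (
        "errors"
        "fmt"
        "io"
        "net/http"
        "strconv"
        "strings"
    )

    var errEmptyBody = errors.New("empty response body despite non-zero Content-Length")

    // readBody returns an error if the response announces data via Content-Length
    // but the body turns out to be empty, instead of silently returning no data.
    func readBody(resp *http.Response) ([]byte, error) {
        data, err := io.ReadAll(resp.Body)
        if err != nil {
            return nil, err
        }
        cl := resp.Header.Get("Content-Length")
        if cl == "" {
            return data, nil
        }
        want, err := strconv.ParseInt(cl, 10, 64)
        if err != nil {
            return nil, err
        }
        if want > 0 && len(data) == 0 {
            return nil, errEmptyBody
        }
        return data, nil
    }

    func main() {
        resp := &http.Response{
            Header: http.Header{"Content-Length": []string{"42"}},
            Body:   io.NopCloser(strings.NewReader("")),
        }
        _, err := readBody(resp)
        fmt.Println(err)
    }
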
Michael Eischer
495831d53c add release binaries for linux/s390x 2021-07-10 00:10:24 +02:00
Michael Eischer
3442dc87fb find: Address review comments 2021-07-06 21:04:34 +02:00
Michael Eischer
a81f34ae47 Add changelog 2021-07-06 21:04:34 +02:00
Michael Eischer
95b44490a0 find: search blob ids for pack in index if pack is missing
If a pack file is missing, try to determine the contained blob ids based
on the repository index. This helps with assessing the damage to a
repository before running `rebuild-index`.
2021-07-06 21:04:34 +02:00
Michael Eischer
3caab3c7ac find: Print not found pack files 2021-07-06 21:04:34 +02:00
Michael Eischer
40745b4f82 find: stop file listing after resolving all pack files 2021-07-06 21:04:34 +02:00
Michael Eischer
6c01078f3d find: support resolving multiple pack ids to blobs
Just passing the list of blobs to packsToBlobs would also work in most
cases; however, it could cause unexpected results when multiple pack
files have the same prefix. Forget found prefixes to prevent this.
2021-07-06 21:04:34 +02:00
Leo R. Lundgren
790294dc26 contributing: Change freenode to libera 2021-07-04 11:05:36 +02:00
Alexander Neumann
30d968b0e4 Merge pull request #3449 from MichaelEischer/fix-restore-retries
restore: Correctly handle partial pack download errors
2021-07-04 10:55:09 +02:00
Michael Eischer
43b82d69b4 Add changelog 2021-06-29 21:27:00 +02:00
Michael Eischer
bd316d3893 restore: Test partial pack downloads in filerestorer 2021-06-29 21:11:30 +02:00
Michael Eischer
e8bbb05328 restore: Correctly handle partial pack download errors
Failed pack/blob downloads should be retried. For blobs that fail
decryption, assume that the pack file is really damaged and try to
restore the remaining blobs.
2021-06-29 20:54:16 +02:00
rawtaz
58be5172ff Merge pull request #3437 from MichaelEischer/fix-prune-output
prune: Add missing newlines in error descriptions
2021-06-28 00:56:51 +02:00
MichaelEischer
cb6fd281a0 Merge pull request #3442 from restic/rawtaz-doc-verbose
doc: Correct position of --verbose in backup docs
2021-06-27 18:19:37 +02:00
rawtaz
eb61de7b3a Merge pull request #3287 from tjrana/clean-up-pr-template
Clean up PR template
2021-06-25 21:06:50 +02:00
rawtaz
98a88b483d doc: Correct position of --verbose in backup docs 2021-06-25 19:26:46 +02:00
MichaelEischer
6a4c1ed50d Merge pull request #3438 from MichaelEischer/fast-fuse-changelog
Add changelog for #3426
2021-06-20 15:05:19 +02:00
Michael Eischer
409306db2b Add changelog for #3426 2021-06-20 14:46:37 +02:00
Michael Eischer
aad8864835 prune: Add missing newlines in error descriptions 2021-06-20 14:25:40 +02:00
MichaelEischer
c1eb7ac1a1 Merge pull request #3420 from greatroar/local-errs
Modernize error handling in local backend
2021-06-20 14:20:40 +02:00
greatroar
e5f0f67ba0 Modernize error handling in local backend
* Stop prepending the operation name: it's already part of os.PathError,
  leading to repetitive errors like "Chmod: chmod /foo/bar: operation not
  permitted".

* Use errors.Is to check for specific errors.
2021-06-18 11:13:27 +02:00
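
A brief illustration of the errors.Is point above: os functions already wrap failures in *os.PathError, so callers can test for the underlying condition without string matching or extra prefixes.

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        _, err := os.Open("/does/not/exist")

        // The *os.PathError already names the operation and path.
        fmt.Println(err)

        // errors.Is checks for the specific condition without string matching.
        fmt.Println(errors.Is(err, fs.ErrNotExist)) // true
    }
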
Alexander Neumann
45eb30388f Merge pull request #3426 from MichaelEischer/fast-fuse
mount: enable fuse readahead
2021-06-14 10:27:41 +02:00
Michael Eischer
454b6d608e key: Add test that failed key saves don't break the repository 2021-06-13 13:46:48 +02:00
TJ Rana
a61a0255a8 Modify pull request template 2021-06-12 19:34:46 -04:00
Michael Eischer
6add186867 key: Check that a new key file actually works 2021-06-12 23:09:19 +02:00
MichaelEischer
a476752962 Merge pull request #3421 from greatroar/s3-fileinfo
Return s3.fileInfos by pointer
2021-06-12 18:55:18 +02:00
MichaelEischer
e8d20ea32c Merge pull request #3409 from greatroar/lchown-mknod
Make restic.{lchown,mknod} regular functions
2021-06-12 18:22:38 +02:00
Michael Eischer
fe43f53528 mount: enable fuse readahead
Apparently readahead was disabled by default. Enable readahead with the
Linux default size of 128kB. Larger values seem to have no effect.
This can speed up reading from the fuse mount by a factor of at least 5.

Speedup for a 1G random file stored in a local repository:
(Only one result shown, but times were quite stable; restic was restarted
after each command)
$ dd if=/dev/urandom bs=1M count=1024 of=rand
$ shasum -a 256 tmp/rand
75dd9b374e712577d64672a05b8ceee40dfc45dce6321082d2c2fd51d60c6c2d  tmp/rand

before: $ time shasum -a 256 fuse/snapshots/latest/tmp/rand
75dd9b374e712577d64672a05b8ceee40dfc45dce6321082d2c2fd51d60c6c2d  fuse/snapshots/latest/tmp/rand

real    0m18.294s
user    0m4.522s
sys     0m3.305s

before: $ time cat fuse/snapshots/latest/tmp/rand > /dev/null
real    0m14.924s
user    0m0.000s
sys     0m4.625s

after:  $ time shasum -a 256 fuse/snapshots/latest/tmp/rand
75dd9b374e712577d64672a05b8ceee40dfc45dce6321082d2c2fd51d60c6c2d  fuse/snapshots/latest/tmp/rand

real    0m6.106s
user    0m3.115s
sys     0m0.182s

after:  $ time cat fuse/snapshots/latest/tmp/rand > /dev/null
real    0m3.096s
user    0m0.017s
sys     0m0.241s
2021-06-12 17:07:30 +02:00
greatroar
0d4f16b6ba Return s3.fileInfos by pointer
Since the fileInfos are returned in a []interface, they're already
allocated on the heap. Making them pointers explicitly means the
compiler doesn't need to generate fileInfo and *fileInfo versions of the
methods on this type. The binary becomes about 7KiB smaller on
Linux/amd64.
2021-06-07 19:48:43 +02:00
greatroar
0666c4d244 Make restic.{lchown,mknod} regular functions
This way, they can be inlined and dead code can be removed on Windows.
Also fixed some comments.
2021-05-27 22:51:40 +02:00
rawtaz
fdbd65485e Merge pull request #3402 from MichaelEischer/misc-fixes
Various small code cleanups
2021-05-24 11:30:31 +02:00
rawtaz
2daf033156 Merge pull request #3403 from MichaelEischer/fast-cat
cat: only load index if really necessary
2021-05-24 11:02:10 +02:00
MichaelEischer
5dad45f005 Merge pull request #3312 from soraxas/master
Bump cobra and add completions for fish
2021-05-23 13:58:27 +02:00
MichaelEischer
7eb6372123 Merge pull request #3257 from greatroar/ls-json-empty
ls: print "size":0 for empty files in JSON
2021-05-23 13:49:56 +02:00
Michael Eischer
61b368ddea cat: only load index if really necessary 2021-05-23 13:11:55 +02:00
Michael Eischer
fd8bce8184 backup: cleanly shutdown goroutines on error 2021-05-23 13:02:44 +02:00
greatroar
d7322a5f36 restic ls --json: print "size":0 for empty files 2021-05-21 21:06:00 +02:00
Tin Lai
9cc1ecdd45 bump cobra and add completions for fish
Signed-off-by: Tin Lai <oscar@tinyiu.com>
2021-05-21 13:47:52 +10:00
Alexander Neumann
af3de702c7 Merge pull request #3332 from restic/debug-1999
Merge `debug examine` to salvage damaged pack files
2021-05-18 09:38:40 +02:00
Alexander Neumann
226cd8d4d1 Merge pull request #3310 from MichaelEischer/copy-unstable-tree
`copy` raw tree blobs
2021-05-18 09:36:51 +02:00
Alexander Neumann
4cabad8c34 Merge pull request #3325 from MichaelEischer/fix-mintty-output
Fix windows terminal output for mintty
2021-05-18 09:29:24 +02:00
Michael Eischer
cf92c58460 Properly wrap errors in readerat helper 2021-05-17 21:08:23 +02:00
Michael Eischer
5767c65c62 delete: properly close fileChan if context is canceled 2021-05-17 21:05:54 +02:00
Michael Eischer
75c990504d azure/gs: Fix default value in connections help text 2021-05-17 20:56:51 +02:00
Michael Eischer
5a87a0ba0a find: use Str() to format short ids 2021-05-17 20:56:49 +02:00
Alexander Neumann
38ccddc84f Merge pull request #3399 from oli-h/oli-h-patch-060_forget.rst
Sudden find: fix within first "forget" example
2021-05-17 11:39:36 +02:00
Alexander Neumann
d7b5061aa5 Merge pull request #3394 from MichaelEischer/apple-m1-build
Build darwin/arm64 binaries for Apple M1
2021-05-17 09:56:44 +02:00
rawtaz
27141ae87f Merge pull request #2999 from restic/issue-2109
backup: Improve wording for --one-file-system description
2021-05-15 00:23:36 +02:00
Leo R. Lundgren
90d75651e6 backup: Improve wording for --one-file-system description 2021-05-15 00:06:27 +02:00
rawtaz
2a915069a8 Merge pull request #3393 from MichaelEischer/fix-filter-crash
filter: Fix crash for '**' pattern
2021-05-14 23:58:44 +02:00
Michael Eischer
55bea6e7a6 filter: Fix crash for '**' pattern 2021-05-14 23:50:31 +02:00
Michael Eischer
af6f6fba15 Build darwin/arm64 binaries for Apple M1 2021-05-14 23:04:45 +02:00
MichaelEischer
6d8ceefd67 Merge pull request #3373 from greatroar/simplify-limiter
Simplify internal/limiter
2021-05-14 20:27:31 +02:00
MichaelEischer
7349c6d338 Merge pull request #3167 from renard/limit-snapshots-list
Add option to limit snapshots list
2021-05-13 20:26:33 +02:00
Sébastien Gross
2a92b68e65 cmd/snapshots: Add option to limit snapshots list
This patch adds a `--latest` option to limit the snapshots list to the
last n snapshots. It is very similar to the `--last` option but does not
limit the output to one entry. It also deprecates the `--last` flag
in favor of `--latest 1`.

Output example:

    $ restic snapshots --latest 2
    repository 0d3eb989 opened successfully, password is correct
    ID        Time                 Host        Tags        Paths
    ------------------------------------------------------------
    5a33bdcc  2020-12-14 12:30:00  local                   /home
    73887d8e  2020-12-15 12:30:00  local                   /home
    ------------------------------------------------------------
    2 snapshots

Signed-off-by: Sébastien Gross <seb•ɑƬ•chezwam•ɖɵʈ•org>
2021-05-13 20:18:23 +02:00
MichaelEischer
64b00d28b1 Merge pull request #3345 from greatroar/sftp-enospc
Check for ENOSPC and remove broken files in SFTP
2021-05-13 20:09:38 +02:00
oli-h
23f9cb838d Sudden find: fix within first "forget" example
I think this was a typo or copy/paste problem when the doc was created.
Feel free to merge. Cheers
2021-05-10 08:55:42 +02:00
rawtaz
01261770bb Merge pull request #3386 from adrian5/doc-change
doc: Add direct link to GitHub issue
2021-05-09 00:12:21 +02:00
adrian5
a0f1c74000 doc: Add direct link to GitHub issue 2021-05-08 21:13:11 +00:00
Alexander Neumann
be6fc02c04 Merge pull request #3376 from restic/rawtaz-doc-exclude
doc: Polish exclude file documentation
2021-04-27 19:25:00 +02:00
Alexander Neumann
3ce5544796 Merge pull request #3321 from restic/doc-files-from
doc: Improve docs for --files-from et al
2021-04-27 19:15:31 +02:00
rawtaz
556caa326f doc: Polish exclude file documentation 2021-04-25 20:36:11 +02:00
greatroar
ae170e2b38 Simplify internal/limiter 2021-04-24 11:54:43 +02:00
rawtaz
f7316cea07 Merge pull request #3371 from cyounkins/max_unused_default
doc: default for --max-unused
2021-04-23 03:18:05 +02:00
Craig Younkins
32a84ab3e4 doc: default for --max-unused 2021-04-22 21:12:37 -04:00
rawtaz
6c7eabf08c Merge pull request #3363 from davegallant/design-doc-grammar-fix
doc: Fix grammar in design.rst
2021-04-15 11:01:38 +02:00
Dave Gallant
4aaf10c356 doc: Fix grammar in design.rst 2021-04-14 17:25:53 -04:00
Michael Eischer
f65e1211d9 Add changelog entry 2021-04-11 20:02:09 +02:00
Michael Eischer
7cb8ea69ba Add test to mintty pipe detection 2021-04-11 20:02:09 +02:00
Michael Eischer
80564a9bc9 Properly detect mintty output redirection
mintty on Windows always uses pipes to connect stdout between processes
and for the terminal output. The previous implementation always assumed
that stdout connected to a pipe means that stdout is displayed on a
mintty terminal. However, this detection breaks when pipes are used to
connect processes, and for PowerShell, which uses pipes when redirecting
output to a file.

Now the pipe filename is queried and matched against the pattern used by
msys / cygwin when connected to the terminal. In all other cases assume
that a pipe is just a regular pipe.
2021-04-11 20:02:09 +02:00
Michael Eischer
5e6af77b7a Unify interactive terminal detection code
Previously the progress bar / status update interval used
stdoutIsTerminal to determine whether it is possible to update the
progress bar or not. However, its implementation differed from the
detection within the backup command which included additional checks to
detect the presence of mintty on Windows. mintty behaves like a terminal
but uses pipes for communication.

This adds stdoutCanUpdateStatus() which calls the same terminal detection
code used by backup. This ensures that all commands consistently switch
between interactive and non-interactive terminal mode.

stdoutIsTerminal() now also returns true whenever stdoutCanUpdateStatus()
does so. This is required to properly handle the special case of mintty.
2021-04-11 20:02:09 +02:00
MichaelEischer
cc254dfefe Merge pull request #3362 from greatroar/darwin-preallocate
Use FcntlFstore to preallocate on Mac
2021-04-10 22:47:56 +02:00
greatroar
23531be272 Use FcntlFstore to preallocate on Mac 2021-04-10 16:54:07 +02:00
Alexander Neumann
b922fc851b Merge pull request #3356 from restic/rawtaz-doc-dollar-sign
doc: Clarify dollar sign expansion in exclude files
2021-04-09 12:30:54 +02:00
rawtaz
ccfd5f1d4a Merge pull request #3298 from jniggemann/patch-1
doc: Add note about bash completion path
2021-04-09 00:17:17 +02:00
Jan Niggemann
9a1f685179 doc: Add note about bash completion path 2021-04-08 23:53:38 +02:00
rawtaz
b5e40b370c doc: Clarify dollar sign expansion in exclude files 2021-04-08 23:46:53 +02:00
rawtaz
74c0607c92 Merge pull request #3319 from MichaelEischer/skip-prealloc-test
restorer: Skip preallocate test if not supported by the filesystem
2021-04-07 18:59:06 +02:00
Leo R. Lundgren
5861bb031c doc: Improve docs for --files-from et al 2021-04-07 18:31:46 +02:00
rawtaz
c2569ff923 Merge pull request #3354 from greatroar/filechange-docs
docs: on Windows, the filesize must match for "unchanged"
2021-04-07 15:29:46 +02:00
greatroar
ecbe7f3a99 Backup docs: on Windows, the filesize must match for "unchanged" 2021-04-04 17:01:48 +02:00
Michael Eischer
88731d8c28 Add changelog 2021-03-27 17:08:35 +01:00
greatroar
dc88ca79b6 Handle lack of space and remove broken files in SFTP backend 2021-03-27 15:02:14 +01:00
MichaelEischer
efb10b3c40 Merge pull request #3347 from jazcap53/fix_ambiguous_warning
Change ambiguous Warning message
2021-03-27 12:54:07 +01:00
MichaelEischer
d456437ad1 Merge pull request #3343 from hluup/3334
Display 'created new cache in ' message only if output is terminal
2021-03-27 12:43:37 +01:00
MichaelEischer
4c61825249 Merge pull request #3335 from cyounkins/patch-1
prune --max-unused does not limit metadata repacking
2021-03-27 12:40:35 +01:00
Andrew Jarcho
9f44129c2f Change ambiguous Warning message
Fixes #3346

On branch fix_ambiguous_warning
Changes to be committed:
    modified:   cmd_backup.go
2021-03-24 10:44:47 -04:00
Hendrik Luup
5592c17e4a Display 'created new cache in ' message only if output is terminal 2021-03-22 11:31:08 +02:00
Craig Younkins
aa0a7b78a8 doc: prune --max-unused unlimited will still repack metadata
Adding minor clarification to documentation of `prune --max-unused unlimited` to indicate metadata will still be repacked. The referenced PR indicates `max-repack-size` CAN limit metadata repacking.

Ref: https://forum.restic.net/t/max-unused-unlimited-still-repacks/3661
Ref: https://github.com/restic/restic/pull/3228
2021-03-21 13:19:09 -04:00
Alexander Neumann
88a23521dd Merge pull request #3327 from MichaelEischer/fix-s3-sanity-check
s3: Fix sanity check
2021-03-11 13:13:46 +01:00
MichaelEischer
e678acafcf Merge pull request #3331 from greatroar/upgrade-sftp
Upgrade pkg/sftp to 1.13
2021-03-10 22:27:32 +01:00
Michael Eischer
54d58edacc debug: fix usage for examine command 2021-03-10 22:22:33 +01:00
Michael Eischer
5975ed61f3 debug: fix linter warning 2021-03-10 21:22:53 +01:00
Michael Eischer
dc62ec5933 debug: use Printf/Warnf for output 2021-03-10 21:20:21 +01:00
Michael Eischer
547d9b384d debug: cleanup repair code 2021-03-10 21:15:38 +01:00
greatroar
187a77fb27 Upgrade pkg/sftp to 1.13 and simplify SFTP.IsNotExist 2021-03-10 21:05:24 +01:00
Michael Eischer
fa7b9d5dfe debug: Cleanup pack size checks 2021-03-10 20:57:14 +01:00
Michael Eischer
6774fc6454 debug: check arguments and cleanup error handling 2021-03-10 20:43:22 +01:00
Michael Eischer
096f15db5c debug: extract examinePack function 2021-03-10 20:21:05 +01:00
Michael Eischer
84491ff40b debug: store repaired and correct blobs 2021-03-10 20:03:44 +01:00
Alexander Neumann
b3c3121622 debug: Save raw decrypt (disregarding signature) 2021-03-10 20:03:44 +01:00
Alexander Neumann
ce4b6d0874 examine: add byte repair mode 2021-03-10 20:03:44 +01:00
Alexander Neumann
52061e817c debug: fix percentage 2021-03-10 20:03:44 +01:00
Alexander Neumann
133ac42a0b debug: check file content hash 2021-03-10 20:03:44 +01:00
Alexander Neumann
90f975fa1c debug: make output less verbose 2021-03-10 20:03:44 +01:00
Alexander Neumann
086993bae1 debug: check packs not in index, implement repair 2021-03-10 20:03:44 +01:00
Alexander Neumann
d6f78163d4 Add 'debug examine' command to debug #1999 2021-03-10 20:03:44 +01:00
MichaelEischer
c9b4fadd91 Merge pull request #3294 from Achilleshiel/fix-copy-repofile
Add RepositoryFile2 Option for secondary repository
2021-03-08 22:50:31 +01:00
Michael Eischer
da458a55db Cleanup comments in secondary repo test 2021-03-08 22:41:13 +01:00
Wouter Horlings
9ccdba9df6 Add repositoryFile2 option
The `init` and `copy` commands can now use `--repository-file2` flag and
the `$RESTIC_REPOSITORY_FILE2` environment variable.

This also fixes the conflict between the `--repository-file` and `--repo2`
flags.

Tests are added for the initSecondaryGlobalOpts function.

This adds a NOK function to the test helper functions. NOK tests whether
err is not nil, and otherwise fails the test.

With the NOK function a couple of sad paths are tested in the
initSecondaryGlobalOpts function.

In total, the tests check whether the following are passed correctly:
   - Password
   - PasswordFile
   - Repo
   - RepositoryFile

The following situations must return an error to pass the test:
   - neither Repo nor RepositoryFile defined
   - both Repo and RepositoryFile defined
2021-03-08 22:41:13 +01:00
MichaelEischer
a0f9d73d44 Merge pull request #3305 from juergenhoetzel/terminal-race-condition
backup: In case of error also Wait() for terminal goroutine
2021-03-08 22:20:18 +01:00
Michael Eischer
427781928f copy: test that trees with unstable json encoding are properly copied 2021-03-08 22:16:48 +01:00
Michael Eischer
2fc7abac35 copy: copy raw bytes of tree blobs
This avoids problems if the JSON encoding changes for some reason.
It also ensures forward compatibility with future restic versions
which might, for example, add new fields to the tree metadata.
2021-03-08 22:16:48 +01:00
Michael Eischer
f9c581f219 Tweak changelog 2021-03-08 22:11:34 +01:00
Juergen Hoetzel
18fccb5995 backup: In case of error also Wait() for terminal goroutine
This prevents a race condition where the final summary message can get lost.
2021-03-08 22:11:34 +01:00
Michael Eischer
2a9f0f19b6 s3: Fix sanity check
The sanity check shouldn't replace the error message if there is already
one.
2021-03-08 20:23:57 +01:00
rawtaz
d686fa25de Merge pull request #3323 from restic/doc-setcap
doc: Clarify setcap applying only to current binary
2021-03-08 19:32:11 +01:00
rawtaz
f000f41c91 doc: Clarify setcap applying only to current binary 2021-03-08 19:21:06 +01:00
MichaelEischer
64fe733fa0 Merge pull request #3320 from MichaelEischer/fix-ci
Update to Go 1.16 and pin go version used for golangcil-lint
2021-03-08 19:18:10 +01:00
Michael Eischer
781378a65e Bump go version to 1.16 2021-03-04 23:24:26 +01:00
Michael Eischer
64a9272c9a go mod tidy for go 1.16 2021-03-04 23:18:53 +01:00
Michael Eischer
ee89e33f12 CI: Pin go version used for golint to 1.15.x
This prevents unexpected lint failures when a new go version is
released.
2021-03-04 23:12:48 +01:00
Michael Eischer
f0e9068ef2 go mod tidy for go 1.16 2021-03-04 22:56:08 +01:00
Michael Eischer
f70aca6f6f restorer: Skip preallocate test if not supported by the filesystem 2021-03-04 20:33:46 +01:00
MichaelEischer
814a399e4c Merge pull request #3283 from dennypage/master
Treat an empty password as a fatal error for repository init.
2021-02-28 00:49:21 +01:00
Michael Eischer
13730e3844 Add link to pr to changelog 2021-02-28 00:40:52 +01:00
Denny Page
0f41e99ea7 Treat an empty password as a fatal error for repository init. 2021-02-28 00:40:00 +01:00
MichaelEischer
6712c6de73 Merge pull request #3309 from MichaelEischer/fix-random-check-crash
check: Fix crash of --read-data-subset=x% on empty repository
2021-02-27 16:24:56 +01:00
Michael Eischer
9e852af5be check: Fix crash of --read-data-subset=x% on empty repository
Rounding up to at least one pack file should only be done if at least a
pack file exists.
2021-02-27 16:05:36 +01:00
MichaelEischer
4baebdc6f5 Merge pull request #3308 from MichaelEischer/fix-not-a-directory
local: Ignore files in intermediate folders
2021-02-27 15:34:36 +01:00
Michael Eischer
a293fd9aef local: Ignore files in intermediate folders
For example, the data folder of a repository might contain hidden files,
which caused list operations to fail.
2021-02-27 13:47:55 +01:00
TJ Rana
91e8f0e486 Clean up writing style 2021-02-16 22:19:11 -05:00
TJ Rana
d7d562b287 Clarify checklist items 2021-02-16 22:11:22 -05:00
TJ Rana
76e1ba6fd0 Clean up pull request template 2021-02-16 22:03:33 -05:00
rawtaz
8eb6a5805b Merge pull request #3286 from MichaelEischer/fix-quiet-backup
backup: Correctly handle --quiet flag
2021-02-15 22:35:04 +01:00
Michael Eischer
65bd2a9a49 backup: Correctly handle --quiet flag
The quiet flag changes the backup output to assume a non-interactive
terminal. However, the output progress interval was not set to 0 by
default.
2021-02-15 22:14:58 +01:00
Alexander Neumann
12f0ccc237 helpers: Also push versioned image to Docker 2021-02-15 20:08:40 +01:00
Alexander Neumann
bb53fcfc0d Set development version for 0.12.0 2021-02-14 11:44:24 +01:00
Peter Albrecht
cd25e36811 Add PGP fingerprint to 020_installation.rst
I like the idea of verifying the integrity of applications I download from the internet, so I was very happy to see that restic provides SHA256 checksums which are signed with the maintainer's PGP key.

The only thing I missed: I could not find a direct way to download the PGP key used and verify the key's fingerprint.

Doing some searches, I found:
* https://github.com/restic/rest-server/issues/121
* https://restic.net/blog/2015-09-16/verifying-code-archive-integrity/

To help other restic users, I think you should add information about your PGP key/fingerprint to this installation doc, too. To save you some precious time, I created a draft of how this doc might be expanded in this pull request. You are free to accept it or change the text to your liking.

I copied the key/fingerprint text from: ``restic/restic/master/doc/090_participating.rst``

Thank you for your work in restic!
2020-12-13 17:00:00 +01:00
279 changed files with 10475 additions and 4162 deletions

12
.dockerignore Normal file
View File

@@ -0,0 +1,12 @@
# Folders
.git/
.github/
changelog/
doc/
docker/
helpers/
# Files
.gitignore
.golangci.yml
*.md

View File

@@ -1,13 +1,7 @@
<!--
Thank you very much for contributing code or documentation to restic! Please
fill out the following questions to make it easier for us to review your
changes.
You do not need to check all the boxes below all at once, feel free to take
your time and add more commits. If you're done and ready for review, please
check the last box.
-->
What does this PR change? What problem does it solve?
@@ -17,8 +11,8 @@ What does this PR change? What problem does it solve?
Describe the changes and their purpose here, as detailed as needed.
-->
Was the change discussed in an issue or in the forum before?
------------------------------------------------------------
Was the change previously discussed in an issue or on the forum?
----------------------------------------------------------------
<!--
Link issues and relevant forum posts here.
@@ -30,11 +24,17 @@ is closed automatically when this PR is merged.
Checklist
---------
- [ ] I have read the [Contribution Guidelines](https://github.com/restic/restic/blob/master/CONTRIBUTING.md#providing-patches)
- [ ] I have enabled [maintainer edits for this PR](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
- [ ] I have added tests for all changes in this PR
- [ ] I have added documentation for the changes (in the manual)
- [ ] There's a new file in `changelog/unreleased/` that describes the changes for our users (template [here](https://github.com/restic/restic/blob/master/changelog/TEMPLATE))
- [ ] I have run `gofmt` on the code in all commits
- [ ] All commit messages are formatted in the same style as [the other commits in the repo](https://github.com/restic/restic/blob/master/CONTRIBUTING.md#git-commits)
- [ ] I'm done, this Pull Request is ready for review
<!--
You do not need to check all the boxes below all at once. Feel free to take
your time and add more commits. If you're done and ready for review, please
check the last box. Enable a checkbox by replacing [ ] with [x].
-->
- [ ] I have read the [contribution guidelines](https://github.com/restic/restic/blob/master/CONTRIBUTING.md#providing-patches).
- [ ] I have [enabled maintainer edits](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/allowing-changes-to-a-pull-request-branch-created-from-a-fork).
- [ ] I have added tests for all code changes.
- [ ] I have added documentation for relevant changes (in the manual).
- [ ] There's a new file in `changelog/unreleased/` that describes the changes for our users (see [template](https://github.com/restic/restic/blob/master/changelog/TEMPLATE)).
- [ ] I have run `gofmt` on the code in all commits.
- [ ] All commit messages are formatted in the same style as [the other commits in the repo](https://github.com/restic/restic/blob/master/CONTRIBUTING.md#git-commits).
- [ ] I'm done! This pull request is ready for review.

View File

@@ -8,6 +8,10 @@ on:
# run tests for all pull requests
pull_request:
env:
latest_go: "1.18.x"
GO111MODULE: on
jobs:
test:
strategy:
@@ -15,30 +19,47 @@ jobs:
# list of jobs to run:
include:
- job_name: Windows
go: 1.15.x
go: 1.18.x
os: windows-latest
install_verb: install
- job_name: macOS
go: 1.15.x
go: 1.18.x
os: macOS-latest
test_fuse: false
install_verb: install
- job_name: Linux
go: 1.15.x
go: 1.18.x
os: ubuntu-latest
test_cloud_backends: true
test_fuse: true
check_changelog: true
install_verb: install
- job_name: Linux
go: 1.17.x
os: ubuntu-latest
test_fuse: true
install_verb: install
- job_name: Linux
go: 1.16.x
os: ubuntu-latest
test_fuse: true
install_verb: get
- job_name: Linux
go: 1.15.x
os: ubuntu-latest
test_fuse: true
install_verb: get
- job_name: Linux
go: 1.14.x
os: ubuntu-latest
test_fuse: true
- job_name: Linux
go: 1.13.x
os: ubuntu-latest
test_fuse: true
install_verb: get
name: ${{ matrix.job_name }} Go ${{ matrix.go }}
runs-on: ${{ matrix.os }}
@@ -55,7 +76,7 @@ jobs:
- name: Get programs (Linux/macOS)
run: |
echo "build Go tools"
go get github.com/restic/rest-server/...
go ${{ matrix.install_verb }} github.com/restic/rest-server/cmd/rest-server@latest
echo "install minio server"
mkdir $HOME/bin
@@ -87,7 +108,7 @@ jobs:
$ProgressPreference = 'SilentlyContinue'
echo "build Go tools"
go get github.com/restic/rest-server/...
go ${{ matrix.install_verb }} github.com/restic/rest-server/...
echo "install minio server"
mkdir $Env:USERPROFILE/bin
@@ -177,7 +198,7 @@ jobs:
- name: Check changelog files with calens
run: |
echo "install calens"
go get github.com/restic/calens
go install github.com/restic/calens@latest
echo "check changelog files"
calens
@@ -190,18 +211,17 @@ jobs:
matrix:
# run cross-compile in two batches parallel so the overall tests run faster
targets:
- "linux/386 linux/amd64 linux/arm linux/arm64 linux/ppc64le linux/mips linux/mipsle linux/mips64 linux/mips64le \
- "linux/386 linux/amd64 linux/arm linux/arm64 linux/ppc64le linux/mips linux/mipsle linux/mips64 linux/mips64le linux/s390x \
openbsd/386 openbsd/amd64"
- "freebsd/386 freebsd/amd64 freebsd/arm \
aix/ppc64 \
darwin/amd64 \
darwin/amd64 darwin/arm64 \
netbsd/386 netbsd/amd64 \
windows/386 windows/amd64 \
solaris/amd64"
env:
go: 1.15.x
GOPROXY: https://proxy.golang.org
runs-on: ubuntu-latest
@@ -209,17 +229,17 @@ jobs:
name: Cross Compile for ${{ matrix.targets }}
steps:
- name: Set up Go ${{ env.go }}
- name: Set up Go ${{ env.latest_go }}
uses: actions/setup-go@v2
with:
go-version: ${{ env.go }}
- name: Check out code
uses: actions/checkout@v2
go-version: ${{ env.latest_go }}
- name: Install gox
run: |
go get github.com/mitchellh/gox
go install github.com/mitchellh/gox@latest
- name: Check out code
uses: actions/checkout@v2
- name: Cross-compile with gox for ${{ matrix.targets }}
env:
@@ -234,6 +254,11 @@ jobs:
name: lint
runs-on: ubuntu-latest
steps:
- name: Set up Go ${{ env.latest_go }}
uses: actions/setup-go@v2
with:
go-version: ${{ env.latest_go }}
- name: Check out code
uses: actions/checkout@v2
@@ -241,10 +266,11 @@ jobs:
uses: golangci/golangci-lint-action@v2
with:
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
version: v1.36
version: v1.45
# Optional: show only new issues if it's a pull request. The default value is `false`.
only-new-issues: true
args: --verbose --timeout 5m
skip-go-installation: true
# only run golangci-lint for pull requests, otherwise ALL hints get
# reported. We need to slowly address all issues until we can enable
@@ -256,3 +282,44 @@ jobs:
echo "check if go.mod and go.sum are up to date"
go mod tidy
git diff --exit-code go.mod go.sum
docker:
name: docker
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v2
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
restic/restic
# generate Docker tags based on the following events/attributes
tags: |
type=schedule
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: false
context: .
file: docker/Dockerfile
pull: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

View File

@@ -1,3 +1,654 @@
Changelog for restic 0.13.0 (2022-03-26)
=======================================
The following sections list the changes in restic 0.13.0 relevant to
restic users. The changes are ordered by importance.
Summary
-------
* Fix #1106: Never lock repository for `list locks`
* Fix #2345: Make cache crash-resistant and usable by multiple concurrent processes
* Fix #2452: Improve error handling of repository locking
* Fix #2738: Don't print progress for `backup --json --quiet`
* Fix #3382: Make `check` command honor `RESTIC_CACHE_DIR` environment variable
* Fix #3518: Make `copy` command honor `--no-lock` for source repository
* Fix #3556: Fix hang with Backblaze B2 on SSL certificate authority error
* Fix #3601: Fix rclone backend prematurely exiting when receiving SIGINT on Windows
* Fix #3667: The `mount` command now reports symlinks sizes
* Fix #3488: `rebuild-index` failed if an index file was damaged
* Fix #3591: Fix handling of `prune --max-repack-size=0`
* Fix #3619: Avoid choosing parent snapshots newer than time of new snapshot
* Chg #3641: Ignore parent snapshot for `backup --stdin`
* Chg #3519: Require Go 1.14 or newer
* Enh #1542: Add `--dry-run`/`-n` option to `backup` command
* Enh #2202: Add upload checksum for Azure, GS, S3 and Swift backends
* Enh #233: Support negative include/exclude patterns
* Enh #2388: Add warning for S3 if partial credentials are provided
* Enh #2508: Support JSON output and quiet mode for the `diff` command
* Enh #2656: Add flag to disable TLS verification for self-signed certificates
* Enh #3003: Atomic uploads for the SFTP backend
* Enh #3127: Add xattr (extended attributes) support for Solaris
* Enh #3464: Skip lock creation on `forget` if `--no-lock` and `--dry-run`
* Enh #3490: Support random subset by size in `check --read-data-subset`
* Enh #3541: Improve handling of temporary B2 delete errors
* Enh #3542: Add file mode in symbolic notation to `ls --json`
* Enh #2594: Speed up the `restore --verify` command
* Enh #2816: The `backup` command no longer updates file access times on Linux
* Enh #2880: Make `recover` collect only unreferenced trees
* Enh #3429: Verify that new or modified keys are stored correctly
* Enh #3436: Improve local backend's resilience to (system) crashes
* Enh #3508: Cache blobs read by the `dump` command
* Enh #3511: Support configurable timeout for the rclone backend
* Enh #3593: Improve `copy` performance by parallelizing IO
Details
-------
* Bugfix #1106: Never lock repository for `list locks`
The `list locks` command previously locked the repository by default. This had the problem
that it wouldn't work for an exclusively locked repository and that the command would also
display its own lock file, which could be confusing.
Now, the `list locks` command never locks the repository.
https://github.com/restic/restic/issues/1106
https://github.com/restic/restic/pull/3665
* Bugfix #2345: Make cache crash-resistant and usable by multiple concurrent processes
The restic cache directory (`RESTIC_CACHE_DIR`) could end up in a broken state in the event of
restic (or the OS) crashing. This is now less likely to occur as files are downloaded to a
temporary location before being moved to their proper location.
This also allows multiple concurrent restic processes to operate on a single repository
without conflicts. Previously, concurrent operations could cause segfaults because the
processes saw each other's partially downloaded files.
https://github.com/restic/restic/issues/2345
https://github.com/restic/restic/pull/2838
* Bugfix #2452: Improve error handling of repository locking
Previously, when the lock refresh failed to delete the old lock file, it forgot about the newly
created one. Instead it continued trying to delete the old (usually no longer existing) lock
file and thus over time lots of lock files accumulated. This has now been fixed.
https://github.com/restic/restic/issues/2452
https://github.com/restic/restic/issues/2473
https://github.com/restic/restic/issues/2562
https://github.com/restic/restic/pull/3512
* Bugfix #2738: Don't print progress for `backup --json --quiet`
Unlike the text output, the `--json` output format still printed progress information even in
`--quiet` mode. This has now been fixed by always disabling the progress output in quiet mode.
https://github.com/restic/restic/issues/2738
https://github.com/restic/restic/pull/3264
* Bugfix #3382: Make `check` command honor `RESTIC_CACHE_DIR` environment variable
Previously, the `check` command didn't honor the `RESTIC_CACHE_DIR` environment variable,
which caused problems in certain system/usage configurations. This has now been fixed.
https://github.com/restic/restic/issues/3382
https://github.com/restic/restic/pull/3474
* Bugfix #3518: Make `copy` command honor `--no-lock` for source repository
The `copy` command previously did not respect the `--no-lock` option for the source
repository, causing failures with read-only storage backends. This has now been fixed such
that the option is now respected.
https://github.com/restic/restic/issues/3518
https://github.com/restic/restic/pull/3589
* Bugfix #3556: Fix hang with Backblaze B2 on SSL certificate authority error
Previously, if a request failed with an SSL unknown certificate authority error, the B2
backend retried indefinitely and restic would appear to hang.
This has now been fixed and restic instead fails with an error message.
https://github.com/restic/restic/issues/3556
https://github.com/restic/restic/issues/2355
https://github.com/restic/restic/pull/3571
* Bugfix #3601: Fix rclone backend prematurely exiting when receiving SIGINT on Windows
Previously, pressing Ctrl+C in a Windows console where restic was running with rclone as the
backend would cause rclone to exit prematurely due to getting a `SIGINT` signal at the same time
as restic. Restic would then wait for a long time and eventually fail with "unexpected EOF" and
"rclone stdio connection already closed" errors.
This has now been fixed by restic starting the rclone process detached from the console restic
runs in (similar to starting processes in a new process group on Linux), which enables restic to
gracefully clean up rclone (which now never gets the `SIGINT`).
https://github.com/restic/restic/issues/3601
https://github.com/restic/restic/pull/3602
* Bugfix #3667: The `mount` command now reports symlinks sizes
Symlinks used to have size zero in restic mountpoints, confusing some third-party tools. They
now have a size equal to the byte length of their target path, as required by POSIX.
https://github.com/restic/restic/issues/3667
https://github.com/restic/restic/pull/3668
* Bugfix #3488: `rebuild-index` failed if an index file was damaged
Previously, the `rebuild-index` command would fail with an error if an index file was damaged
or truncated. This has now been fixed.
On older restic versions, a (slow) workaround is to use `rebuild-index --read-all-packs` or
to manually delete the damaged index.
https://github.com/restic/restic/pull/3488
* Bugfix #3591: Fix handling of `prune --max-repack-size=0`
Restic ignored the `--max-repack-size` option when passing a value of 0. This has now been
fixed.
As a workaround, `--max-repack-size=1` can be used with older versions of restic.
https://github.com/restic/restic/pull/3591
* Bugfix #3619: Avoid choosing parent snapshots newer than time of new snapshot
The `backup` command, when a `--parent` was not provided, previously chose the most recent
matching snapshot as the parent snapshot. However, this didn't make sense when the user passed
`--time` to create a new snapshot older than the most recent snapshot.
Instead, `backup` now chooses the most recent snapshot which is not newer than the
snapshot-being-created's timestamp, to avoid any time travel.
https://github.com/restic/restic/pull/3619
* Change #3641: Ignore parent snapshot for `backup --stdin`
Restic uses a parent snapshot to speed up directory scanning when performing backups, but this
only wastes time and memory when the backup source is stdin (using the `--stdin` option of the
`backup` command), since no directory scanning is performed in this case.
Snapshots made with `backup --stdin` no longer have a parent snapshot, which allows restic to
skip some startup operations and saves a bit of resources.
The `--parent` option is still available for `backup --stdin`, but is now ignored.
https://github.com/restic/restic/issues/3641
https://github.com/restic/restic/pull/3645
* Change #3519: Require Go 1.14 or newer
Restic now requires Go 1.14 to build. This allows it to use new standard library features
instead of an external dependency.
https://github.com/restic/restic/issues/3519
* Enhancement #1542: Add `--dry-run`/`-n` option to `backup` command
Testing exclude filters and other configuration options was error prone as wrong filters
could cause files to be uploaded unintentionally. It was also not possible to estimate
beforehand how much data would be uploaded.
The `backup` command now has a `--dry-run`/`-n` option, which performs all the normal steps of
a backup without actually writing anything to the repository.
Passing -vv will log information about files that would be added, allowing for verification of
source and exclusion options before running the real backup.
https://github.com/restic/restic/issues/1542
https://github.com/restic/restic/pull/2308
https://github.com/restic/restic/pull/3210
https://github.com/restic/restic/pull/3300
* Enhancement #2202: Add upload checksum for Azure, GS, S3 and Swift backends
Previously only the B2 and partially the Swift backends verified the integrity of uploaded
(encrypted) files. The verification works by informing the backend about the expected hash of
the uploaded file. The backend then verifies the upload and thereby rules out any data
corruption during upload.
We have now added upload checksums for the Azure, GS, S3 and Swift backends, which besides
integrity checking for uploads also means that restic can now be used to store backups in S3
buckets which have Object Lock enabled.
https://github.com/restic/restic/issues/2202
https://github.com/restic/restic/issues/2700
https://github.com/restic/restic/issues/3023
https://github.com/restic/restic/pull/3246
* Enhancement #233: Support negative include/exclude patterns
If a pattern starts with an exclamation mark and it matches a file that was previously matched by
a regular pattern, the match is cancelled. Notably, this can be used with `--exclude-file` to
cancel the exclusion of some files.
It works similarly to `.gitignore`, with the same limitation: once a directory is excluded, it
is not possible to include files inside the directory.
Example of use as an exclude pattern for the `backup` command:
    $HOME/**/*
    !$HOME/Documents
    !$HOME/code
    !$HOME/.emacs.d
    !$HOME/games
    # [...]
    node_modules
    *~
    *.o
    *.lo
    *.pyc
    # [...]
    $HOME/code/linux/*
    !$HOME/code/linux/.git
    # [...]
https://github.com/restic/restic/issues/233
https://github.com/restic/restic/pull/2311
* Enhancement #2388: Add warning for S3 if partial credentials are provided
Previously restic did not notify about incomplete credentials when using the S3 backend,
instead just reporting access denied.
Restic now checks that both the AWS key ID and secret environment variables are set before
connecting to the remote server, and reports an error if not.
https://github.com/restic/restic/issues/2388
https://github.com/restic/restic/pull/3532
* Enhancement #2508: Support JSON output and quiet mode for the `diff` command
The `diff` command now supports outputting machine-readable output in JSON format. To enable
this, pass the `--json` option to the command. To only print the summary and suppress detailed
output, pass the `--quiet` option.
https://github.com/restic/restic/issues/2508
https://github.com/restic/restic/pull/3592
* Enhancement #2656: Add flag to disable TLS verification for self-signed certificates
There is now an `--insecure-tls` global option in restic, which disables TLS verification for
self-signed certificates in order to support some development workflows.
https://github.com/restic/restic/issues/2656
https://github.com/restic/restic/pull/2657
* Enhancement #3003: Atomic uploads for the SFTP backend
The SFTP backend did not upload files atomically. An interrupted upload could leave an
incomplete file behind which could prevent restic from accessing the repository. This has now
been fixed and uploads in the SFTP backend are done atomically.
https://github.com/restic/restic/issues/3003
https://github.com/restic/restic/pull/3524
* Enhancement #3127: Add xattr (extended attributes) support for Solaris
Restic now supports xattr for the Solaris operating system.
https://github.com/restic/restic/issues/3127
https://github.com/restic/restic/pull/3628
* Enhancement #3464: Skip lock creation on `forget` if `--no-lock` and `--dry-run`
Restic used to silently ignore the `--no-lock` option of the `forget` command.
It now skips creation of lock file in case both `--dry-run` and `--no-lock` are specified. If
`--no-lock` option is specified without `--dry-run`, restic prints a warning message to
stderr.
https://github.com/restic/restic/issues/3464
https://github.com/restic/restic/pull/3623
* Enhancement #3490: Support random subset by size in `check --read-data-subset`
The `--read-data-subset` option of the `check` command now supports a third way of specifying
the subset to check, namely `nS` where `n` is a size in bytes with suffix `S` as k/K, m/M, g/G or
t/T.
https://github.com/restic/restic/issues/3490
https://github.com/restic/restic/pull/3548
* Enhancement #3541: Improve handling of temporary B2 delete errors
Deleting files on B2 could sometimes fail temporarily, which required restic to retry the
delete operation. In some cases the file was deleted nevertheless, causing the retries and
ultimately the restic command to fail. This has now been fixed.
https://github.com/restic/restic/issues/3541
https://github.com/restic/restic/pull/3544
* Enhancement #3542: Add file mode in symbolic notation to `ls --json`
The `ls --json` command now provides the file mode in symbolic notation (using the
`permissions` key), aligned with `find --json`.
https://github.com/restic/restic/issues/3542
https://github.com/restic/restic/pull/3573
https://forum.restic.net/t/restic-ls-understanding-file-mode-with-json/4371
* Enhancement #2594: Speed up the `restore --verify` command
The `--verify` option lets the `restore` command verify the file content after it has restored
a snapshot. The performance of this operation has now been improved by up to a factor of two.
https://github.com/restic/restic/pull/2594
* Enhancement #2816: The `backup` command no longer updates file access times on Linux
When reading files during backup, restic used to cause the operating system to update the
files' access times. Note that this did not apply to filesystems with disabled file access
times.
Restic now instructs the operating system not to update the file access time, if the user
running restic is the file owner or has root permissions.
https://github.com/restic/restic/pull/2816
* Enhancement #2880: Make `recover` collect only unreferenced trees
Previously, the `recover` command used to generate a snapshot containing *all* root trees,
even those which were already referenced by a snapshot.
This has been improved such that it now only processes trees not already referenced by any
snapshot.
https://github.com/restic/restic/pull/2880
* Enhancement #3429: Verify that new or modified keys are stored correctly
When adding a new key or changing the password of a key, restic used to just create the new key (and
remove the old one, when changing the password). There was no verification that the new key was
stored correctly and works properly. As the repository cannot be decrypted without a valid key
file, this could in rare cases cause the repository to become inaccessible.
Restic now checks that new key files actually work before continuing. This can protect against
some (rare) cases of hardware or storage problems.
https://github.com/restic/restic/pull/3429
* Enhancement #3436: Improve local backend's resilience to (system) crashes
Restic now ensures that files stored using the `local` backend are created atomically (that
is, files are either stored completely or not at all). This ensures that no incomplete files are
left behind even if restic is terminated while writing a file.
In addition, restic now tries to ensure that the directory in the repository which contains a
newly uploaded file is also written to disk. This can prevent missing files if the system
crashes or the disk is not properly unmounted.
https://github.com/restic/restic/pull/3436
* Enhancement #3508: Cache blobs read by the `dump` command
When dumping a file using the `dump` command, restic did not cache blobs in any way, so even
consecutive runs of the same blob were loaded from the repository again and again, slowing down
the dump.
Now, the caching mechanism already used by the `fuse` command is also used by the `dump`
command. This makes dumping much faster, especially for sparse files.
https://github.com/restic/restic/pull/3508
* Enhancement #3511: Support configurable timeout for the rclone backend
A slow rclone backend could cause restic to time out while waiting for the repository to open.
Restic now offers an `-o rclone.timeout` option to make this timeout configurable.
https://github.com/restic/restic/issues/3511
https://github.com/restic/restic/pull/3514
* Enhancement #3593: Improve `copy` performance by parallelizing IO
Restic copy previously only used a single thread for copying blobs between repositories,
which resulted in limited performance when copying small blobs to/from a high latency backend
(i.e. any remote backend, especially b2).
Copying will now use 8 parallel threads to increase the throughput of the copy operation.
https://github.com/restic/restic/pull/3593
Changelog for restic 0.12.1 (2021-08-03)
=======================================
The following sections list the changes in restic 0.12.1 relevant to
restic users. The changes are ordered by importance.
Summary
-------
* Fix #2742: Improve error handling for rclone and REST backend over HTTP2
* Fix #3111: Fix terminal output redirection for PowerShell
* Fix #3214: Treat an empty password as a fatal error for repository init
* Fix #3267: `copy` failed to copy snapshots in rare cases
* Fix #3184: `backup --quiet` no longer prints status information
* Fix #3296: Fix crash of `check --read-data-subset=x%` run for an empty repository
* Fix #3302: Fix `fdopendir: not a directory` error for local backend
* Fix #3334: Print `created new cache` message only on a terminal
* Fix #3380: Fix crash of `backup --exclude='**'`
* Fix #3305: Fix possibly missing backup summary of JSON output in case of error
* Fix #3439: Correctly handle download errors during `restore`
* Chg #3247: Empty files now have size of 0 in `ls --json` output
* Enh #2780: Add release binaries for s390x architecture on Linux
* Enh #3293: Add `--repository-file2` option to `init` and `copy` command
* Enh #3312: Add auto-completion support for fish
* Enh #3336: SFTP backend now checks for disk space
* Enh #3377: Add release binaries for Apple Silicon
* Enh #3414: Add `--keep-within-hourly` option to restic forget
* Enh #3456: Support filtering and specifying untagged snapshots
* Enh #3167: Allow specifying limit of `snapshots` list
* Enh #3426: Optimize read performance of mount command
* Enh #3427: `find --pack` fallback to index if data file is missing
Details
-------
* Bugfix #2742: Improve error handling for rclone and REST backend over HTTP2
When retrieving data from the rclone / REST backend while also using HTTP2 restic did not detect
when no data was returned at all. This could cause for example the `check` command to report the
following error:
Pack ID does not match, want [...], got e3b0c442
This has been fixed by correctly detecting and retrying the incomplete download.
https://github.com/restic/restic/issues/2742
https://github.com/restic/restic/pull/3453
https://forum.restic.net/t/http2-stream-closed-connection-reset-context-canceled/3743/10
* Bugfix #3111: Fix terminal output redirection for PowerShell
When redirecting the output of restic using PowerShell on Windows, the output contained
terminal escape characters. This has been fixed by properly detecting the terminal type.
In addition, the mintty terminal now shows progress output for the backup command.
https://github.com/restic/restic/issues/3111
https://github.com/restic/restic/pull/3325
* Bugfix #3214: Treat an empty password as a fatal error for repository init
When attempting to initialize a new repository, if an empty password was supplied, the
repository would be created but the init command would return an error with a stack trace. Now,
if an empty password is provided, it is treated as a fatal error, and no repository is created.
https://github.com/restic/restic/issues/3214
https://github.com/restic/restic/pull/3283
* Bugfix #3267: `copy` failed to copy snapshots in rare cases
The `copy` command could in rare cases fail with the error message `SaveTree(...) returned
unexpected id ...`. This has been fixed.
On Linux/BSDs, the error could be caused by backing up symlinks with non-UTF-8 target paths.
Note that, due to limitations in the repository format, these are not stored properly and
should be avoided if possible.
https://github.com/restic/restic/issues/3267
https://github.com/restic/restic/pull/3310
* Bugfix #3184: `backup --quiet` no longer prints status information
A regression in the latest restic version caused the output of `backup --quiet` to contain
large amounts of backup progress information when run using an interactive terminal. This is
fixed now.
A workaround for this bug is to run restic as follows: `restic backup --quiet [..] | cat -`.
https://github.com/restic/restic/issues/3184
https://github.com/restic/restic/pull/3186
* Bugfix #3296: Fix crash of `check --read-data-subset=x%` run for an empty repository
The command `restic check --read-data-subset=x%` crashed when run for an empty repository.
This has been fixed.
https://github.com/restic/restic/issues/3296
https://github.com/restic/restic/pull/3309
* Bugfix #3302: Fix `fdopendir: not a directory` error for local backend
The `check`, `list packs`, `prune` and `rebuild-index` commands failed for the local backend
when the `data` folder in the repository contained files. This has been fixed.
https://github.com/restic/restic/issues/3302
https://github.com/restic/restic/pull/3308
* Bugfix #3334: Print `created new cache` message only on a terminal
The message `created new cache` was printed even when the output wasn't a terminal. That broke
piping `restic dump` output to tar or zip if the cache directory didn't exist. The message is now
only printed on a terminal.
https://github.com/restic/restic/issues/3334
https://github.com/restic/restic/pull/3343
* Bugfix #3380: Fix crash of `backup --exclude='**'`
The exclude filter `**`, which excludes all files, caused restic to crash. This has been
corrected.
https://github.com/restic/restic/issues/3380
https://github.com/restic/restic/pull/3393
* Bugfix #3305: Fix possibly missing backup summary of JSON output in case of error
When using `--json` output, the summary output was sometimes missing when an error occurred.
This has been fixed.
https://github.com/restic/restic/pull/3305
* Bugfix #3439: Correctly handle download errors during `restore`
Due to a regression in restic 0.12.0, the `restore` command in some cases did not retry download
errors and only printed a warning. This has been fixed by retrying incomplete data downloads.
https://github.com/restic/restic/issues/3439
https://github.com/restic/restic/pull/3449
* Change #3247: Empty files now have size of 0 in `ls --json` output
The `ls --json` command used to omit the sizes of empty files in its output. It now reports a size
of zero explicitly for regular files, while omitting the size field for all other types.
https://github.com/restic/restic/issues/3247
https://github.com/restic/restic/pull/3257
* Enhancement #2780: Add release binaries for s390x architecture on Linux
We've added release binaries for Linux using the s390x architecture.
https://github.com/restic/restic/issues/2780
https://github.com/restic/restic/pull/3452
* Enhancement #3293: Add `--repository-file2` option to `init` and `copy` command
The `init` and `copy` command can now be used with the `--repository-file2` option or the
`$RESTIC_REPOSITORY_FILE2` environment variable. These two options are in addition to the
`--repo2` flag and allow you to read the destination repository from a file.
Using both `--repository-file` and `--repo2` options resulted in an error for the `copy` or
`init` command. The handling of this combination of options has been fixed. A workaround for
this issue is to only use `--repo` or `-r` and `--repo2` for `init` or `copy`.
https://github.com/restic/restic/issues/3293
https://github.com/restic/restic/pull/3294
* Enhancement #3312: Add auto-completion support for fish
The `generate` command now supports fish auto completion.
https://github.com/restic/restic/pull/3312
* Enhancement #3336: SFTP backend now checks for disk space
Backing up over SFTP previously spewed multiple generic "failure" messages when the remote
disk was full. It now checks for disk space before writing a file and fails immediately with a "no
space left on device" message.
https://github.com/restic/restic/issues/3336
https://github.com/restic/restic/pull/3345
* Enhancement #3377: Add release binaries for Apple Silicon
We've added release binaries for macOS on Apple Silicon (M1).
https://github.com/restic/restic/issues/3377
https://github.com/restic/restic/pull/3394
* Enhancement #3414: Add `--keep-within-hourly` option to restic forget
The `forget` command allowed keeping a given number of hourly backups or to keep all backups
within a given interval, but it was not possible to specify keeping hourly backups within a
given interval.
The new `--keep-within-hourly` option now offers this functionality. Similar options for
daily/weekly/monthly/yearly are also implemented, the new options are:
    --keep-within-hourly <1y2m3d4h>
    --keep-within-daily <1y2m3d4h>
    --keep-within-weekly <1y2m3d4h>
    --keep-within-monthly <1y2m3d4h>
    --keep-within-yearly <1y2m3d4h>
https://github.com/restic/restic/issues/3414
https://github.com/restic/restic/pull/3416
https://forum.restic.net/t/forget-policy/4014/11
* Enhancement #3456: Support filtering and specifying untagged snapshots
It was previously not possible to specify an empty tag with the `--tag` and `--keep-tag`
options. This has now been fixed, such that `--tag ''` and `--keep-tag ''` now matches
snapshots without tags. This allows e.g. the `snapshots` and `forget` commands to only
operate on untagged snapshots.
https://github.com/restic/restic/issues/3456
https://github.com/restic/restic/pull/3457
* Enhancement #3167: Allow specifying limit of `snapshots` list
The `--last` option allowed limiting the output of the `snapshots` command to the latest
snapshot for each host. The new `--latest n` option allows limiting the output to the latest `n`
snapshots.
This change deprecates the option `--last` in favour of `--latest 1`.
https://github.com/restic/restic/pull/3167
* Enhancement #3426: Optimize read performance of mount command
Reading large files in a mounted repository may be up to five times faster. This improvement
primarily applies to repositories stored at a backend that can be accessed with low latency,
like e.g. the local backend.
https://github.com/restic/restic/pull/3426
* Enhancement #3427: `find --pack` fallback to index if data file is missing
When investigating a repository with missing data files, it might be useful to determine
affected snapshots before running `rebuild-index`. Previously, `find --pack pack-id`
returned no data as it required accessing the data file. Now, if the necessary data is still
available in the repository index, it gets retrieved from there.
The command now also supports looking up multiple pack files in a single `find` run.
https://github.com/restic/restic/pull/3427
https://forum.restic.net/t/missing-packs-not-found/2600
Changelog for restic 0.12.0 (2021-02-14)
=======================================
@@ -2837,7 +3488,7 @@ Summary
* Enh #1055: Create subdirs below `data/` for local/sftp backends
* Enh #1067: Allow loading credentials for s3 from IAM
* Enh #1073: Add `migrate` cmd to migrate from `s3legacy` to `default` layout
* Enh #1081: Clarify semantic for `--tasg` for the `forget` command
* Enh #1081: Clarify semantic for `--tag` for the `forget` command
* Enh #1080: Ignore chmod() errors on filesystems which do not support it
* Enh #1082: Print stats on SIGINFO on Darwin and FreeBSD (ctrl+t)
@@ -2881,7 +3532,7 @@ Details
https://github.com/restic/restic/issues/1073
https://github.com/restic/restic/pull/1075
* Enhancement #1081: Clarify semantic for `--tasg` for the `forget` command
* Enhancement #1081: Clarify semantic for `--tag` for the `forget` command
https://github.com/restic/restic/issues/1081
https://github.com/restic/restic/pull/1090

View File

@@ -13,11 +13,10 @@ bug fixes are most welcome. However even "minor" details as fixing spelling
errors, improving documentation or pointing out usability issues are a great
help also.
The restic project uses the GitHub infrastructure (see the
[project page](https://github.com/restic/restic)) for all related discussions
as well as the [forum](https://forum.restic.net/) and the `#restic` channel
on [irc.freenode.net](https://kiwiirc.com/nextclient/irc.freenode.net/restic).
on [irc.libera.chat](https://kiwiirc.com/nextclient/#ircs://irc.libera.chat:6697/#restic).
If you want to find an area that currently needs improving have a look at the
open issues listed at the
@@ -67,7 +66,7 @@ Development Environment
The repository contains the code written for restic in the directories
`cmd/` and `internal/`.
Restic requires Go version 1.13 or later for compiling. Clone the repo (without
Restic requires Go version 1.14 or later for compiling. Clone the repo (without
having `$GOPATH` set) and `cd` into the directory:
$ unset GOPATH
@@ -124,7 +123,10 @@ down to the following steps:
writing, ask yourself: If I were the user, what would I need to be aware
of with this change?
8. Once your code looks good and passes all the tests, we'll merge it. Thanks
8. Do not edit the man pages under `doc/man` or `doc/manual_rest.rst` -
these are autogenerated before new releases.
9. Once your code looks good and passes all the tests, we'll merge it. Thanks
a lot for your contribution!
Please provide the patches for each bug or feature in a separate branch and

View File

@@ -46,8 +46,8 @@ Therefore, restic supports the following backends for storing backups natively:
- [Local directory](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#local)
- [sftp server (via SSH)](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#sftp)
- [HTTP REST server](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) ([protocol](doc/100_references.rst#rest-backend), [rest-server](https://github.com/restic/rest-server))
- [AWS S3](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#amazon-s3) (either from Amazon or using the [Minio](https://minio.io) server)
- [HTTP REST server](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) ([protocol](https://restic.readthedocs.io/en/latest/100_references.html#rest-backend), [rest-server](https://github.com/restic/rest-server))
- [Amazon S3](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#amazon-s3) (either from Amazon or using the [Minio](https://minio.io) server)
- [OpenStack Swift](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#openstack-swift)
- [BackBlaze B2](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#backblaze-b2)
- [Microsoft Azure Blob Storage](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#microsoft-azure-blob-storage)

View File

@@ -1 +1 @@
0.12.0
0.13.0

View File

@@ -35,6 +35,7 @@
// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//go:build ignore_build_go
// +build ignore_build_go
package main
@@ -58,7 +59,7 @@ var config = Config{
Main: "./cmd/restic", // package name for the main package
DefaultBuildTags: []string{"selfupdate"}, // specify build tags which are always used
Tests: []string{"./..."}, // tests to run
MinVersion: GoVersion{Major: 1, Minor: 11, Patch: 0}, // minimum Go version supported
MinVersion: GoVersion{Major: 1, Minor: 14, Patch: 0}, // minimum Go version supported
}
// Config configures the build.
@@ -123,17 +124,8 @@ func printEnv(env []string) {
// build runs "go build args..." with GOPATH set to gopath.
func build(cwd string, env map[string]string, args ...string) error {
a := []string{"build"}
// try to remove all absolute paths from resulting binary
if goVersion.AtLeast(GoVersion{1, 13, 0}) {
// use the new flag introduced by Go 1.13
a = append(a, "-trimpath")
} else {
// otherwise try to trim as many paths as possible
a = append(a, "-asmflags", fmt.Sprintf("all=-trimpath=%s", cwd))
a = append(a, "-gcflags", fmt.Sprintf("all=-trimpath=%s", cwd))
}
// -trimpath removes all absolute paths from the binary.
a := []string{"build", "-trimpath"}
if enablePIE {
a = append(a, "-buildmode=pie")

View File

@@ -0,0 +1,13 @@
Bugfix: Improve error handling for rclone and REST backend over HTTP2
When retrieving data from the rclone / REST backend while also using HTTP2
restic did not detect when no data was returned at all. This could cause
for example the `check` command to report the following error:
Pack ID does not match, want [...], got e3b0c442
This has been fixed by correctly detecting and retrying the incomplete download.
https://github.com/restic/restic/issues/2742
https://github.com/restic/restic/pull/3453
https://forum.restic.net/t/http2-stream-closed-connection-reset-context-canceled/3743/10
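For context, `e3b0c442` is the first four bytes of the SHA-256 hash of empty input, which is why a silently empty download shows up under exactly this pack ID. A minimal Go sketch demonstrating the value:

    package main

    import (
    	"crypto/sha256"
    	"fmt"
    )

    func main() {
    	// SHA-256 over zero bytes of input; its hex form starts with e3b0c442.
    	sum := sha256.Sum256(nil)
    	fmt.Printf("%x\n", sum[:4]) // prints: e3b0c442
    }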

View File

@@ -0,0 +1,6 @@
Enhancement: Add release binaries for s390x architecture on Linux
We've added release binaries for Linux using the s390x architecture.
https://github.com/restic/restic/issues/2780
https://github.com/restic/restic/pull/3452

View File

@@ -0,0 +1,11 @@
Bugfix: Fix terminal output redirection for PowerShell
When redirecting the output of restic using PowerShell on Windows, the
output contained terminal escape characters. This has been fixed by
properly detecting the terminal type.
In addition, the mintty terminal now shows progress output for the backup
command.
https://github.com/restic/restic/issues/3111
https://github.com/restic/restic/pull/3325

View File

@@ -0,0 +1,9 @@
Bugfix: Treat an empty password as a fatal error for repository init
When attempting to initialize a new repository, if an empty password was
supplied, the repository would be created but the init command would return
an error with a stack trace. Now, if an empty password is provided, it is
treated as a fatal error, and no repository is created.
https://github.com/restic/restic/issues/3214
https://github.com/restic/restic/pull/3283

View File

@@ -0,0 +1,8 @@
Change: Empty files now have size of 0 in `ls --json` output
The `ls --json` command used to omit the sizes of empty files in its
output. It now reports a size of zero explicitly for regular files,
while omitting the size field for all other types.
https://github.com/restic/restic/issues/3247
https://github.com/restic/restic/pull/3257

View File

@@ -0,0 +1,11 @@
Bugfix: `copy` failed to copy snapshots in rare cases
The `copy` command could in rare cases fail with the error message `SaveTree(...)
returned unexpected id ...`. This has been fixed.
On Linux/BSDs, the error could be caused by backing up symlinks with non-UTF-8
target paths. Note that, due to limitations in the repository format, these are
not stored properly and should be avoided if possible.
https://github.com/restic/restic/issues/3267
https://github.com/restic/restic/pull/3310

View File

@@ -0,0 +1,11 @@
Bugfix: `backup --quiet` no longer prints status information
A regression in the latest restic version caused the output of `backup --quiet`
to contain large amounts of backup progress information when run using an
interactive terminal. This is fixed now.
A workaround for this bug is to run restic as follows:
`restic backup --quiet [..] | cat -`.
https://github.com/restic/restic/issues/3184
https://github.com/restic/restic/pull/3186

View File

@@ -0,0 +1,14 @@
Enhancement: Add `--repository-file2` option to `init` and `copy` command
The `init` and `copy` commands can now be used with the `--repository-file2`
option or the `$RESTIC_REPOSITORY_FILE2` environment variable.
These two options are in addition to the `--repo2` flag and allow you to read
the destination repository from a file.
Using both `--repository-file` and `--repo2` options resulted in an error for
the `copy` or `init` command. The handling of this combination of options has
been fixed. A workaround for this issue is to only use `--repo` or `-r` and
`--repo2` for `init` or `copy`.
https://github.com/restic/restic/issues/3293
https://github.com/restic/restic/pull/3294

View File

@@ -0,0 +1,7 @@
Bugfix: Fix crash of `check --read-data-subset=x%` run for an empty repository
The command `restic check --read-data-subset=x%` crashed when run for an empty
repository. This has been fixed.
https://github.com/restic/restic/issues/3296
https://github.com/restic/restic/pull/3309

View File

@@ -0,0 +1,8 @@
Bugfix: Fix `fdopendir: not a directory` error for local backend
The `check`, `list packs`, `prune` and `rebuild-index` commands failed
for the local backend when the `data` folder in the repository contained
files. This has been fixed.
https://github.com/restic/restic/issues/3302
https://github.com/restic/restic/pull/3308

View File

@@ -0,0 +1,5 @@
Enhancement: Add auto-completion support for fish
The `generate` command now supports fish auto completion.
https://github.com/restic/restic/pull/3312

View File

@@ -0,0 +1,8 @@
Bugfix: Print `created new cache` message only on a terminal
The message `created new cache` was printed even when the output wasn't a
terminal. That broke piping `restic dump` output to tar or zip if the cache
directory didn't exist. The message is now only printed on a terminal.
https://github.com/restic/restic/issues/3334
https://github.com/restic/restic/pull/3343

View File

@@ -0,0 +1,8 @@
Enhancement: SFTP backend now checks for disk space
Backing up over SFTP previously spewed multiple generic "failure" messages
when the remote disk was full. It now checks for disk space before writing
a file and fails immediately with a "no space left on device" message.
https://github.com/restic/restic/issues/3336
https://github.com/restic/restic/pull/3345

View File

@@ -0,0 +1,6 @@
Enhancement: Add release binaries for Apple Silicon
We've added release binaries for macOS on Apple Silicon (M1).
https://github.com/restic/restic/issues/3377
https://github.com/restic/restic/pull/3394

View File

@@ -0,0 +1,7 @@
Bugfix: Fix crash of `backup --exclude='**'`
The exclude filter `**`, which excludes all files, caused restic to crash. This
has been corrected.
https://github.com/restic/restic/issues/3380
https://github.com/restic/restic/pull/3393

View File

@@ -0,0 +1,20 @@
Enhancement: Add `--keep-within-hourly` option to restic forget
The `forget` command allowed keeping a given number of hourly
backups or keeping all backups within a given interval, but it
was not possible to keep only hourly backups within a given
interval.
The new `--keep-within-hourly` option now offers this functionality.
Similar options for daily/weekly/monthly/yearly are also implemented;
the new options are:
--keep-within-hourly <1y2m3d4h>
--keep-within-daily <1y2m3d4h>
--keep-within-weekly <1y2m3d4h>
--keep-within-monthly <1y2m3d4h>
--keep-within-yearly <1y2m3d4h>
https://github.com/restic/restic/issues/3414
https://github.com/restic/restic/pull/3416
https://forum.restic.net/t/forget-policy/4014/11

View File

@@ -0,0 +1,9 @@
Enhancement: Support filtering and specifying untagged snapshots
It was previously not possible to specify an empty tag with the `--tag` and
`--keep-tag` options. This has now been fixed, such that `--tag ''` and
`--keep-tag ''` now match snapshots without tags. This allows e.g. the
`snapshots` and `forget` commands to only operate on untagged snapshots.
https://github.com/restic/restic/issues/3456
https://github.com/restic/restic/pull/3457

View File

@@ -0,0 +1,9 @@
Enhancement: Allow specifying limit of `snapshots` list
The `--last` option allowed limiting the output of the `snapshots`
command to the latest snapshot for each host. The new `--latest n`
option allows limiting the output to the latest `n` snapshots.
This change deprecates the option `--last` in favour of `--latest 1`.
https://github.com/restic/restic/pull/3167

View File

@@ -0,0 +1,6 @@
Bugfix: Fix possibly missing backup summary of JSON output in case of error
When using `--json` output, the summary output was sometimes missing when an
error occurred. This has been fixed.
https://github.com/restic/restic/pull/3305

View File

@@ -0,0 +1,7 @@
Enhancement: Optimize read performance of mount command
Reading large files in a mounted repository may be up to five times faster.
This improvement primarily applies to repositories stored on a backend that can
be accessed with low latency, such as the local backend.
https://github.com/restic/restic/pull/3426

View File

@@ -0,0 +1,13 @@
Enhancement: `find --pack` fallback to index if data file is missing
When investigating a repository with missing data files, it might be useful to
determine affected snapshots before running `rebuild-index`. Previously,
`find --pack pack-id` returned no data as it required accessing the data file.
Now, if the necessary data is still available in the repository index, it gets
retrieved from there.
The command now also supports looking up multiple pack files in a single `find`
run.
https://github.com/restic/restic/pull/3427
https://forum.restic.net/t/missing-packs-not-found/2600

View File

@@ -0,0 +1,8 @@
Bugfix: Correctly handle download errors during `restore`
Due to a regression in restic 0.12.0, the `restore` command in some cases did
not retry download errors and only printed a warning. This has been fixed by
retrying incomplete data downloads.
https://github.com/restic/restic/issues/3439
https://github.com/restic/restic/pull/3449

View File

@@ -0,0 +1,10 @@
Bugfix: Never lock repository for `list locks`
The `list locks` command previously locked the repository by default. This
had the problem that it wouldn't work for an exclusively locked repository and
that the command would also display its own lock file which can be confusing.
Now, the `list locks` command never locks the repository.
https://github.com/restic/restic/issues/1106
https://github.com/restic/restic/pull/3665

View File

@@ -0,0 +1,16 @@
Enhancement: Add `--dry-run`/`-n` option to `backup` command
Testing exclude filters and other configuration options was error prone as
wrong filters could cause files to be uploaded unintentionally. It was also
not possible to estimate beforehand how much data would be uploaded.
The `backup` command now has a `--dry-run`/`-n` option, which performs all the
normal steps of a backup without actually writing anything to the repository.
Passing -vv will log information about files that would be added, allowing for
verification of source and exclusion options before running the real backup.
https://github.com/restic/restic/issues/1542
https://github.com/restic/restic/pull/2308
https://github.com/restic/restic/pull/3210
https://github.com/restic/restic/pull/3300

View File

@@ -0,0 +1,15 @@
Enhancement: Add upload checksum for Azure, GS, S3 and Swift backends
Previously only the B2 and partially the Swift backends verified the integrity
of uploaded (encrypted) files. The verification works by informing the backend
about the expected hash of the uploaded file. The backend then verifies the
upload and thereby rules out any data corruption during upload.
We have now added upload checksums for the Azure, GS, S3 and Swift backends,
which besides integrity checking for uploads also means that restic can now be
used to store backups in S3 buckets which have Object Lock enabled.
https://github.com/restic/restic/issues/2202
https://github.com/restic/restic/issues/2700
https://github.com/restic/restic/issues/3023
https://github.com/restic/restic/pull/3246
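As an illustration of the general mechanism (not the exact backend APIs involved), the client hashes the encrypted file before uploading and passes the expected value along with the request; for S3 this takes the form of the base64-encoded `Content-MD5` header. A minimal Go sketch:

    package upload

    import (
    	"crypto/md5"
    	"encoding/base64"
    )

    // contentMD5 computes the base64-encoded MD5 checksum of an already
    // encrypted blob, as expected by S3's Content-MD5 header. Other backends
    // use different hash algorithms and request fields.
    func contentMD5(encrypted []byte) string {
    	sum := md5.Sum(encrypted)
    	return base64.StdEncoding.EncodeToString(sum[:])
    }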

View File

@@ -0,0 +1,29 @@
Enhancement: Support negative include/exclude patterns
If a pattern starts with an exclamation mark and it matches a file that was
previously matched by a regular pattern, the match is cancelled. Notably,
this can be used with `--exclude-file` to cancel the exclusion of some files.
It works similarly to `.gitignore`, with the same limitation: once a directory
is excluded, it is not possible to include files inside the directory.
Example of use as an exclude pattern for the `backup` command:
$HOME/**/*
!$HOME/Documents
!$HOME/code
!$HOME/.emacs.d
!$HOME/games
# [...]
node_modules
*~
*.o
*.lo
*.pyc
# [...]
$HOME/code/linux/*
!$HOME/code/linux/.git
# [...]
https://github.com/restic/restic/issues/233
https://github.com/restic/restic/pull/2311

View File

@@ -0,0 +1,13 @@
Bugfix: Make cache crash-resistant and usable by multiple concurrent processes
The restic cache directory (`RESTIC_CACHE_DIR`) could end up in a broken state
in the event of restic (or the OS) crashing. This is now less likely to occur
as files are downloaded to a temporary location before being moved to their
proper location.
This also allows multiple concurrent restic processes to operate on a single
repository without conflicts. Previously, concurrent operations could cause
segfaults because the processes saw each other's partially downloaded files.
https://github.com/restic/restic/issues/2345
https://github.com/restic/restic/pull/2838
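A minimal Go sketch of the pattern described above (not restic's actual cache code): the download goes to a temporary file in the cache directory and is renamed into place only once it is complete, so concurrent readers see either the finished file or nothing:

    package cache

    import (
    	"io"
    	"os"
    	"path/filepath"
    )

    // downloadToCache writes src to a temporary file and atomically renames it
    // into place. Other processes never observe a partially written file.
    func downloadToCache(dir, name string, src io.Reader) error {
    	tmp, err := os.CreateTemp(dir, name+"-tmp-*")
    	if err != nil {
    		return err
    	}
    	defer os.Remove(tmp.Name()) // no-op after a successful rename

    	if _, err := io.Copy(tmp, src); err != nil {
    		tmp.Close()
    		return err
    	}
    	if err := tmp.Close(); err != nil {
    		return err
    	}
    	return os.Rename(tmp.Name(), filepath.Join(dir, name))
    }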

View File

@@ -0,0 +1,10 @@
Enhancement: Add warning for S3 if partial credentials are provided
Previously restic did not notify about incomplete credentials when using the
S3 backend, instead just reporting access denied.
Restic now checks that both the AWS key ID and secret environment variables are
set before connecting to the remote server, and reports an error if not.
https://github.com/restic/restic/issues/2388
https://github.com/restic/restic/pull/3532
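A sketch of the check in Go, assuming the usual AWS environment variable names (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`); the point is to fail early with a clear message when only one of the two values is present:

    package s3check

    import (
    	"errors"
    	"os"
    )

    // checkS3Credentials returns an error if exactly one of the two credential
    // variables is set, instead of letting the request fail later with a
    // generic "access denied".
    func checkS3Credentials() error {
    	key := os.Getenv("AWS_ACCESS_KEY_ID")
    	secret := os.Getenv("AWS_SECRET_ACCESS_KEY")
    	if (key == "") != (secret == "") {
    		return errors.New("partial credentials: set both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY")
    	}
    	return nil
    }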

View File

@@ -0,0 +1,11 @@
Bugfix: Improve error handling of repository locking
Previously, when the lock refresh failed to delete the old lock file, it forgot
about the newly created one. Instead it continued trying to delete the old
(usually no longer existing) lock file and thus over time lots of lock files
accumulated. This has now been fixed.
https://github.com/restic/restic/issues/2452
https://github.com/restic/restic/issues/2473
https://github.com/restic/restic/issues/2562
https://github.com/restic/restic/pull/3512

View File

@@ -0,0 +1,8 @@
Enhancement: Support JSON output and quiet mode for the `diff` command
The `diff` command now supports outputting machine-readable output in JSON
format. To enable this, pass the `--json` option to the command. To only print
the summary and suppress detailed output, pass the `--quiet` option.
https://github.com/restic/restic/issues/2508
https://github.com/restic/restic/pull/3592

View File

@@ -0,0 +1,8 @@
Enhancement: Add flag to disable TLS verification for self-signed certificates
There is now an `--insecure-tls` global option in restic, which disables TLS
verification for self-signed certificates in order to support some development
workflows.
https://github.com/restic/restic/issues/2656
https://github.com/restic/restic/pull/2657

View File

@@ -0,0 +1,8 @@
Bugfix: Don't print progress for `backup --json --quiet`
Unlike the text output, the `--json` output format still printed progress
information even in `--quiet` mode. This has now been fixed by always
disabling the progress output in quiet mode.
https://github.com/restic/restic/issues/2738
https://github.com/restic/restic/pull/3264

View File

@@ -0,0 +1,9 @@
Enhancement: Atomic uploads for the SFTP backend
The SFTP backend did not upload files atomically. An interrupted upload could
leave an incomplete file behind which could prevent restic from accessing the
repository. This has now been fixed and uploads in the SFTP backend are done
atomically.
https://github.com/restic/restic/issues/3003
https://github.com/restic/restic/pull/3524

View File

@@ -0,0 +1,6 @@
Enhancement: Add xattr (extended attributes) support for Solaris
Restic now supports xattr for the Solaris operating system.
https://github.com/restic/restic/issues/3127
https://github.com/restic/restic/pull/3628

View File

@@ -0,0 +1,8 @@
Bugfix: Make `check` command honor `RESTIC_CACHE_DIR` environment variable
Previously, the `check` command didn't honor the `RESTIC_CACHE_DIR` environment
variable, which caused problems in certain system/usage configurations. This
has now been fixed.
https://github.com/restic/restic/issues/3382
https://github.com/restic/restic/pull/3474

View File

@@ -0,0 +1,10 @@
Enhancement: Skip lock creation on `forget` if `--no-lock` and `--dry-run`
Restic used to silently ignore the `--no-lock` option of the `forget` command.
It now skips creation of the lock file when both `--dry-run` and `--no-lock`
are specified. If the `--no-lock` option is specified without `--dry-run`, restic
prints a warning message to stderr.
https://github.com/restic/restic/issues/3464
https://github.com/restic/restic/pull/3623

View File

@@ -0,0 +1,8 @@
Enhancement: Support random subset by size in `check --read-data-subset`
The `--read-data-subset` option of the `check` command now supports a third way
of specifying the subset to check, namely `nS` where `n` is a size in bytes with
suffix `S` as k/K, m/M, g/G or t/T.
https://github.com/restic/restic/issues/3490
https://github.com/restic/restic/pull/3548

View File

@@ -0,0 +1,8 @@
Bugfix: Make `copy` command honor `--no-lock` for source repository
The `copy` command previously did not respect the `--no-lock` option for the
source repository, causing failures with read-only storage backends. This has
been fixed, and the option is now respected.
https://github.com/restic/restic/issues/3518
https://github.com/restic/restic/pull/3589

View File

@@ -0,0 +1,9 @@
Enhancement: Improve handling of temporary B2 delete errors
Deleting files on B2 could sometimes fail temporarily, which required restic to
retry the delete operation. In some cases the file was deleted nevertheless,
causing the retries and ultimately the restic command to fail. This has now been
fixed.
https://github.com/restic/restic/issues/3541
https://github.com/restic/restic/pull/3544

View File

@@ -0,0 +1,8 @@
Enhancement: Add file mode in symbolic notation to `ls --json`
The `ls --json` command now provides the file mode in symbolic notation (using
the `permissions` key), aligned with `find --json`.
https://github.com/restic/restic/issues/3542
https://github.com/restic/restic/pull/3573
https://forum.restic.net/t/restic-ls-understanding-file-mode-with-json/4371
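The symbolic notation matches what Go's `os.FileMode.String` produces, for example:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// os.FileMode renders Unix permission bits in symbolic notation.
    	fmt.Println(os.FileMode(0o644).String())                // -rw-r--r--
    	fmt.Println((os.ModeDir | os.FileMode(0o755)).String()) // drwxr-xr-x
    }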

View File

@@ -0,0 +1,10 @@
Bugfix: Fix hang with Backblaze B2 on SSL certificate authority error
Previously, if a request failed with an SSL unknown certificate authority
error, the B2 backend retried indefinitely and restic would appear to hang.
This has now been fixed and restic instead fails with an error message.
https://github.com/restic/restic/issues/3556
https://github.com/restic/restic/issues/2355
https://github.com/restic/restic/pull/3571

View File

@@ -0,0 +1,15 @@
Bugfix: Fix rclone backend prematurely exiting when receiving SIGINT on Windows
Previously, pressing Ctrl+C in a Windows console where restic was running with
rclone as the backend would cause rclone to exit prematurely due to getting a
`SIGINT` signal at the same time as restic. Restic would then wait for a long
time and fail with "unexpected EOF" and "rclone stdio connection already closed"
errors.
This has now been fixed by restic starting the rclone process detached from the
console restic runs in (similar to starting processes in a new process group on
Linux), which enables restic to gracefully clean up rclone (which now never gets
the `SIGINT`).
https://github.com/restic/restic/issues/3601
https://github.com/restic/restic/pull/3602

View File

@@ -0,0 +1,14 @@
Change: Ignore parent snapshot for `backup --stdin`
Restic uses a parent snapshot to speed up directory scanning when performing
backups, but when the backup source is stdin (using the `--stdin` option of the
`backup` command) this only wasted time and memory, since no directory scanning
is performed in that case.
Snapshots made with `backup --stdin` no longer have a parent snapshot, which allows
restic to skip some startup operations and saves a bit of resources.
The `--parent` option is still available for `backup --stdin`, but is now ignored.
https://github.com/restic/restic/issues/3641
https://github.com/restic/restic/pull/3645

View File

@@ -0,0 +1,8 @@
Bugfix: The `mount` command now reports symlinks sizes
Symlinks used to have size zero in restic mountpoints, confusing some
third-party tools. They now have a size equal to the byte length of their
target path, as required by POSIX.
https://github.com/restic/restic/issues/3667
https://github.com/restic/restic/pull/3668
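The reported value is simply the byte length of the link target, as a small Go sketch (with a hypothetical link path) shows:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Hypothetical symlink; POSIX defines its size as len(target) in bytes.
    	target, err := os.Readlink("/tmp/example-link")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("target %q, reported size %d\n", target, len(target))
    }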

View File

@@ -0,0 +1,7 @@
Enhancement: Speed up the `restore --verify` command
The `--verify` option lets the `restore` command verify the file content
after it has restored a snapshot. The performance of this operation has
now been improved by up to a factor of two.
https://github.com/restic/restic/pull/2594

View File

@@ -0,0 +1,10 @@
Enhancement: The `backup` command no longer updates file access times on Linux
When reading files during backup, restic used to cause the operating system to
update the files' access times. Note that this did not apply to filesystems with
disabled file access times.
Restic now instructs the operating system not to update the file access time,
if the user running restic is the file owner or has root permissions.
https://github.com/restic/restic/pull/2816
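On Linux the usual mechanism for this is the `O_NOATIME` open flag, which the kernel only allows for the file owner or a privileged process; a sketch of the approach (not restic's exact code):

    //go:build linux

    package noatime

    import (
    	"os"

    	"golang.org/x/sys/unix"
    )

    // openNoatime opens path read-only without updating its access time and
    // falls back to a regular open when the caller lacks permission to use
    // O_NOATIME (i.e. is neither the owner nor privileged).
    func openNoatime(path string) (*os.File, error) {
    	fd, err := unix.Open(path, unix.O_RDONLY|unix.O_NOATIME, 0)
    	if err == unix.EPERM {
    		return os.Open(path)
    	}
    	if err != nil {
    		return nil, err
    	}
    	return os.NewFile(uintptr(fd), path), nil
    }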

View File

@@ -0,0 +1,9 @@
Enhancement: Make `recover` collect only unreferenced trees
Previously, the `recover` command used to generate a snapshot containing *all*
root trees, even those which were already referenced by a snapshot.
This has been improved such that it now only processes trees not already
referenced by any snapshot.
https://github.com/restic/restic/pull/2880

View File

@@ -0,0 +1,12 @@
Enhancement: Verify that new or modified keys are stored correctly
When adding a new key or changing the password of a key, restic used to just
create the new key (and remove the old one, when changing the password). There
was no verification that the new key was stored correctly and works properly.
As the repository cannot be decrypted without a valid key file, this could in
rare cases cause the repository to become inaccessible.
Restic now checks that new key files actually work before continuing. This
can protect against some (rare) cases of hardware or storage problems.
https://github.com/restic/restic/pull/3429

View File

@@ -0,0 +1,12 @@
Enhancement: Improve local backend's resilience to (system) crashes
Restic now ensures that files stored using the `local` backend are created
atomically (that is, files are either stored completely or not at all). This
ensures that no incomplete files are left behind even if restic is terminated
while writing a file.
In addition, restic now tries to ensure that the directory in the repository
which contains a newly uploaded file is also written to disk. This can prevent
missing files if the system crashes or the disk is not properly unmounted.
https://github.com/restic/restic/pull/3436
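A condensed Go sketch of both steps (write to a temporary file, sync, rename into place, then sync the containing directory); restic's actual implementation differs in detail:

    package local

    import (
    	"io"
    	"os"
    	"path/filepath"
    )

    // saveAtomic writes rd to dir/name so that after a crash the file is either
    // complete or absent, and the new directory entry itself is durable.
    func saveAtomic(dir, name string, rd io.Reader) error {
    	tmp, err := os.CreateTemp(dir, "tmp-")
    	if err != nil {
    		return err
    	}
    	defer os.Remove(tmp.Name())

    	if _, err := io.Copy(tmp, rd); err != nil {
    		tmp.Close()
    		return err
    	}
    	if err := tmp.Sync(); err != nil { // flush file contents to disk
    		tmp.Close()
    		return err
    	}
    	if err := tmp.Close(); err != nil {
    		return err
    	}
    	if err := os.Rename(tmp.Name(), filepath.Join(dir, name)); err != nil {
    		return err
    	}
    	dirFd, err := os.Open(dir) // sync the directory so the rename persists
    	if err != nil {
    		return err
    	}
    	defer dirFd.Close()
    	return dirFd.Sync()
    }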

View File

@@ -0,0 +1,9 @@
Bugfix: `rebuild-index` failed if an index file was damaged
Previously, the `rebuild-index` command would fail with an error if an index
file was damaged or truncated. This has now been fixed.
On older restic versions, a (slow) workaround is to use
`rebuild-index --read-all-packs` or to manually delete the damaged index.
https://github.com/restic/restic/pull/3488

View File

@@ -0,0 +1,10 @@
Enhancement: Cache blobs read by the `dump` command
When dumping a file using the `dump` command, restic did not cache blobs in any
way, so repeated occurrences of the same blob were loaded from the repository
again and again, slowing down the dump.
Now, the caching mechanism already used by the `fuse` command is also used by
the `dump` command. This makes dumping much faster, especially for sparse files.
https://github.com/restic/restic/pull/3508

View File

@@ -0,0 +1,8 @@
Enhancement: Support configurable timeout for the rclone backend
A slow rclone backend could cause restic to time out while waiting for the
repository to open. Restic now offers an `-o rclone.timeout` option to make
this timeout configurable.
https://github.com/restic/restic/issues/3511
https://github.com/restic/restic/pull/3514

View File

@@ -0,0 +1,6 @@
Change: Require Go 1.14 or newer
Restic now requires Go 1.14 to build. This allows it to use new
standard library features instead of an external dependency.
https://github.com/restic/restic/issues/3519

View File

@@ -0,0 +1,8 @@
Bugfix: Fix handling of `prune --max-repack-size=0`
Restic ignored the `--max-repack-size` option when passing a value of 0. This
has now been fixed.
As a workaround, `--max-repack-size=1` can be used with older versions of restic.
https://github.com/restic/restic/pull/3591

View File

@@ -0,0 +1,10 @@
Enhancement: Improve `copy` performance by parallelizing IO
The `copy` command previously used only a single thread for copying blobs between
repositories, which resulted in limited performance when copying small blobs
to/from a high-latency backend (e.g. any remote backend, especially B2).
Copying will now use 8 parallel threads to increase the throughput of the copy
operation.
https://github.com/restic/restic/pull/3593
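The pattern is a simple producer/worker pool: one goroutine streams blob IDs into a channel and eight workers copy them concurrently. A simplified Go sketch, with `copyOne` standing in for the LoadBlob/SaveBlob pair shown in the diff below:

    package copyblobs

    import (
    	"context"

    	"golang.org/x/sync/errgroup"
    )

    const numCopyWorkers = 8

    // copyBlobs feeds blob IDs into a channel and runs numCopyWorkers workers
    // that each download and re-upload blobs until the channel is drained.
    func copyBlobs(ctx context.Context, ids []string, copyOne func(context.Context, string) error) error {
    	wg, ctx := errgroup.WithContext(ctx)
    	idChan := make(chan string)

    	wg.Go(func() error {
    		defer close(idChan)
    		for _, id := range ids {
    			select {
    			case idChan <- id:
    			case <-ctx.Done():
    				return ctx.Err()
    			}
    		}
    		return nil
    	})

    	for i := 0; i < numCopyWorkers; i++ {
    		wg.Go(func() error {
    			for id := range idChan {
    				if err := copyOne(ctx, id); err != nil {
    					return err
    				}
    			}
    			return nil
    		})
    	}
    	return wg.Wait()
    }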

View File

@@ -0,0 +1,11 @@
Bugfix: Avoid choosing parent snapshots newer than time of new snapshot
The `backup` command, when a `--parent` was not provided, previously chose the
most recent matching snapshot as the parent snapshot. However, this didn't make
sense when the user passed `--time` to create a new snapshot older than the most
recent snapshot.
Instead, `backup` now chooses the most recent snapshot which is not newer than
the snapshot-being-created's timestamp, to avoid any time travel.
https://github.com/restic/restic/pull/3619
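Conceptually, the parent selection now picks the newest snapshot whose timestamp does not exceed the new snapshot's own timestamp; a small Go sketch with a simplified, hypothetical snapshot type:

    package parent

    import "time"

    // snapshot is a simplified stand-in for restic's snapshot metadata.
    type snapshot struct {
    	ID   string
    	Time time.Time
    }

    // latestNotAfter returns the most recent snapshot whose timestamp is not
    // newer than limit, so a backup created with an older --time never gets a
    // parent "from the future".
    func latestNotAfter(snaps []snapshot, limit time.Time) *snapshot {
    	var best *snapshot
    	for i := range snaps {
    		s := &snaps[i]
    		if s.Time.After(limit) {
    			continue
    		}
    		if best == nil || s.Time.After(best.Time) {
    			best = s
    		}
    	}
    	return best
    }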

View File

@@ -1,4 +1,4 @@
Enhancement: Clarify semantic for `--tasg` for the `forget` command
Enhancement: Clarify semantic for `--tag` for the `forget` command
https://github.com/restic/restic/issues/1081
https://github.com/restic/restic/pull/1090

View File

@@ -58,7 +58,7 @@ func RunCleanupHandlers() {
func CleanupHandler(c <-chan os.Signal) {
for s := range c {
debug.Log("signal %v received, cleaning up", s)
Warnf("%ssignal %v received, cleaning up\n", ClearLine(), s)
Warnf("%ssignal %v received, cleaning up\n", clearLine(0), s)
code := 0

View File

@@ -24,8 +24,7 @@ import (
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/textfile"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/json"
"github.com/restic/restic/internal/ui/backup"
"github.com/restic/restic/internal/ui/termstatus"
)
@@ -60,11 +59,11 @@ Exit status is 3 if some source data could not be read (incomplete snapshot crea
t.Go(func() error { term.Run(t.Context(globalOptions.ctx)); return nil })
err := runBackup(backupOptions, globalOptions, term, args)
if err != nil {
return err
}
t.Kill(nil)
return t.Wait()
if werr := t.Wait(); werr != nil {
panic(fmt.Sprintf("term.Run() returned err: %v", err))
}
return err
},
}
@@ -92,24 +91,25 @@ type BackupOptions struct {
IgnoreInode bool
IgnoreCtime bool
UseFsSnapshot bool
DryRun bool
}
var backupOptions BackupOptions
// ErrInvalidSourceData is used to report an incomplete backup
var ErrInvalidSourceData = errors.New("failed to read all source data during backup")
var ErrInvalidSourceData = errors.New("at least one source file could not be read")
func init() {
cmdRoot.AddCommand(cmdBackup)
f := cmdBackup.Flags()
f.StringVar(&backupOptions.Parent, "parent", "", "use this parent `snapshot` (default: last snapshot in the repo that has the same target files/directories)")
f.StringVar(&backupOptions.Parent, "parent", "", "use this parent `snapshot` (default: last snapshot in the repo that has the same target files/directories, and is not newer than the snapshot time)")
f.BoolVarP(&backupOptions.Force, "force", "f", false, `force re-reading the target files/directories (overrides the "parent" flag)`)
f.StringArrayVarP(&backupOptions.Excludes, "exclude", "e", nil, "exclude a `pattern` (can be specified multiple times)")
f.StringArrayVar(&backupOptions.InsensitiveExcludes, "iexclude", nil, "same as --exclude `pattern` but ignores the casing of filenames")
f.StringArrayVar(&backupOptions.ExcludeFiles, "exclude-file", nil, "read exclude patterns from a `file` (can be specified multiple times)")
f.StringArrayVar(&backupOptions.InsensitiveExcludeFiles, "iexclude-file", nil, "same as --exclude-file but ignores casing of `file`names in patterns")
f.BoolVarP(&backupOptions.ExcludeOtherFS, "one-file-system", "x", false, "exclude other file systems")
f.BoolVarP(&backupOptions.ExcludeOtherFS, "one-file-system", "x", false, "exclude other file systems, don't cross filesystem boundaries and subvolumes")
f.StringArrayVar(&backupOptions.ExcludeIfPresent, "exclude-if-present", nil, "takes `filename[:header]`, exclude contents of directories containing filename (except filename itself) if header of that file is as provided (can be specified multiple times)")
f.BoolVar(&backupOptions.ExcludeCaches, "exclude-caches", false, `excludes cache directories that are marked with a CACHEDIR.TAG file. See https://bford.info/cachedir/ for the Cache Directory Tagging Standard`)
f.StringVar(&backupOptions.ExcludeLargerThan, "exclude-larger-than", "", "max `size` of the files to be backed up (allowed suffixes: k/K, m/M, g/G, t/T)")
@@ -132,6 +132,7 @@ func init() {
f.BoolVar(&backupOptions.WithAtime, "with-atime", false, "store the atime for all files and directories")
f.BoolVar(&backupOptions.IgnoreInode, "ignore-inode", false, "ignore inode number changes when checking for modified files")
f.BoolVar(&backupOptions.IgnoreCtime, "ignore-ctime", false, "ignore ctime changes when checking for modified files")
f.BoolVarP(&backupOptions.DryRun, "dry-run", "n", false, "do not upload or write any data, just show what would be done")
if runtime.GOOS == "windows" {
f.BoolVar(&backupOptions.UseFsSnapshot, "use-fs-snapshot", false, "use filesystem snapshot where possible (currently only Windows VSS)")
}
@@ -471,7 +472,7 @@ func collectTargets(opts BackupOptions, args []string) (targets []string, err er
// parent returns the ID of the parent snapshot. If there is none, nil is
// returned.
func findParentSnapshot(ctx context.Context, repo restic.Repository, opts BackupOptions, targets []string) (parentID *restic.ID, err error) {
func findParentSnapshot(ctx context.Context, repo restic.Repository, opts BackupOptions, targets []string, timeStampLimit time.Time) (parentID *restic.ID, err error) {
// Force using a parent
if !opts.Force && opts.Parent != "" {
id, err := restic.FindSnapshot(ctx, repo, opts.Parent)
@@ -484,7 +485,7 @@ func findParentSnapshot(ctx context.Context, repo restic.Repository, opts Backup
// Find last snapshot to set it as parent, if not already set
if !opts.Force && parentID == nil {
id, err := restic.FindLatestSnapshot(ctx, repo, targets, []restic.TagList{}, []string{opts.Host})
id, err := restic.FindLatestSnapshot(ctx, repo, targets, []restic.TagList{}, []string{opts.Host}, &timeStampLimit)
if err == nil {
parentID = &id
} else if err != restic.ErrNoSnapshotFound {
@@ -525,33 +526,17 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
return err
}
type ArchiveProgressReporter interface {
CompleteItem(item string, previous, current *restic.Node, s archiver.ItemStats, d time.Duration)
StartFile(filename string)
CompleteBlob(filename string, bytes uint64)
ScannerError(item string, fi os.FileInfo, err error) error
ReportTotal(item string, s archiver.ScanStats)
SetMinUpdatePause(d time.Duration)
Run(ctx context.Context) error
Error(item string, fi os.FileInfo, err error) error
Finish(snapshotID restic.ID)
// ui.StdioWrapper
Stdout() io.WriteCloser
Stderr() io.WriteCloser
// ui.Message
E(msg string, args ...interface{})
P(msg string, args ...interface{})
V(msg string, args ...interface{})
VV(msg string, args ...interface{})
}
var p ArchiveProgressReporter
var progressPrinter backup.ProgressPrinter
if gopts.JSON {
p = json.NewBackup(term, gopts.verbosity)
progressPrinter = backup.NewJSONProgress(term, gopts.verbosity)
} else {
p = ui.NewBackup(term, gopts.verbosity)
progressPrinter = backup.NewTextProgress(term, gopts.verbosity)
}
progressReporter := backup.NewProgress(progressPrinter)
if opts.DryRun {
repo.SetDryRun()
progressReporter.SetDryRun()
}
// use the terminal for stdout/stderr
@@ -559,14 +544,14 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
defer func() {
gopts.stdout, gopts.stderr = prevStdout, prevStderr
}()
gopts.stdout, gopts.stderr = p.Stdout(), p.Stderr()
gopts.stdout, gopts.stderr = progressPrinter.Stdout(), progressPrinter.Stderr()
p.SetMinUpdatePause(calculateProgressInterval())
progressReporter.SetMinUpdatePause(calculateProgressInterval(!gopts.Quiet, gopts.JSON))
t.Go(func() error { return p.Run(t.Context(gopts.ctx)) })
t.Go(func() error { return progressReporter.Run(t.Context(gopts.ctx)) })
if !gopts.JSON {
p.V("lock repository")
progressPrinter.V("lock repository")
}
lock, err := lockRepo(gopts.ctx, repo)
defer unlockRepo(lock)
@@ -587,23 +572,26 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
}
if !gopts.JSON {
p.V("load index files")
progressPrinter.V("load index files")
}
err = repo.LoadIndex(gopts.ctx)
if err != nil {
return err
}
parentSnapshotID, err := findParentSnapshot(gopts.ctx, repo, opts, targets)
if err != nil {
return err
}
var parentSnapshotID *restic.ID
if !opts.Stdin {
parentSnapshotID, err = findParentSnapshot(gopts.ctx, repo, opts, targets, timeStamp)
if err != nil {
return err
}
if !gopts.JSON {
if parentSnapshotID != nil {
p.P("using parent snapshot %v\n", parentSnapshotID.Str())
} else {
p.P("no parent snapshot found, will read all files\n")
if !gopts.JSON {
if parentSnapshotID != nil {
progressPrinter.P("using parent snapshot %v\n", parentSnapshotID.Str())
} else {
progressPrinter.P("no parent snapshot found, will read all files\n")
}
}
}
@@ -632,12 +620,12 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
}
errorHandler := func(item string, err error) error {
return p.Error(item, nil, err)
return progressReporter.Error(item, nil, err)
}
messageHandler := func(msg string, args ...interface{}) {
if !gopts.JSON {
p.P(msg, args...)
progressPrinter.P(msg, args...)
}
}
@@ -647,7 +635,7 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
}
if opts.Stdin {
if !gopts.JSON {
p.V("read data from stdin")
progressPrinter.V("read data from stdin")
}
filename := path.Join("/", opts.StdinFilename)
targetFS = &fs.Reader{
@@ -662,11 +650,11 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
sc := archiver.NewScanner(targetFS)
sc.SelectByName = selectByNameFilter
sc.Select = selectFilter
sc.Error = p.ScannerError
sc.Result = p.ReportTotal
sc.Error = progressReporter.ScannerError
sc.Result = progressReporter.ReportTotal
if !gopts.JSON {
p.V("start scan on %v", targets)
progressPrinter.V("start scan on %v", targets)
}
t.Go(func() error { return sc.Scan(t.Context(gopts.ctx), targets) })
@@ -677,11 +665,11 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
success := true
arch.Error = func(item string, fi os.FileInfo, err error) error {
success = false
return p.Error(item, fi, err)
return progressReporter.Error(item, fi, err)
}
arch.CompleteItem = p.CompleteItem
arch.StartFile = p.StartFile
arch.CompleteBlob = p.CompleteBlob
arch.CompleteItem = progressReporter.CompleteItem
arch.StartFile = progressReporter.StartFile
arch.CompleteBlob = progressReporter.CompleteBlob
if opts.IgnoreInode {
// --ignore-inode implies --ignore-ctime: on FUSE, the ctime is not
@@ -705,28 +693,30 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
}
if !gopts.JSON {
p.V("start backup on %v", targets)
progressPrinter.V("start backup on %v", targets)
}
_, id, err := arch.Snapshot(gopts.ctx, targets, snapshotOpts)
if err != nil {
return errors.Fatalf("unable to save snapshot: %v", err)
}
// cleanly shutdown all running goroutines
t.Kill(nil)
// let's see if one returned an error
err = t.Wait()
werr := t.Wait()
// return original error
if err != nil {
return errors.Fatalf("unable to save snapshot: %v", err)
}
// Report finished execution
p.Finish(id)
if !gopts.JSON {
p.P("snapshot %s saved\n", id.Str())
progressReporter.Finish(id)
if !gopts.JSON && !opts.DryRun {
progressPrinter.P("snapshot %s saved\n", id.Str())
}
if !success {
return ErrInvalidSourceData
}
// Return error if any
return err
return werr
}

View File

@@ -5,6 +5,7 @@ import (
"os"
"path/filepath"
"sort"
"strings"
"time"
"github.com/restic/restic/internal/cache"
@@ -140,8 +141,13 @@ func runCache(opts CacheOptions, gopts GlobalOptions, args []string) error {
size = fmt.Sprintf("%11s", formatBytes(uint64(bytes)))
}
name := entry.Name()
if !strings.HasPrefix(name, "restic-check-cache-") {
name = name[:10]
}
tab.AddRow(data{
entry.Name()[:10],
name,
fmt.Sprintf("%d days ago", uint(time.Since(entry.ModTime()).Hours()/24)),
old,
size,

View File

@@ -69,7 +69,6 @@ func runCat(gopts GlobalOptions, args []string) error {
}
}
// handle all types that don't need an index
switch tpe {
case "config":
buf, err := json.MarshalIndent(repo.Config(), "", " ")
@@ -142,15 +141,7 @@ func runCat(gopts GlobalOptions, args []string) error {
Println(string(buf))
return nil
}
// load index, handle all the other types
err = repo.LoadIndex(gopts.ctx)
if err != nil {
return err
}
switch tpe {
case "pack":
h := restic.Handle{Type: restic.PackFile, Name: id.String()}
buf, err := backend.LoadAll(gopts.ctx, nil, repo.Backend(), h)
@@ -167,6 +158,11 @@ func runCat(gopts GlobalOptions, args []string) error {
return err
case "blob":
err = repo.LoadIndex(gopts.ctx)
if err != nil {
return err
}
for _, t := range []restic.BlobType{restic.DataBlob, restic.TreeBlob} {
bh := restic.BlobHandle{ID: id, Type: t}
if !repo.Index().Has(bh) {

View File

@@ -5,10 +5,12 @@ import (
"math/rand"
"strconv"
"strings"
"sync"
"time"
"github.com/spf13/cobra"
"github.com/restic/restic/internal/cache"
"github.com/restic/restic/internal/checker"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/fs"
@@ -54,7 +56,7 @@ func init() {
f := cmdCheck.Flags()
f.BoolVar(&checkOptions.ReadData, "read-data", false, "read all data blobs")
f.StringVar(&checkOptions.ReadDataSubset, "read-data-subset", "", "read a `subset` of data packs, specified as 'n/t' for specific subset or either 'x%' or 'x.y%' for random subset")
f.StringVar(&checkOptions.ReadDataSubset, "read-data-subset", "", "read a `subset` of data packs, specified as 'n/t' for specific part, or either 'x%' or 'x.y%' or a size in bytes with suffixes k/K, m/M, g/G, t/T for a random subset")
f.BoolVar(&checkOptions.CheckUnused, "check-unused", false, "find unused blobs")
f.BoolVar(&checkOptions.WithCache, "with-cache", false, "use the cache")
}
@@ -65,7 +67,7 @@ func checkFlags(opts CheckOptions) error {
}
if opts.ReadDataSubset != "" {
dataSubset, err := stringToIntSlice(opts.ReadDataSubset)
argumentError := errors.Fatal("check flag --read-data-subset must have two positive integer values or a percentage, e.g. --read-data-subset=1/2 or --read-data-subset=2.5%%")
argumentError := errors.Fatal("check flag --read-data-subset has invalid value, please see documentation")
if err == nil {
if len(dataSubset) != 2 {
return argumentError
@@ -76,7 +78,7 @@ func checkFlags(opts CheckOptions) error {
if dataSubset[1] > totalBucketsMax {
return errors.Fatalf("check flag --read-data-subset=n/t t must be at most %d", totalBucketsMax)
}
} else {
} else if strings.HasSuffix(opts.ReadDataSubset, "%") {
percentage, err := parsePercentage(opts.ReadDataSubset)
if err != nil {
return argumentError
@@ -84,8 +86,19 @@ func checkFlags(opts CheckOptions) error {
if percentage <= 0.0 || percentage > 100.0 {
return errors.Fatal(
"check flag --read-data-subset=n% n must be above 0.0% and at most 100.0%")
"check flag --read-data-subset=x% x must be above 0.0% and at most 100.0%")
}
} else {
fileSize, err := parseSizeStr(opts.ReadDataSubset)
if err != nil {
return argumentError
}
if fileSize <= 0.0 {
return errors.Fatal(
"check flag --read-data-subset=n n must be above 0")
}
}
}
@@ -146,6 +159,9 @@ func prepareCheckCache(opts CheckOptions, gopts *GlobalOptions) (cleanup func())
}
cachedir := gopts.CacheDir
if cachedir == "" {
cachedir = cache.EnvDir()
}
// use a cache in a temporary directory
tempdir, err := ioutil.TempDir(cachedir, "restic-check-cache-")
@@ -241,7 +257,11 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
Verbosef("check snapshots, trees and blobs\n")
errChan = make(chan error)
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
bar := newProgressMax(!gopts.Quiet, 0, "snapshots")
defer bar.Done()
chkr.Structure(gopts.ctx, bar, errChan)
@@ -259,6 +279,11 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
}
}
// Wait for the progress bar to be complete before printing more below.
// Must happen after `errChan` is read from in the above loop to avoid
// deadlocking in the case of errors.
wg.Wait()
if opts.CheckUnused {
for _, id := range chkr.UnusedBlobs(gopts.ctx) {
Verbosef("unused blob %v\n", id)
@@ -294,10 +319,27 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
packs = selectPacksByBucket(chkr.GetPacks(), bucket, totalBuckets)
packCount := uint64(len(packs))
Verbosef("read group #%d of %d data packs (out of total %d packs in %d groups)\n", bucket, packCount, chkr.CountPacks(), totalBuckets)
} else if strings.HasSuffix(opts.ReadDataSubset, "%") {
percentage, err := parsePercentage(opts.ReadDataSubset)
if err == nil {
packs = selectRandomPacksByPercentage(chkr.GetPacks(), percentage)
Verbosef("read %.1f%% of data packs\n", percentage)
}
} else {
percentage, _ := parsePercentage(opts.ReadDataSubset)
packs = selectRandomPacksByPercentage(chkr.GetPacks(), percentage)
Verbosef("read %.1f%% of data packs\n", percentage)
repoSize := int64(0)
allPacks := chkr.GetPacks()
for _, size := range allPacks {
repoSize += size
}
if repoSize == 0 {
return errors.Fatal("Cannot read from a repository having size 0")
}
subsetSize, _ := parseSizeStr(opts.ReadDataSubset)
if subsetSize > repoSize {
subsetSize = repoSize
}
packs = selectRandomPacksByFileSize(chkr.GetPacks(), subsetSize, repoSize)
Verbosef("read %d bytes of data packs\n", subsetSize)
}
if packs == nil {
return errors.Fatal("internal error: failed to select packs to check")
@@ -331,7 +373,7 @@ func selectPacksByBucket(allPacks map[restic.ID]int64, bucket, totalBuckets uint
func selectRandomPacksByPercentage(allPacks map[restic.ID]int64, percentage float64) map[restic.ID]int64 {
packCount := len(allPacks)
packsToCheck := int(float64(packCount) * (percentage / 100.0))
if packsToCheck < 1 {
if packCount > 0 && packsToCheck < 1 {
packsToCheck = 1
}
timeNs := time.Now().UnixNano()
@@ -349,6 +391,11 @@ func selectRandomPacksByPercentage(allPacks map[restic.ID]int64, percentage floa
id := keys[idx[i]]
packs[id] = allPacks[id]
}
return packs
}
func selectRandomPacksByFileSize(allPacks map[restic.ID]int64, subsetSize int64, repoSize int64) map[restic.ID]int64 {
subsetPercentage := (float64(subsetSize) / float64(repoSize)) * 100.0
packs := selectRandomPacksByPercentage(allPacks, subsetPercentage)
return packs
}

View File

@@ -122,3 +122,44 @@ func TestSelectRandomPacksByPercentage(t *testing.T) {
rtest.Assert(t, ok, "Expected input and output to be equal")
}
}
func TestSelectNoRandomPacksByPercentage(t *testing.T) {
// check that a repository without pack files works
var testPacks = make(map[restic.ID]int64)
selectedPacks := selectRandomPacksByPercentage(testPacks, 10.0)
rtest.Assert(t, len(selectedPacks) == 0, "Expected 0 selected packs")
}
func TestSelectRandomPacksByFileSize(t *testing.T) {
var testPacks = make(map[restic.ID]int64)
for i := 1; i <= 10; i++ {
id := restic.NewRandomID()
// ensure unique ids
id[0] = byte(i)
testPacks[id] = 0
}
selectedPacks := selectRandomPacksByFileSize(testPacks, 10, 500)
rtest.Assert(t, len(selectedPacks) == 1, "Expected 1 selected packs")
selectedPacks = selectRandomPacksByFileSize(testPacks, 10240, 51200)
rtest.Assert(t, len(selectedPacks) == 2, "Expected 2 selected packs")
for pack := range selectedPacks {
_, ok := testPacks[pack]
rtest.Assert(t, ok, "Unexpected selection")
}
selectedPacks = selectRandomPacksByFileSize(testPacks, 500, 500)
rtest.Assert(t, len(selectedPacks) == 10, "Expected 10 selected packs")
for pack := range selectedPacks {
_, ok := testPacks[pack]
rtest.Assert(t, ok, "Unexpected item in selection")
}
}
func TestSelectNoRandomPacksByFileSize(t *testing.T) {
// check that a repository without pack files works
var testPacks = make(map[restic.ID]int64)
selectedPacks := selectRandomPacksByFileSize(testPacks, 10, 500)
rtest.Assert(t, len(selectedPacks) == 0, "Expected 0 selected packs")
}

View File

@@ -73,10 +73,12 @@ func runCopy(opts CopyOptions, gopts GlobalOptions, args []string) error {
return err
}
srcLock, err := lockRepo(ctx, srcRepo)
defer unlockRepo(srcLock)
if err != nil {
return err
if !gopts.NoLock {
srcLock, err := lockRepo(ctx, srcRepo)
defer unlockRepo(srcLock)
if err != nil {
return err
}
}
dstLock, err := lockRepo(ctx, dstRepo)
@@ -174,9 +176,12 @@ func similarSnapshots(sna *restic.Snapshot, snb *restic.Snapshot) bool {
return true
}
const numCopyWorkers = 8
func copyTree(ctx context.Context, srcRepo restic.Repository, dstRepo restic.Repository,
visitedTrees restic.IDSet, rootTreeID restic.ID) error {
idChan := make(chan restic.ID)
wg, ctx := errgroup.WithContext(ctx)
treeStream := restic.StreamTrees(ctx, wg, srcRepo, restic.IDs{rootTreeID}, func(treeID restic.ID) bool {
@@ -186,9 +191,9 @@ func copyTree(ctx context.Context, srcRepo restic.Repository, dstRepo restic.Rep
}, nil)
wg.Go(func() error {
defer close(idChan)
// reused buffer
var buf []byte
for tree := range treeStream {
if tree.Error != nil {
return fmt.Errorf("LoadTree(%v) returned error %v", tree.ID.Str(), tree.Error)
@@ -196,42 +201,57 @@ func copyTree(ctx context.Context, srcRepo restic.Repository, dstRepo restic.Rep
// Do we already have this tree blob?
if !dstRepo.Index().Has(restic.BlobHandle{ID: tree.ID, Type: restic.TreeBlob}) {
newTreeID, err := dstRepo.SaveTree(ctx, tree.Tree)
// copy raw tree bytes to avoid problems if the serialization changes
var err error
buf, err = srcRepo.LoadBlob(ctx, restic.TreeBlob, tree.ID, buf)
if err != nil {
return fmt.Errorf("SaveTree(%v) returned error %v", tree.ID.Str(), err)
return fmt.Errorf("LoadBlob(%v) for tree returned error %v", tree.ID, err)
}
// Assurance only.
if newTreeID != tree.ID {
return fmt.Errorf("SaveTree(%v) returned unexpected id %s", tree.ID.Str(), newTreeID.Str())
_, _, err = dstRepo.SaveBlob(ctx, restic.TreeBlob, buf, tree.ID, false)
if err != nil {
return fmt.Errorf("SaveBlob(%v) for tree returned error %v", tree.ID.Str(), err)
}
}
// TODO: parallelize blob down/upload
for _, entry := range tree.Nodes {
// Recursion into directories is handled by StreamTrees
// Copy the blobs for this file.
for _, blobID := range entry.Content {
// Do we already have this data blob?
if dstRepo.Index().Has(restic.BlobHandle{ID: blobID, Type: restic.DataBlob}) {
continue
}
debug.Log("Copying blob %s\n", blobID.Str())
var err error
buf, err = srcRepo.LoadBlob(ctx, restic.DataBlob, blobID, buf)
if err != nil {
return fmt.Errorf("LoadBlob(%v) returned error %v", blobID, err)
}
_, _, err = dstRepo.SaveBlob(ctx, restic.DataBlob, buf, blobID, false)
if err != nil {
return fmt.Errorf("SaveBlob(%v) returned error %v", blobID, err)
select {
case idChan <- blobID:
case <-ctx.Done():
return ctx.Err()
}
}
}
}
return nil
})
for i := 0; i < numCopyWorkers; i++ {
wg.Go(func() error {
// reused buffer
var buf []byte
for blobID := range idChan {
// Do we already have this data blob?
if dstRepo.Index().Has(restic.BlobHandle{ID: blobID, Type: restic.DataBlob}) {
continue
}
debug.Log("Copying blob %s\n", blobID.Str())
var err error
buf, err = srcRepo.LoadBlob(ctx, restic.DataBlob, blobID, buf)
if err != nil {
return fmt.Errorf("LoadBlob(%v) returned error %v", blobID, err)
}
_, _, err = dstRepo.SaveBlob(ctx, restic.DataBlob, buf, blobID, false)
if err != nil {
return fmt.Errorf("SaveBlob(%v) returned error %v", blobID, err)
}
}
return nil
})
}
return wg.Wait()
}

View File

@@ -4,12 +4,21 @@ package main
import (
"context"
"crypto/aes"
"crypto/cipher"
"encoding/json"
"fmt"
"io"
"os"
"runtime"
"sort"
"time"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/crypto"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/pack"
"github.com/restic/restic/internal/repository"
@@ -39,9 +48,17 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
},
}
var tryRepair bool
var repairByte bool
var extractPack bool
func init() {
cmdRoot.AddCommand(cmdDebug)
cmdDebug.AddCommand(cmdDebugDump)
cmdDebug.AddCommand(cmdDebugExamine)
cmdDebugExamine.Flags().BoolVar(&extractPack, "extract-pack", false, "write blobs to the current directory")
cmdDebugExamine.Flags().BoolVar(&tryRepair, "try-repair", false, "try to repair broken blobs with single bit flips")
cmdDebugExamine.Flags().BoolVar(&repairByte, "repair-byte", false, "try to repair broken blobs by trying bytes")
}
func prettyPrintJSON(wr io.Writer, item interface{}) error {
@@ -165,3 +182,367 @@ func runDebugDump(gopts GlobalOptions, args []string) error {
return errors.Fatalf("no such type %q", tpe)
}
}
var cmdDebugExamine = &cobra.Command{
Use: "examine pack-ID...",
Short: "Examine a pack file",
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runDebugExamine(globalOptions, args)
},
}
func tryRepairWithBitflip(ctx context.Context, key *crypto.Key, input []byte, bytewise bool) []byte {
if bytewise {
Printf(" trying to repair blob by finding a broken byte\n")
} else {
Printf(" trying to repair blob with single bit flip\n")
}
ch := make(chan int)
var wg errgroup.Group
done := make(chan struct{})
var fixed []byte
var found bool
workers := runtime.GOMAXPROCS(0)
Printf(" spinning up %d worker functions\n", runtime.GOMAXPROCS(0))
for i := 0; i < workers; i++ {
wg.Go(func() error {
// make a local copy of the buffer
buf := make([]byte, len(input))
copy(buf, input)
testFlip := func(idx int, pattern byte) bool {
// flip bits
buf[idx] ^= pattern
nonce, plaintext := buf[:key.NonceSize()], buf[key.NonceSize():]
plaintext, err := key.Open(plaintext[:0], nonce, plaintext, nil)
if err == nil {
Printf("\n")
Printf(" blob could be repaired by XORing byte %v with 0x%02x\n", idx, pattern)
Printf(" hash is %v\n", restic.Hash(plaintext))
close(done)
found = true
fixed = plaintext
return true
}
// flip bits back
buf[idx] ^= pattern
return false
}
for i := range ch {
if bytewise {
for j := 0; j < 255; j++ {
if testFlip(i, byte(j)) {
return nil
}
}
} else {
for j := 0; j < 7; j++ {
// flip each bit once
if testFlip(i, (1 << uint(j))) {
return nil
}
}
}
}
return nil
})
}
wg.Go(func() error {
defer close(ch)
start := time.Now()
info := time.Now()
for i := range input {
select {
case ch <- i:
case <-done:
Printf(" done after %v\n", time.Since(start))
return nil
}
if time.Since(info) > time.Second {
secs := time.Since(start).Seconds()
gps := float64(i) / secs
remaining := len(input) - i
eta := time.Duration(float64(remaining)/gps) * time.Second
Printf("\r%d byte of %d done (%.2f%%), %.0f byte per second, ETA %v",
i, len(input), float32(i)/float32(len(input))*100, gps, eta)
info = time.Now()
}
}
return nil
})
err := wg.Wait()
if err != nil {
panic("all go rountines can only return nil")
}
if !found {
Printf("\n blob could not be repaired\n")
}
return fixed
}
func decryptUnsigned(ctx context.Context, k *crypto.Key, buf []byte) []byte {
// strip signature at the end
l := len(buf)
nonce, ct := buf[:16], buf[16:l-16]
out := make([]byte, len(ct))
c, err := aes.NewCipher(k.EncryptionKey[:])
if err != nil {
panic(fmt.Sprintf("unable to create cipher: %v", err))
}
e := cipher.NewCTR(c, nonce)
e.XORKeyStream(out, ct)
return out
}
func loadBlobs(ctx context.Context, repo restic.Repository, pack restic.ID, list []restic.Blob) error {
be := repo.Backend()
h := restic.Handle{
Name: pack.String(),
Type: restic.PackFile,
}
for _, blob := range list {
Printf(" loading blob %v at %v (length %v)\n", blob.ID, blob.Offset, blob.Length)
buf := make([]byte, blob.Length)
err := be.Load(ctx, h, int(blob.Length), int64(blob.Offset), func(rd io.Reader) error {
n, err := io.ReadFull(rd, buf)
if err != nil {
return fmt.Errorf("read error after %d bytes: %v", n, err)
}
return nil
})
if err != nil {
Warnf("error read: %v\n", err)
continue
}
key := repo.Key()
nonce, plaintext := buf[:key.NonceSize()], buf[key.NonceSize():]
plaintext, err = key.Open(plaintext[:0], nonce, plaintext, nil)
if err != nil {
Warnf("error decrypting blob: %v\n", err)
var plain []byte
if tryRepair || repairByte {
plain = tryRepairWithBitflip(ctx, key, buf, repairByte)
}
var prefix string
if plain != nil {
id := restic.Hash(plain)
if !id.Equal(blob.ID) {
Printf(" repaired blob (length %v), hash is %v, ID does not match, wanted %v\n", len(plain), id, blob.ID)
prefix = "repaired-wrong-hash-"
} else {
Printf(" successfully repaired blob (length %v), hash is %v, ID matches\n", len(plain), id)
prefix = "repaired-"
}
} else {
plain = decryptUnsigned(ctx, key, buf)
prefix = "damaged-"
}
err = storePlainBlob(blob.ID, prefix, plain)
if err != nil {
return err
}
continue
}
id := restic.Hash(plaintext)
var prefix string
if !id.Equal(blob.ID) {
Printf(" successfully decrypted blob (length %v), hash is %v, ID does not match, wanted %v\n", len(plaintext), id, blob.ID)
prefix = "wrong-hash-"
} else {
Printf(" successfully decrypted blob (length %v), hash is %v, ID matches\n", len(plaintext), id)
prefix = "correct-"
}
if extractPack {
err = storePlainBlob(id, prefix, plaintext)
if err != nil {
return err
}
}
}
return nil
}
func storePlainBlob(id restic.ID, prefix string, plain []byte) error {
filename := fmt.Sprintf("%s%s.bin", prefix, id)
f, err := os.Create(filename)
if err != nil {
return err
}
_, err = f.Write(plain)
if err != nil {
_ = f.Close()
return err
}
err = f.Close()
if err != nil {
return err
}
Printf("decrypt of blob %v stored at %v\n", id, filename)
return nil
}
func runDebugExamine(gopts GlobalOptions, args []string) error {
ids := make([]restic.ID, 0)
for _, name := range args {
id, err := restic.ParseID(name)
if err != nil {
Warnf("error: %v\n", err)
continue
}
ids = append(ids, id)
}
if len(ids) == 0 {
return errors.Fatal("no pack files to examine")
}
repo, err := OpenRepository(gopts)
if err != nil {
return err
}
if !gopts.NoLock {
lock, err := lockRepo(gopts.ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
}
err = repo.LoadIndex(gopts.ctx)
if err != nil {
return err
}
for _, id := range ids {
err := examinePack(gopts.ctx, repo, id)
if err != nil {
Warnf("error: %v\n", err)
}
if err == context.Canceled {
break
}
}
return nil
}
func examinePack(ctx context.Context, repo restic.Repository, id restic.ID) error {
Printf("examine %v\n", id)
h := restic.Handle{
Type: restic.PackFile,
Name: id.String(),
}
fi, err := repo.Backend().Stat(ctx, h)
if err != nil {
return err
}
Printf(" file size is %v\n", fi.Size)
buf, err := backend.LoadAll(ctx, nil, repo.Backend(), h)
if err != nil {
return err
}
gotID := restic.Hash(buf)
if !id.Equal(gotID) {
Printf(" wanted hash %v, got %v\n", id, gotID)
} else {
Printf(" hash for file content matches\n")
}
Printf(" ========================================\n")
Printf(" looking for info in the indexes\n")
blobsLoaded := false
// examine all data the indexes have for the pack file
for _, idx := range repo.Index().(*repository.MasterIndex).All() {
idxIDs, err := idx.IDs()
if err != nil {
idxIDs = restic.IDs{}
}
blobs := idx.ListPack(id)
if len(blobs) == 0 {
continue
}
Printf(" index %v:\n", idxIDs)
// convert list of blobs to []restic.Blob
var list []restic.Blob
for _, b := range blobs {
list = append(list, b.Blob)
}
checkPackSize(list, fi.Size)
err = loadBlobs(ctx, repo, id, list)
if err != nil {
Warnf("error: %v\n", err)
} else {
blobsLoaded = true
}
}
Printf(" ========================================\n")
Printf(" inspect the pack itself\n")
blobs, _, err := pack.List(repo.Key(), restic.ReaderAt(ctx, repo.Backend(), h), fi.Size)
if err != nil {
return fmt.Errorf("pack %v: %v", id.Str(), err)
}
checkPackSize(blobs, fi.Size)
if !blobsLoaded {
return loadBlobs(ctx, repo, id, blobs)
}
return nil
}
func checkPackSize(blobs []restic.Blob, fileSize int64) {
// track current size and offset
var size, offset uint64
sort.Slice(blobs, func(i, j int) bool {
return blobs[i].Offset < blobs[j].Offset
})
for _, pb := range blobs {
Printf(" %v blob %v, offset %-6d, raw length %-6d\n", pb.Type, pb.ID, pb.Offset, pb.Length)
if offset != uint64(pb.Offset) {
Printf(" hole in file, want offset %v, got %v\n", offset, pb.Offset)
}
offset += uint64(pb.Length)
size += uint64(pb.Length)
}
// compute header size, per blob: 1 byte type, 4 byte length, 32 byte id
size += uint64(restic.CiphertextLength(len(blobs) * (1 + 4 + 32)))
// length in uint32 little endian
size += 4
if uint64(fileSize) != size {
Printf(" file sizes do not match: computed %v from index, file size is %v\n", size, fileSize)
} else {
Printf(" file sizes match\n")
}
}
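To make the size check concrete: each header entry carries 1 byte type + 4 bytes length + 32 bytes ID = 37 bytes of plaintext, the encrypted header adds restic's crypto overhead (32 bytes here, assuming the usual 16-byte nonce plus 16-byte MAC), and the file ends with a 4-byte header length field. A hedged sketch of the same arithmetic:

// expectedPackSize mirrors checkPackSize: the raw blob lengths as stored,
// plus the encrypted header (37 plaintext bytes per entry + assumed 32 bytes
// of nonce/MAC overhead), plus the trailing 4-byte header-length field.
func expectedPackSize(blobLengths []uint) uint64 {
    var size uint64
    for _, l := range blobLengths {
        size += uint64(l)
    }
    header := uint64(len(blobLengths)*(1+4+32)) + 32
    return size + header + 4
}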


@@ -2,6 +2,7 @@ package main
import (
"context"
"encoding/json"
"path"
"reflect"
"sort"
@@ -62,15 +63,29 @@ func loadSnapshot(ctx context.Context, repo *repository.Repository, desc string)
// Comparer collects all things needed to compare two snapshots.
type Comparer struct {
repo restic.Repository
opts DiffOptions
repo restic.Repository
opts DiffOptions
printChange func(change *Change)
}
type Change struct {
MessageType string `json:"message_type"` // "change"
Path string `json:"path"`
Modifier string `json:"modifier"`
}
func NewChange(path string, mode string) *Change {
return &Change{MessageType: "change", Path: path, Modifier: mode}
}
// DiffStat collects stats for all types of items.
type DiffStat struct {
Files, Dirs, Others int
DataBlobs, TreeBlobs int
Bytes uint64
Files int `json:"files"`
Dirs int `json:"dirs"`
Others int `json:"others"`
DataBlobs int `json:"data_blobs"`
TreeBlobs int `json:"tree_blobs"`
Bytes uint64 `json:"bytes"`
}
// Add adds stats information for node to s.
@@ -113,21 +128,14 @@ func addBlobs(bs restic.BlobSet, node *restic.Node) {
}
}
// DiffStats collects the differences between two snapshots.
type DiffStats struct {
ChangedFiles int
Added DiffStat
Removed DiffStat
BlobsBefore, BlobsAfter, BlobsCommon restic.BlobSet
}
// NewDiffStats creates new stats for a diff run.
func NewDiffStats() *DiffStats {
return &DiffStats{
BlobsBefore: restic.NewBlobSet(),
BlobsAfter: restic.NewBlobSet(),
BlobsCommon: restic.NewBlobSet(),
}
type DiffStatsContainer struct {
MessageType string `json:"message_type"` // "statistics"
SourceSnapshot string `json:"source_snapshot"`
TargetSnapshot string `json:"target_snapshot"`
ChangedFiles int `json:"changed_files"`
Added DiffStat `json:"added"`
Removed DiffStat `json:"removed"`
BlobsBefore, BlobsAfter, BlobsCommon restic.BlobSet `json:"-"`
}
// updateBlobs updates the blob counters in the stats struct.
@@ -162,7 +170,7 @@ func (c *Comparer) printDir(ctx context.Context, mode string, stats *DiffStat, b
if node.Type == "dir" {
name += "/"
}
Printf("%-5s%v\n", mode, name)
c.printChange(NewChange(name, "+"))
stats.Add(node)
addBlobs(blobs, node)
@@ -221,7 +229,7 @@ func uniqueNodeNames(tree1, tree2 *restic.Tree) (tree1Nodes, tree2Nodes map[stri
return tree1Nodes, tree2Nodes, uniqueNames
}
func (c *Comparer) diffTree(ctx context.Context, stats *DiffStats, prefix string, id1, id2 restic.ID) error {
func (c *Comparer) diffTree(ctx context.Context, stats *DiffStatsContainer, prefix string, id1, id2 restic.ID) error {
debug.Log("diffing %v to %v", id1, id2)
tree1, err := c.repo.LoadTree(ctx, id1)
if err != nil {
@@ -265,7 +273,7 @@ func (c *Comparer) diffTree(ctx context.Context, stats *DiffStats, prefix string
}
if mod != "" {
Printf("%-5s%v\n", mod, name)
c.printChange(NewChange(name, mod))
}
if node1.Type == "dir" && node2.Type == "dir" {
@@ -284,7 +292,7 @@ func (c *Comparer) diffTree(ctx context.Context, stats *DiffStats, prefix string
if node1.Type == "dir" {
prefix += "/"
}
Printf("%-5s%v\n", "-", prefix)
c.printChange(NewChange(prefix, "-"))
stats.Removed.Add(node1)
if node1.Type == "dir" {
@@ -298,7 +306,7 @@ func (c *Comparer) diffTree(ctx context.Context, stats *DiffStats, prefix string
if node2.Type == "dir" {
prefix += "/"
}
Printf("%-5s%v\n", "+", prefix)
c.printChange(NewChange(prefix, "+"))
stats.Added.Add(node2)
if node2.Type == "dir" {
@@ -348,7 +356,9 @@ func runDiff(opts DiffOptions, gopts GlobalOptions, args []string) error {
return err
}
Verbosef("comparing snapshot %v to %v:\n\n", sn1.ID().Str(), sn2.ID().Str())
if !gopts.JSON {
Verbosef("comparing snapshot %v to %v:\n\n", sn1.ID().Str(), sn2.ID().Str())
}
if sn1.Tree == nil {
return errors.Errorf("snapshot %v has nil tree", sn1.ID().Str())
@@ -361,9 +371,33 @@ func runDiff(opts DiffOptions, gopts GlobalOptions, args []string) error {
c := &Comparer{
repo: repo,
opts: diffOptions,
printChange: func(change *Change) {
Printf("%-5s%v\n", change.Modifier, change.Path)
},
}
stats := NewDiffStats()
if gopts.JSON {
enc := json.NewEncoder(gopts.stdout)
c.printChange = func(change *Change) {
err := enc.Encode(change)
if err != nil {
Warnf("JSON encode failed: %v\n", err)
}
}
}
if gopts.Quiet {
c.printChange = func(change *Change) {}
}
stats := &DiffStatsContainer{
MessageType: "statistics",
SourceSnapshot: args[0],
TargetSnapshot: args[1],
BlobsBefore: restic.NewBlobSet(),
BlobsAfter: restic.NewBlobSet(),
BlobsCommon: restic.NewBlobSet(),
}
stats.BlobsBefore.Insert(restic.BlobHandle{Type: restic.TreeBlob, ID: *sn1.Tree})
stats.BlobsAfter.Insert(restic.BlobHandle{Type: restic.TreeBlob, ID: *sn2.Tree})
@@ -376,14 +410,21 @@ func runDiff(opts DiffOptions, gopts GlobalOptions, args []string) error {
updateBlobs(repo, stats.BlobsBefore.Sub(both).Sub(stats.BlobsCommon), &stats.Removed)
updateBlobs(repo, stats.BlobsAfter.Sub(both).Sub(stats.BlobsCommon), &stats.Added)
Printf("\n")
Printf("Files: %5d new, %5d removed, %5d changed\n", stats.Added.Files, stats.Removed.Files, stats.ChangedFiles)
Printf("Dirs: %5d new, %5d removed\n", stats.Added.Dirs, stats.Removed.Dirs)
Printf("Others: %5d new, %5d removed\n", stats.Added.Others, stats.Removed.Others)
Printf("Data Blobs: %5d new, %5d removed\n", stats.Added.DataBlobs, stats.Removed.DataBlobs)
Printf("Tree Blobs: %5d new, %5d removed\n", stats.Added.TreeBlobs, stats.Removed.TreeBlobs)
Printf(" Added: %-5s\n", formatBytes(uint64(stats.Added.Bytes)))
Printf(" Removed: %-5s\n", formatBytes(uint64(stats.Removed.Bytes)))
if gopts.JSON {
err := json.NewEncoder(gopts.stdout).Encode(stats)
if err != nil {
Warnf("JSON encode failed: %v\n", err)
}
} else {
Printf("\n")
Printf("Files: %5d new, %5d removed, %5d changed\n", stats.Added.Files, stats.Removed.Files, stats.ChangedFiles)
Printf("Dirs: %5d new, %5d removed\n", stats.Added.Dirs, stats.Removed.Dirs)
Printf("Others: %5d new, %5d removed\n", stats.Added.Others, stats.Removed.Others)
Printf("Data Blobs: %5d new, %5d removed\n", stats.Added.DataBlobs, stats.Removed.DataBlobs)
Printf("Tree Blobs: %5d new, %5d removed\n", stats.Added.TreeBlobs, stats.Removed.TreeBlobs)
Printf(" Added: %-5s\n", formatBytes(uint64(stats.Added.Bytes)))
Printf(" Removed: %-5s\n", formatBytes(uint64(stats.Removed.Bytes)))
}
return nil
}
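For illustration only (the path and modifier are invented), the JSON mode wired up here emits one object per change and finishes with a single statistics object:

// a hedged sketch of the per-change output; NewChange and the JSON field
// names come from the code above, the example values do not
enc := json.NewEncoder(os.Stdout)
_ = enc.Encode(NewChange("/home/user/notes.txt", "M"))
// prints: {"message_type":"change","path":"/home/user/notes.txt","modifier":"M"}
// The last line of a --json run is the DiffStatsContainer, with
// message_type "statistics", source_snapshot, target_snapshot, changed_files
// and the added/removed counters.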


@@ -67,41 +67,31 @@ func splitPath(p string) []string {
return append(s, f)
}
func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repository, prefix string, pathComponents []string, writeDump dump.WriteDump) error {
if tree == nil {
return fmt.Errorf("called with a nil tree")
}
if repo == nil {
return fmt.Errorf("called with a nil repository")
}
l := len(pathComponents)
if l == 0 {
return fmt.Errorf("empty path components")
}
func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repository, prefix string, pathComponents []string, d *dump.Dumper) error {
// If we print "/", we need to assume that there are multiple nodes at that
// level in the tree.
if pathComponents[0] == "" {
if err := checkStdoutArchive(); err != nil {
return err
}
return writeDump(ctx, repo, tree, "/", os.Stdout)
return d.DumpTree(ctx, tree, "/")
}
item := filepath.Join(prefix, pathComponents[0])
l := len(pathComponents)
for _, node := range tree.Nodes {
// If dumping something at the top level, it will just take the
// first item it finds and dump that according to the switch case below.
if node.Name == pathComponents[0] {
switch {
case l == 1 && dump.IsFile(node):
return dump.GetNodeData(ctx, os.Stdout, repo, node)
return d.WriteNode(ctx, node)
case l > 1 && dump.IsDir(node):
subtree, err := repo.LoadTree(ctx, *node.Subtree)
if err != nil {
return errors.Wrapf(err, "cannot load subtree for %q", item)
}
return printFromTree(ctx, subtree, repo, item, pathComponents[1:], writeDump)
return printFromTree(ctx, subtree, repo, item, pathComponents[1:], d)
case dump.IsDir(node):
if err := checkStdoutArchive(); err != nil {
return err
@@ -110,7 +100,7 @@ func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repositor
if err != nil {
return err
}
return writeDump(ctx, repo, subtree, item, os.Stdout)
return d.DumpTree(ctx, subtree, item)
case l > 1:
return fmt.Errorf("%q should be a dir, but is a %q", item, node.Type)
case !dump.IsFile(node):
@@ -128,12 +118,8 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
return errors.Fatal("no file and no snapshot ID specified")
}
var wd dump.WriteDump
switch opts.Archive {
case "tar":
wd = dump.WriteTar
case "zip":
wd = dump.WriteZip
case "tar", "zip":
default:
return fmt.Errorf("unknown archive format %q", opts.Archive)
}
@@ -166,7 +152,7 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
var id restic.ID
if snapshotIDString == "latest" {
id, err = restic.FindLatestSnapshot(ctx, repo, opts.Paths, opts.Tags, opts.Hosts)
id, err = restic.FindLatestSnapshot(ctx, repo, opts.Paths, opts.Tags, opts.Hosts, nil)
if err != nil {
Exitf(1, "latest snapshot for criteria not found: %v Paths:%v Hosts:%v", err, opts.Paths, opts.Hosts)
}
@@ -187,7 +173,8 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
Exitf(2, "loading tree for snapshot %q failed: %v", snapshotIDString, err)
}
err = printFromTree(ctx, tree, repo, "/", splittedPath, wd)
d := dump.New(opts.Archive, repo, os.Stdout)
err = printFromTree(ctx, tree, repo, "/", splittedPath, d)
if err != nil {
Exitf(2, "cannot dump file: %v", err)
}


@@ -3,6 +3,7 @@ package main
import (
"context"
"encoding/json"
"sort"
"strings"
"time"
@@ -40,8 +41,6 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
},
}
const shortStr = 8 // Length of short IDs: 4 bytes as hex strings
// FindOptions bundles all options for the find command.
type FindOptions struct {
Oldest string
@@ -268,7 +267,7 @@ func (f *Finder) findInSnapshot(ctx context.Context, sn *restic.Snapshot) error
if err != nil {
debug.Log("Error loading tree %v: %v", parentTreeID, err)
Printf("Unable to load tree %s\n ... which belongs to snapshot %s.\n", parentTreeID, sn.ID())
Printf("Unable to load tree %s\n ... which belongs to snapshot %s\n", parentTreeID, sn.ID())
return false, walker.ErrSkipNode
}
@@ -352,7 +351,7 @@ func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
if err != nil {
debug.Log("Error loading tree %v: %v", parentTreeID, err)
Printf("Unable to load tree %s\n ... which belongs to snapshot %s.\n", parentTreeID, sn.ID())
Printf("Unable to load tree %s\n ... which belongs to snapshot %s\n", parentTreeID, sn.ID())
return false, walker.ErrSkipNode
}
@@ -386,12 +385,12 @@ func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
idStr := id.String()
if _, ok := f.blobIDs[idStr]; !ok {
// Look for short ID form
if _, ok := f.blobIDs[idStr[:shortStr]]; !ok {
if _, ok := f.blobIDs[id.Str()]; !ok {
continue
}
// Replace the short ID with the long one
f.blobIDs[idStr] = struct{}{}
delete(f.blobIDs, idStr[:shortStr])
delete(f.blobIDs, id.Str())
}
f.out.PrintObject("blob", idStr, nodepath, parentTreeID.String(), sn)
}
@@ -401,6 +400,8 @@ func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
})
}
var errAllPacksFound = errors.New("all packs found")
// packsToBlobs converts the list of pack IDs to a list of blob IDs that
// belong to those packs.
func (f *Finder) packsToBlobs(ctx context.Context, packs []string) error {
@@ -412,20 +413,18 @@ func (f *Finder) packsToBlobs(ctx context.Context, packs []string) error {
f.blobIDs = make(map[string]struct{})
}
allPacksFound := false
packsFound := 0
debug.Log("Looking for packs...")
err := f.repo.List(ctx, restic.PackFile, func(id restic.ID, size int64) error {
if allPacksFound {
return nil
}
idStr := id.String()
if _, ok := packIDs[idStr]; !ok {
// Look for short ID form
if _, ok := packIDs[idStr[:shortStr]]; !ok {
if _, ok := packIDs[id.Str()]; !ok {
return nil
}
delete(packIDs, id.Str())
} else {
// forget found id
delete(packIDs, idStr)
}
debug.Log("Found pack %s", idStr)
blobs, _, err := f.repo.ListPack(ctx, id, size)
@@ -436,25 +435,75 @@ func (f *Finder) packsToBlobs(ctx context.Context, packs []string) error {
f.blobIDs[b.ID.String()] = struct{}{}
}
// Stop searching when all packs have been found
packsFound++
if packsFound >= len(packIDs) {
allPacksFound = true
if len(packIDs) == 0 {
return errAllPacksFound
}
return nil
})
if err != nil {
if err != nil && err != errAllPacksFound {
return err
}
if !allPacksFound {
return errors.Fatal("unable to find all specified pack(s)")
if err != errAllPacksFound {
// try to resolve unknown pack ids from the index
packIDs = f.indexPacksToBlobs(ctx, packIDs)
}
if len(packIDs) > 0 {
list := make([]string, 0, len(packIDs))
for h := range packIDs {
list = append(list, h)
}
sort.Strings(list)
return errors.Fatalf("unable to find pack(s): %v", list)
}
debug.Log("%d blobs found", len(f.blobIDs))
return nil
}
func (f *Finder) indexPacksToBlobs(ctx context.Context, packIDs map[string]struct{}) map[string]struct{} {
wctx, cancel := context.WithCancel(ctx)
defer cancel()
// remember which packs were found in the index
indexPackIDs := make(map[string]struct{})
for pb := range f.repo.Index().Each(wctx) {
idStr := pb.PackID.String()
// keep entry in packIDs as Each() returns individual index entries
matchingID := false
if _, ok := packIDs[idStr]; ok {
matchingID = true
} else {
if _, ok := packIDs[pb.PackID.Str()]; ok {
// expand id
delete(packIDs, pb.PackID.Str())
packIDs[idStr] = struct{}{}
matchingID = true
}
}
if matchingID {
f.blobIDs[pb.ID.String()] = struct{}{}
indexPackIDs[idStr] = struct{}{}
}
}
for id := range indexPackIDs {
delete(packIDs, id)
}
if len(indexPackIDs) > 0 {
list := make([]string, 0, len(indexPackIDs))
for h := range indexPackIDs {
list = append(list, h)
}
Warnf("some pack files are missing from the repository, getting their blobs from the repository index: %v\n\n", list)
}
return packIDs
}
func (f *Finder) findObjectPack(ctx context.Context, id string, t restic.BlobType) {
idx := f.repo.Index()
@@ -563,7 +612,7 @@ func runFind(opts FindOptions, gopts GlobalOptions, args []string) error {
}
if opts.PackID {
err := f.packsToBlobs(ctx, []string{f.pat.pattern[0]}) // TODO: support multiple packs
err := f.packsToBlobs(ctx, f.pat.pattern)
if err != nil {
return err
}

View File

@@ -5,6 +5,7 @@ import (
"encoding/json"
"io"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/restic"
"github.com/spf13/cobra"
)
@@ -15,8 +16,9 @@ var cmdForget = &cobra.Command{
Long: `
The "forget" command removes snapshots according to a policy. Please note that
this command really only deletes the snapshot object in the repository, which
is a reference to data stored there. In order to remove this (now unreferenced)
data after 'forget' was run successfully, see the 'prune' command.
is a reference to data stored there. In order to remove the unreferenced data
after "forget" was run successfully, see the "prune" command. Please also read
the documentation for "forget" to learn about important security considerations.
EXIT STATUS
===========
@@ -31,14 +33,19 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
// ForgetOptions collects all options for the forget command.
type ForgetOptions struct {
Last int
Hourly int
Daily int
Weekly int
Monthly int
Yearly int
Within restic.Duration
KeepTags restic.TagLists
Last int
Hourly int
Daily int
Weekly int
Monthly int
Yearly int
Within restic.Duration
WithinHourly restic.Duration
WithinDaily restic.Duration
WithinWeekly restic.Duration
WithinMonthly restic.Duration
WithinYearly restic.Duration
KeepTags restic.TagLists
Hosts []string
Tags restic.TagLists
@@ -64,6 +71,11 @@ func init() {
f.IntVarP(&forgetOptions.Monthly, "keep-monthly", "m", 0, "keep the last `n` monthly snapshots")
f.IntVarP(&forgetOptions.Yearly, "keep-yearly", "y", 0, "keep the last `n` yearly snapshots")
f.VarP(&forgetOptions.Within, "keep-within", "", "keep snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&forgetOptions.WithinHourly, "keep-within-hourly", "", "keep hourly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&forgetOptions.WithinDaily, "keep-within-daily", "", "keep daily snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&forgetOptions.WithinWeekly, "keep-within-weekly", "", "keep weekly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&forgetOptions.WithinMonthly, "keep-within-monthly", "", "keep monthly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&forgetOptions.WithinYearly, "keep-within-yearly", "", "keep yearly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.Var(&forgetOptions.KeepTags, "keep-tag", "keep snapshots with this `taglist` (can be specified multiple times)")
f.StringArrayVar(&forgetOptions.Hosts, "host", nil, "only consider snapshots with the given `host` (can be specified multiple times)")
@@ -98,10 +110,16 @@ func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {
return err
}
lock, err := lockRepoExclusive(gopts.ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
if gopts.NoLock && !opts.DryRun {
return errors.Fatal("--no-lock is only applicable in combination with --dry-run for forget command")
}
if !opts.DryRun || !gopts.NoLock {
lock, err := lockRepoExclusive(gopts.ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
}
ctx, cancel := context.WithCancel(gopts.ctx)
@@ -128,14 +146,19 @@ func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {
}
policy := restic.ExpirePolicy{
Last: opts.Last,
Hourly: opts.Hourly,
Daily: opts.Daily,
Weekly: opts.Weekly,
Monthly: opts.Monthly,
Yearly: opts.Yearly,
Within: opts.Within,
Tags: opts.KeepTags,
Last: opts.Last,
Hourly: opts.Hourly,
Daily: opts.Daily,
Weekly: opts.Weekly,
Monthly: opts.Monthly,
Yearly: opts.Yearly,
Within: opts.Within,
WithinHourly: opts.WithinHourly,
WithinDaily: opts.WithinDaily,
WithinWeekly: opts.WithinWeekly,
WithinMonthly: opts.WithinMonthly,
WithinYearly: opts.WithinYearly,
Tags: opts.KeepTags,
}
if policy.Empty() && len(args) == 0 {
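As a quick usage illustration (the values are invented), the new duration-based keep options combine with the existing counters, and a snapshot survives if any rule matches:

restic forget --keep-daily 7 --keep-within-weekly 2m --keep-within-monthly 1y --prune

Everything not kept is removed from the snapshot list; with --prune the now unreferenced data is cleaned up in the same run.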


@@ -10,10 +10,10 @@ import (
var cmdGenerate = &cobra.Command{
Use: "generate [flags]",
Short: "Generate manual pages and auto-completion files (bash, zsh)",
Short: "Generate manual pages and auto-completion files (bash, fish, zsh)",
Long: `
The "generate" command writes automatically generated files (like the man pages
and the auto-completion files for bash and zsh).
and the auto-completion files for bash, fish and zsh).
EXIT STATUS
===========
@@ -27,6 +27,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
type generateOptions struct {
ManDir string
BashCompletionFile string
FishCompletionFile string
ZSHCompletionFile string
}
@@ -37,6 +38,7 @@ func init() {
fs := cmdGenerate.Flags()
fs.StringVar(&genOpts.ManDir, "man", "", "write man pages to `directory`")
fs.StringVar(&genOpts.BashCompletionFile, "bash-completion", "", "write bash completion `file`")
fs.StringVar(&genOpts.FishCompletionFile, "fish-completion", "", "write fish completion `file`")
fs.StringVar(&genOpts.ZSHCompletionFile, "zsh-completion", "", "write zsh completion `file`")
}
@@ -63,6 +65,11 @@ func writeBashCompletion(file string) error {
return cmdRoot.GenBashCompletionFile(file)
}
func writeFishCompletion(file string) error {
Verbosef("writing fish completion file to %v\n", file)
return cmdRoot.GenFishCompletionFile(file, true)
}
func writeZSHCompletion(file string) error {
Verbosef("writing zsh completion file to %v\n", file)
return cmdRoot.GenZshCompletionFile(file)
@@ -83,6 +90,13 @@ func runGenerate(cmd *cobra.Command, args []string) error {
}
}
if genOpts.FishCompletionFile != "" {
err := writeFishCompletion(genOpts.FishCompletionFile)
if err != nil {
return err
}
}
if genOpts.ZSHCompletionFile != "" {
err := writeZSHCompletion(genOpts.ZSHCompletionFile)
if err != nil {


@@ -53,11 +53,6 @@ func runInit(opts InitOptions, gopts GlobalOptions, args []string) error {
return err
}
be, err := create(repo, gopts.extended)
if err != nil {
return errors.Fatalf("create repository at %s failed: %v\n", location.StripPassword(gopts.Repo), err)
}
gopts.password, err = ReadPasswordTwice(gopts,
"enter password for new repository: ",
"enter password again: ")
@@ -65,6 +60,11 @@ func runInit(opts InitOptions, gopts GlobalOptions, args []string) error {
return err
}
be, err := create(repo, gopts.extended)
if err != nil {
return errors.Fatalf("create repository at %s failed: %v\n", location.StripPassword(gopts.Repo), err)
}
s := repository.New(be)
err = s.Init(gopts.ctx, gopts.password, chunkerPolynomial)


@@ -131,6 +131,11 @@ func addKey(gopts GlobalOptions, repo *repository.Repository) error {
return errors.Fatalf("creating new key failed: %v\n", err)
}
err = switchToNewKeyAndRemoveIfBroken(gopts.ctx, repo, id, pw)
if err != nil {
return err
}
Verbosef("saved new key as %s\n", id)
return nil
@@ -161,8 +166,14 @@ func changePassword(gopts GlobalOptions, repo *repository.Repository) error {
if err != nil {
return errors.Fatalf("creating new key failed: %v\n", err)
}
oldID := repo.KeyName()
h := restic.Handle{Type: restic.KeyFile, Name: repo.KeyName()}
err = switchToNewKeyAndRemoveIfBroken(gopts.ctx, repo, id, pw)
if err != nil {
return err
}
h := restic.Handle{Type: restic.KeyFile, Name: oldID}
err = repo.Backend().Remove(gopts.ctx, h)
if err != nil {
return err
@@ -173,6 +184,19 @@ func changePassword(gopts GlobalOptions, repo *repository.Repository) error {
return nil
}
func switchToNewKeyAndRemoveIfBroken(ctx context.Context, repo *repository.Repository, key *repository.Key, pw string) error {
// Verify new key to make sure it really works. A broken key can render the
// whole repository inaccessible
err := repo.SearchKey(ctx, pw, 0, key.Name())
if err != nil {
// the key is invalid, try to remove it
h := restic.Handle{Type: restic.KeyFile, Name: key.Name()}
_ = repo.Backend().Remove(ctx, h)
return errors.Fatalf("failed to access repository with new key: %v", err)
}
return nil
}
func runKey(gopts GlobalOptions, args []string) error {
if len(args) < 1 || (args[0] == "remove" && len(args) != 2) || (args[0] != "remove" && len(args) != 1) {
return errors.Fatal("wrong number of arguments")


@@ -39,7 +39,7 @@ func runList(cmd *cobra.Command, opts GlobalOptions, args []string) error {
return err
}
if !opts.NoLock {
if !opts.NoLock && args[0] != "locks" {
lock, err := lockRepo(opts.ctx, repo)
defer unlockRepo(lock)
if err != nil {


@@ -61,9 +61,9 @@ func init() {
flags := cmdLs.Flags()
flags.BoolVarP(&lsOptions.ListLong, "long", "l", false, "use a long listing format showing size and mode")
flags.StringArrayVarP(&lsOptions.Hosts, "host", "H", nil, "only consider snapshots for this `host`, when no snapshot ID is given (can be specified multiple times)")
flags.Var(&lsOptions.Tags, "tag", "only consider snapshots which include this `taglist`, when no snapshot ID is given")
flags.StringArrayVar(&lsOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`, when no snapshot ID is given")
flags.StringArrayVarP(&lsOptions.Hosts, "host", "H", nil, "only consider snapshots for this `host`, when snapshot ID \"latest\" is given (can be specified multiple times)")
flags.Var(&lsOptions.Tags, "tag", "only consider snapshots which include this `taglist`, when snapshot ID \"latest\" is given (can be specified multiple times)")
flags.StringArrayVar(&lsOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`, when snapshot ID \"latest\" is given (can be specified multiple times)")
flags.BoolVar(&lsOptions.Recursive, "recursive", false, "include files in subfolders of the listed directories")
}
@@ -74,23 +74,49 @@ type lsSnapshot struct {
StructType string `json:"struct_type"` // "snapshot"
}
type lsNode struct {
Name string `json:"name"`
Type string `json:"type"`
Path string `json:"path"`
UID uint32 `json:"uid"`
GID uint32 `json:"gid"`
Size uint64 `json:"size,omitempty"`
Mode os.FileMode `json:"mode,omitempty"`
ModTime time.Time `json:"mtime,omitempty"`
AccessTime time.Time `json:"atime,omitempty"`
ChangeTime time.Time `json:"ctime,omitempty"`
StructType string `json:"struct_type"` // "node"
// Print node in our custom JSON format, followed by a newline.
func lsNodeJSON(enc *json.Encoder, path string, node *restic.Node) error {
n := &struct {
Name string `json:"name"`
Type string `json:"type"`
Path string `json:"path"`
UID uint32 `json:"uid"`
GID uint32 `json:"gid"`
Size *uint64 `json:"size,omitempty"`
Mode os.FileMode `json:"mode,omitempty"`
Permissions string `json:"permissions,omitempty"`
ModTime time.Time `json:"mtime,omitempty"`
AccessTime time.Time `json:"atime,omitempty"`
ChangeTime time.Time `json:"ctime,omitempty"`
StructType string `json:"struct_type"` // "node"
size uint64 // Target for Size pointer.
}{
Name: node.Name,
Type: node.Type,
Path: path,
UID: node.UID,
GID: node.GID,
size: node.Size,
Mode: node.Mode,
Permissions: node.Mode.String(),
ModTime: node.ModTime,
AccessTime: node.AccessTime,
ChangeTime: node.ChangeTime,
StructType: "node",
}
// Always print size for regular files, even when empty,
// but never for other types.
if node.Type == "file" {
n.Size = &n.size
}
return enc.Encode(n)
}
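The pointer-plus-omitempty construction above is what lets the size disappear for directories and symlinks while still printing "size":0 for empty regular files. A stripped-down sketch of the pattern outside restic's types:

package main

import (
    "encoding/json"
    "fmt"
)

// withSize demonstrates the pointer-for-omitempty trick used by lsNodeJSON.
type withSize struct {
    Size *uint64 `json:"size,omitempty"`
}

func main() {
    var zero uint64
    out, _ := json.Marshal(withSize{Size: &zero})
    fmt.Println(string(out)) // {"size":0}
    out, _ = json.Marshal(withSize{})
    fmt.Println(string(out)) // {}
}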
func runLs(opts LsOptions, gopts GlobalOptions, args []string) error {
if len(args) == 0 {
return errors.Fatal("no snapshot ID specified")
return errors.Fatal("no snapshot ID specified, specify snapshot ID or use special ID 'latest'")
}
// extract any specific directories to walk
@@ -159,7 +185,7 @@ func runLs(opts LsOptions, gopts GlobalOptions, args []string) error {
enc := json.NewEncoder(gopts.stdout)
printSnapshot = func(sn *restic.Snapshot) {
err = enc.Encode(lsSnapshot{
err := enc.Encode(lsSnapshot{
Snapshot: sn,
ID: sn.ID(),
ShortID: sn.ID().Str(),
@@ -171,19 +197,7 @@ func runLs(opts LsOptions, gopts GlobalOptions, args []string) error {
}
printNode = func(path string, node *restic.Node) {
err = enc.Encode(lsNode{
Name: node.Name,
Type: node.Type,
Path: path,
UID: node.UID,
GID: node.GID,
Size: node.Size,
Mode: node.Mode,
ModTime: node.ModTime,
AccessTime: node.AccessTime,
ChangeTime: node.ChangeTime,
StructType: "node",
})
err := lsNodeJSON(enc, path, node)
if err != nil {
Warnf("JSON encode failed: %v\n", err)
}

cmd/restic/cmd_ls_test.go (new file, 92 lines)

@@ -0,0 +1,92 @@
package main
import (
"bytes"
"encoding/json"
"os"
"testing"
"time"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
)
func TestLsNodeJSON(t *testing.T) {
for _, c := range []struct {
path string
restic.Node
expect string
}{
// Mode is omitted when zero.
// Permissions, by convention, is "-" per mode bit.
{
path: "/bar/baz",
Node: restic.Node{
Name: "baz",
Type: "file",
Size: 12345,
UID: 10000000,
GID: 20000000,
User: "nobody",
Group: "nobodies",
Links: 1,
},
expect: `{"name":"baz","type":"file","path":"/bar/baz","uid":10000000,"gid":20000000,"size":12345,"permissions":"----------","mtime":"0001-01-01T00:00:00Z","atime":"0001-01-01T00:00:00Z","ctime":"0001-01-01T00:00:00Z","struct_type":"node"}`,
},
// Even empty files get an explicit size.
{
path: "/foo/empty",
Node: restic.Node{
Name: "empty",
Type: "file",
Size: 0,
UID: 1001,
GID: 1001,
User: "not printed",
Group: "not printed",
Links: 0xF00,
},
expect: `{"name":"empty","type":"file","path":"/foo/empty","uid":1001,"gid":1001,"size":0,"permissions":"----------","mtime":"0001-01-01T00:00:00Z","atime":"0001-01-01T00:00:00Z","ctime":"0001-01-01T00:00:00Z","struct_type":"node"}`,
},
// Non-regular files do not get a size.
// Mode is printed in decimal, including the type bits.
{
path: "/foo/link",
Node: restic.Node{
Name: "link",
Type: "symlink",
Mode: os.ModeSymlink | 0777,
LinkTarget: "not printed",
},
expect: `{"name":"link","type":"symlink","path":"/foo/link","uid":0,"gid":0,"mode":134218239,"permissions":"Lrwxrwxrwx","mtime":"0001-01-01T00:00:00Z","atime":"0001-01-01T00:00:00Z","ctime":"0001-01-01T00:00:00Z","struct_type":"node"}`,
},
{
path: "/some/directory",
Node: restic.Node{
Name: "directory",
Type: "dir",
Mode: os.ModeDir | 0755,
ModTime: time.Date(2020, 1, 2, 3, 4, 5, 0, time.UTC),
AccessTime: time.Date(2021, 2, 3, 4, 5, 6, 7, time.UTC),
ChangeTime: time.Date(2022, 3, 4, 5, 6, 7, 8, time.UTC),
},
expect: `{"name":"directory","type":"dir","path":"/some/directory","uid":0,"gid":0,"mode":2147484141,"permissions":"drwxr-xr-x","mtime":"2020-01-02T03:04:05Z","atime":"2021-02-03T04:05:06.000000007Z","ctime":"2022-03-04T05:06:07.000000008Z","struct_type":"node"}`,
},
} {
buf := new(bytes.Buffer)
enc := json.NewEncoder(buf)
err := lsNodeJSON(enc, c.path, &c.Node)
rtest.OK(t, err)
rtest.Equals(t, c.expect+"\n", buf.String())
// Sanity check: output must be valid JSON.
var v interface{}
err = json.NewDecoder(buf).Decode(&v)
rtest.OK(t, err)
}
}


@@ -122,6 +122,7 @@ func runMount(opts MountOptions, gopts GlobalOptions, args []string) error {
mountOptions := []systemFuse.MountOption{
systemFuse.ReadOnly(),
systemFuse.FSName("restic"),
systemFuse.MaxReadahead(128 * 1024),
}
if opts.AllowOther {
@@ -161,7 +162,8 @@ func runMount(opts MountOptions, gopts GlobalOptions, args []string) error {
root := fuse.NewRoot(repo, cfg)
Printf("Now serving the repository at %s\n", mountpoint)
Printf("When finished, quit with Ctrl-c or umount the mountpoint.\n")
Printf("Use another terminal or tool to browse the contents of this folder.\n")
Printf("When finished, quit with Ctrl-c here or umount the mountpoint.\n")
debug.Log("serving mount at %v", mountpoint)
err = fs.Serve(c, root)


@@ -66,6 +66,7 @@ func addPruneOptions(c *cobra.Command) {
}
func verifyPruneOptions(opts *PruneOptions) error {
opts.MaxRepackBytes = math.MaxUint64
if len(opts.MaxRepackSize) > 0 {
size, err := parseSizeStr(opts.MaxRepackSize)
if err != nil {
@@ -119,18 +120,6 @@ func verifyPruneOptions(opts *PruneOptions) error {
return nil
}
func shortenStatus(maxLength int, s string) string {
if len(s) <= maxLength {
return s
}
if maxLength < 3 {
return s[:maxLength]
}
return s[:maxLength-3] + "..."
}
func runPrune(opts PruneOptions, gopts GlobalOptions) error {
err := verifyPruneOptions(&opts)
if err != nil {
@@ -249,7 +238,7 @@ func prune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, usedB
"Integrity check failed: Data seems to be missing.\n"+
"Will not start prune to prevent (additional) data loss!\n"+
"Please report this error (along with the output of the 'prune' run) at\n"+
"https://github.com/restic/restic/issues/new/choose", usedBlobs)
"https://github.com/restic/restic/issues/new/choose\n", usedBlobs)
return errorIndexIncomplete
}
@@ -324,7 +313,7 @@ func prune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, usedB
// Pack size does not fit and pack is needed => error
// If the pack is not needed, this is no error, the pack can
// and will be simply removed, see below.
Warnf("pack %s: calculated size %d does not match real size %d\nRun 'restic rebuild-index'.",
Warnf("pack %s: calculated size %d does not match real size %d\nRun 'restic rebuild-index'.\n",
id.Str(), p.unusedSize+p.usedSize, packSize)
return errorSizeNotMatching
}
@@ -430,11 +419,7 @@ func prune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, usedB
for _, p := range repackCandidates {
reachedUnusedSizeAfter := (stats.size.unused-stats.size.remove-stats.size.repackrm < maxUnusedSizeAfter)
reachedRepackSize := false
if opts.MaxRepackBytes > 0 {
reachedRepackSize = stats.size.repack+p.unusedSize+p.usedSize > opts.MaxRepackBytes
}
reachedRepackSize := stats.size.repack+p.unusedSize+p.usedSize >= opts.MaxRepackBytes
switch {
case reachedRepackSize:
@@ -459,26 +444,26 @@ func prune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, usedB
stats.size.repackrm += stats.size.duplicate
}
Verboseff("\nused: %10d blobs / %s\n", stats.blobs.used, formatBytes(stats.size.used))
Verboseff("\nused: %10d blobs / %s\n", stats.blobs.used, formatBytes(stats.size.used))
if stats.blobs.duplicate > 0 {
Verboseff("duplicates: %10d blobs / %s\n", stats.blobs.duplicate, formatBytes(stats.size.duplicate))
Verboseff("duplicates: %10d blobs / %s\n", stats.blobs.duplicate, formatBytes(stats.size.duplicate))
}
Verboseff("unused: %10d blobs / %s\n", stats.blobs.unused, formatBytes(stats.size.unused))
Verboseff("unused: %10d blobs / %s\n", stats.blobs.unused, formatBytes(stats.size.unused))
if stats.size.unref > 0 {
Verboseff("unreferenced: %s\n", formatBytes(stats.size.unref))
Verboseff("unreferenced: %s\n", formatBytes(stats.size.unref))
}
totalBlobs := stats.blobs.used + stats.blobs.unused + stats.blobs.duplicate
totalSize := stats.size.used + stats.size.duplicate + stats.size.unused + stats.size.unref
unusedSize := stats.size.duplicate + stats.size.unused
Verboseff("total: %10d blobs / %s\n", totalBlobs, formatBytes(totalSize))
Verboseff("total: %10d blobs / %s\n", totalBlobs, formatBytes(totalSize))
Verboseff("unused size: %s of total size\n", formatPercent(unusedSize, totalSize))
Verbosef("\nto repack: %10d blobs / %s\n", stats.blobs.repack, formatBytes(stats.size.repack))
Verbosef("this removes %10d blobs / %s\n", stats.blobs.repackrm, formatBytes(stats.size.repackrm))
Verbosef("to delete: %10d blobs / %s\n", stats.blobs.remove, formatBytes(stats.size.remove+stats.size.unref))
Verbosef("\nto repack: %10d blobs / %s\n", stats.blobs.repack, formatBytes(stats.size.repack))
Verbosef("this removes: %10d blobs / %s\n", stats.blobs.repackrm, formatBytes(stats.size.repackrm))
Verbosef("to delete: %10d blobs / %s\n", stats.blobs.remove, formatBytes(stats.size.remove+stats.size.unref))
totalPruneSize := stats.size.remove + stats.size.repackrm + stats.size.unref
Verbosef("total prune: %10d blobs / %s\n", stats.blobs.remove+stats.blobs.repackrm, formatBytes(totalPruneSize))
Verbosef("remaining: %10d blobs / %s\n", totalBlobs-(stats.blobs.remove+stats.blobs.repackrm), formatBytes(totalSize-totalPruneSize))
Verbosef("total prune: %10d blobs / %s\n", stats.blobs.remove+stats.blobs.repackrm, formatBytes(totalPruneSize))
Verbosef("remaining: %10d blobs / %s\n", totalBlobs-(stats.blobs.remove+stats.blobs.repackrm), formatBytes(totalSize-totalPruneSize))
unusedAfter := unusedSize - stats.size.remove - stats.size.repackrm
Verbosef("unused size after prune: %s (%s of remaining size)\n",
formatBytes(unusedAfter), formatPercent(unusedAfter, totalSize-totalPruneSize))
@@ -487,11 +472,11 @@ func prune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, usedB
Verboseff("partly used packs: %10d\n", stats.packs.partlyUsed)
Verboseff("unused packs: %10d\n\n", stats.packs.unused)
Verboseff("to keep: %10d packs\n", stats.packs.keep)
Verboseff("to repack: %10d packs\n", len(repackPacks))
Verboseff("to delete: %10d packs\n", len(removePacks))
Verboseff("to keep: %10d packs\n", stats.packs.keep)
Verboseff("to repack: %10d packs\n", len(repackPacks))
Verboseff("to delete: %10d packs\n", len(removePacks))
if len(removePacksFirst) > 0 {
Verboseff("to delete: %10d unreferenced packs\n\n", len(removePacksFirst))
Verboseff("to delete: %10d unreferenced packs\n\n", len(removePacksFirst))
}
if opts.DryRun {


@@ -73,7 +73,27 @@ func rebuildIndex(opts RebuildIndexOptions, gopts GlobalOptions, repo *repositor
}
} else {
Verbosef("loading indexes...\n")
err := repo.LoadIndex(gopts.ctx)
mi := repository.NewMasterIndex()
err := repository.ForAllIndexes(ctx, repo, func(id restic.ID, idx *repository.Index, oldFormat bool, err error) error {
if err != nil {
Warnf("removing invalid index %v: %v\n", id, err)
obsoleteIndexes = append(obsoleteIndexes, id)
return nil
}
mi.Insert(idx)
return nil
})
if err != nil {
return err
}
err = mi.MergeFinalIndexes()
if err != nil {
return err
}
err = repo.SetIndex(mi)
if err != nil {
return err
}


@@ -1,6 +1,7 @@
package main
import (
"context"
"os"
"time"
@@ -11,11 +12,11 @@ import (
var cmdRecover = &cobra.Command{
Use: "recover [flags]",
Short: "Recover data from the repository",
Short: "Recover data from the repository not referenced by snapshots",
Long: `
The "recover" command builds a new snapshot from all directories it can find in
the raw data of the repository. It can be used if, for example, a snapshot has
been removed by accident with "forget".
the raw data of the repository which are not referenced in an existing snapshot.
It can be used if, for example, a snapshot has been removed by accident with "forget".
EXIT STATUS
===========
@@ -59,24 +60,14 @@ func runRecover(gopts GlobalOptions) error {
trees := make(map[restic.ID]bool)
for blob := range repo.Index().Each(gopts.ctx) {
if blob.Blob.Type != restic.TreeBlob {
continue
if blob.Type == restic.TreeBlob {
trees[blob.Blob.ID] = false
}
trees[blob.Blob.ID] = false
}
cur := 0
max := len(trees)
Verbosef("load %d trees\n\n", len(trees))
Verbosef("load %d trees\n", len(trees))
bar := newProgressMax(!gopts.Quiet, uint64(len(trees)), "trees loaded")
for id := range trees {
cur++
Verbosef("\rtree (%v/%v)", cur, max)
if !trees[id] {
trees[id] = false
}
tree, err := repo.LoadTree(gopts.ctx, id)
if err != nil {
Warnf("unable to load tree %v: %v\n", id.Str(), err)
@@ -84,28 +75,39 @@ func runRecover(gopts GlobalOptions) error {
}
for _, node := range tree.Nodes {
if node.Type != "dir" || node.Subtree == nil {
continue
if node.Type == "dir" && node.Subtree != nil {
trees[*node.Subtree] = true
}
subtree := *node.Subtree
trees[subtree] = true
}
bar.Add(1)
}
Verbosef("\ndone\n")
bar.Done()
Verbosef("load snapshots\n")
err = restic.ForAllSnapshots(gopts.ctx, repo, nil, func(id restic.ID, sn *restic.Snapshot, err error) error {
trees[*sn.Tree] = true
return nil
})
if err != nil {
return err
}
Verbosef("done\n")
roots := restic.NewIDSet()
for id, seen := range trees {
if seen {
continue
if !seen {
Verboseff("found root tree %v\n", id.Str())
roots.Insert(id)
}
}
Printf("\nfound %d unreferenced roots\n", len(roots))
roots.Insert(id)
if len(roots) == 0 {
Verbosef("no snapshot to write.\n")
return nil
}
Verbosef("found %d roots\n", len(roots))
tree := restic.NewTree()
tree := restic.NewTree(len(roots))
for id := range roots {
var subtreeID = id
node := restic.Node{
@@ -117,7 +119,7 @@ func runRecover(gopts GlobalOptions) error {
ModTime: time.Now(),
ChangeTime: time.Now(),
}
err = tree.Insert(&node)
err := tree.Insert(&node)
if err != nil {
return err
}
@@ -133,19 +135,23 @@ func runRecover(gopts GlobalOptions) error {
return errors.Fatalf("unable to save blobs to the repo: %v", err)
}
sn, err := restic.NewSnapshot([]string{"/recover"}, []string{}, hostname, time.Now())
return createSnapshot(gopts.ctx, "/recover", hostname, []string{"recovered"}, repo, &treeID)
}
func createSnapshot(ctx context.Context, name, hostname string, tags []string, repo restic.Repository, tree *restic.ID) error {
sn, err := restic.NewSnapshot([]string{name}, tags, hostname, time.Now())
if err != nil {
return errors.Fatalf("unable to save snapshot: %v", err)
}
sn.Tree = &treeID
sn.Tree = tree
id, err := repo.SaveJSONUnpacked(gopts.ctx, restic.SnapshotFile, sn)
id, err := repo.SaveJSONUnpacked(ctx, restic.SnapshotFile, sn)
if err != nil {
return errors.Fatalf("unable to save snapshot: %v", err)
}
Printf("saved new snapshot %v\n", id.Str())
return nil
}


@@ -2,6 +2,7 @@ package main
import (
"strings"
"time"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
@@ -117,7 +118,7 @@ func runRestore(opts RestoreOptions, gopts GlobalOptions, args []string) error {
var id restic.ID
if snapshotIDString == "latest" {
id, err = restic.FindLatestSnapshot(ctx, repo, opts.Paths, opts.Tags, opts.Hosts)
id, err = restic.FindLatestSnapshot(ctx, repo, opts.Paths, opts.Tags, opts.Hosts, nil)
if err != nil {
Exitf(1, "latest snapshot for criteria not found: %v Paths:%v Hosts:%v", err, opts.Paths, opts.Hosts)
}
@@ -202,6 +203,7 @@ func runRestore(opts RestoreOptions, gopts GlobalOptions, args []string) error {
if opts.Verify {
Verbosef("verifying files in %s\n", opts.Target)
var count int
t0 := time.Now()
count, err = res.VerifyFiles(ctx, opts.Target)
if err != nil {
return err
@@ -209,7 +211,8 @@ func runRestore(opts RestoreOptions, gopts GlobalOptions, args []string) error {
if totalErrors > 0 {
return errors.Fatalf("There were %d errors\n", totalErrors)
}
Verbosef("finished verifying %d files in %s\n", count, opts.Target)
Verbosef("finished verifying %d files in %s (took %s)\n", count, opts.Target,
time.Since(t0).Round(time.Millisecond))
}
return nil


@@ -71,7 +71,7 @@ func runSelfUpdate(opts SelfUpdateOptions, gopts GlobalOptions, args []string) e
}
}
Printf("writing restic to %v\n", opts.Output)
Verbosef("writing restic to %v\n", opts.Output)
v, err := selfupdate.DownloadLatestStableRelease(gopts.ctx, opts.Output, version, Verbosef)
if err != nil {


@@ -36,7 +36,8 @@ type SnapshotOptions struct {
Tags restic.TagLists
Paths []string
Compact bool
Last bool
Last bool // This option should be removed in favour of Latest.
Latest int
GroupBy string
}
@@ -51,6 +52,12 @@ func init() {
f.StringArrayVar(&snapshotOptions.Paths, "path", nil, "only consider snapshots for this `path` (can be specified multiple times)")
f.BoolVarP(&snapshotOptions.Compact, "compact", "c", false, "use compact output format")
f.BoolVar(&snapshotOptions.Last, "last", false, "only show the last snapshot for each host and path")
err := f.MarkDeprecated("last", "use --latest 1")
if err != nil {
// MarkDeprecated only returns an error when the flag is not found
panic(err)
}
f.IntVar(&snapshotOptions.Latest, "latest", 0, "only show the last `n` snapshots for each host and path")
f.StringVarP(&snapshotOptions.GroupBy, "group-by", "g", "", "string for grouping snapshots by host,paths,tags")
}
@@ -82,7 +89,11 @@ func runSnapshots(opts SnapshotOptions, gopts GlobalOptions, args []string) erro
for k, list := range snapshotGroups {
if opts.Last {
list = FilterLastSnapshots(list)
// This branch should be removed at the same time
// as --last.
list = FilterLastestSnapshots(list, 1)
} else if opts.Latest > 0 {
list = FilterLastestSnapshots(list, opts.Latest)
}
sort.Sort(sort.Reverse(list))
snapshotGroups[k] = list
@@ -125,21 +136,22 @@ func newFilterLastSnapshotsKey(sn *restic.Snapshot) filterLastSnapshotsKey {
return filterLastSnapshotsKey{sn.Hostname, strings.Join(paths, "|")}
}
// FilterLastSnapshots filters a list of snapshots to only return the last
// entry for each hostname and path. If the snapshot contains multiple paths,
// they will be joined and treated as one item.
func FilterLastSnapshots(list restic.Snapshots) restic.Snapshots {
// FilterLastestSnapshots filters a list of snapshots to only return
// the last `limit` entries for each hostname and path. If the snapshot
// contains multiple paths, they will be joined and treated as one
// item.
func FilterLastestSnapshots(list restic.Snapshots, limit int) restic.Snapshots {
// Sort the snapshots so that the newer ones are listed first
sort.SliceStable(list, func(i, j int) bool {
return list[i].Time.After(list[j].Time)
})
var results restic.Snapshots
seen := make(map[filterLastSnapshotsKey]bool)
seen := make(map[filterLastSnapshotsKey]int)
for _, sn := range list {
key := newFilterLastSnapshotsKey(sn)
if !seen[key] {
seen[key] = true
if seen[key] < limit {
seen[key]++
results = append(results, sn)
}
}
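With the per-key counter in place, the deprecated --last flag becomes a special case of --latest. For example:

restic snapshots --latest 2

lists at most the two newest snapshots for each hostname and path combination, and --last now simply maps onto --latest 1.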


@@ -25,16 +25,21 @@ const numDeleteWorkers = 8
func deleteFiles(gopts GlobalOptions, ignoreError bool, repo restic.Repository, fileList restic.IDSet, fileType restic.FileType) error {
totalCount := len(fileList)
fileChan := make(chan restic.ID)
go func() {
wg, ctx := errgroup.WithContext(gopts.ctx)
wg.Go(func() error {
defer close(fileChan)
for id := range fileList {
fileChan <- id
select {
case fileChan <- id:
case <-ctx.Done():
return ctx.Err()
}
}
close(fileChan)
}()
return nil
})
bar := newProgressMax(!gopts.JSON && !gopts.Quiet, uint64(totalCount), "files deleted")
defer bar.Done()
wg, ctx := errgroup.WithContext(gopts.ctx)
for i := 0; i < numDeleteWorkers; i++ {
wg.Go(func() error {
for id := range fileChan {
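The point of moving the producer into the errgroup is that a failing worker cancels the shared context, and the select keeps the producer from blocking forever on a channel nobody drains any more. A minimal sketch of the pattern outside restic (the work done per item is invented):

package main

import (
    "context"
    "fmt"

    "golang.org/x/sync/errgroup"
)

func main() {
    wg, ctx := errgroup.WithContext(context.Background())
    ch := make(chan int)

    // Producer: stops early if any consumer fails and cancels ctx.
    wg.Go(func() error {
        defer close(ch)
        for i := 0; i < 100; i++ {
            select {
            case ch <- i:
            case <-ctx.Done():
                return ctx.Err()
            }
        }
        return nil
    })

    // Consumers draining the channel.
    for w := 0; w < 4; w++ {
        wg.Go(func() error {
            for i := range ch {
                fmt.Println("deleted item", i)
            }
            return nil
        })
    }

    if err := wg.Wait(); err != nil {
        fmt.Println("error:", err)
    }
}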


@@ -23,7 +23,7 @@ func FindFilteredSnapshots(ctx context.Context, repo *repository.Repository, hos
for _, s := range snapshotIDs {
if s == "latest" {
usedFilter = true
id, err = restic.FindLatestSnapshot(ctx, repo, paths, tags, hosts)
id, err = restic.FindLatestSnapshot(ctx, repo, paths, tags, hosts, nil)
if err != nil {
Warnf("Ignoring %q, no snapshot matched given filter (Paths:%v Tags:%v Hosts:%v)\n", s, paths, tags, hosts)
continue
@@ -31,7 +31,7 @@ func FindFilteredSnapshots(ctx context.Context, repo *repository.Repository, hos
} else {
id, err = restic.FindSnapshot(ctx, repo, s)
if err != nil {
Warnf("Ignoring %q, it is not a snapshot id\n", s)
Warnf("Ignoring %q: %v\n", s, err)
continue
}
}


@@ -31,6 +31,7 @@ import (
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/textfile"
"github.com/restic/restic/internal/ui/termstatus"
"github.com/restic/restic/internal/errors"
@@ -39,7 +40,7 @@ import (
"golang.org/x/crypto/ssh/terminal"
)
var version = "0.12.0"
var version = "0.13.0"
// TimeFormat is the format used for all timestamps printed by restic.
const TimeFormat = "2006-01-02 15:04:05"
@@ -60,6 +61,7 @@ type GlobalOptions struct {
CacheDir string
NoCache bool
CACerts []string
InsecureTLS bool
TLSClientCert string
CleanupCache bool
@@ -96,6 +98,8 @@ func init() {
var cancel context.CancelFunc
globalOptions.ctx, cancel = context.WithCancel(context.Background())
AddCleanupHandler(func() error {
// Must be called before the unlock cleanup handler to ensure that the latter is
// not blocked due to limited number of backend connections, see #1434
cancel()
return nil
})
@@ -114,10 +118,13 @@ func init() {
f.BoolVar(&globalOptions.NoCache, "no-cache", false, "do not use a local cache")
f.StringSliceVar(&globalOptions.CACerts, "cacert", nil, "`file` to load root certificates from (default: use system certificates)")
f.StringVar(&globalOptions.TLSClientCert, "tls-client-cert", "", "path to a `file` containing PEM encoded TLS client certificate and private key")
f.BoolVar(&globalOptions.InsecureTLS, "insecure-tls", false, "skip TLS certificate verification when connecting to the repo (insecure)")
f.BoolVar(&globalOptions.CleanupCache, "cleanup-cache", false, "auto remove old cache directories")
f.IntVar(&globalOptions.LimitUploadKb, "limit-upload", 0, "limits uploads to a maximum rate in KiB/s. (default: unlimited)")
f.IntVar(&globalOptions.LimitDownloadKb, "limit-download", 0, "limits downloads to a maximum rate in KiB/s. (default: unlimited)")
f.StringSliceVarP(&globalOptions.Options, "option", "o", []string{}, "set extended option (`key=value`, can be specified multiple times)")
// Use our "generate" command instead of the cobra provided "completion" command
cmdRoot.CompletionOptions.DisableDefaultCmd = true
restoreTerminal()
}
@@ -142,7 +149,13 @@ func stdinIsTerminal() bool {
}
func stdoutIsTerminal() bool {
return terminal.IsTerminal(int(os.Stdout.Fd()))
// mintty on windows can use pipes which behave like a posix terminal,
// but which are not a terminal handle
return terminal.IsTerminal(int(os.Stdout.Fd())) || stdoutCanUpdateStatus()
}
func stdoutCanUpdateStatus() bool {
return termstatus.CanUpdateStatus(os.Stdout.Fd())
}
func stdoutTerminalWidth() int {
@@ -159,7 +172,7 @@ func stdoutTerminalWidth() int {
// program execution must revert changes to the terminal configuration itself.
// The terminal configuration is only restored while reading a password.
func restoreTerminal() {
if !stdoutIsTerminal() {
if !terminal.IsTerminal(int(os.Stdout.Fd())) {
return
}
@@ -188,16 +201,21 @@ func restoreTerminal() {
}
// ClearLine creates a platform dependent string to clear the current
// line, so it can be overwritten. ANSI sequences are not supported on
// current windows cmd shell.
func ClearLine() string {
if runtime.GOOS == "windows" {
if w := stdoutTerminalWidth(); w > 0 {
return strings.Repeat(" ", w-1) + "\r"
}
return ""
// line, so it can be overwritten.
//
// w should be the terminal width, or 0 to let clearLine figure it out.
func clearLine(w int) string {
if runtime.GOOS != "windows" {
return "\x1b[2K"
}
return "\x1b[2K"
// ANSI sequences are not supported on Windows cmd shell.
if w <= 0 {
if w = stdoutTerminalWidth(); w <= 0 {
return ""
}
}
return strings.Repeat(" ", w-1) + "\r"
}
// Printf writes the message to the configured stdout stream.
@@ -238,31 +256,6 @@ func Verboseff(format string, args ...interface{}) {
}
}
// PrintProgress wraps fmt.Printf to handle the difference in writing progress
// information to terminals and non-terminal stdout
func PrintProgress(format string, args ...interface{}) {
var (
message string
carriageControl string
)
message = fmt.Sprintf(format, args...)
if !(strings.HasSuffix(message, "\r") || strings.HasSuffix(message, "\n")) {
if stdoutIsTerminal() {
carriageControl = "\r"
} else {
carriageControl = "\n"
}
message = fmt.Sprintf("%s%s", message, carriageControl)
}
if stdoutIsTerminal() {
message = fmt.Sprintf("%s%s", ClearLine(), message)
}
fmt.Print(message)
}
// Warnf writes the message to the configured stderr stream.
func Warnf(format string, args ...interface{}) {
_, err := fmt.Fprintf(globalOptions.stderr, format, args...)
@@ -364,7 +357,7 @@ func ReadPassword(opts GlobalOptions, prompt string) (string, error) {
}
if len(password) == 0 {
return "", errors.New("an empty password is not a password")
return "", errors.Fatal("an empty password is not a password")
}
return password, nil
@@ -492,7 +485,7 @@ func OpenRepository(opts GlobalOptions) (*repository.Repository, error) {
return s, nil
}
if c.Created && !opts.JSON {
if c.Created && !opts.JSON && stdoutIsTerminal() {
Verbosef("created new cache in %v\n", c.Base)
}
@@ -511,8 +504,9 @@ func OpenRepository(opts GlobalOptions) (*repository.Repository, error) {
// cleanup old cache dirs if instructed to do so
if opts.CleanupCache {
Printf("removing %d old cache dirs from %v\n", len(oldCacheDirs), c.Base)
if stdoutIsTerminal() && !opts.JSON {
Verbosef("removing %d old cache dirs from %v\n", len(oldCacheDirs), c.Base)
}
for _, item := range oldCacheDirs {
dir := filepath.Join(c.Base, item.Name())
err = fs.RemoveAll(dir)
@@ -563,6 +557,12 @@ func parseConfig(loc location.Location, opts options.Options) (interface{}, erro
cfg.Secret = os.Getenv("AWS_SECRET_ACCESS_KEY")
}
if cfg.KeyID == "" && cfg.Secret != "" {
return nil, errors.Fatalf("unable to open S3 backend: Key ID ($AWS_ACCESS_KEY_ID) is empty")
} else if cfg.KeyID != "" && cfg.Secret == "" {
return nil, errors.Fatalf("unable to open S3 backend: Secret ($AWS_SECRET_ACCESS_KEY) is empty")
}
if cfg.Region == "" {
cfg.Region = os.Getenv("AWS_DEFAULT_REGION")
}
@@ -682,6 +682,7 @@ func open(s string, gopts GlobalOptions, opts options.Options) (restic.Backend,
tropts := backend.TransportOptions{
RootCertFilenames: globalOptions.CACerts,
TLSClientCertKeyFilename: globalOptions.TLSClientCert,
InsecureTLS: globalOptions.InsecureTLS,
}
rt, err := backend.Transport(tropts)
if err != nil {
@@ -704,7 +705,7 @@ func open(s string, gopts GlobalOptions, opts options.Options) (restic.Backend,
case "azure":
be, err = azure.Open(cfg.(azure.Config), rt)
case "swift":
be, err = swift.Open(cfg.(swift.Config), rt)
be, err = swift.Open(globalOptions.ctx, cfg.(swift.Config), rt)
case "b2":
be, err = b2.Open(globalOptions.ctx, cfg.(b2.Config), rt)
case "rest":
@@ -762,6 +763,7 @@ func create(s string, opts options.Options) (restic.Backend, error) {
tropts := backend.TransportOptions{
RootCertFilenames: globalOptions.CACerts,
TLSClientCertKeyFilename: globalOptions.TLSClientCert,
InsecureTLS: globalOptions.InsecureTLS,
}
rt, err := backend.Transport(tropts)
if err != nil {
@@ -780,7 +782,7 @@ func create(s string, opts options.Options) (restic.Backend, error) {
case "azure":
return azure.Create(cfg.(azure.Config), rt)
case "swift":
return swift.Open(cfg.(swift.Config), rt)
return swift.Open(globalOptions.ctx, cfg.(swift.Config), rt)
case "b2":
return b2.Create(globalOptions.ctx, cfg.(b2.Config), rt)
case "rest":

View File

@@ -67,7 +67,7 @@ func isSymlink(fi os.FileInfo) bool {
func sameModTime(fi1, fi2 os.FileInfo) bool {
switch runtime.GOOS {
case "darwin", "freebsd", "openbsd", "netbsd":
case "darwin", "freebsd", "openbsd", "netbsd", "solaris":
if isSymlink(fi1) && isSymlink(fi2) {
return true
}

View File

@@ -15,6 +15,7 @@ import (
"regexp"
"runtime"
"strings"
"sync"
"syscall"
"testing"
"time"
@@ -158,8 +159,11 @@ func testRunDiffOutput(gopts GlobalOptions, firstSnapshotID string, secondSnapsh
buf := bytes.NewBuffer(nil)
globalOptions.stdout = buf
oldStdout := gopts.stdout
gopts.stdout = buf
defer func() {
globalOptions.stdout = os.Stdout
gopts.stdout = oldStdout
}()
opts := DiffOptions{
@@ -345,6 +349,57 @@ func testBackup(t *testing.T, useFsSnapshot bool) {
testRunCheck(t, env.gopts)
}
func TestDryRunBackup(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
testSetupBackupData(t, env)
opts := BackupOptions{}
dryOpts := BackupOptions{DryRun: true}
// dry run before first backup
testRunBackup(t, filepath.Dir(env.testdata), []string{"testdata"}, dryOpts, env.gopts)
snapshotIDs := testRunList(t, "snapshots", env.gopts)
rtest.Assert(t, len(snapshotIDs) == 0,
"expected no snapshot, got %v", snapshotIDs)
packIDs := testRunList(t, "packs", env.gopts)
rtest.Assert(t, len(packIDs) == 0,
"expected no data, got %v", packIDs)
indexIDs := testRunList(t, "index", env.gopts)
rtest.Assert(t, len(indexIDs) == 0,
"expected no index, got %v", indexIDs)
// first backup
testRunBackup(t, filepath.Dir(env.testdata), []string{"testdata"}, opts, env.gopts)
snapshotIDs = testRunList(t, "snapshots", env.gopts)
packIDs = testRunList(t, "packs", env.gopts)
indexIDs = testRunList(t, "index", env.gopts)
// dry run between backups
testRunBackup(t, filepath.Dir(env.testdata), []string{"testdata"}, dryOpts, env.gopts)
snapshotIDsAfter := testRunList(t, "snapshots", env.gopts)
rtest.Equals(t, snapshotIDs, snapshotIDsAfter)
dataIDsAfter := testRunList(t, "packs", env.gopts)
rtest.Equals(t, packIDs, dataIDsAfter)
indexIDsAfter := testRunList(t, "index", env.gopts)
rtest.Equals(t, indexIDs, indexIDsAfter)
// second backup, implicit incremental
testRunBackup(t, filepath.Dir(env.testdata), []string{"testdata"}, opts, env.gopts)
snapshotIDs = testRunList(t, "snapshots", env.gopts)
packIDs = testRunList(t, "packs", env.gopts)
indexIDs = testRunList(t, "index", env.gopts)
// another dry run
testRunBackup(t, filepath.Dir(env.testdata), []string{"testdata"}, dryOpts, env.gopts)
snapshotIDsAfter = testRunList(t, "snapshots", env.gopts)
rtest.Equals(t, snapshotIDs, snapshotIDsAfter)
dataIDsAfter = testRunList(t, "packs", env.gopts)
rtest.Equals(t, packIDs, dataIDsAfter)
indexIDsAfter = testRunList(t, "index", env.gopts)
rtest.Equals(t, indexIDs, indexIDsAfter)
}
func TestBackupNonExistingFile(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
@@ -374,10 +429,11 @@ func removePacksExcept(gopts GlobalOptions, t *testing.T, keep restic.IDSet, rem
// Get all tree packs
rtest.OK(t, r.LoadIndex(gopts.ctx))
treePacks := restic.NewIDSet()
for _, idx := range r.Index().(*repository.MasterIndex).All() {
for _, id := range idx.TreePacks() {
treePacks.Insert(id)
for pb := range r.Index().Each(context.TODO()) {
if pb.Type == restic.TreeBlob {
treePacks.Insert(pb.PackID)
}
}
@@ -436,11 +492,10 @@ func TestBackupTreeLoadError(t *testing.T) {
r, err := OpenRepository(env.gopts)
rtest.OK(t, err)
rtest.OK(t, r.LoadIndex(env.gopts.ctx))
// collect tree packs of subdirectory
subTreePacks := restic.NewIDSet()
for _, idx := range r.Index().(*repository.MasterIndex).All() {
for _, id := range idx.TreePacks() {
subTreePacks.Insert(id)
treePacks := restic.NewIDSet()
for pb := range r.Index().Each(context.TODO()) {
if pb.Type == restic.TreeBlob {
treePacks.Insert(pb.PackID)
}
}
@@ -448,7 +503,7 @@ func TestBackupTreeLoadError(t *testing.T) {
testRunCheck(t, env.gopts)
// delete the subdirectory pack first
for id := range subTreePacks {
for id := range treePacks {
rtest.OK(t, r.Backend().Remove(env.gopts.ctx, restic.Handle{Type: restic.PackFile, Name: id.String()}))
}
testRunRebuildIndex(t, env.gopts)
@@ -799,6 +854,25 @@ func TestCopyIncremental(t *testing.T) {
len(copiedSnapshotIDs), len(snapshotIDs))
}
func TestCopyUnstableJSON(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
env2, cleanup2 := withTestEnvironment(t)
defer cleanup2()
// contains a symlink created using `ln -s '../i/'$'\355\246\361''d/samba' broken-symlink`
datafile := filepath.Join("testdata", "copy-unstable-json.tar.gz")
rtest.SetupTarTestFixture(t, env.base, datafile)
testRunInit(t, env2.gopts)
testRunCopy(t, env.gopts, env2.gopts)
testRunCheck(t, env2.gopts)
copiedSnapshotIDs := testRunList(t, "snapshots", env2.gopts)
rtest.Assert(t, 1 == len(copiedSnapshotIDs), "still expected %v snapshot, found %v",
1, len(copiedSnapshotIDs))
}
func TestInitCopyChunkerParams(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
@@ -1014,6 +1088,41 @@ func TestKeyAddRemove(t *testing.T) {
testRunKeyAddNewKeyUserHost(t, env.gopts)
}
type emptySaveBackend struct {
restic.Backend
}
func (b *emptySaveBackend) Save(ctx context.Context, h restic.Handle, rd restic.RewindReader) error {
return b.Backend.Save(ctx, h, restic.NewByteReader([]byte{}, nil))
}
func TestKeyProblems(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
testRunInit(t, env.gopts)
env.gopts.backendTestHook = func(r restic.Backend) (restic.Backend, error) {
return &emptySaveBackend{r}, nil
}
testKeyNewPassword = "geheim2"
defer func() {
testKeyNewPassword = ""
}()
err := runKey(env.gopts, []string{"passwd"})
t.Log(err)
rtest.Assert(t, err != nil, "expected passwd change to fail")
err = runKey(env.gopts, []string{"add"})
t.Log(err)
rtest.Assert(t, err != nil, "expected key adding to fail")
t.Logf("testing access with initial password %q\n", env.gopts.password)
rtest.OK(t, runKey(env.gopts, []string{"list"}))
testRunCheck(t, env.gopts)
}
func testFileSize(filename string, size int64) error {
fi, err := os.Stat(filename)
if err != nil {
@@ -1311,7 +1420,7 @@ func TestFindJSON(t *testing.T) {
rtest.Assert(t, matches[0].Hits == 3, "expected hits to show 3 matches (%v)", datafile)
}
func TestRebuildIndex(t *testing.T) {
func testRebuildIndex(t *testing.T, backendTestHook backendWrapper) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
@@ -1331,8 +1440,10 @@ func TestRebuildIndex(t *testing.T) {
t.Fatalf("did not find hint for rebuild-index command")
}
env.gopts.backendTestHook = backendTestHook
testRunRebuildIndex(t, env.gopts)
env.gopts.backendTestHook = nil
out, err = testRunCheckOutput(env.gopts)
if len(out) != 0 {
t.Fatalf("expected no output from the checker, got: %v", out)
@@ -1343,9 +1454,57 @@ func TestRebuildIndex(t *testing.T) {
}
}
func TestRebuildIndex(t *testing.T) {
testRebuildIndex(t, nil)
}
func TestRebuildIndexAlwaysFull(t *testing.T) {
indexFull := repository.IndexFull
defer func() {
repository.IndexFull = indexFull
}()
repository.IndexFull = func(*repository.Index) bool { return true }
TestRebuildIndex(t)
testRebuildIndex(t, nil)
}
// indexErrorBackend modifies the first index after reading.
type indexErrorBackend struct {
restic.Backend
lock sync.Mutex
hasErred bool
}
func (b *indexErrorBackend) Load(ctx context.Context, h restic.Handle, length int, offset int64, consumer func(rd io.Reader) error) error {
return b.Backend.Load(ctx, h, length, offset, func(rd io.Reader) error {
// protect hasErred
b.lock.Lock()
defer b.lock.Unlock()
if !b.hasErred && h.Type == restic.IndexFile {
b.hasErred = true
return consumer(errorReadCloser{rd})
}
return consumer(rd)
})
}
type errorReadCloser struct {
io.Reader
}
func (erd errorReadCloser) Read(p []byte) (int, error) {
n, err := erd.Reader.Read(p)
if n > 0 {
p[0] ^= 1
}
return n, err
}
func TestRebuildIndexDamage(t *testing.T) {
testRebuildIndex(t, func(r restic.Backend) (restic.Backend, error) {
return &indexErrorBackend{
Backend: r,
}, nil
})
}
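
The errorReadCloser above injects corruption by flipping a single bit in the first byte of data it reads, which is enough to make the damaged index fail verification. A minimal, self-contained sketch of the same idea using only the standard library (the names bitFlipReader and the sample input are illustrative, not part of restic):

package main

import (
    "fmt"
    "io"
    "strings"
)

// bitFlipReader wraps an io.Reader and flips the lowest bit of the
// first byte of every read, simulating silent data corruption.
type bitFlipReader struct {
    io.Reader
}

func (r bitFlipReader) Read(p []byte) (int, error) {
    n, err := r.Reader.Read(p)
    if n > 0 {
        p[0] ^= 1
    }
    return n, err
}

func main() {
    rd := bitFlipReader{strings.NewReader("hello")}
    out, _ := io.ReadAll(rd)
    fmt.Printf("%q\n", out) // "iello": 'h' (0x68) became 'i' (0x69)
}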
type appendOnlyBackend struct {
@@ -1588,7 +1747,7 @@ func testEdgeCaseRepo(t *testing.T, tarfile string, optionsCheck CheckOptions, o
// a listOnceBackend only allows listing once per filetype
// listing filetypes more than once may cause problems with eventually consistent
// backends (like e.g. AWS S3) as the second listing may be inconsistent to what
// backends (like e.g. Amazon S3) as the second listing may be inconsistent to what
// is expected by the first listing + some operations.
type listOnceBackend struct {
restic.Backend
@@ -1816,10 +1975,8 @@ var diffOutputRegexPatterns = []string{
"Removed: +2[0-9]{2}\\.[0-9]{3} KiB",
}
func TestDiff(t *testing.T) {
func setupDiffRepo(t *testing.T) (*testEnvironment, func(), string, string) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
testRunInit(t, env.gopts)
datadir := filepath.Join(env.base, "testdata")
@@ -1855,19 +2012,82 @@ func TestDiff(t *testing.T) {
testRunBackup(t, "", []string{datadir}, opts, env.gopts)
_, secondSnapshotID := lastSnapshot(snapshots, loadSnapshotMap(t, env.gopts))
return env, cleanup, firstSnapshotID, secondSnapshotID
}
func TestDiff(t *testing.T) {
env, cleanup, firstSnapshotID, secondSnapshotID := setupDiffRepo(t)
defer cleanup()
// quiet suppresses the diff output except for the summary
env.gopts.Quiet = false
_, err := testRunDiffOutput(env.gopts, "", secondSnapshotID)
rtest.Assert(t, err != nil, "expected error on invalid snapshot id")
out, err := testRunDiffOutput(env.gopts, firstSnapshotID, secondSnapshotID)
if err != nil {
t.Fatalf("expected no error from diff for test repository, got %v", err)
}
rtest.OK(t, err)
for _, pattern := range diffOutputRegexPatterns {
r, err := regexp.Compile(pattern)
rtest.Assert(t, err == nil, "failed to compile regexp %v", pattern)
rtest.Assert(t, r.MatchString(out), "expected pattern %v in output, got\n%v", pattern, out)
}
// check quiet output
env.gopts.Quiet = true
outQuiet, err := testRunDiffOutput(env.gopts, firstSnapshotID, secondSnapshotID)
rtest.OK(t, err)
rtest.Assert(t, len(outQuiet) < len(out), "expected shorter output on quiet mode %v vs. %v", len(outQuiet), len(out))
}
type typeSniffer struct {
MessageType string `json:"message_type"`
}
func TestDiffJSON(t *testing.T) {
env, cleanup, firstSnapshotID, secondSnapshotID := setupDiffRepo(t)
defer cleanup()
// quiet suppresses the diff output except for the summary
env.gopts.Quiet = false
env.gopts.JSON = true
out, err := testRunDiffOutput(env.gopts, firstSnapshotID, secondSnapshotID)
rtest.OK(t, err)
var stat DiffStatsContainer
var changes int
scanner := bufio.NewScanner(strings.NewReader(out))
for scanner.Scan() {
line := scanner.Text()
var sniffer typeSniffer
rtest.OK(t, json.Unmarshal([]byte(line), &sniffer))
switch sniffer.MessageType {
case "change":
changes++
case "statistics":
rtest.OK(t, json.Unmarshal([]byte(line), &stat))
default:
t.Fatalf("unexpected message type %v", sniffer.MessageType)
}
}
rtest.Equals(t, 9, changes)
rtest.Assert(t, stat.Added.Files == 2 && stat.Added.Dirs == 3 && stat.Added.DataBlobs == 2 &&
stat.Removed.Files == 1 && stat.Removed.Dirs == 2 && stat.Removed.DataBlobs == 1 &&
stat.ChangedFiles == 1, "unexpected statistics")
// check quiet output
env.gopts.Quiet = true
outQuiet, err := testRunDiffOutput(env.gopts, firstSnapshotID, secondSnapshotID)
rtest.OK(t, err)
stat = DiffStatsContainer{}
rtest.OK(t, json.Unmarshal([]byte(outQuiet), &stat))
rtest.Assert(t, stat.Added.Files == 2 && stat.Added.Dirs == 3 && stat.Added.DataBlobs == 2 &&
stat.Removed.Files == 1 && stat.Removed.Dirs == 2 && stat.Removed.DataBlobs == 1 &&
stat.ChangedFiles == 1, "unexpected statistics")
rtest.Assert(t, stat.SourceSnapshot == firstSnapshotID && stat.TargetSnapshot == secondSnapshotID, "unexpected snapshot ids")
}
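
The two-pass decoding in TestDiffJSON (peek at message_type first, then unmarshal the whole line according to its type) is a general pattern for consuming line-delimited JSON streams such as the output of restic diff --json. A standalone sketch of that approach, independent of restic's types; the payload field names in the sample input are placeholders, only message_type with the values "change" and "statistics" is taken from the test above:

package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "strings"
)

// envelope carries only the discriminator field shared by all messages.
type envelope struct {
    MessageType string `json:"message_type"`
}

func main() {
    input := `{"message_type":"change","path":"/a"}
{"message_type":"statistics","changed_files":1}`

    changes := 0
    var stats map[string]interface{}

    scanner := bufio.NewScanner(strings.NewReader(input))
    for scanner.Scan() {
        line := scanner.Bytes()
        var env envelope
        if err := json.Unmarshal(line, &env); err != nil {
            panic(err)
        }
        switch env.MessageType {
        case "change":
            changes++
        case "statistics":
            if err := json.Unmarshal(line, &stats); err != nil {
                panic(err)
            }
        default:
            fmt.Println("unexpected message type:", env.MessageType)
        }
    }
    fmt.Println(changes, stats)
}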
type writeToOnly struct {

View File

@@ -16,6 +16,7 @@ var globalLocks struct {
cancelRefresh chan struct{}
refreshWG sync.WaitGroup
sync.Mutex
sync.Once
}
func lockRepo(ctx context.Context, repo *repository.Repository) (*restic.Lock, error) {
@@ -27,6 +28,12 @@ func lockRepoExclusive(ctx context.Context, repo *repository.Repository) (*resti
}
func lockRepository(ctx context.Context, repo *repository.Repository, exclusive bool) (*restic.Lock, error) {
// make sure that a repository is unlocked properly and after cancel() was
// called by the cleanup handler in global.go
globalLocks.Do(func() {
AddCleanupHandler(unlockAll)
})
lockFn := restic.NewLock
if exclusive {
lockFn = restic.NewExclusiveLock
@@ -128,7 +135,3 @@ func unlockAll() error {
return nil
}
func init() {
AddCleanupHandler(unlockAll)
}
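
Moving the AddCleanupHandler(unlockAll) registration out of init() and behind the embedded sync.Once ties the registration to the first actual lock operation instead of package initialization. A minimal standalone sketch of that embed-and-Do pattern, assuming placeholder names (registerCleanup, acquire) rather than restic's API:

package main

import (
    "fmt"
    "sync"
)

// state guards shared data and embeds sync.Once so that one-time setup
// runs on first use rather than at package init time.
var state struct {
    sync.Mutex
    sync.Once
    handlers []func()
}

func registerCleanup(f func()) {
    state.Lock()
    defer state.Unlock()
    state.handlers = append(state.handlers, f)
}

func acquire(name string) {
    // Runs exactly once, no matter how many times acquire is called.
    state.Do(func() {
        registerCleanup(func() { fmt.Println("cleanup handler registered on first acquire") })
    })
    fmt.Println("acquired", name)
}

func main() {
    acquire("repo-a")
    acquire("repo-b") // Do is a no-op the second time
    for _, h := range state.handlers {
        h()
    }
}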

View File

@@ -4,15 +4,17 @@ import (
"fmt"
"os"
"strconv"
"strings"
"time"
"github.com/restic/restic/internal/ui/progress"
"github.com/restic/restic/internal/ui/termstatus"
)
// calculateProgressInterval returns the interval configured via RESTIC_PROGRESS_FPS
// or if unset returns an interval for 60fps on interactive terminals and 0 (=disabled)
// for non-interactive terminals
func calculateProgressInterval() time.Duration {
// for non-interactive terminals or when run using the --quiet flag
func calculateProgressInterval(show bool, json bool) time.Duration {
interval := time.Second / 60
fps, err := strconv.ParseFloat(os.Getenv("RESTIC_PROGRESS_FPS"), 64)
if err == nil && fps > 0 {
@@ -20,7 +22,7 @@ func calculateProgressInterval() time.Duration {
fps = 60
}
interval = time.Duration(float64(time.Second) / fps)
} else if !stdoutIsTerminal() {
} else if !json && !stdoutCanUpdateStatus() || !show {
interval = 0
}
return interval
@@ -31,7 +33,8 @@ func newProgressMax(show bool, max uint64, description string) *progress.Counter
if !show {
return nil
}
interval := calculateProgressInterval()
interval := calculateProgressInterval(show, false)
canUpdateStatus := stdoutCanUpdateStatus()
return progress.New(interval, max, func(v uint64, max uint64, d time.Duration, final bool) {
var status string
@@ -42,13 +45,36 @@ func newProgressMax(show bool, max uint64, description string) *progress.Counter
formatDuration(d), formatPercent(v, max), v, max, description)
}
if w := stdoutTerminalWidth(); w > 0 {
status = shortenStatus(w, status)
}
PrintProgress("%s", status)
printProgress(status, canUpdateStatus)
if final {
fmt.Print("\n")
}
})
}
func printProgress(status string, canUpdateStatus bool) {
w := stdoutTerminalWidth()
if w > 0 {
if w < 3 {
status = termstatus.Truncate(status, w)
} else {
status = termstatus.Truncate(status, w-3) + "..."
}
}
var carriageControl, clear string
if canUpdateStatus {
clear = clearLine(w)
}
if !(strings.HasSuffix(status, "\r") || strings.HasSuffix(status, "\n")) {
if canUpdateStatus {
carriageControl = "\r"
} else {
carriageControl = "\n"
}
}
_, _ = os.Stdout.Write([]byte(clear + status + carriageControl))
}
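
The carriage-control logic above determines whether successive status updates overwrite each other in place (terminal, "\r") or accumulate as separate lines (pipe or file, "\n"). A small stdlib-only sketch of the effect; restic decides this via stdoutCanUpdateStatus(), while here a command-line flag stands in for that check:

package main

import (
    "flag"
    "fmt"
    "os"
    "time"
)

// writeStatus mirrors the carriage-control choice: "\r" lets the next
// update overwrite the current line, "\n" keeps output line-oriented.
func writeStatus(status string, canUpdate bool) {
    ctrl := "\n"
    if canUpdate {
        ctrl = "\r"
    }
    _, _ = os.Stdout.WriteString(status + ctrl)
}

func main() {
    canUpdate := flag.Bool("terminal", false, "pretend stdout is an interactive terminal")
    flag.Parse()

    for i := 1; i <= 3; i++ {
        writeStatus(fmt.Sprintf("processed %d/3 items", i), *canUpdate)
        time.Sleep(200 * time.Millisecond)
    }
    if *canUpdate {
        fmt.Println() // move past the status line that was updated in place
    }
}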

View File

@@ -9,6 +9,7 @@ import (
type secondaryRepoOptions struct {
Repo string
RepositoryFile string
password string
PasswordFile string
PasswordCommand string
@@ -17,18 +18,25 @@ type secondaryRepoOptions struct {
func initSecondaryRepoOptions(f *pflag.FlagSet, opts *secondaryRepoOptions, repoPrefix string, repoUsage string) {
f.StringVarP(&opts.Repo, "repo2", "", os.Getenv("RESTIC_REPOSITORY2"), repoPrefix+" `repository` "+repoUsage+" (default: $RESTIC_REPOSITORY2)")
f.StringVarP(&opts.RepositoryFile, "repository-file2", "", os.Getenv("RESTIC_REPOSITORY_FILE2"), "`file` from which to read the "+repoPrefix+" repository location "+repoUsage+" (default: $RESTIC_REPOSITORY_FILE2)")
f.StringVarP(&opts.PasswordFile, "password-file2", "", os.Getenv("RESTIC_PASSWORD_FILE2"), "`file` to read the "+repoPrefix+" repository password from (default: $RESTIC_PASSWORD_FILE2)")
f.StringVarP(&opts.KeyHint, "key-hint2", "", os.Getenv("RESTIC_KEY_HINT2"), "key ID of key to try decrypting the "+repoPrefix+" repository first (default: $RESTIC_KEY_HINT2)")
f.StringVarP(&opts.PasswordCommand, "password-command2", "", os.Getenv("RESTIC_PASSWORD_COMMAND2"), "shell `command` to obtain the "+repoPrefix+" repository password from (default: $RESTIC_PASSWORD_COMMAND2)")
}
func fillSecondaryGlobalOpts(opts secondaryRepoOptions, gopts GlobalOptions, repoPrefix string) (GlobalOptions, error) {
if opts.Repo == "" {
return GlobalOptions{}, errors.Fatal("Please specify a " + repoPrefix + " repository location (--repo2)")
if opts.Repo == "" && opts.RepositoryFile == "" {
return GlobalOptions{}, errors.Fatal("Please specify a " + repoPrefix + " repository location (--repo2 or --repository-file2)")
}
if opts.Repo != "" && opts.RepositoryFile != "" {
return GlobalOptions{}, errors.Fatal("Options --repo2 and --repository-file2 are mutually exclusive, please specify only one")
}
var err error
dstGopts := gopts
dstGopts.Repo = opts.Repo
dstGopts.RepositoryFile = opts.RepositoryFile
dstGopts.PasswordFile = opts.PasswordFile
dstGopts.PasswordCommand = opts.PasswordCommand
dstGopts.KeyHint = opts.KeyHint

View File

@@ -0,0 +1,132 @@
package main
import (
"io/ioutil"
"path/filepath"
"testing"
rtest "github.com/restic/restic/internal/test"
)
// TestFillSecondaryGlobalOpts tests valid and invalid inputs to the fillSecondaryGlobalOpts function.
func TestFillSecondaryGlobalOpts(t *testing.T) {
//secondaryRepoTestCase defines a struct for test cases
type secondaryRepoTestCase struct {
Opts secondaryRepoOptions
DstGOpts GlobalOptions
}
// validSecondaryRepoTestCases is a list of test cases that must pass.
var validSecondaryRepoTestCases = []secondaryRepoTestCase{
{
// Test if Repo and Password are parsed correctly.
Opts: secondaryRepoOptions{
Repo: "backupDst",
password: "secretDst",
},
DstGOpts: GlobalOptions{
Repo: "backupDst",
password: "secretDst",
},
},
{
// Test if RepositoryFile and PasswordFile are parsed correctly.
Opts: secondaryRepoOptions{
RepositoryFile: "backupDst",
PasswordFile: "passwordFileDst",
},
DstGOpts: GlobalOptions{
RepositoryFile: "backupDst",
password: "secretDst",
PasswordFile: "passwordFileDst",
},
},
{
// Test if RepositoryFile and PasswordCommand are parsed correctly.
Opts: secondaryRepoOptions{
RepositoryFile: "backupDst",
PasswordCommand: "echo secretDst",
},
DstGOpts: GlobalOptions{
RepositoryFile: "backupDst",
password: "secretDst",
PasswordCommand: "echo secretDst",
},
},
}
// invalidSecondaryRepoTestCases is a list of test cases that must fail.
var invalidSecondaryRepoTestCases = []secondaryRepoTestCase{
{
// Test must fail on no repo given.
Opts: secondaryRepoOptions{},
},
{
// Test must fail as Repo and RepositoryFile are both given
Opts: secondaryRepoOptions{
Repo: "backupDst",
RepositoryFile: "backupDst",
},
},
{
// Test must fail as PasswordFile and PasswordCommand are both given
Opts: secondaryRepoOptions{
Repo: "backupDst",
PasswordFile: "passwordFileDst",
PasswordCommand: "notEmpty",
},
},
{
// Test must fail as PasswordFile does not exist
Opts: secondaryRepoOptions{
Repo: "backupDst",
PasswordFile: "NonExistingFile",
},
},
{
// Test must fail as PasswordCommand does not exist
Opts: secondaryRepoOptions{
Repo: "backupDst",
PasswordCommand: "notEmpty",
},
},
{
// Test must fail as no password is given.
Opts: secondaryRepoOptions{
Repo: "backupDst",
},
},
}
//gOpts defines the Global options used in the secondary repository tests
var gOpts = GlobalOptions{
Repo: "backupSrc",
RepositoryFile: "backupSrc",
password: "secretSrc",
PasswordFile: "passwordFileSrc",
}
// Create a temporary directory to hold the password file.
dir, cleanup := rtest.TempDir(t)
defer cleanup()
cleanup = rtest.Chdir(t, dir)
defer cleanup()
// Create a temporary password file.
err := ioutil.WriteFile(filepath.Join(dir, "passwordFileDst"), []byte("secretDst"), 0666)
rtest.OK(t, err)
// Test all valid cases
for _, testCase := range validSecondaryRepoTestCases {
DstGOpts, err := fillSecondaryGlobalOpts(testCase.Opts, gOpts, "destination")
rtest.OK(t, err)
rtest.Equals(t, DstGOpts, testCase.DstGOpts)
}
// Test all invalid cases
for _, testCase := range invalidSecondaryRepoTestCases {
_, err := fillSecondaryGlobalOpts(testCase.Opts, gOpts, "destination")
rtest.Assert(t, err != nil, "Expected error, but function did not return an error")
}
}
