Compare commits


450 Commits

Author SHA1 Message Date
Alexander Neumann
37d0e1fe58 Add version for 0.15.0 2023-01-12 20:51:19 +01:00
Alexander Neumann
da196aa43e Update manpages and auto-completion 2023-01-12 20:51:19 +01:00
Alexander Neumann
099774c2aa Generate CHANGELOG.md for 0.15.0 2023-01-12 20:50:45 +01:00
Alexander Neumann
cd2f53e3f9 Prepare changelog for 0.15.0 2023-01-12 20:50:44 +01:00
Alexander Neumann
0c5a55d1bd Merge pull request #4142 from restic/update-deps
Upgrade dependencies
2023-01-12 20:23:25 +01:00
Michael Eischer
9ddca65f6d update dependencies 2023-01-11 23:22:10 +01:00
Michael Eischer
06fee601bc Merge pull request #4141 from restic/doc-no-scan-correction
doc: Correct heading level for --no-scan
2023-01-11 22:01:05 +01:00
Leo R. Lundgren
1cb920cc57 doc: Correct heading level for --no-scan 2023-01-11 21:47:38 +01:00
Michael Eischer
8f53ffb921 Merge pull request #4140 from restic/doc-manual
doc: Update manual page with --no-scan
2023-01-11 21:42:42 +01:00
Michael Eischer
351cbb4f94 Merge pull request #4139 from restic/doc-no-scan
doc: Move and update documentation for --no-scan
2023-01-11 21:41:20 +01:00
Leo R. Lundgren
b8b5508d15 doc: Update manual page with --no-scan 2023-01-11 02:27:43 +01:00
Leo R. Lundgren
c5542ddcd2 doc: Move and update documentation for --no-scan 2023-01-11 00:09:24 +01:00
rawtaz
dffb8e0c14 Merge pull request #4138 from MichaelEischer/doc-sparse
doc: add description for restore --sparse
2023-01-10 23:39:51 +01:00
Michael Eischer
b151fa498a doc: add description for restore --sparse 2023-01-10 23:27:42 +01:00
Michael Eischer
c354b55e62 Merge pull request #4136 from restic/doc-small-files
doc: Clarify text about tuning backups for small files
2023-01-08 22:00:09 +01:00
Michael Eischer
375953a001 Merge pull request #4088 from MichaelEischer/doc-cifs-backup-source
doc: reading from CIFS can be a problem on Linux
2023-01-08 21:56:48 +01:00
Leo R. Lundgren
6306797238 doc: Clarify text about tuning backups for small files 2023-01-08 21:49:55 +01:00
Michael Eischer
ef9164fcbb doc: reading from CIFS can be a problem on Linux 2023-01-08 21:47:34 +01:00
rawtaz
e2bcfd68dd Merge pull request #4135 from restic/changelogs
Polish changelogs
2023-01-08 20:04:24 +01:00
Leo R. Lundgren
33fb351386 Polish changelogs 2023-01-08 19:48:51 +01:00
Michael Eischer
c9840da4f8 Merge pull request #4134 from MichaelEischer/fix-verbose-help-text
Correct maximum verbosity level in help message
2023-01-07 22:24:04 +01:00
Michael Eischer
732184a849 Correct maximum verbosity level in help message
The maximum for `--verbose=n` is n=2. Internally it is translated into a
scale from 0 to 3. However, the default (without verbose) is 1, so the
verbosity level can only be increased twice.
2023-01-07 22:02:13 +01:00
Michael Eischer
24178c97e9 Merge pull request #4117 from MichaelEischer/prune-dry-run-help
prune: make it clearer when prune is used in dry-run mode
2023-01-04 23:18:53 +01:00
Michael Eischer
7a36306901 forget: Clarify log message for --dry-run --prune 2023-01-04 00:44:46 +01:00
Michael Eischer
b404ad4eaa prune: make it clearer when prune is used in dry-run mode 2023-01-04 00:44:46 +01:00
Michael Eischer
e02a10c58a Merge pull request #4109 from MichaelEischer/fix-prune-uncompressed-accounting
prune: Fix calculation of remaining uncompressed data
2023-01-03 23:28:10 +01:00
Michael Eischer
81dc8c8d13 prune: Fix calculation of remaining uncompressed data
Only the repacking of *un*compressed packs reduces the amount of
uncompressed data. Previously the counter even overflowed for fully
compressed repositories.
2023-01-03 22:34:36 +01:00
Michael Eischer
89a8006578 Merge pull request #4104 from philaris/fix_max_uint32_uid_gid_to_zero
in tar dump, convert uid, gid of value -1 to zero
2023-01-02 22:28:28 +01:00
Panagiotis Cheilaris
3b516d4b70 convert uid/gid -1 to 0 only in 32-bit tar dump
Only for a 32-bit build of restic, convert a uid or gid value of -1 to 0.
2022-12-30 18:12:12 +01:00
Michael Eischer
0de3b24756 Merge pull request #4110 from MichaelEischer/remove-exitf
Remove Exitf function
2022-12-29 12:07:51 +01:00
Michael Eischer
0fbff39ae8 Merge pull request #4108 from MichaelEischer/cleanup-check-output
Cleanup check output
2022-12-29 11:59:18 +01:00
Michael Eischer
68b1f30733 Remove Exitf function
Commands should use the normal shutdown path. In addition, the Exitf
function was only used by `dump` and `restore` but not by any other
command, which introduced the risk of inconsistent behavior.
2022-12-28 21:42:38 +01:00
Panagiotis Cheilaris
a86a56cf3b fix lint issue with function name 'tarId'
See https://github.com/golang/lint/issues/89 and
https://github.com/golang/lint/issues/124
2022-12-28 18:46:58 +01:00
Panagiotis Cheilaris
050ed616ae be more explicit with uid or gid of value -1 2022-12-28 18:44:36 +01:00
Michael Eischer
8430399fce check: Partially fix garbled output
When reporting an error for a tree, the output message can overlap with
the progress bar output, e.g. `error for tree e91ef6fb:napshots`.

The fix only applies to this specific message and does not work on
Windows.
2022-12-28 17:47:27 +01:00
Michael Eischer
aea96b7d86 check: Slightly improve help message
If a repository has both pack/index related warnings and errors, then
the help message is quite misleading. Reword it slightly to be
clearer.
2022-12-28 17:46:06 +01:00
Michael Eischer
bcae28afb4 Merge pull request #4100 from klemensn/tag-self-update
Reinstate selfupdate tag to make builds without self-update work
2022-12-28 15:49:11 +01:00
Panagiotis Cheilaris
10fa5cde0a in tar dump, convert uid, gid of value -1 to zero 2022-12-27 16:36:04 +01:00
Klemens Nanni
61e7386384 Bugfix: Make distribution package builds without self-update work 2022-12-26 21:52:24 +04:00
Klemens Nanni
94f6e7d4a6 Reinstate selfupdate tag to make builds without self-update work
Revert what seems to be a typo introduced as part of the fix for #2041
in 2018 (commit 7d0f2eaf24).

`xbuild` does not look like a Go build tag keyword to me; I failed to
find documentation for it, and using `go install -tags '!selfupdate' ...`
has no effect, i.e. the self-update code is still compiled.

`+build`, however, works; updating the OpenBSD port/binary package
security/restic to apply this PR works as expected:

```
	$ restic help | grep self
	$ restic self-update
	unknown command "self-update" for "restic"
```

(Using `go:build` now as per restic's style and gofmt.)

Previously, using `restic-0.14.0p1` on OpenBSD/amd64 7.2-current would
check for a newer version and probably attempt to replace the system-wide
root-owned executable (on a read-only filesystem) as an unprivileged user:

```
	$ restic version
	restic 0.14.0 compiled with go1.19.2 on openbsd/amd64
	$ restic help | grep self
	  self-update   Update the restic binary
	$ restic self-update
	writing restic to /usr/local/bin/restic
	find latest release of restic at GitHub
	restic is up to date
```

(It never tried to actually write to said path; doing so would fail, so
the current message can be considered misleading.)
2022-12-26 21:46:22 +04:00
Michael Eischer
90fb6f70b4 Merge pull request #4089 from greatroar/errors
Clean up error handling further
2022-12-24 10:41:56 +01:00
Michael Eischer
29b8500254 Merge pull request #4090 from restic/upgrade-dependencies
Upgrade dependencies
2022-12-23 22:34:39 +01:00
Michael Eischer
705cabb304 Merge pull request #3981 from MichaelEischer/prune-uncompressed-stats
prune: report how much data must be repacked to compress the repo
2022-12-23 22:34:04 +01:00
Michael Eischer
a6f3ae5790 Merge pull request #4094 from googol42/master
remove duplicated init
2022-12-23 22:33:01 +01:00
Andreas Dominik Preikschat
ea37240597 remove duplicated init
the documentation contained the `init` command twice
2022-12-20 17:24:56 +01:00
Michael Eischer
bd2f6aaac3 azure: downgrade azblob dependency due to build breakages on Solaris 2022-12-17 23:35:07 +01:00
Michael Eischer
583372956b Upgrade dependencies
Nothing special has changed.
2022-12-17 15:23:11 +01:00
greatroar
1678392a6d checker: Make ErrLegacyLayout a value, not a type 2022-12-17 09:41:07 +01:00
greatroar
d9002f050e backend: Don't Wrap errors from url.Parse
The messages from url.Error.Error already start with the word "parse".
2022-12-17 09:41:07 +01:00
greatroar
b150dd0235 all: Replace some errors.Wrap calls by errors.WithStack
Mostly changed the ones that repeat the name of a system call, which is
already contained in os.PathError.Op. internal/fs.Reader had to be
changed to actually return such errors.
2022-12-17 09:41:07 +01:00
Michael Eischer
cccc17e4e9 Merge pull request #4086 from blackpiglet/modify_access_denied_code
Fix: change error code in function isAccessDenied to AccessDenied
2022-12-16 21:55:52 +01:00
Michael Eischer
2723159ed4 Merge pull request #3931 from kjetilho/feature/optional_scanner
add --no-scan to backup command
2022-12-16 21:42:22 +01:00
Xun Jiang/Bruce Jiang
ecc62c8be2 Update changelog/unreleased/issue-4085
Co-authored-by: greatroar <61184462+greatroar@users.noreply.github.com>
Signed-off-by: Xun Jiang <blackpiglet@gmail.com>
2022-12-16 21:41:16 +01:00
Xun Jiang
cc5325d22b Fix: change error code in function isAccessDenied to AccessDenied
Signed-off-by: Xun Jiang <blackpiglet@gmail.com>
2022-12-16 21:41:16 +01:00
Michael Eischer
da0e45cf40 Merge pull request #4083 from greatroar/cleanup
repository: Remove empty cleanup functions in tests
2022-12-16 21:39:30 +01:00
Kjetil Torgrim Homme
14aa6f2a00 add --disable-scanner to backup command
The scanner process has only cosmetic effect for the progress printer,
and can be disabled without impacting functionality when the user does
not need an estimate of completion.

In many cases the scanner process can provide beneficial priming of
the file system cache, so as general advice it should not be disabled.
However, tests have shown that backup of NFS and fuse based filesystems,
where stat(2) is relatively expensive, can be significantly faster
without the scanner.
2022-12-16 21:29:59 +01:00
Michael Eischer
7bdb985dde Merge pull request #4079 from MichaelEischer/rewrite-set-original
rewrite: Always set the Original field in a rewritten snapshot
2022-12-13 22:56:20 +01:00
Michael Eischer
1bfe98bdc0 Merge pull request #2398 from DanielG/b2-hide-file
b2: Fallback to b2_hide_file when delete returns unauthorized
2022-12-13 22:52:23 +01:00
Michael Eischer
1c071a462e Merge pull request #4084 from ekarlso/azure-stat-fix
fix: Make create not error out when ContainerNotFound
2022-12-13 22:49:05 +01:00
Michael Eischer
25d22d5241 Merge pull request #4082 from MichaelEischer/unbuffered-logger-for-testing
Don't buffer the golang `log` package output when running tests
2022-12-13 22:45:50 +01:00
Endre Karlson
7dd33c0ecc azure: Make create not error out when ContainerNotFound 2022-12-11 22:57:23 +01:00
greatroar
c0b5ec55ab repository: Remove empty cleanup functions in tests
TestRepository and its variants always returned no-op cleanup functions.
If they ever do need to do cleanup, using testing.T.Cleanup is easier
than passing these functions around.
2022-12-11 11:06:25 +01:00
Michael Eischer
2e3d4640be Don't buffer the golang log output when running tests 2022-12-10 16:08:27 +01:00
Michael Eischer
38b2e9b42c rewrite: Always set the Original field in a rewritten snapshot
The Original field is meant to remember the original snapshot id when e.g.
changing its tags. It was only set by the `rewrite` command if it was
not set previously. However, a rewritten snapshot is potentially rather
different from the original snapshot. Thus just always set the Original
field. This also makes it easier to later on detect and potentially
remove the original snapshots.
2022-12-10 12:47:00 +01:00
Michael Eischer
049a105ba5 Merge pull request #4077 from greatroar/cleanup
test: Use testing.T.Cleanup to remove tempdirs
2022-12-09 22:17:46 +01:00
Michael Eischer
4b98b5562d Merge pull request #4075 from greatroar/sftp-enospc
sftp: Fix ENOSPC check
2022-12-09 22:00:13 +01:00
greatroar
f90bf84ba7 test: Use testing.T.Cleanup to remove tempdirs 2022-12-09 14:23:55 +01:00
greatroar
83d23b3ae8 Changelog for ENOSPC handling bug 2022-12-09 08:50:30 +01:00
Michael Eischer
eae7366563 Merge pull request #4028 from ekarlso/use-az-blob-sdk
Switch to azblob sdk
2022-12-07 21:58:03 +01:00
Endre Karlson
25648e2501 azure: Switch to azblob sdk 2022-12-07 21:46:07 +01:00
greatroar
62520bb7b4 sftp: Fix ENOSPC check
We now check for space that is not reserved for the root user on the
remote, and the check is no longer in a defer block because it wouldn't
fire. Some change in the surrounding code may have led the deferred
function to capture the wrong err variable.

Fixes #3336.
2022-12-07 21:06:46 +01:00
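The pitfall named in this commit is a classic Go gotcha. A minimal, hypothetical sketch (not restic's actual sftp code) of a deferred closure capturing the wrong `err`:

```
package main

import "errors"

type file struct{}

func open() (*file, error)   { return &file{}, nil }
func (f *file) write() error { return errors.New("no space left on device") }
func checkFreeSpace()        { println("write failed: checking remote free space") }

func save() error {
	f, err := open()
	if err != nil {
		return err
	}
	defer func() {
		// BUG: this closure captured the err from open() above. The
		// write error below goes into a new, shadowing variable, so
		// the deferred check never fires.
		if err != nil {
			checkFreeSpace()
		}
	}()
	if err := f.write(); err != nil { // := shadows the outer err
		return err
	}
	return nil
}

func main() { _ = save() }
```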
rawtaz
4ba31df08f Merge pull request #4074 from greatroar/lobaro-docker
doc: Remove ref to Lobaro's Docker image
2022-12-04 18:00:08 +01:00
greatroar
5efcbe143c doc: Remove ref to Lobaro's Docker image
It hasn't been updated for a while and has restic 0.12.0. Fixes #4002.
2022-12-04 16:20:42 +01:00
Michael Eischer
0df585dd99 Merge pull request #4066 from sedlund/fix#4033
fix#4033 cmd: copy no longer lists skipped existing snapshots by default
2022-12-03 19:22:11 +01:00
Michael Eischer
223da7344e Merge pull request #4070 from restic/fix-cloud-tests
Fix cloud tests
2022-12-03 19:21:25 +01:00
Michael Eischer
2b67862420 backend/test: check that IsNotExist actually works 2022-12-03 18:56:55 +01:00
Michael Eischer
2f934f5803 gs: check against the correct error in IsNotExist 2022-12-03 18:49:54 +01:00
Michael Eischer
04d101fa94 gs/s3: remove useless os.IsNotExist check 2022-12-03 18:49:54 +01:00
Michael Eischer
579cd6dc64 azure: fix totally broken IsNotExist 2022-12-03 18:49:54 +01:00
Michael Eischer
3ebdadc58f Merge pull request #4069 from greatroar/cleanup
cache, prune, restic: Cleanup
2022-12-03 17:50:52 +01:00
Michael Eischer
bc8b2455b9 Merge pull request #4064 from MichaelEischer/flaky-abort-early-on-error
archiver: Fix flaky TestArchiverAbortEarlyOnError
2022-12-03 17:43:44 +01:00
Michael Eischer
60c6a09324 Merge pull request #4065 from MichaelEischer/flaky-rclone-failed-start
rclone: treat "file already closed" as command startup error
2022-12-03 17:42:56 +01:00
Michael Eischer
8bf6b2b80d Merge pull request #4067 from MichaelEischer/remove-backend-test-method
Remove `Test()` method from Backend
2022-12-03 17:40:55 +01:00
Michael Eischer
78ea69082a Merge pull request #4068 from MichaelEischer/debug-lock-refresh-test
Add more debug logging to `TestLockSuccessfulRefresh`
2022-12-03 17:38:21 +01:00
Scott Edlund
cbe73ace3f Update changelog/unreleased/issue-4033
Co-authored-by: greatroar <61184462+greatroar@users.noreply.github.com>
2022-12-03 20:07:37 +08:00
greatroar
63bed34608 restic: Clean up restic.IDs type
IDs.Less can be rewritten as

	string(list[i][:]) < string(list[j][:])

Note that this does not copy the IDs.

The Uniq method was no longer used.

The String method has been reimplemented without first copying into a
separate slice of a custom type.
2022-12-03 12:38:20 +01:00
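For context, this comparison relies on a compiler optimization: converting a byte array to a string inside a comparison does not allocate. A minimal sketch, assuming an ID is a fixed-size byte array as in restic:

```
package main

import "sort"

type ID [32]byte
type IDs []ID

func (l IDs) Len() int      { return len(l) }
func (l IDs) Swap(i, j int) { l[i], l[j] = l[j], l[i] }

// Less compares two IDs byte-wise. The string conversions do not
// allocate: the compiler sees that the strings cannot escape the
// comparison expression.
func (l IDs) Less(i, j int) bool {
	return string(l[i][:]) < string(l[j][:])
}

func main() {
	ids := IDs{{3}, {1}, {2}}
	sort.Sort(ids)
}
```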
greatroar
0c749dd358 prune: Pass fewer options around 2022-12-03 12:14:04 +01:00
greatroar
d45a2475e1 cache: Rewrite unnecessary if-else 2022-12-03 12:13:54 +01:00
Michael Eischer
6b5d6b9f2c Add more debug logging to TestLockSuccessfulRefresh
The test fails from time to time. Add some more logging to hopefully get
an idea where things go wrong.
2022-12-03 12:05:38 +01:00
Michael Eischer
648edeca40 retry: Do not retry Stat() if file does not exist
In non test/debug code, Stat() is used exclusively to check whether a
file exists. Thus, do not retry if a file is reported as not existing.
2022-12-03 11:42:48 +01:00
Michael Eischer
40ac678252 backend: remove Test method
The Test method was only used in exactly one place: when trying
to create a new repository, it was used to check whether a config file
already exists.

Use a combination of Stat() and IsNotExist() instead.
2022-12-03 11:28:10 +01:00
sedlund
06ee0339aa fix#4033 cmd: copy no longer lists skipped existing snapshots by default 2022-12-03 09:55:39 +08:00
Michael Eischer
57d8eedb88 Merge pull request #4020 from greatroar/fuse-inode
fuse: Better inode generation
2022-12-02 22:28:15 +01:00
Michael Eischer
ca1803cacb Merge pull request #4063 from MichaelEischer/replace-ioutil-usage
Replace ioutil usage
2022-12-02 21:49:40 +01:00
Michael Eischer
0af89a5738 Merge pull request #3132 from metalsp0rk/init-json
Init command JSON output
2022-12-02 21:49:22 +01:00
Michael Eischer
364a396fd6 init: use standard name message_type to distinguish JSON messages 2022-12-02 21:33:03 +01:00
Michael Eischer
9a9f559806 init: cleanup json print code 2022-12-02 21:33:03 +01:00
Kyle Brennan
933c9af328 create changelog entry for issue-3124 and pull-3132 2022-12-02 21:32:30 +01:00
Kyle Brennan
a6ae79b39e support json output for init command 2022-12-02 21:32:30 +01:00
Michael Eischer
f3d964a8c1 rclone: treat "file already closed" as command startup error
Since #3940 the rclone backend returns the command's exit code if it
fails to start. The list of expected errors was missing the "file
already closed" error, which can occur if the HTTP test request first
learns about the closed pipe to rclone before noticing the canceled
context.

Go internally makes sure that a file descriptor is unusable once it was
closed, thus this cannot have unintended side effects (like accidentally
reading from the wrong file due to a reused file descriptor).
2022-12-02 20:46:02 +01:00
Michael Eischer
a9972dbe7d archiver: Fix flaky TestArchiverAbortEarlyOnError
Saving the blobs of a file now happens asynchronously to the
processing in the FileSaver. Thus we have to account for the blobs
queued for saving.
2022-12-02 20:07:34 +01:00
Michael Eischer
f755233210 Replace usages of ioutil.ReadDir
This changes the return type to []fs.DirEntry. However, as we only use
the filenames anyway, this doesn't make a difference.
2022-12-02 19:54:27 +01:00
Michael Eischer
fa20a78bb6 Merge pull request #4056 from greatroar/cleanup
backend, fs, options: Minor cleanup
2022-12-02 19:44:54 +01:00
Michael Eischer
ff7ef5007e Replace most usages of ioutil with the underlying function
The ioutil functions are deprecated since Go 1.17 and only wrap another
library function. Thus directly call the underlying function.

This commit only mechanically replaces the function calls.
2022-12-02 19:36:43 +01:00
greatroar
65612d797c backend, options: Prefer strings.Cut to SplitN
Also realigned the various "split into host:bucket:prefix"
implementations.
2022-12-02 19:19:14 +01:00
Michael Eischer
2d5e28e777 Merge pull request #4059 from restic/dependabot/go_modules/github.com/minio/minio-go/v7-7.0.45
build(deps): bump github.com/minio/minio-go/v7 from 7.0.44 to 7.0.45
2022-12-01 21:20:57 +01:00
dependabot[bot]
4fefa2ade2 build(deps): bump github.com/minio/minio-go/v7 from 7.0.44 to 7.0.45
Bumps [github.com/minio/minio-go/v7](https://github.com/minio/minio-go) from 7.0.44 to 7.0.45.
- [Release notes](https://github.com/minio/minio-go/releases)
- [Commits](https://github.com/minio/minio-go/compare/v7.0.44...v7.0.45)

---
updated-dependencies:
- dependency-name: github.com/minio/minio-go/v7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-12-01 01:03:42 +00:00
Michael Eischer
3c5d1eabe9 Merge pull request #4051 from restic/dependabot/go_modules/github.com/klauspost/compress-1.15.12
build(deps): bump github.com/klauspost/compress from 1.15.9 to 1.15.12
2022-11-28 21:47:55 +01:00
Michael Eischer
bec391ee26 Merge pull request #4053 from greatroar/xattr
Upgrade pkg/xattr to version with Solaris FIFO fix
2022-11-28 21:06:19 +01:00
greatroar
daafcaf380 Upgrade pkg/xattr to version with Solaris FIFO fix
This version doesn't have a release tag yet, but it's 0.4.9 + one patch.

Fixes #4003.
2022-11-28 20:43:51 +01:00
Alexander Neumann
1d7e7fcd6b Merge pull request #4049 from MichaelEischer/fix-rewrite-docs
rewrite: fix link anchors in documentation
2022-11-28 19:39:38 +01:00
dependabot[bot]
57d59c71e3 build(deps): bump github.com/klauspost/compress from 1.15.9 to 1.15.12
Bumps [github.com/klauspost/compress](https://github.com/klauspost/compress) from 1.15.9 to 1.15.12.
- [Release notes](https://github.com/klauspost/compress/releases)
- [Changelog](https://github.com/klauspost/compress/blob/master/.goreleaser.yml)
- [Commits](https://github.com/klauspost/compress/compare/v1.15.9...v1.15.12)

---
updated-dependencies:
- dependency-name: github.com/klauspost/compress
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-28 18:39:35 +00:00
Alexander Neumann
bb83c78ee5 Merge pull request #4047 from MichaelEischer/clean-ci-configuration
Cleanup CI configuration
2022-11-28 19:38:33 +01:00
greatroar
60aa87bbab fs: Remove explicit type check in extendedStat
Without comma-ok, the runtime inserts the same check with a similar
enough panic message:

    interface conversion: interface {} is nil, not *syscall.Stat_t
2022-11-27 19:58:06 +01:00
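A short illustration of the comma-ok point, with a simplified stand-in for extendedStat:

```
package main

import "syscall"

// Simplified stand-in for extendedStat: a plain type assertion already
// panics with "interface conversion: interface {} is nil, not
// *syscall.Stat_t" on bad input, so a manual comma-ok check before it
// is redundant.
func statFromSys(sys interface{}) *syscall.Stat_t {
	return sys.(*syscall.Stat_t) // the runtime inserts the check
}

func main() {
	_ = statFromSys(&syscall.Stat_t{})
}
```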
Michael Eischer
34609bca0e Merge pull request #4050 from greatroar/lruv2
bloblru: Upgrade to hashicorp/golang-lru/v2
2022-11-27 17:37:14 +01:00
greatroar
e5d597fd22 bloblru: Upgrade to hashicorp/golang-lru/v2
The new genericized LRU cache no longer needs to have the IDs separately
allocated:

name   old time/op    new time/op    delta
Add-8     494ns ± 2%     388ns ± 2%  -21.46%  (p=0.000 n=10+9)

name   old alloc/op   new alloc/op   delta
Add-8      176B ± 0%      152B ± 0%  -13.64%  (p=0.000 n=10+10)

name   old allocs/op  new allocs/op  delta
Add-8      5.00 ± 0%      3.00 ± 0%  -40.00%  (p=0.000 n=10+10)
2022-11-27 17:18:13 +01:00
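A minimal usage sketch of the generic v2 API (cache size and key/value types here are illustrative, not restic's bloblru settings):

```
package main

import (
	"fmt"

	lru "github.com/hashicorp/golang-lru/v2"
)

func main() {
	// Keys and values are stored without interface{} boxing, which is
	// where the allocation savings in the benchmark above come from.
	cache, err := lru.New[[32]byte, []byte](64)
	if err != nil {
		panic(err)
	}

	var id [32]byte
	cache.Add(id, []byte("blob data"))

	if buf, ok := cache.Get(id); ok {
		fmt.Printf("cached %d bytes\n", len(buf))
	}
}
```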
Michael Eischer
0eddc89e98 doc: design.rst: Fix highlighting for index snippet
JSON does not support comments. As JSON is a subset of JavaScript, use
the latter instead.
2022-11-27 17:01:27 +01:00
Michael Eischer
41b0f1d43a doc: fix link to amazon s3 section 2022-11-27 17:01:22 +01:00
Michael Eischer
6a793db9ca rewrite: fix link anchors in documentation 2022-11-27 16:38:10 +01:00
Michael Eischer
05cebc1c4b Merge pull request #4044 from restic/dependabot/go_modules/cloud.google.com/go/storage-1.28.0
build(deps): bump cloud.google.com/go/storage from 1.25.0 to 1.28.0
2022-11-27 15:23:06 +01:00
Michael Eischer
ce39727846 Merge pull request #4036 from restic/dependabot/go_modules/github.com/pkg/profile-1.7.0
build(deps): bump github.com/pkg/profile from 1.6.0 to 1.7.0
2022-11-27 15:22:16 +01:00
Michael Eischer
9aa06ce959 CI: remove option to configure command used to install go tools
With the minimum required Go version of 1.18, we always use `go
install`.
2022-11-27 15:07:29 +01:00
Michael Eischer
5968971313 CI: remove dependabot ignore for bazil.org/fuse
We've switched to a fork of the original library, thus the ignore is no
longer necessary.
2022-11-27 15:06:30 +01:00
dependabot[bot]
95374767de build(deps): bump github.com/pkg/profile from 1.6.0 to 1.7.0
Bumps [github.com/pkg/profile](https://github.com/pkg/profile) from 1.6.0 to 1.7.0.
- [Release notes](https://github.com/pkg/profile/releases)
- [Commits](https://github.com/pkg/profile/compare/v1.6.0...v1.7.0)

---
updated-dependencies:
- dependency-name: github.com/pkg/profile
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-27 14:02:32 +00:00
dependabot[bot]
c100a62ebf build(deps): bump cloud.google.com/go/storage from 1.25.0 to 1.28.0
Bumps [cloud.google.com/go/storage](https://github.com/googleapis/google-cloud-go) from 1.25.0 to 1.28.0.
- [Release notes](https://github.com/googleapis/google-cloud-go/releases)
- [Changelog](https://github.com/googleapis/google-cloud-go/blob/main/CHANGES.md)
- [Commits](https://github.com/googleapis/google-cloud-go/compare/pubsub/v1.25.0...spanner/v1.28.0)

---
updated-dependencies:
- dependency-name: cloud.google.com/go/storage
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-27 14:01:36 +00:00
Michael Eischer
c41a1b66e1 Merge pull request #4037 from restic/dependabot/go_modules/google.golang.org/api-0.103.0
build(deps): bump google.golang.org/api from 0.93.0 to 0.103.0
2022-11-27 15:00:16 +01:00
dependabot[bot]
705aed0ecb build(deps): bump google.golang.org/api from 0.93.0 to 0.103.0
Bumps [google.golang.org/api](https://github.com/googleapis/google-api-go-client) from 0.93.0 to 0.103.0.
- [Release notes](https://github.com/googleapis/google-api-go-client/releases)
- [Changelog](https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md)
- [Commits](https://github.com/googleapis/google-api-go-client/compare/v0.93.0...v0.103.0)

---
updated-dependencies:
- dependency-name: google.golang.org/api
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-27 13:45:08 +00:00
Michael Eischer
28d6de648c Merge pull request #4040 from restic/dependabot/go_modules/github.com/spf13/cobra-1.6.1
build(deps): bump github.com/spf13/cobra from 1.5.0 to 1.6.1
2022-11-27 14:44:25 +01:00
Michael Eischer
bb40b55d1c Merge pull request #4038 from restic/dependabot/go_modules/github.com/cenkalti/backoff/v4-4.2.0
build(deps): bump github.com/cenkalti/backoff/v4 from 4.1.3 to 4.2.0
2022-11-27 14:13:39 +01:00
dependabot[bot]
a24c1e99a6 build(deps): bump github.com/cenkalti/backoff/v4 from 4.1.3 to 4.2.0
Bumps [github.com/cenkalti/backoff/v4](https://github.com/cenkalti/backoff) from 4.1.3 to 4.2.0.
- [Release notes](https://github.com/cenkalti/backoff/releases)
- [Commits](https://github.com/cenkalti/backoff/compare/v4.1.3...v4.2.0)

---
updated-dependencies:
- dependency-name: github.com/cenkalti/backoff/v4
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-27 12:57:41 +00:00
dependabot[bot]
fd56ead4a8 build(deps): bump github.com/spf13/cobra from 1.5.0 to 1.6.1
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.5.0 to 1.6.1.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.5.0...v1.6.1)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-27 12:57:23 +00:00
Michael Eischer
cc679c6494 Merge pull request #4041 from MichaelEischer/require-go-1.18
Require go 1.18
2022-11-27 13:56:43 +01:00
greatroar
c9c7671c58 fuse: Clean up inode generation 2022-11-27 13:53:42 +01:00
Michael Eischer
530f129a39 rest: remove workaround for content-length handling bug 2022-11-27 13:18:44 +01:00
Michael Eischer
8ad231bcad bump version numbers in instructions to reproduce binaries 2022-11-27 13:18:44 +01:00
Michael Eischer
a1eb923876 remove no longer necessary conditional compiles 2022-11-27 13:18:44 +01:00
Michael Eischer
bcdfc2a8ea CI: allow dependabot update of oauth2
Our minimum go version is new enough to allow updating the library.
2022-11-27 13:18:44 +01:00
Michael Eischer
686b0b2a3e update the minimum required go version to 1.18 2022-11-27 13:18:43 +01:00
Michael Eischer
69a2e81bd3 Merge pull request #4039 from restic/dependabot/go_modules/github.com/google/go-cmp-0.5.9
build(deps): bump github.com/google/go-cmp from 0.5.8 to 0.5.9
2022-11-26 17:39:27 +01:00
dependabot[bot]
278e93f738 build(deps): bump github.com/google/go-cmp from 0.5.8 to 0.5.9
Bumps [github.com/google/go-cmp](https://github.com/google/go-cmp) from 0.5.8 to 0.5.9.
- [Release notes](https://github.com/google/go-cmp/releases)
- [Commits](https://github.com/google/go-cmp/compare/v0.5.8...v0.5.9)

---
updated-dependencies:
- dependency-name: github.com/google/go-cmp
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-26 11:35:05 +00:00
Michael Eischer
747d2ecd7b Merge pull request #4042 from restic/skip-cloud-test-for-dependabot
CI: skip cloud tests for dependabot pull requests
2022-11-26 12:34:16 +01:00
Michael Eischer
98c6ca9d8f CI: skip cloud tests for dependabot pull requests 2022-11-26 12:23:55 +01:00
Michael Eischer
9113b2620f Merge pull request #4024 from MichaelEischer/macos-fuse
mount: switch to anacrolix fork of bazil/fuse
2022-11-26 12:15:14 +01:00
Michael Eischer
f115d64634 Merge pull request #4022 from MichaelEischer/race-checker
CI: Run the golang race checker
2022-11-26 12:13:50 +01:00
Michael Eischer
923c06cea0 Merge pull request #4025 from MichaelEischer/update-minio
Update minio library to add `credential_process` support
2022-11-25 23:21:57 +01:00
Michael Eischer
f4d3ed77c4 update minio library 2022-11-25 22:36:21 +01:00
greatroar
189e0fe5a9 fuse: Better inode generation
Hard links to the same file now get the same inode within the FUSE
mount. Also, inode generation is faster and, more importantly, no longer
allocates.

Benchmarked on Linux/amd64. Old means the benchmark with

        sink = fs.GenerateDynamicInode(1, sub.node.Name)

instead of calling inodeFromNode. Results:

name                   old time/op    new time/op    delta
Inode/no_hard_links-8     137ns ± 4%      34ns ± 1%   -75.20%  (p=0.000 n=10+10)
Inode/hard_link-8        33.6ns ± 1%     9.5ns ± 0%   -71.82%  (p=0.000 n=9+8)

name                   old alloc/op   new alloc/op   delta
Inode/no_hard_links-8     48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
Inode/hard_link-8         0.00B          0.00B           ~     (all equal)

name                   old allocs/op  new allocs/op  delta
Inode/no_hard_links-8      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
Inode/hard_link-8          0.00           0.00           ~     (all equal)
2022-11-16 08:35:01 +01:00
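A hedged sketch of the idea only; restic's actual inodeFromNode differs in detail, and the field names below are assumptions. An inlined FNV-1a hash over the name needs no allocation, while hard links reuse the inode recorded at backup time so they all map to the same inode:

```
package main

const (
	offset64 = 14695981039346656037 // FNV-1a 64-bit offset basis
	prime64  = 1099511628211        // FNV-1a 64-bit prime
)

// fnv1a hashes a string without allocating (no []byte conversion).
func fnv1a(s string) uint64 {
	h := uint64(offset64)
	for i := 0; i < len(s); i++ {
		h ^= uint64(s[i])
		h *= prime64
	}
	return h
}

type node struct {
	Name  string
	Inode uint64 // inode recorded at backup time
	Links uint64 // hard link count
}

func inodeFromNode(n *node) uint64 {
	if n.Links > 1 && n.Inode > 0 {
		return n.Inode // all hard links share one inode
	}
	return fnv1a(n.Name)
}

func main() {
	println(inodeFromNode(&node{Name: "file.txt", Links: 1}))
}
```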
Michael Eischer
32ffcd86a2 Merge pull request #3993 from MichaelEischer/backup-json-full-snapshot-id
backup: print full snapshot id in JSON summary
2022-11-12 20:42:35 +01:00
Michael Eischer
f032a9d0ad prune: report how much data must be repacked to compress the repo
prune now reports the remaining size of pack files containing
uncompressed blobs. The displayed value is suitable for use with `--max-repack-size`.
2022-11-12 20:20:23 +01:00
Michael Eischer
66818a8f98 Merge pull request #3980 from MichaelEischer/prune-compression-stats
prune: Correctly count used/duplicate blobs for partially compressed repos
2022-11-12 20:06:56 +01:00
Michael Eischer
4b5234924b Merge pull request #2875 from fgma/issue2699
issue2699: restore symlinks on windows when run as admin user
2022-11-12 20:06:45 +01:00
Michael Eischer
726a1969cd Merge pull request #2731 from dionorgua/rewrite-snapshot
Implement 'rewrite' command to exclude files from existing snapshots
2022-11-12 20:06:35 +01:00
Michael Eischer
bb0fa76c06 Cleanup exclude pattern collection 2022-11-12 19:55:22 +01:00
Michael Eischer
537cfe2e4c rewrite: Fix check that an exclude pattern was passed
The old check did not consider files containing case-insensitive
excludes. The check is now implemented as a function of the
excludePatternOptions struct to improve cohesion.
2022-11-12 19:55:22 +01:00
Leo R. Lundgren
f175da2756 rewrite: Polish documentation 2022-11-12 19:55:22 +01:00
Leo R. Lundgren
f86ef4d3dd rewrite: Polish code and add missing messages 2022-11-12 19:55:22 +01:00
Leo R. Lundgren
c15bedccc0 rewrite: Revert unrelated documentation change 2022-11-12 19:55:22 +01:00
Michael Eischer
f88acd4503 rewrite: Fail if a tree contains an unknown field
In principle, the JSON format of Tree objects is extensible without
requiring a format change. In order not to lose information, just play
it safe and reject rewriting trees for which we could lose data.
2022-11-12 19:55:22 +01:00
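One standard way to reject unknown JSON fields in Go is json.Decoder.DisallowUnknownFields; whether restic's tree decoding uses exactly this mechanism is not stated here, so treat the following as an illustrative sketch:

```
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

type tree struct {
	Nodes []json.RawMessage `json:"nodes"`
}

func main() {
	// A "future_field" would be silently dropped by a plain Unmarshal
	// and then lost when the rewritten tree is saved back.
	data := []byte(`{"nodes": [], "future_field": true}`)

	dec := json.NewDecoder(bytes.NewReader(data))
	dec.DisallowUnknownFields() // fail instead of losing data

	var t tree
	if err := dec.Decode(&t); err != nil {
		fmt.Println("refusing to rewrite:", err)
	}
}
```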
Michael Eischer
11b8c3a158 rewrite: add documentation 2022-11-12 19:55:22 +01:00
Michael Eischer
ec0c91e233 rewrite: Add tests for further ways to use the command 2022-11-12 19:55:22 +01:00
Michael Eischer
0224e276ec walker: Add tests for FilterTree 2022-11-12 19:55:22 +01:00
Michael Eischer
73f54cc5ea rewrite: rename --inplace to --forget 2022-11-12 19:55:22 +01:00
Michael Eischer
a47d9a1c40 rewrite: use unified snapshot filter options 2022-11-12 19:55:22 +01:00
Michael Eischer
b044649118 rewrite: add minimal test 2022-11-12 19:55:22 +01:00
Michael Eischer
375a3db64d rewrite: non-exclusive lock if snapshots are only added 2022-11-12 19:55:22 +01:00
Michael Eischer
327f418a9c rewrite: cleanup err handling and output 2022-11-12 19:55:22 +01:00
Michael Eischer
ad14d6e4ac rewrite: use SelectByName like in the backup command 2022-11-12 19:55:22 +01:00
Michael Eischer
7ebaf6e899 rewrite: start repository uploader goroutines 2022-11-12 19:55:22 +01:00
Michael Eischer
559acea0d8 unify exclude pattern options 2022-11-12 19:55:22 +01:00
Michael Eischer
4cace1ffe9 unify exclude patterns with backup command 2022-11-12 19:55:22 +01:00
Michael Eischer
2b69a1c53b rewrite: filter all snapshots if none are specified 2022-11-12 19:55:22 +01:00
Michael Eischer
f6339b88af rewrite: extract tree filtering 2022-11-12 19:55:22 +01:00
Michael Eischer
c0f7ba2388 rewrite: simplify dryrun 2022-11-12 19:55:22 +01:00
Michael Eischer
4d6ab83019 rewrite: use treejsonbuilder 2022-11-12 19:55:22 +01:00
Michael Eischer
82592b88b5 rewrite: address most review comments 2022-11-12 19:55:22 +01:00
Michael Eischer
b922774343 rewrite: fix compilation 2022-11-12 19:55:22 +01:00
Dmitry Nezhevenko
dc29709742 Implement 'rewrite' command to exclude files from existing snapshots 2022-11-12 19:55:22 +01:00
Michael Eischer
220eaee76b mount: switch to anacrolix fork of bazil/fuse
The anacrolix fork contains the latest changes from bazil/fuse and
additionally provides support for recent versions of macFUSE.
2022-11-12 19:22:31 +01:00
Michael Eischer
6fa45d0d39 Merge pull request #4011 from greatroar/backup-stdin-password
cmd: Don't read password from stdin for backup --stdin
2022-11-12 19:18:56 +01:00
Michael Eischer
bbd180ae21 Merge pull request #4017 from Rajpratik71/Rajpratik71-patch-1
feat: dependabot workflow automation for updating dependency
2022-11-12 15:48:48 +01:00
Pratik Raj
bef1064b8e chore: ignore upgrade for 'bazil/fuse' and 'golang.org/x/oauth2' 2022-11-12 19:39:16 +05:30
Michael Eischer
7b4fe7bad5 Merge pull request #4021 from greatroar/mac-fsync
backend/local: Ignore ENOTTY for fsync on Mac
2022-11-11 23:10:37 +01:00
greatroar
348e966daa backend/local: Ignore ENOTTY for fsync on Mac
Fixes #4016.
2022-11-11 22:51:51 +01:00
Michael Eischer
0e5fe4c6ab CI: run golang race checker 2022-11-11 22:15:22 +01:00
Michael Eischer
13fbc96ed3 lock: Synchronize Refresh() and Stale()
The lock test creates a lock and checks that it is not stale. However,
it is possible that the lock is refreshed concurrently, which updates
the lock timestamp. Checking the timestamp in `Stale()` without
synchronization results in a data race. Thus add a lock to prevent
concurrent accesses.
2022-11-11 21:52:53 +01:00
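A minimal sketch of the synchronization described above, with field and method names simplified from restic's Lock:

```
package main

import (
	"sync"
	"time"
)

type Lock struct {
	mu   sync.Mutex
	time time.Time // guarded by mu
}

// Refresh writes the timestamp, Stale reads it; without mu these are a
// data race when they run on different goroutines.
func (l *Lock) Refresh() {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.time = time.Now()
}

func (l *Lock) Stale(timeout time.Duration) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	return time.Since(l.time) > timeout
}

func main() {
	l := &Lock{time: time.Now()}
	go l.Refresh()
	_ = l.Stale(30 * time.Minute)
}
```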
Michael Eischer
e1ba7ab684 lock: Don't copy the lock when checking for process existence
The lock test creates a lock and checks that it is not stale. This also
tests whether the corresponding process still exists. However, it is
possible that the lock is refreshed concurrently, which updates the lock
timestamp. Calling `processExists()` with a value receiver, however,
creates an unsynchronized copy of this field. Thus call the method using
a pointer receiver.
2022-11-11 21:45:55 +01:00
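The receiver kind matters here because a value receiver copies the whole struct, fields included, without any locking. A simplified illustration (names are hypothetical):

```
package main

import "time"

type Lock struct {
	Time time.Time
	PID  int
}

// Value receiver: every call copies the whole Lock, including Time,
// without synchronization -- racy if another goroutine refreshes Time.
func (l Lock) staleByValue() bool { return time.Since(l.Time) > time.Hour }

// Pointer receiver: no copy is made, so the caller's mutex can actually
// protect the field accesses.
func (l *Lock) stale() bool { return time.Since(l.Time) > time.Hour }

func main() {
	l := &Lock{Time: time.Now(), PID: 1}
	_ = l.stale()
}
```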
Michael Eischer
dc060356c2 mount: only start next test after mount command cleanup is complete
The test did not wait for the mount command to fully shutdown all
running goroutines. This caused the go race detector to report a data
race related to lock refreshes.

==================
WARNING: DATA RACE
Write at 0x0000021bdfdb by goroutine 667:
  github.com/restic/restic/internal/backend/retry.TestFastRetries()
      /restic/restic/internal/backend/retry/testing.go:7 +0x18f
  github.com/restic/restic/cmd/restic.withTestEnvironment()
      /restic/restic/cmd/restic/integration_helpers_test.go:175 +0x183
  github.com/restic/restic/cmd/restic.TestMountSameTimestamps()
      /restic/restic/cmd/restic/integration_fuse_test.go:202 +0xac
  testing.tRunner()
      /usr/lib/go/src/testing/testing.go:1446 +0x216
  testing.(*T).Run.func1()
      /usr/lib/go/src/testing/testing.go:1493 +0x47

Previous read at 0x0000021bdfdb by goroutine 609:
  github.com/restic/restic/internal/backend/retry.(*Backend).retry()
      /restic/restic/internal/backend/retry/backend_retry.go:72 +0x9e
  github.com/restic/restic/internal/backend/retry.(*Backend).Remove()
      /restic/restic/internal/backend/retry/backend_retry.go:149 +0x17d
  github.com/restic/restic/internal/cache.(*Backend).Remove()
      /restic/restic/internal/cache/backend.go:38 +0x11d
  github.com/restic/restic/internal/restic.(*Lock).Unlock()
      /restic/restic/internal/restic/lock.go:190 +0x249
  github.com/restic/restic/cmd/restic.refreshLocks.func1()
      /restic/restic/cmd/restic/lock.go:86 +0xae
  runtime.deferreturn()
      /usr/lib/go/src/runtime/panic.go:476 +0x32
  github.com/restic/restic/cmd/restic.lockRepository.func2()
      /restic/restic/cmd/restic/lock.go:61 +0x71

[...]

Goroutine 609 (finished) created at:
  github.com/restic/restic/cmd/restic.lockRepository()
      /restic/restic/cmd/restic/lock.go:61 +0x488
  github.com/restic/restic/cmd/restic.lockRepo()
      /restic/restic/cmd/restic/lock.go:25 +0x219
  github.com/restic/restic/cmd/restic.runMount()
      /restic/restic/cmd/restic/cmd_mount.go:126 +0x1f8
  github.com/restic/restic/cmd/restic.testRunMount()
      /restic/restic/cmd/restic/integration_fuse_test.go:61 +0x1ce
  github.com/restic/restic/cmd/restic.checkSnapshots.func1()
      /restic/restic/cmd/restic/integration_fuse_test.go:90 +0x124
==================
2022-11-11 21:43:01 +01:00
Michael Eischer
32c9667990 Merge pull request #4019 from MichaelEischer/fix-file-saver-race
archiver: Fix race condition resulting in files containing null IDs
2022-11-11 20:52:33 +01:00
Michael Eischer
d268552a0a Merge pull request #4014 from MichaelEischer/fix-debug-examine
debug: fix crash in `debug examine --reupload-blobs`
2022-11-10 20:37:32 +01:00
Michael Eischer
5756c96c9f archiver: Fix race condition resulting in files containing null IDs
In some rare cases files could be created which contain null IDs (all
zero) in their content list. This was caused by a race condition between
growing the `Content` slice and inserting the blob IDs into it. In some
cases the blob ID was written to the old slice, which a short time
afterwards was replaced with a larger copy, that did not yet contain the
blob ID.
2022-11-10 20:19:37 +01:00
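A simplified, hypothetical reproduction of this race pattern (run with `go run -race` to see the report):

```
package main

import "sync"

func main() {
	content := make([][32]byte, 1, 1)
	var wg sync.WaitGroup
	wg.Add(1)

	go func() {
		defer wg.Done()
		var id [32]byte
		id[0] = 0xab
		content[0] = id // may write into the old backing array
	}()

	// Growing the slice copies the old backing array; a concurrent
	// write can land in the stale copy, leaving a zero (null) ID in
	// the grown slice -- the symptom described above.
	content = append(content, [32]byte{})

	wg.Wait()
	_ = content
}
```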
Pratik Raj
df614fff26 feat: dependabot workflow automation for updating dependency
Signed-off-by: Pratik Raj <Rajpratik71@gmail.com>
2022-11-10 16:02:03 +05:30
Michael Eischer
11a4bb051e debug: fix crash in debug examine --reupload-blobs 2022-11-09 22:13:17 +01:00
Michael Eischer
5f9ac2b165 Merge pull request #4010 from MichaelEischer/file-saver-sanity-check
archiver: Check that saved file does not have null IDs in content
2022-11-08 23:07:32 +01:00
Michael Eischer
b1d1202b1d archiver: Check that saved file does not have null IDs in content
Null IDs in the file content indicate that something went wrong. Thus
fail before saving the affected file.
2022-11-08 22:57:41 +01:00
greatroar
5dceadeb72 cmd: Don't read password from stdin for backup --stdin 2022-11-06 14:55:57 +01:00
Michael Eischer
1ccab95bc4 b2: Support file hiding instead of deleting them permanently
Automatically fall back to hiding files if not authorized to permanently
delete files. This allows using restic with an append-only application
key with B2.  Thus, an attacker cannot directly delete backups with the
API key used by restic.

To use this feature create an application key without the deleteFiles
capability. It is recommended to restrict the key to just one bucket.
For example using the b2 command line tool:

    b2 create-key --bucket <bucketName> <keyName> listBuckets,readFiles,writeFiles,listFiles

Suggested-by: Daniel Gröber <dxld@darkboxed.org>
2022-11-05 20:10:45 +01:00
Michael Eischer
24a2e5cab9 Merge pull request #4008 from MichaelEischer/tweak-lock-refresh-test
lock: Tweak timeouts for lock refresh test
2022-11-05 10:53:13 +01:00
Michael Eischer
403390479c Merge pull request #3997 from greatroar/fuse-hash
fuse: Better check for whether snapshots changed
2022-11-05 10:52:11 +01:00
Michael Eischer
d29abc1a31 Merge pull request #4007 from MichaelEischer/hide-compression-level-for-v1-repo
Only print compression level starting from repository version 2
2022-11-05 10:33:25 +01:00
greatroar
c091e43b33 fuse: Better check for whether snapshots changed
We previously checked whether the set of snapshots might have changed
based only on their number, which fails when as many snapshots are
forgotten as are added. Check the SHA-256 of their IDs instead.
2022-11-05 09:32:45 +01:00
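A small sketch of such a fingerprint, assuming a stable iteration order over the IDs:

```
package main

import (
	"crypto/sha256"
	"fmt"
)

// fingerprint digests a list of snapshot IDs. Two sets of equal size
// but different members yield different digests, unlike a bare count.
func fingerprint(ids [][32]byte) [sha256.Size]byte {
	h := sha256.New()
	for _, id := range ids {
		h.Write(id[:])
	}
	var sum [sha256.Size]byte
	copy(sum[:], h.Sum(nil))
	return sum
}

func main() {
	a := [][32]byte{{1}, {2}}
	b := [][32]byte{{1}, {3}} // same count, different snapshots
	fmt.Println(fingerprint(a) == fingerprint(b)) // false
}
```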
Michael Eischer
aaac63da8d lock: Tweak timeouts for lock refresh test
For some reason the test fails from time to time. Increase the timeouts
to hopefully avoid this issue.
2022-11-04 22:48:18 +01:00
Michael Eischer
fd4d23460f only print compression level starting from repository version 2 2022-11-04 22:40:07 +01:00
Alexander Neumann
8dd95b710e Merge pull request #3992 from MichaelEischer/err-on-invalid-compression
Return error if RESTIC_COMPRESSION env variable is invalid
2022-11-04 19:41:34 +01:00
Alexander Neumann
783b8781a7 Merge pull request #4000 from restic/min-go-version
build: Correct checks for minimum Go version
2022-11-04 10:31:02 +01:00
Alexander Neumann
543649f2f2 Merge pull request #4001 from restic/docker-go-version
docker: Increase Go version to 1.19
2022-11-04 10:30:11 +01:00
Leo R. Lundgren
0a4cddb34d docker: Increase Go version to 1.19 2022-11-03 22:59:59 +01:00
Leo R. Lundgren
333c2c6ed4 build: Correct checks for minimum Go version 2022-11-03 22:50:07 +01:00
rawtaz
92df039e5d Merge pull request #3996 from MichaelEischer/fix-ui-progress
backup: fix stuck status bar
2022-11-02 21:48:16 +01:00
Michael Eischer
9354262b1b backup: fix stuck status bar
The status bar got stuck once the first error was reported, the scanner
completed, or some file was backed up. Each of these cases sets a flag that the
scanner has started.

This flag is used to hide the progress bar until the flag is set. Due to
an inverted condition, the opposite happened and the status stopped
refreshing once the flag was set.

In addition, the scannerStarted flag was not set when the scanner just
reported progress information.
2022-11-02 21:31:13 +01:00
Michael Eischer
06141ce1f4 backup: print full snapshot id in JSON summary 2022-10-31 19:03:42 +01:00
Michael Eischer
59a90943bb Merge pull request #3983 from greatroar/formatting
Centralize and fix formatting of bytes, percentages, durations
2022-10-31 18:52:24 +01:00
greatroar
5ab3e6276a ui: Fix FormatBytes at exactly 1024 times a unit
1024 would be displayed as "1024 bytes" instead of "1.000 KiB", etc.
2022-10-31 18:39:28 +01:00
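A sketch of the boundary fix: using `>=` instead of `>` puts a value of exactly 1024 into the KiB branch (thresholds abbreviated, not restic's full FormatBytes):

```
package main

import "fmt"

func FormatBytes(c uint64) string {
	b := float64(c)
	switch {
	case c >= 1<<30: // >= (not >) so an exact power of two changes unit
		return fmt.Sprintf("%.3f GiB", b/(1<<30))
	case c >= 1<<20:
		return fmt.Sprintf("%.3f MiB", b/(1<<20))
	case c >= 1<<10:
		return fmt.Sprintf("%.3f KiB", b/(1<<10))
	default:
		return fmt.Sprintf("%d bytes", c)
	}
}

func main() {
	fmt.Println(FormatBytes(1024)) // 1.000 KiB, not "1024 bytes"
}
```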
rawtaz
4f1fae9c98 Merge pull request #3982 from MichaelEischer/show-compression-mode
Show selected compression level when opening repository
2022-10-30 21:29:42 +01:00
Michael Eischer
8fe159cc5a enable symlink tests for windows 2022-10-30 18:43:04 +01:00
Michael Eischer
3499c6354e Merge pull request #3955 from MichaelEischer/async-futurefile-completion
Improve archiver performance for small files
2022-10-30 18:38:04 +01:00
Michael Eischer
144257f8bd restore symlink timestamps on windows 2022-10-30 11:04:04 +01:00
Michael Eischer
c0f34af9db backup: hide files from status which are read completely but not saved
As the FileSaver is asynchronously waiting for all blobs of a file to be
stored, the number of active files is higher than the number of files
from which restic is reading concurrently. Thus, to avoid confusing users,
only display files in the status from which restic is currently reading.
2022-10-30 10:29:12 +01:00
Michael Eischer
a571fc4aa1 add changelog for faster backups with small files 2022-10-30 10:29:12 +01:00
Michael Eischer
b52a8ff05c ui: Properly clear lines no longer used for status
Previously, the old status text remained until it was overwritten.
2022-10-30 10:29:12 +01:00
Michael Eischer
b4de902596 archiver: Asynchronously complete FutureFile
After reading and chunking all data in a file, the FutureFile still has
to wait until the FutureBlobs are completed. This was done synchronously,
which blocked the file saver and prevented the next file from
being read.

By replacing the FutureBlob with a callback, it becomes possible to
complete the FutureFile asynchronously.
2022-10-30 10:29:11 +01:00
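The callback approach can be sketched as a pending-blob counter that completes the file when the last blob reports back (a simplified model, not the actual FileSaver):

```
package main

import (
	"fmt"
	"sync"
)

// futureFile completes once every pending blob reports back, instead of
// blocking the saver goroutine while it waits on each blob in turn.
type futureFile struct {
	mu      sync.Mutex
	pending int
	done    chan struct{}
}

// addBlob registers a pending blob and returns the completion callback
// the blob saver invokes once the blob is stored.
func (f *futureFile) addBlob() func() {
	f.mu.Lock()
	f.pending++
	f.mu.Unlock()
	return func() {
		f.mu.Lock()
		f.pending--
		last := f.pending == 0
		f.mu.Unlock()
		if last {
			close(f.done)
		}
	}
}

func main() {
	f := &futureFile{done: make(chan struct{})}
	cb := f.addBlob()
	go cb() // the blob saver signals completion asynchronously
	<-f.done
	fmt.Println("file complete")
}
```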
Michael Eischer
47e05080a9 Merge pull request #3990 from MichaelEischer/lock-refresh-test
lock: add test to check that refreshing works
2022-10-30 10:15:44 +01:00
Michael Eischer
c7ace314f6 Merge pull request #3989 from greatroar/eachbypack
More compact data structure for Index.EachByPack
2022-10-30 00:02:55 +02:00
greatroar
0e8893dae9 index: Compact data structure for Index.EachByPack 2022-10-29 23:09:17 +02:00
greatroar
137f0bc944 repository: Fix benchmarkSaveAndEncrypt 2022-10-29 23:09:17 +02:00
Michael Eischer
01f0db4e56 return error if RESTIC_COMPRESSION env variable is invalid 2022-10-29 22:03:39 +02:00
Michael Eischer
7c87fb941c Merge pull request #3986 from greatroar/counter
ui/progress: Load both values in a single Lock/Unlock
2022-10-29 21:50:55 +02:00
Michael Eischer
3b0bb02a68 Merge pull request #3977 from greatroar/progress
ui/backup: Replace channels with a mutex
2022-10-29 21:33:04 +02:00
Michael Eischer
0d260cfd82 enable symlink test on windows 2022-10-29 21:26:34 +02:00
fgma
8e5eb1090c issue2699: restore symlinks on windows when run as admin user 2022-10-29 21:19:33 +02:00
rawtaz
af3f7c866f Merge pull request #3988 from FelixBurkhard/FelixBurkhard-patch-1
Clarify what the Azure account name means
2022-10-29 13:32:44 +02:00
Michael Eischer
24267e9a9d lock: add test to check that refreshing works 2022-10-29 11:26:00 +02:00
Michael Eischer
8e51e1e605 shorten 'repository opened' output 2022-10-29 11:22:00 +02:00
FelixBurkhard
575d26ec87 Clarify what the Azure account name means
When reading it at first, it was not clear to me that 'account name' meant the name of the
Azure Storage Account and not the Azure account itself.
2022-10-29 00:27:43 +02:00
greatroar
2dafda9164 ui/progress: Load both values in a single Lock/Unlock
We always need both values, except in a test, so we don't need to lock
twice and risk scheduling in between.

Also, removed the resetting in Done. This copied a mutex, which isn't
allowed. Static analyzers tend to trip over that.
2022-10-25 07:55:24 +02:00
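Sketched with illustrative names: both counters are read under one critical section, so callers always see a consistent pair.

```
package main

import "sync"

type counter struct {
	mu          sync.Mutex
	done, total uint64
}

// Get returns both values under one critical section, so callers see a
// consistent pair and no other goroutine can run between two locks.
func (c *counter) Get() (done, total uint64) {
	c.mu.Lock()
	done, total = c.done, c.total
	c.mu.Unlock()
	return done, total
}

func main() {
	c := &counter{done: 3, total: 10}
	d, t := c.Get()
	_ = d + t
}
```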
Michael Eischer
f8910bc4ff Merge pull request #3985 from saltsa/fix_lock_refresh
Fix bug in lock refresh monitoring
2022-10-24 22:59:18 +02:00
Joonas Aunola
b06427c9f6 fix Unix to UnixNano 2022-10-23 23:40:21 +03:00
greatroar
006380199e cmd, ui: Deduplicate formatting utilities 2022-10-23 13:40:07 +02:00
greatroar
04216eb9aa ui/backup: Replace channels with a mutex
The channel-based algorithm had grown quite complicated. This is easier
to reason about and likely to be more performant with very many
CompleteBlob calls.
2022-10-23 13:28:41 +02:00
Michael Eischer
4fea3a413d show selected compression level when opening repository 2022-10-22 20:18:46 +02:00
Michael Eischer
ba58ccbe07 prune: add remark about non-deterministic blob selection 2022-10-22 19:46:10 +02:00
Michael Eischer
05651d6d4f prune: Correctly count used/duplicate blobs for partially compressed repos
Counting the first occurrence of a duplicate blob as used and counting
all others as duplicates, independent of which instance of the blob is
kept, is only accurate if all copies of the blob have the same size. This
is no longer the case for a repository containing both compressed and
uncompressed blobs.

Thus for duplicated blobs first count all instances as duplicates and
then subtract the actually used instance later on.
2022-10-22 19:24:36 +02:00
Michael Eischer
b57d42905c Merge pull request #3899 from MichaelEischer/less-prune-mem
Optimize prune memory usage
2022-10-22 18:56:02 +02:00
Michael Eischer
d966c52707 prune: allow gc of set of repacked blobs before index rebuild 2022-10-22 18:45:12 +02:00
Michael Eischer
1e2794fa55 add prune memory optimization changelog 2022-10-22 18:45:12 +02:00
Michael Eischer
68c9cb9c6a prune: Shrink keepBlobs set if possible
As long as only a small fraction of the data in a repository is
rewritten, the keepBlobs set will be rather small after cleaning it up.
As Go maps do not shrink their memory usage, just copy the contents
over to a new map. However, only copy the map if the cleanup removed at
least half the entries.
2022-10-22 18:45:12 +02:00
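A sketch of the copy-to-shrink idiom (not restic's CountedBlobSet code; the half-removed threshold follows the commit message):

```
package main

// shrink copies the surviving entries into a fresh map, since Go maps
// never hand allocated buckets back. Copying only pays off when at
// least half of the entries were removed.
func shrink(m map[[32]byte]uint8, removed int) map[[32]byte]uint8 {
	if removed < len(m) {
		return m // not enough was deleted to justify the copy
	}
	fresh := make(map[[32]byte]uint8, len(m))
	for k, v := range m {
		fresh[k] = v
	}
	return fresh
}

func main() {
	m := map[[32]byte]uint8{{1}: 1}
	m = shrink(m, 10)
	_ = m
}
```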
Michael Eischer
c4fc5c97f9 prune: Use a single CountedBlobSet to track blobs
The set covers necessary, existing and duplicate blobs. This removes the
duplicate sets used to track whether all necessary blobs also exist.
This reduces the memory usage of prune by about 20-30%.
2022-10-22 18:45:12 +02:00
Michael Eischer
b21241ec1c restic: Add CountedBlobSet type
This allows maintaining a usage counter for each blob.
2022-10-22 18:45:12 +02:00
Michael Eischer
ee6688a9f6 Merge pull request #3915 from plumbeo/compression-stats
restic stats: print uncompressed size in mode raw-data
2022-10-21 22:10:29 +02:00
Michael Eischer
27634a1a68 Merge pull request #3978 from MichaelEischer/fix-negative-pattern-example
Remove misleading wildcard from negative exclude pattern example
2022-10-21 22:04:30 +02:00
Michael Eischer
aa77702e49 Merge pull request #3971 from MichaelEischer/parallel-list
Unify ForAllIndex/Snapshot/Lock functions
2022-10-21 21:58:33 +02:00
Michael Eischer
6877aaa8aa Merge pull request #3967 from MichaelEischer/archiver-extract-exclude-options
backup: extract exclude pattern options
2022-10-21 21:50:00 +02:00
Michael Eischer
2e9ee8577a Merge pull request #3970 from MichaelEischer/split-retry-backend
Split backend package into smaller parts
2022-10-21 21:49:46 +02:00
Michael Eischer
59d46bb3f5 backup: extract exclude pattern options
This is a preparation to make the exclude options usable for the
upcoming `rewrite` command.
2022-10-21 21:40:59 +02:00
Michael Eischer
5c7a9a739a backend: Split RetryBackend into own package
The RetryBackend tests depend on the mock backend. When the Backend
interface is eventually split from the restic package, this will lead to
a dependency cycle between backend and backend/mock. Thus split the
RetryBackend into a separate package to avoid this problem.
2022-10-21 21:38:17 +02:00
Michael Eischer
32603d49c4 backend: remove unused ErrorBackend 2022-10-21 21:36:05 +02:00
Michael Eischer
8c18c65b3b backend: remove unused Paths variable 2022-10-21 21:36:05 +02:00
Michael Eischer
4ccd5e806b backend: split layout code into own subpackage 2022-10-21 21:36:05 +02:00
Michael Eischer
b361284f28 Merge pull request #3979 from MichaelEischer/backup-less-time-now
backup: reduce calls to time.Now
2022-10-21 21:33:34 +02:00
Michael Eischer
738b2a0445 parallelize more List usages 2022-10-21 21:26:45 +02:00
Michael Eischer
ae45f3b04f restic: Unify code to load Index/Lock/Snapshot 2022-10-21 21:25:11 +02:00
Michael Eischer
8e2695be0b Merge pull request #3973 from MichaelEischer/speedup-integration-tests
speed-up integration tests by reducing the RetryBackend timeout
2022-10-21 21:17:35 +02:00
Michael Eischer
35d968bcde Merge pull request #3969 from MichaelEischer/key-by-id
Port restic.Find to return IDs and identify keys by restic.ID
2022-10-21 21:15:40 +02:00
Michael Eischer
4133fee6f9 Merge pull request #3972 from MichaelEischer/fix-flaky-lock-cancel-test
lock: fix flaky TestLockFailedRefresh
2022-10-21 21:12:34 +02:00
Michael Eischer
c8c8391b21 Merge pull request #3974 from greatroar/cleanup
More cleanups and a micro-optimization
2022-10-21 21:11:37 +02:00
Michael Eischer
ee7c28f5e6 backup: reduce calls to time.Now
Archiver.Save queries the current time multiple times. This commit
removes one of these calls as they showed up while profiling a backup of
a nearly unchanged dataset containing 3 million files.
2022-10-21 20:55:01 +02:00
Michael Eischer
3e60d38a23 Remove misleading wildcard from negative exclude pattern example
There is no need to use a special wildcard `**` to demonstrate negative
patterns. Actually, it is both slower than the simpler variant and seems
to confuse users.
2022-10-21 20:48:45 +02:00
greatroar
9adae5521d cache: Call interface method once 2022-10-21 14:32:46 +02:00
greatroar
201e5c7e74 backup: Clean up progress reporting code 2022-10-21 13:48:30 +02:00
plumbeo
a6f83e0011 Add changelog 2022-10-17 15:38:42 +02:00
plumbeo
bc945d0bf0 restic stats: add more compression statistics
Calculate and display compression ratio, space saving and progress
2022-10-17 15:38:38 +02:00
greatroar
b513597546 internal/restic: Make FileType a uint8 instead of a string
The string form was presumably useful before the introduction of
layouts, but right now it just makes call sequences and garbage
collection more expensive (the latter because every string contains
a pointer to be scanned).
2022-10-16 10:59:01 +02:00
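A sketch of the change; the constant and string values here are illustrative, not necessarily restic's exact layout names:

```
package main

// FileType as a small integer: comparisons are a single instruction and
// the value contains no pointer for the garbage collector to chase.
type FileType uint8

const (
	PackFile FileType = iota
	KeyFile
	LockFile
	SnapshotFile
	IndexFile
	ConfigFile
)

// String is only needed at the layout boundary, where the name matters.
func (t FileType) String() string {
	return [...]string{"data", "key", "lock", "snapshot", "index", "config"}[t]
}

func main() {
	println(PackFile.String())
}
```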
greatroar
22147e1e02 all: Minor cleanups
	if x { return true } return false => return x

	fmt.Sprintf("%v", x) => fmt.Sprint(x) or x.String()

The fmt.Sprintf idiom is still used in the SecretString tests, where it
serves security hardening.
2022-10-16 10:50:39 +02:00
greatroar
d03460010f internal/restic: Fix ID.UnmarshalJSON, ParseID
ID.UnmarshalJSON accepted non-JSON input with ' as the string delimiter.
Also, the error message for non-hex input was less informative than it
could be and it performed too many checks.

Changed ParseID to keep the error messages consistent.
2022-10-16 10:39:52 +02:00
Michael Eischer
aa39bf3cf6 backend/test: remove duplicate test
The test is identical to the tests for the mem backend.
2022-10-15 23:15:07 +02:00
Michael Eischer
28e1c4574b mem: use cheaper hash for backend 2022-10-15 23:14:33 +02:00
Michael Eischer
c3400d3c55 backend: speedup RetryBackend tests 2022-10-15 23:13:44 +02:00
Michael Eischer
99547518cd lock: fix flaky TestLockFailedRefresh
The comparison of the current time and the last lock refresh were using
seconds represented as integers. As the test only waits for up to one
second, the associated number truncation can cause the test to take
longer than once second and thus to fail.

Switch to nanoseconds to avoid this problem. This also slightly speeds
up the test.
2022-10-15 22:36:32 +02:00
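The truncation issue in miniature:

```
package main

import (
	"fmt"
	"time"
)

func main() {
	last := time.Now()
	time.Sleep(10 * time.Millisecond)

	// Whole seconds truncate: both instants often land in the same
	// second, so the measured gap is 0 -- or 1 when a second boundary
	// happens to fall in between, which makes a one-second test flaky.
	fmt.Println(time.Now().Unix() - last.Unix())

	// Nanoseconds keep the full resolution of the measurement.
	fmt.Println(time.Now().UnixNano() - last.UnixNano())
}
```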
Michael Eischer
e10420553b speed-up integration tests by reducing the RetryBackend timeout
On my machine this decreases the runtime for `./cmd/restic` from 9.5s to
6.5s.
2022-10-15 22:29:58 +02:00
Michael Eischer
367f35db27 Merge pull request #3968 from MichaelEischer/cleanup-complete-blob
backup: Remove unused filename parameter from CompleteBlob callback
2022-10-15 16:11:16 +02:00
Michael Eischer
8d62a7adb4 identify keys by ID and not name 2022-10-15 16:07:43 +02:00
Michael Eischer
02634dce7a restic: change Find to return ids
That way consumers no longer have to manually convert the returned name
to an id.
2022-10-15 16:06:54 +02:00
Michael Eischer
964977677f backup: Remove unused filename parameter from CompleteBlob callback 2022-10-15 15:21:17 +02:00
Michael Eischer
258b487d8f Merge pull request #3951 from MichaelEischer/rework-snapshot-filter
Rework snapshot filtering
2022-10-15 14:47:47 +02:00
Michael Eischer
de9bc031df add changelog for ls handling of missing snapshots 2022-10-15 13:34:50 +02:00
Michael Eischer
246d3032ae restic: Don't list snapshots if FindSnapshot gets full id 2022-10-15 13:34:34 +02:00
Michael Eischer
d8c00b9726 add comment 2022-10-15 13:34:21 +02:00
Michael Eischer
a3113c6097 restic: Change FindSnapshot functions to return the snapshot 2022-10-15 13:34:04 +02:00
Michael Eischer
b50f48594d restic: cleanup arguments of findLatestSnapshot 2022-10-15 13:33:48 +02:00
Michael Eischer
61e827ae4f restic: hide findLatestSnapshot 2022-10-15 13:33:32 +02:00
Michael Eischer
fcad5e6f5d backup: use unified FindFilteredSnapshot 2022-10-15 13:33:29 +02:00
Michael Eischer
0aa73bbd39 ls: proper error handling for non-existent snapshot
Use restic.FindFilteredSnapshot to resolve the snapshot ID. This ensures
consistent behavior for all commands using initSingleSnapshotFilterOptions.
2022-10-15 13:32:00 +02:00
Michael Eischer
a81f0432e9 restic: Add unified method to resolve a single snapshot 2022-10-15 13:31:45 +02:00
Michael Eischer
95a1bb4261 restic: Rework error handling of FindFilteredSnapshots and handle snapshotIDs
FindFilteredSnapshots no longer prints errors during snapshot loading on
stderr, but instead passes the error to the callback to allow the caller
to decide on what to do.

In addition, it moves the logic to handle an explicit snapshot list from
the main package to restic.
2022-10-15 13:31:26 +02:00
Michael Eischer
cff22a5f01 dump: use correct help text for filter options 2022-10-15 13:31:10 +02:00
Michael Eischer
7a6dcb4831 Merge pull request #3966 from MichaelEischer/cleanup-walker-test
walker: Convert tests to use TreeJSONBuilder
2022-10-15 11:25:11 +02:00
Michael Eischer
7cf042118f walker: Convert tests to use TreeJSONBuilder
The old code marshalled the tree blobs differently than other places in
restic: the hashed tree blob did not contain a final newline character.
2022-10-15 11:04:13 +02:00
Michael Eischer
cea7191995 Merge pull request #3959 from MichaelEischer/buffered-backup-progress
backup: Use buffered channels to collect backup status
2022-10-15 10:57:19 +02:00
Michael Eischer
ba688aad20 Merge pull request #3961 from greatroar/cleanup
Misc. cleanup
2022-10-14 21:49:35 +02:00
Michael Eischer
9c290a8093 Merge pull request #3960 from greatroar/errors
errors: Drop WithMessage
2022-10-14 21:41:28 +02:00
greatroar
0e155fd9a6 internal/restic: Fix UID/GID parsing
The helper function uidGidInt used strconv.ParseInt instead of
ParseUint, so it silently ignored some invalid user/group IDs.

Also, improve the error message. "Invalid UID" is more informative than
having "ParseInt" twice (*strconv.NumError displays the function name).

Finally, the user.User struct can be passed by pointer to reduce
code size.
2022-10-14 18:21:00 +02:00
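A small sketch of the difference (the helper name is hypothetical): strconv.ParseInt accepts negative IDs such as "-1", which ParseUint rejects, and wrapping the error gives a clearer message than the bare *strconv.NumError.

	package main

	import (
		"fmt"
		"strconv"
	)

	// parseID is a hypothetical helper; user and group IDs are unsigned,
	// so ParseUint rejects inputs like "-1" that ParseInt would accept.
	func parseID(s string) (uint32, error) {
		id, err := strconv.ParseUint(s, 10, 32)
		if err != nil {
			return 0, fmt.Errorf("invalid UID/GID %q: %w", s, err)
		}
		return uint32(id), nil
	}

	func main() {
		for _, s := range []string{"1000", "-1"} {
			id, err := parseID(s)
			fmt.Println(id, err)
		}
	}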
greatroar
e0b743c64d internal/restic: Remove unused ID.EqualString 2022-10-14 18:20:11 +02:00
greatroar
6922360179 ui/backup: Remove unused ProgressReporter type, Progress field 2022-10-14 14:36:19 +02:00
greatroar
d4aadfa389 all: Drop ctxhttp
This package is no longer needed, since we can use the stdlib's
http.NewRequestWithContext.

backend/rclone already did so, but it needed a different error check due
to a difference between net/http and ctxhttp.

Also, store the http.Client by value in the REST backend (changed to a
pointer when ctxhttp was introduced) and use errors.WithStack instead
of errors.Wrap where the message was no longer accurate. Errors from
http.NewRequestWithContext will start with "net/http" or "net/url", so
they're easy to identify.
2022-10-14 14:33:49 +02:00
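A minimal sketch of the stdlib replacement: since Go 1.13, http.NewRequestWithContext attaches a context directly to the request, which is what made the ctxhttp package unnecessary.

	package main

	import (
		"context"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// The context travels with the request; no ctxhttp wrapper needed.
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com", nil)
		if err != nil {
			panic(err)
		}

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Println(len(body), "bytes")
	}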
greatroar
16849d5361 internal/archiver: Missing argument to errors.Errorf 2022-10-14 14:18:52 +02:00
greatroar
09c14f33c8 internal/checker: Pass Error.Error pointer receiver 2022-10-14 14:13:32 +02:00
greatroar
feb790f497 internal/restic: Use errors.New when no formatting is needed 2022-10-14 14:07:20 +02:00
greatroar
ba44666704 errors: Drop WithMessage 2022-10-14 14:06:47 +02:00
Michael Eischer
1a6160d152 Merge pull request #3880 from MichaelEischer/archiver-savedir-cleanup
archiver: Improve handling of "file xxx already present" error
2022-10-08 21:48:14 +02:00
Michael Eischer
21b1d7a880 Merge pull request #3948 from MichaelEischer/split-index
repository: split index into a separate package
2022-10-08 21:41:57 +02:00
Michael Eischer
5278ab51c8 archiver: Check that duplicates are only ignored if identical 2022-10-08 21:38:36 +02:00
Michael Eischer
403b01b788 backup: Only return a warning for duplicate directory entries
The backup command failed if a directory contained duplicate entries.
Downgrade the severity of this problem from fatal error to a warning.
This allows users to still create a backup.
2022-10-08 21:38:21 +02:00
Michael Eischer
d7d7b4ab27 archiver: refactor TreeSaverTest 2022-10-08 21:29:32 +02:00
Michael Eischer
8e38c43c27 archiver: let FutureNode.Take return an error if no data is available
This ensures that we cannot accidentally store an invalid node.
2022-10-08 21:28:39 +02:00
Michael Eischer
2b88cd6eab archiver: Restructure SaveTree to work like SaveDir
SaveTree did not use the TreeSaver but rather managed the tree
collection and upload itself. This prevents using the parallelism
offered by the TreeSaver and duplicates all related code. Using the
TreeSaver can provide some speed-ups as all steps within the backup tree
now rely on FutureNodes. This can be especially relevant for backups
with large amounts of explicitly specified files.

The main difference between SaveTree and SaveDir is that only the
former can save tree blobs in which nodes have a different name than the
actual file on disk. This is the result of resolving name conflicts
between multiple files with the same name. The filename that must be
used within the snapshot is now passed directly to
restic.NodeFromFileInfo. This ensures that a FutureNode already contains
the correct filename.
2022-10-08 21:28:39 +02:00
Michael Eischer
2e3f1c08c5 repository: split index into a separate package 2022-10-08 21:15:34 +02:00
Michael Eischer
5760ba6989 Merge pull request #3949 from MichaelEischer/simplify-mixedpacks
repository: remove IsMixedPack and add replacement for checker
2022-10-08 21:14:14 +02:00
Michael Eischer
5ee25e669a Merge pull request #3940 from MichaelEischer/better-rclone-error
Better error message if connection to rclone fails
2022-10-08 21:14:00 +02:00
Michael Eischer
5600f11696 rclone: Fix stderr handling if command exits unexpectedly
According to the documentation of exec.Cmd, Wait() must not be called
before completing all reads from the pipe returned by StderrPipe(). Thus
return a context that is canceled once rclone has exited and use that as
a precondition to calling Wait(). This should ensure that all errors
printed to stderr have been copied first.
2022-10-08 20:16:06 +02:00
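A hedged sketch of the ordering rule (not restic's actual rclone code): finish reading the pipe returned by StderrPipe before calling Wait, otherwise output may be lost.

	package main

	import (
		"fmt"
		"io"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sh", "-c", "echo oops >&2; exit 1")

		stderr, err := cmd.StderrPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}

		// Per the exec.Cmd documentation, complete all reads from the
		// pipe before calling Wait, which closes it.
		msg, _ := io.ReadAll(stderr)

		err = cmd.Wait()
		fmt.Printf("stderr: %q, exit: %v\n", msg, err)
	}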
Michael Eischer
b8acad4da0 rclone: return rclone error instead of canceled context
When rclone fails during the connection setup this currently often
results in a context canceled error. Replace this error with the exit
code from rclone.
2022-10-08 20:15:24 +02:00
Michael Eischer
d3ebec8f21 backup: Use buffered channels to collect backup status
When backing up many small files, the unbuffered channels frequently
cause the FileSaver to block when reporting progress information. Thus,
add buffers to these channels to avoid unnecessary scheduling.

As the status information is purely informational, it doesn't matter
that the status reporting shutdown is somewhat racy and could miss a few
final updates.
2022-10-08 18:20:41 +02:00
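A compact illustration of the difference (types hypothetical): with an unbuffered channel every progress report is a synchronous handoff, while a small buffer lets the producer keep working without waiting for the consumer to be scheduled.

	package main

	import (
		"fmt"
		"sync"
	)

	type progress struct{ file string }

	func main() {
		// The buffer decouples producer and consumer; unbuffered, every
		// send would block until the receiver goroutine is scheduled.
		ch := make(chan progress, 100)

		var wg sync.WaitGroup
		wg.Add(1)
		go func() {
			defer wg.Done()
			count := 0
			for range ch {
				count++
			}
			fmt.Println("received", count, "updates")
		}()

		for i := 0; i < 1000; i++ {
			ch <- progress{file: fmt.Sprintf("file-%d", i)}
		}
		close(ch)
		wg.Wait()
	}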
Michael Eischer
f9d4e0c2af Merge pull request #3958 from greatroar/errors
errors: Drop Cause in favor of Go 1.13 error handling
2022-10-08 18:06:35 +02:00
Michael Eischer
119e6aee01 Merge pull request #3957 from greatroar/typo
cmd: Typo in --read-concurrency description
2022-10-08 14:41:35 +02:00
greatroar
07e5c38361 errors: Drop Cause in favor of Go 1.13 error handling
The only use cases in the code were in errors.IsFatal, backend/b2,
which needs a workaround, and backend.ParseLayout. The last of these
requires all backends to implement error unwrapping in IsNotExist.
All backends except gs already did that.
2022-10-08 13:08:08 +02:00
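A short sketch of the Go 1.13 idiom that replaces a Cause helper: errors.Is and errors.As walk the wrapped-error chain as long as each layer wraps with %w or implements Unwrap.

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)

	func main() {
		_, err := os.Open("/does/not/exist")

		// Wrapping with %w keeps the chain intact for unwrapping.
		wrapped := fmt.Errorf("open repository: %w", err)

		// errors.Is unwraps step by step; no Cause() helper required.
		fmt.Println(errors.Is(wrapped, fs.ErrNotExist)) // true

		var pathErr *fs.PathError
		if errors.As(wrapped, &pathErr) {
			fmt.Println("failing path:", pathErr.Path)
		}
	}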
greatroar
4eae4d3e1a cmd: Typo in --read-concurrency description 2022-10-08 11:27:39 +02:00
Michael Eischer
83cb58b4f3 Merge pull request #3956 from MichaelEischer/fix-lock-refresh
lock: Use the correct duration to check for expired locks
2022-10-07 22:58:10 +02:00
Michael Eischer
7c5d63a794 lock: Use the correct duration to check for expired locks 2022-10-07 22:39:53 +02:00
Michael Eischer
8b7c952f17 Merge pull request #3953 from keachi/typo
Fix typo
2022-10-07 22:18:32 +02:00
Michael Eischer
e43d2d45f7 Merge pull request #3952 from hoelzro/master
Update copy documentation to use --from-repo option
2022-10-07 22:18:06 +02:00
Rob Hoelz
03e9a26018 Update copy documentation to use --from-repo option
Removing the last references to the deprecated --repo2 option
2022-10-07 22:00:12 +02:00
tr
43cc01d63e doc: Fix typo 2022-10-05 21:03:14 +02:00
Michael Eischer
7112a132c3 Merge pull request #3950 from MichaelEischer/misc-cleanups
Cleanups for cmd_debug/repository and remove dead code from restic package
2022-10-03 12:46:32 +02:00
Michael Eischer
4bb5240720 repository: remove unused PrefixLength 2022-10-03 12:15:53 +02:00
Michael Eischer
999fe29976 repository: hide prepareCache 2022-10-03 12:15:53 +02:00
Michael Eischer
9197c63007 debug: use repository.ListPack wrapper 2022-10-03 12:09:08 +02:00
Michael Eischer
ddcf549eba repository: remove IsMixedPack and add replacement for checker
Repositories with mixed packs are probably quite rare by now. When
loading data blobs from a mixed pack file, this will no longer trigger
caching that file. However, usually tree blobs are accessed first such
that this shouldn't make much of a difference.

The checker gets a simpler replacement.
2022-10-03 12:03:59 +02:00
Michael Eischer
a61fbd287a Merge pull request #3569 from MichaelEischer/strict-locking
Strict repository lock handling
2022-10-03 00:44:44 +02:00
Michael Eischer
6d2d297215 pass global context through cobra 2022-10-03 00:19:46 +02:00
Michael Eischer
49126796d0 lock: fix timer expiry monitoring during standby
Monotonic timers are paused during standby. Thus these timers won't fire
after waking up. Fall back to periodic polling to detect too large clock
jumps. See https://github.com/golang/go/issues/35012 for a discussion of
go timers during standby.
2022-10-03 00:19:46 +02:00
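A hedged sketch of the polling fallback (intervals hypothetical and shortened for the demo): instead of trusting a single timer, compare wall-clock timestamps periodically so a clock jump caused by standby is noticed after wakeup.

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		const refreshInterval = 5 * time.Minute // hypothetical value

		// Pretend the last refresh happened long ago, e.g. because the
		// machine was suspended in between.
		lastRefresh := time.Now().Add(-10 * time.Minute)

		// A single time.After(refreshInterval) may effectively pause in
		// standby; periodic polling detects the jump after wakeup.
		ticker := time.NewTicker(100 * time.Millisecond)
		defer ticker.Stop()

		for range ticker.C {
			// Round(0) strips the monotonic reading and forces a
			// wall-clock comparison, which reflects standby time.
			if time.Since(lastRefresh.Round(0)) > refreshInterval {
				fmt.Println("lock not refreshed in time, canceling")
				return
			}
		}
	}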
Michael Eischer
401e432e9d lock: Do not ignore invalid lock files
While searching for lock files from concurrently running restic
instances, restic ignored unreadable lock files. These can either be
in fact invalid or just be temporarily unreadable. As it is not really
possible to differentiate between both cases, just err on the side of
caution and consider the repository as already locked.

The code retries searching for other locks up to three times to smooth
out temporarily unreadable lock files.
2022-10-03 00:19:46 +02:00
Michael Eischer
aeed420e1a add changelog 2022-10-03 00:19:46 +02:00
Michael Eischer
9959190e39 lock: Add integration test
The tests check that the wrapped context is properly canceled whenever
the repository is unlocked or the lock refresh fails.
2022-10-03 00:19:46 +02:00
Michael Eischer
c3538b063a lock: Use repository interface instead of struct 2022-10-03 00:19:46 +02:00
Michael Eischer
d92957dd78 lock: Implement strict lock expiry monitoring
Restic continued e.g. a backup task even when it failed to renew the
lock or failed to do so in time. For example if a backup client enters
standby during the backup this can allow other operations like `prune`
to run in the meantime (after calling `unlock`). After leaving standby
the backup client will continue its backup and upload indexes which
refer to pack files that were removed in the meantime.

This commit introduces a goroutine explicitly monitoring for locks that
are not refreshed in time. To simplify the implementation there's now a
separate goroutine to refresh the lock and monitor for timeouts for each
lock. The monitoring goroutine now causes the backup to fail when the
client has lost its lock in the meantime.

The lock refresh goroutines are bound to the context used to lock the
repository initially. The context returned by `lockRepo` is also
cancelled when any of the goroutines exits. This ensures that the
context is cancelled whenever for any reason the lock is no longer
refreshed.
2022-10-03 00:19:46 +02:00
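A heavily simplified, hypothetical sketch of the pattern: one goroutine refreshes the lock, a second one monitors the refresh timestamps, and both cancel a shared context so the running command stops as soon as the lock can no longer be considered held. Intervals are shortened for the demo; restic's real implementation differs in detail.

	package main

	import (
		"context"
		"fmt"
		"sync/atomic"
		"time"
	)

	// runWithLock derives a context that is canceled as soon as the lock
	// is no longer refreshed in time (hypothetical helper).
	func runWithLock(ctx context.Context, refresh func() error) context.Context {
		ctx, cancel := context.WithCancel(ctx)
		var lastRefresh atomic.Int64
		lastRefresh.Store(time.Now().UnixNano())

		go func() { // refresh goroutine: renews the lock periodically
			ticker := time.NewTicker(50 * time.Millisecond)
			defer ticker.Stop()
			for {
				select {
				case <-ctx.Done():
					return
				case <-ticker.C:
					if err := refresh(); err != nil {
						cancel() // refresh failed: give up
						return
					}
					lastRefresh.Store(time.Now().UnixNano())
				}
			}
		}()

		go func() { // monitor goroutine: watches for missed refreshes
			ticker := time.NewTicker(20 * time.Millisecond)
			defer ticker.Stop()
			for {
				select {
				case <-ctx.Done():
					return
				case <-ticker.C:
					if time.Now().UnixNano()-lastRefresh.Load() > int64(200*time.Millisecond) {
						cancel() // lock expired: stop the command
						return
					}
				}
			}
		}()
		return ctx
	}

	func main() {
		var fail atomic.Bool
		ctx := runWithLock(context.Background(), func() error {
			if fail.Load() {
				return fmt.Errorf("lock refresh failed")
			}
			return nil
		})
		time.Sleep(100 * time.Millisecond)
		fail.Store(true) // simulate losing the lock
		<-ctx.Done()
		fmt.Println("operation canceled:", ctx.Err())
	}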
Michael Eischer
928914f821 Prepare for context bound to lock lifetime 2022-10-03 00:19:46 +02:00
Michael Eischer
985722b102 Remove ctx from globalOptions
Previously the global context was either accessed via gopts.ctx, stored
in a local variable and then used within that function, or sometimes
both. This made it very hard to follow which ctx, or which wrapped
version of it, reaches which method.

Thus just drop the context from the globalOptions struct and pass it
explicitly to every command line handler method.
2022-10-03 00:19:46 +02:00
Michael Eischer
ab819b2344 key: Cleanup method signatures 2022-10-03 00:19:46 +02:00
Michael Eischer
d0668b695d Remove unnecessary context.WithCancel calls
The gopts.ctx is cancelled when the main() method of restic exits.
2022-10-03 00:19:46 +02:00
Michael Eischer
7ce4cb7908 Merge pull request #3947 from MichaelEischer/fix-cache-verify-test
cache: Fix file descriptor leak in TestBackendRemoveBroken
2022-10-03 00:19:26 +02:00
Michael Eischer
430ab32941 cache: Fix file descriptor leak in TestBackendRemoveBroken 2022-10-03 00:06:44 +02:00
Michael Eischer
e99ad39b34 Merge pull request #2750 from metalsp0rk/min-packsize
Add `backup --file-read-concurrency` flag
2022-10-02 23:11:47 +02:00
Michael Eischer
2e606ca70b backup: rework read concurrency 2022-10-02 22:55:14 +02:00
Kyle Brennan
4a501d7118 backup: add option for file read concurrency 2022-10-02 22:51:45 +02:00
Michael Eischer
9ec7eee803 Merge pull request #3521 from MichaelEischer/redownload-broken-files
Redownload files with wrong hash
2022-10-02 22:50:03 +02:00
Michael Eischer
b25d0773b6 Merge pull request #3944 from MichaelEischer/fix-linter-errors
CI: ignore warning about missing package comment
2022-09-27 21:41:55 +02:00
Michael Eischer
5265550ff3 CI: ignore warning about missing package comment 2022-09-27 21:31:37 +02:00
Michael Eischer
e89fc2a29d Merge pull request #3943 from MichaelEischer/find-match-only-valid-ids
ignore filenames which are not IDs when expanding a prefix
2022-09-27 20:56:48 +02:00
Michael Eischer
67e4620cd6 Merge pull request #3938 from restic/errdot
rclone/sftp: Improve handling of ErrDot errors
2022-09-27 20:33:42 +02:00
Michael Eischer
5d3c5b9e50 restic: ignore filenames which are not IDs when expanding a prefix
Some backends generate additional files for each existing file, e.g.

1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef.sha256

For some commands this leads to a "multiple IDs with prefix" error when
trying to reference a snapshot.
2022-09-27 20:30:40 +02:00
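A small sketch of the filtering idea (names hypothetical): only count a file as a prefix match if its name parses as a hex ID of the expected length, so sidecar files such as `<id>.sha256` no longer make a prefix ambiguous.

	package main

	import (
		"encoding/hex"
		"fmt"
		"strings"
	)

	const idLen = 64 // hex characters in a SHA-256 based ID

	// validID reports whether name looks like a repository object ID.
	func validID(name string) bool {
		if len(name) != idLen {
			return false
		}
		_, err := hex.DecodeString(name)
		return err == nil
	}

	func main() {
		files := []string{
			strings.Repeat("12", 32),             // a real ID
			strings.Repeat("12", 32) + ".sha256", // sidecar file
		}

		var matches []string
		for _, f := range files {
			// Only real IDs participate in prefix expansion.
			if validID(f) && strings.HasPrefix(f, "1212") {
				matches = append(matches, f)
			}
		}
		fmt.Println("unambiguous:", len(matches) == 1)
	}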
Leo R. Lundgren
ebe9f2c969 rclone/sftp: Improve handling of ErrDot errors
Restic now yields a more informative error message when exec.ErrDot occurs.
2022-09-25 16:19:03 +02:00
Michael Eischer
d114e483c4 Add changelog for corrupt data downloads 2022-09-25 11:55:09 +02:00
Michael Eischer
34c1a83340 cache: Drop cache entry if it cannot be processed
Failing to process data requested from the cache usually indicates a
problem with the returned data. Assume that the cache entry is somehow
damaged and retry downloading it once.
2022-09-25 11:55:09 +02:00
Michael Eischer
aa3b1925b4 cache: Simplify loadFromCacheOrDelegate 2022-09-25 11:35:35 +02:00
Michael Eischer
5c6b6edefe retry index, lock and snapshot loading on hash mismatch 2022-09-25 11:35:35 +02:00
Michael Eischer
822422ef03 retry key loading on hash mismatch 2022-09-25 11:35:35 +02:00
Michael Eischer
d6575f53ca Merge pull request #3942 from MichaelEischer/split-cross-compile-test
Split cross compile test
2022-09-24 22:27:08 +02:00
Michael Eischer
78d2312ee9 Merge pull request #3854 from MichaelEischer/sparsefiles
restore: Add support for sparse files
2022-09-24 22:04:02 +02:00
Michael Eischer
46b30b9826 split cross compilation into three parts
The cross compilation tasks are currently the slowest part of the CI
runs. Splitting it into three parts should reduce its time to roughly
that of the windows CI run.
2022-09-24 22:00:25 +02:00
Michael Eischer
bd191ec60b update golang-ci to version 1.49 2022-09-24 22:00:08 +02:00
Michael Eischer
519059cca4 update ci actions 2022-09-24 21:59:36 +02:00
Michael Eischer
19afad8a09 restore: support sparse restores also on windows 2022-09-24 21:39:39 +02:00
Michael Eischer
0f89f443c7 update sparse restore changelog 2022-09-24 21:39:39 +02:00
Michael Eischer
c147422ba5 repository: special case SaveBlob for all zero chunks
Sparse files contain large regions containing only zero bytes. Checking
that a blob only contains zeros is possible with over 100GB/s for modern
x86 CPUs. Calculating sha256 hashes is only possible with 500MB/s (or
2GB/s using hardware acceleration). Thus we can speed up the hash
calculation for all zero blobs (which always have length
chunker.MinSize) by checking for zero bytes and then using the
precomputed hash.

The all zeros check is only performed for blobs with the minimal chunk
size, and thus should add no overhead most of the time. For chunks which
are not all zero but have the minimal chunk size, the overhead will be
below 2% based on the above performance numbers.

This allows reading sparse sections of files as fast as the kernel can
return data to us. On my system using BTRFS this resulted in about
4GB/s.
2022-09-24 21:39:39 +02:00
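A hedged sketch of the shortcut (the chunk size constant is a hypothetical stand-in for chunker.MinSize): the hash of the all-zero chunk is computed once, so for candidate blobs only a cheap zero check is needed.

	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	const minChunkSize = 512 * 1024 // hypothetical stand-in for chunker.MinSize

	// Precomputed once: the ID of an all-zero chunk of minimal size.
	var zeroChunkID = sha256.Sum256(make([]byte, minChunkSize))

	func allZero(p []byte) bool {
		for _, b := range p {
			if b != 0 {
				return false
			}
		}
		return true
	}

	func blobID(p []byte) [32]byte {
		// Scanning for zeros is much faster than sha256, so try the
		// cheap test first for chunks of exactly the minimal size.
		if len(p) == minChunkSize && allZero(p) {
			return zeroChunkID
		}
		return sha256.Sum256(p)
	}

	func main() {
		sparse := make([]byte, minChunkSize)
		fmt.Printf("%x\n", blobID(sparse))
	}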
Michael Eischer
34fe1362da restorer: move zeroPrefixLen to restic package 2022-09-24 21:39:39 +02:00
Michael Eischer
a5ebd5de4b restorer: Fix race condition in partialFile.WriteAt
The restorer can issue multiple calls to WriteAt in parallel. This can
result in unexpected orderings of the Truncate and WriteAt calls and
sometimes too short restored files.
2022-09-24 21:39:39 +02:00
Michael Eischer
5b6a77058a Enable sparseness only conditionally
We can either preallocate storage for a file or sparsify it. This
detects a pack file as sparse if it contains an all zero block or
consists of only one block. As the file sparsification is just an
approximation, hide it behind a `--sparse` parameter.
2022-09-24 21:20:00 +02:00
greatroar
3047bf611c Changelog entry for sparse file restoring 2022-09-24 21:18:48 +02:00
greatroar
5d4568d393 Write sparse files in restorer
This writes files by using (*os.File).Truncate, which resolves to the
truncate system call on Unix.

Compared to the naive loop,

	for _, b := range p {
		if b != 0 {
			return false
		}
	}

the optimized allZero is about 10× faster:

name       old time/op    new time/op     delta
AllZero-8    1.09ms ± 1%     0.09ms ± 1%    -92.10%  (p=0.000 n=10+10)

name       old speed      new speed       delta
AllZero-8  3.84GB/s ± 1%  48.59GB/s ± 1%  +1166.51%  (p=0.000 n=10+10)
2022-09-24 21:18:48 +02:00
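A minimal sketch of hole creation via Truncate: growing a file with Truncate instead of writing zero bytes leaves the region unallocated on file systems that support sparse files.

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		f, err := os.CreateTemp("", "sparse-*")
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		defer f.Close()

		// Grow the file to 1 GiB without writing any data. On Unix this
		// resolves to the truncate syscall and the zero region remains a
		// hole, consuming (almost) no disk blocks.
		if err := f.Truncate(1 << 30); err != nil {
			panic(err)
		}

		fi, _ := f.Stat()
		fmt.Println("logical size:", fi.Size())
	}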
Michael Eischer
eb83402d39 Merge pull request #3935 from miles170/master
Only display the message if there were locks to be removed
2022-09-24 20:53:13 +02:00
Michael Eischer
ef58ddd7b1 Merge pull request #3923 from MichaelEischer/fix-flaky-cache-test
cache: fix flaky TestFileSaveConcurrent on windows
2022-09-24 20:52:55 +02:00
Michael Eischer
7fc178aaf4 internal/cache: extend description of cache sharing test failure 2022-09-24 13:07:01 +02:00
Miles Liu
1acbda18f8 Only display the message if there were locks to be removed
`restic unlock` now only shows `successfully removed locks` if there were locks to be removed.
In addition, it also reports the number of the removed lock files.
2022-09-24 19:02:24 +08:00
Michael Eischer
da1a359c8b Merge pull request #3927 from MichaelEischer/faster-index-each
Speed up MasterIndex.Each
2022-09-24 12:35:23 +02:00
Michael Eischer
041a51512a Merge pull request #3780 from jkmw/fix/2578
Remove existing path before restoring a symlink
2022-09-24 12:34:42 +02:00
Michael Eischer
1ebd57247a repository: optimize MasterIndex.Each
Sending data through a channel at very high frequency is extremely
inefficient. Thus use simple callbacks instead of channels.

> name                old time/op  new time/op  delta
> MasterIndexEach-16   6.68s ±24%   0.96s ± 2%  -85.64%  (p=0.008 n=5+5)
2022-09-24 12:21:59 +02:00
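A compact illustration of the change (types hypothetical): iterating with a callback replaces one channel send, one receive and a potential goroutine switch per element with a plain function call, which dominates when elements are tiny and frequent.

	package main

	import "fmt"

	type blob struct{ id int }

	type index struct{ blobs []blob }

	// EachChan streams entries through a channel: each element costs a
	// send, a receive and possibly a scheduler handoff.
	func (idx *index) EachChan() <-chan blob {
		ch := make(chan blob)
		go func() {
			defer close(ch)
			for _, b := range idx.blobs {
				ch <- b
			}
		}()
		return ch
	}

	// Each invokes a callback: just a function call per element.
	func (idx *index) Each(fn func(blob)) {
		for _, b := range idx.blobs {
			fn(b)
		}
	}

	func main() {
		idx := &index{blobs: make([]blob, 1000)}
		n := 0
		idx.Each(func(blob) { n++ })
		fmt.Println("visited", n, "blobs")
	}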
Michael Eischer
825b95e313 repository: add benchmark for MasterIndex.Each 2022-09-24 12:21:59 +02:00
greatroar
1220fe9650 internal/cache: Concurrent use of cache not working on Windows 2022-09-17 19:49:44 +02:00
Jerome Küttner
ef618bdd3f use os.Remove if path already exists on symlink restore 2022-09-14 08:14:31 +02:00
Michael Eischer
b48766d7b8 Merge pull request #3928 from restic/rawtaz-doc-b2-s3
doc: Clarify S3 recommendation for B2 slightly
2022-09-13 20:59:50 +02:00
rawtaz
20f1913ef7 doc: Clarify S3 recommendation for B2 slightly
This gives slightly more background to the recommendation for using restic's S3 backend with Backblaze B2.
2022-09-12 17:48:59 +02:00
rawtaz
d79e61ce5d Merge pull request #3925 from hgraeber/add-powershell-completion
Add powershell completion
2022-09-11 01:04:57 +02:00
Herbert Graeber
988b386e8b Add powershell completion
- Add code for powershell completion available in cobra
- Add documentation for powershell completion
- Add changelog for pr3925
2022-09-11 00:44:12 +02:00
rawtaz
14d09a6081 Merge pull request #3912 from MichaelEischer/cleanup-snapshot-filter-options
Clean up snapshot filter options
2022-09-11 00:18:42 +02:00
Michael Eischer
381da0443a tweak snapshot filter descriptions 2022-09-10 23:50:20 +02:00
Michael Eischer
8b9778d537 Merge pull request #3900 from MichaelEischer/b2-init-timeout
Add timeout for the initial connection to B2
2022-09-10 23:28:59 +02:00
Michael Eischer
17c27400f8 Merge pull request #3921 from MichaelEischer/filter-cleanup-error-handling
filter: deduplicate error handling for pattern validation
2022-09-10 23:24:50 +02:00
Michael Eischer
f76643bd2e Merge pull request #3894 from MichaelEischer/filter-mount-exit-code
Mount should return exit code 0 after pressing Ctrl-C
2022-09-10 23:22:01 +02:00
Michael Eischer
be9ccc186e Merge pull request #3875 from MichaelEischer/fix-fuse-context-cancel
mount: Fix input/output errors for canceled syscalls
2022-09-10 23:20:29 +02:00
Michael Eischer
2363e5c083 Merge pull request #3913 from MichaelEischer/better-migrate-error-message
migrate: Report why a migration cannot be applied
2022-09-09 23:37:25 +02:00
Michael Eischer
8e0ca80547 filter: deduplicate error handling for pattern validation 2022-09-09 23:12:41 +02:00
plumbeo
d66e755ac7 Change uncompressed size calculation to account for the encryption overhead 2022-09-08 10:15:19 +02:00
plumbeo
837b816358 restic stats: print uncompressed size in mode raw-data 2022-09-05 17:38:32 +02:00
Michael Eischer
d6309961c5 deduplicate the snapshot filter cli option setup 2022-09-04 10:27:33 +02:00
Michael Eischer
8b4dd70013 migrate: Report why a migration cannot be applied
Just returning that `Migration upgrade cannot be applied: check failed`
is not too useful when running `migrate upgrade_repo_v2`.
2022-09-03 11:49:31 +02:00
Michael Eischer
7689d6c679 normalize help text for host, tag and path options 2022-09-03 00:06:38 +02:00
Michael Eischer
6c69f08a7b Merge pull request #3905 from DRON-666/haspaths-linear
Reduce quadratic time complexity of `Snapshot.HasPaths`
2022-08-30 20:35:56 +02:00
Michael Eischer
3e70bac56e Merge pull request #3898 from MichaelEischer/fix-copy-hang
don't hang when `copy` uses a single connection
2022-08-30 20:23:39 +02:00
DRON-666
2a630c51c1 Add changelog 2022-08-30 20:22:07 +02:00
DRON-666
d0f1060df7 Fix quadratic time complexity of Snapshot.HasPaths 2022-08-30 04:38:17 +03:00
Michael Eischer
f481ad64c8 Merge pull request #3904 from lbausch/add-newline
Add newline to keep prompt intact
2022-08-29 21:43:18 +02:00
Lorenz Bausch
7ddd803e46 Add newline to keep prompt intact 2022-08-29 17:37:49 +02:00
Michael Eischer
e5b2c4d571 b2: sniff the error that caused init retry loops 2022-08-28 17:46:03 +02:00
Michael Eischer
dc2db2de5e b2: cancel connection setup after a minute
If the connection to B2 fails, the library enters an endless loop.
2022-08-28 14:56:17 +02:00
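A hedged sketch of the guard (the one-minute deadline comes from the commit above; connect and the demo durations are hypothetical): a context with a deadline turns an endless connection-setup loop into a clean failure.

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// connect stands in for a library call that may retry forever when
	// the endpoint is unreachable.
	func connect(ctx context.Context) error {
		select {
		case <-time.After(10 * time.Second): // pretend setup hangs
			return nil
		case <-ctx.Done():
			return ctx.Err()
		}
	}

	func main() {
		// One minute in the real fix; shortened here for the demo.
		ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
		defer cancel()

		if err := connect(ctx); err != nil {
			if errors.Is(err, context.DeadlineExceeded) {
				fmt.Println("connection setup timed out")
				return
			}
			fmt.Println("connect failed:", err)
			return
		}
		fmt.Println("connected")
	}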
Michael Eischer
7682149c9d repository: cleanup copy connection count check 2022-08-28 11:40:56 +02:00
Michael Eischer
b03277ead5 repository: don't hang when copying using a single connection 2022-08-28 11:40:31 +02:00
Michael Eischer
1b233c4e2e Merge pull request #2661 from creativeprojects/issue-1734
"Everything is ok" message after retrying
2022-08-28 11:04:59 +02:00
Fred
4042db5169 Add changelog 2022-08-27 22:36:19 +02:00
Fred
be6baaec12 Add success callback to the backend 2022-08-27 22:27:15 +02:00
Fred
baf58fbaa8 Add unit tests 2022-08-27 22:21:06 +02:00
Fred
d629333efe Add function to notify of success after retrying 2022-08-27 22:21:06 +02:00
Alexander Neumann
c169e37139 Merge pull request #3895 from MichaelEischer/refactor-cat-key
cat: Simplify implementation of 'cat key'
2022-08-27 18:40:46 +02:00
Michael Eischer
1b4af0c6e5 cat: Simplify implementation of 'cat key' 2022-08-26 23:21:51 +02:00
Michael Eischer
3174641ca4 add changelog for mount exit code filtering 2022-08-26 23:17:04 +02:00
Michael Eischer
5478ab22c5 mount: return exit code 0 after receiving a SIGINT 2022-08-26 23:07:07 +02:00
Michael Eischer
d768c1c3e4 Allow cleanup handlers to filter the exit code 2022-08-26 23:04:59 +02:00
Michael Eischer
908f7441fe Merge pull request #3885 from MichaelEischer/delete-fixes
Improve reliability of upload retries and B2 file deletions
2022-08-26 22:30:50 +02:00
Michael Eischer
4c90d91d4d backend: Test that failed uploads are not removed for backends with atomic replace 2022-08-26 21:20:52 +02:00
Michael Eischer
694dfa026a add changelog for reliable B2 deletes 2022-08-26 21:20:46 +02:00
MichaelEischer
582167d671 Merge pull request #3882 from MichaelEischer/sftp-init-single-connection
sftp: Only connect once to server during `init`
2022-08-26 21:13:28 +02:00
MichaelEischer
3822ded0b3 Merge pull request #3877 from MichaelEischer/no-env-in-help
Do not include the actual values of environment variables in help output
2022-08-26 20:59:54 +02:00
Michael Eischer
cf0a8d7758 sftp: Only connect once for repository creation
This is especially useful if ssh asks for a password or if closing the
initial connection could return an error due to a problematic server
implementation.
2022-08-26 20:50:40 +02:00
Michael Eischer
dd7cd5b9b3 fuse: remove unused context parameter 2022-08-26 20:48:48 +02:00
Michael Eischer
a0c1ae9f90 mount: Correctly return context.Canceled for interrupted syscalls
bazil/fuse expects us to return context.Canceled to signal that a
syscall was successfully interrupted. Returning a wrapped version of
that error however causes the fuse library to signal an EIO (input/output
error). Thus unwrap context.Canceled errors before returning them.
2022-08-26 20:48:48 +02:00
Michael Eischer
5d0649faaf Update help output in docs 2022-08-26 20:44:01 +02:00
Michael Eischer
faa4597af1 Set name for option values of cli 2022-08-26 20:42:34 +02:00
Michael Eischer
6ed157aee6 Do not include the actual values of environment variables in help output
This results in printing a `(default: $ENV) (default: value)` suffix for
the corresponding options which looks strange. In addition, some of the
environment variables might contain secrets which should not be
displayed.
2022-08-26 20:39:54 +02:00
MichaelEischer
f7808245aa Merge pull request #3878 from MichaelEischer/cheaper-cache-load
cache: Just try to open cache entry without calling stat first
2022-08-26 20:33:36 +02:00
MichaelEischer
bee15dd555 Merge pull request #3879 from MichaelEischer/mem-optimize
Some random (minor) memory-allocation optimizations
2022-08-26 20:33:02 +02:00
MichaelEischer
0e1d082b12 Merge pull request #3886 from MichaelEischer/recommend-s3-over-b2
doc: recommend usage of B2's S3 API
2022-08-26 20:29:05 +02:00
Alexander Neumann
d464543171 Update repo version table 2022-08-25 21:30:25 +02:00
Alexander Neumann
6b40456db7 Set development version for 0.14.0 2022-08-25 19:55:05 +02:00
Michael Eischer
c586a5e20f doc: recommend usage of B2's S3 API 2022-08-21 11:39:03 +02:00
Michael Eischer
623556bab6 b2: Increase list size to maximum
Just request as many files as possible in one call to reduce the number
of network roundtrips.
2022-08-21 11:20:03 +02:00
Michael Eischer
de0162ea76 backend/retry: Overwrite failed uploads instead of deleting them
For backends which are able to atomically replace files, we can just
overwrite the old copy if it is necessary to retry an upload. This has
the benefit of issuing one operation less and might be beneficial if a
backend storage, due to bugs or similar, could mix up the order of the
upload and delete calls.
2022-08-21 11:14:53 +02:00
Michael Eischer
fc506f8538 b2: Repeat deleting until all file versions are removed
When hard deleting the latest file version on B2, this uncovers earlier
versions. If an upload required retries, multiple versions might exist
for a file. Thus to reliably delete a file, we have to remove all
versions of it.
2022-08-21 11:11:00 +02:00
Michael Eischer
7a992fc794 repository: Reduce buffer reallocations in ForAllIndexes
Previously the buffer was grown incrementally inside `repo.LoadUnpacked`.
But we can do better as we already know how large the index will be.
Allocate a bit more memory to increase the chance that the buffer can be
reused in the future.
2022-08-19 21:13:40 +02:00
Michael Eischer
77b1980d8e repository: MasterIndex.Packs: reduce allocations 2022-08-19 21:10:43 +02:00
Michael Eischer
6ff9517e45 repository: MasterIndex.ListPacks / Index.EachByPack allow earlier GC
Allow earlier garbage collection of some of the intermediate data
structures.
2022-08-19 21:06:33 +02:00
Michael Eischer
ce902aac67 cache: Just try to open cache entry without calling stat first
Instead of first checking whether a file is in the repository cache and
then opening it, we just can open the file. This saves one stat call. If
the file is in the cache, everything is fine and otherwise the code
follows its normal fallback path.
2022-08-19 20:59:06 +02:00
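A small sketch of the open-first pattern (path and helper hypothetical): instead of stat-then-open, which costs two syscalls and leaves a race window, open directly and branch on the error.

	package main

	import (
		"errors"
		"fmt"
		"io"
		"io/fs"
		"os"
	)

	func loadCached(path string) ([]byte, error) {
		// Open directly instead of calling Stat first: one syscall less
		// when the file exists, and no race if it disappears in between.
		f, err := os.Open(path)
		if errors.Is(err, fs.ErrNotExist) {
			return nil, fmt.Errorf("cache miss, falling back to backend: %w", err)
		}
		if err != nil {
			return nil, err
		}
		defer f.Close()
		return io.ReadAll(f)
	}

	func main() {
		_, err := loadCached("/tmp/restic-cache/does-not-exist")
		fmt.Println(err)
	}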
Jerome Küttner
6f3883c9d2 add changelog 2022-07-05 08:47:48 +02:00
Jerome Küttner
9adaa6e240 Delete existing path before restoring a symlink 2022-06-01 17:26:25 +02:00
339 changed files with 8388 additions and 6261 deletions

.github/dependabot.yml vendored Normal file
View File

@@ -0,0 +1,13 @@
version: 2
updates:
# Dependencies listed in go.mod
- package-ecosystem: "gomod"
directory: "/" # Location of package manifests
schedule:
interval: "monthly"
# Dependencies listed in .github/workflows/*.yml
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "monthly"

View File

@@ -21,13 +21,11 @@ jobs:
- job_name: Windows
go: 1.19.x
os: windows-latest
install_verb: install
- job_name: macOS
go: 1.19.x
os: macOS-latest
test_fuse: false
install_verb: install
- job_name: Linux
go: 1.19.x
@@ -35,31 +33,17 @@ jobs:
test_cloud_backends: true
test_fuse: true
check_changelog: true
install_verb: install
- job_name: Linux (race)
go: 1.19.x
os: ubuntu-latest
test_fuse: true
test_opts: "-race"
- job_name: Linux
go: 1.18.x
os: ubuntu-latest
test_fuse: true
install_verb: install
- job_name: Linux
go: 1.17.x
os: ubuntu-latest
test_fuse: true
install_verb: install
- job_name: Linux
go: 1.16.x
os: ubuntu-latest
test_fuse: true
install_verb: get
- job_name: Linux
go: 1.15.x
os: ubuntu-latest
test_fuse: true
install_verb: get
name: ${{ matrix.job_name }} Go ${{ matrix.go }}
runs-on: ${{ matrix.os }}
@@ -69,14 +53,14 @@ jobs:
steps:
- name: Set up Go ${{ matrix.go }}
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go }}
- name: Get programs (Linux/macOS)
run: |
echo "build Go tools"
go ${{ matrix.install_verb }} github.com/restic/rest-server/cmd/rest-server@latest
go install github.com/restic/rest-server/cmd/rest-server@latest
echo "install minio server"
mkdir $HOME/bin
@@ -98,7 +82,7 @@ jobs:
chmod 755 $HOME/bin/rclone
rm -rf rclone*
# add $HOME/bin to path ($GOBIN was already added to the path by setup-go@v2)
# add $HOME/bin to path ($GOBIN was already added to the path by setup-go@v3)
echo $HOME/bin >> $GITHUB_PATH
if: matrix.os == 'ubuntu-latest' || matrix.os == 'macOS-latest'
@@ -108,7 +92,7 @@ jobs:
$ProgressPreference = 'SilentlyContinue'
echo "build Go tools"
go ${{ matrix.install_verb }} github.com/restic/rest-server/...
go install github.com/restic/rest-server/...
echo "install minio server"
mkdir $Env:USERPROFILE/bin
@@ -120,7 +104,7 @@ jobs:
unzip rclone.zip
copy rclone*/rclone.exe $Env:USERPROFILE/bin
# add $USERPROFILE/bin to path ($GOBIN was already added to the path by setup-go@v2)
# add $USERPROFILE/bin to path ($GOBIN was already added to the path by setup-go@v3)
echo $Env:USERPROFILE\bin >> $Env:GITHUB_PATH
echo "install tar"
@@ -142,7 +126,7 @@ jobs:
if: matrix.os == 'windows-latest'
- name: Check out code
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Build with build.go
run: |
@@ -152,7 +136,7 @@ jobs:
env:
RESTIC_TEST_FUSE: ${{ matrix.test_fuse }}
run: |
go test -cover ./...
go test -cover ${{matrix.test_opts}} ./...
- name: Test cloud backends
env:
@@ -193,7 +177,9 @@ jobs:
# only run cloud backend tests for pull requests from and pushes to our
# own repo, otherwise the secrets are not available
if: (github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository) && matrix.test_cloud_backends
# Skip for Dependabot pull requests as these are run without secrets
# https://docs.github.com/en/code-security/dependabot/working-with-dependabot/automating-dependabot-with-github-actions#responding-to-events
if: (github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository) && (github.actor != 'dependabot[bot]') && matrix.test_cloud_backends
- name: Check changelog files with calens
run: |
@@ -209,15 +195,16 @@ jobs:
# ATTENTION: the list of architectures must be in sync with helpers/build-release-binaries/main.go!
matrix:
# run cross-compile in two batches parallel so the overall tests run faster
# run cross-compile in three batches parallel so the overall tests run faster
targets:
- "linux/386 linux/amd64 linux/arm linux/arm64 linux/ppc64le linux/mips linux/mipsle linux/mips64 linux/mips64le linux/s390x \
openbsd/386 openbsd/amd64"
- "linux/386 linux/amd64 linux/arm linux/arm64 linux/ppc64le linux/mips linux/mipsle linux/mips64 linux/mips64le linux/s390x"
- "freebsd/386 freebsd/amd64 freebsd/arm \
- "openbsd/386 openbsd/amd64 \
freebsd/386 freebsd/amd64 freebsd/arm \
aix/ppc64 \
darwin/amd64 darwin/arm64 \
netbsd/386 netbsd/amd64 \
darwin/amd64 darwin/arm64"
- "netbsd/386 netbsd/amd64 \
windows/386 windows/amd64 \
solaris/amd64"
@@ -230,7 +217,7 @@ jobs:
steps:
- name: Set up Go ${{ env.latest_go }}
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: ${{ env.latest_go }}
@@ -239,7 +226,7 @@ jobs:
go install github.com/mitchellh/gox@latest
- name: Check out code
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Cross-compile with gox for ${{ matrix.targets }}
env:
@@ -255,22 +242,21 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up Go ${{ env.latest_go }}
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: ${{ env.latest_go }}
- name: Check out code
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: golangci-lint
uses: golangci/golangci-lint-action@v2
uses: golangci/golangci-lint-action@v3
with:
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
version: v1.48
version: v1.49
# Optional: show only new issues if it's a pull request. The default value is `false`.
only-new-issues: true
args: --verbose --timeout 5m
skip-go-installation: true
# only run golangci-lint for pull requests, otherwise ALL hints get
# reported. We need to slowly address all issues until we can enable
@@ -288,11 +274,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
uses: docker/metadata-action@v4
with:
# list of Docker images to use as base name for tags
images: |
@@ -308,14 +294,14 @@ jobs:
type=sha
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
uses: docker/setup-buildx-action@v2
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
uses: docker/build-push-action@v3
with:
push: false
context: .

View File

@@ -55,3 +55,5 @@ issues:
- exported (function|method|var|type|const) .* should have comment or be unexported
# revive: ignore constants in all caps
- don't use ALL_CAPS in Go names; use CamelCase
# revive: lots of packages don't have such a comment
- "package-comments: should have a package comment"

File diff suppressed because it is too large.

View File

@@ -76,7 +76,7 @@ Then use the `go` tool to build restic:
$ go build ./cmd/restic
$ ./restic version
restic 0.10.0-dev (compiled manually) compiled with go1.15.2 on linux/amd64
restic 0.14.0-dev (compiled manually) compiled with go1.19 on linux/amd64
You can run all tests with the following command:

View File

@@ -1 +1 @@
0.14.0
0.15.0

View File

@@ -3,8 +3,8 @@
// This program aims to make building Go programs for end users easier by just
// calling it with `go run`, without having to setup a GOPATH.
//
// This program needs Go >= 1.12. It'll use Go modules for compilation. It
// builds the package configured as Main in the Config struct.
// This program checks for a minimum Go version. It will use Go modules for
// compilation. It builds the package configured as Main in the Config struct.
// BSD 2-Clause License
//
@@ -43,7 +43,6 @@ package main
import (
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
@@ -59,7 +58,7 @@ var config = Config{
Main: "./cmd/restic", // package name for the main package
DefaultBuildTags: []string{"selfupdate"}, // specify build tags which are always used
Tests: []string{"./..."}, // tests to run
MinVersion: GoVersion{Major: 1, Minor: 14, Patch: 0}, // minimum Go version supported
MinVersion: GoVersion{Major: 1, Minor: 18, Patch: 0}, // minimum Go version supported
}
// Config configures the build.
@@ -179,7 +178,7 @@ func test(cwd string, env map[string]string, args ...string) error {
// getVersion returns the version string from the file VERSION in the current
// directory.
func getVersionFromFile() string {
buf, err := ioutil.ReadFile("VERSION")
buf, err := os.ReadFile("VERSION")
if err != nil {
verbosePrintf("error reading file VERSION: %v\n", err)
return ""
@@ -319,12 +318,8 @@ func (v GoVersion) String() string {
}
func main() {
if !goVersion.AtLeast(GoVersion{1, 12, 0}) {
die("Go version (%v) is too old, restic requires Go >= 1.12\n", goVersion)
}
if !goVersion.AtLeast(config.MinVersion) {
fmt.Fprintf(os.Stderr, "%s detected, this program requires at least %s\n", goVersion, config.MinVersion)
fmt.Fprintf(os.Stderr, "Detected version %s is too old, restic requires at least %s\n", goVersion, config.MinVersion)
os.Exit(1)
}

View File

@@ -0,0 +1,8 @@
Enhancement: Implement `rewrite` command
Restic now has a `rewrite` command which allows rewriting existing snapshots
to remove unwanted files.
https://github.com/restic/restic/issues/14
https://github.com/restic/restic/pull/2731
https://github.com/restic/restic/pull/4079

View File

@@ -0,0 +1,15 @@
Enhancement: Inform about successful retries after errors
When a recoverable error is encountered, restic shows a warning message saying
that it's retrying, e.g.:
`Save(<data/956b9ced99>) returned error, retrying after 357.131936ms: ...`
This message can be confusing in that it never clearly states whether the retry
is successful or not. This has now been fixed such that restic follows up with
a message confirming a successful retry, e.g.:
`Save(<data/956b9ced99>) operation successful after 1 retries`
https://github.com/restic/restic/issues/1734
https://github.com/restic/restic/pull/2661

View File

@@ -0,0 +1,12 @@
Enhancement: Improve handling of directories with duplicate entries
If for some reason a directory contains a duplicate entry, the `backup` command
would previously fail with a `node "path/to/file" already present` or `nodes
are not ordered got "path/to/file", last "path/to/file"` error.
The error handling has been improved to only report a warning in this case. Make
sure to check that the filesystem in question is not damaged if you see this!
https://github.com/restic/restic/issues/1866
https://github.com/restic/restic/issues/3937
https://github.com/restic/restic/pull/3880

View File

@@ -0,0 +1,10 @@
Bugfix: Make `mount` return exit code 0 after receiving Ctrl-C / SIGINT
To stop the `mount` command, a user has to press Ctrl-C or send a SIGINT
signal to restic. This used to cause restic to exit with a non-zero exit code.
The exit code has now been changed to zero as the above is the expected way
to stop the `mount` command and should therefore be considered successful.
https://github.com/restic/restic/issues/2015
https://github.com/restic/restic/pull/3894

View File

@@ -0,0 +1,19 @@
Enhancement: Support B2 API keys restricted to hiding but not deleting files
When the B2 backend does not have the necessary permissions to permanently
delete files, it now automatically falls back to hiding files. This allows
using restic with an application key which is not allowed to delete files.
This can prevent an attacker from deleting backups with such an API key.
To use this feature create an application key without the `deleteFiles`
capability. It is recommended to restrict the key to just one bucket.
For example using the `b2` command line tool:
`b2 create-key --bucket <bucketName> <keyName> listBuckets,readFiles,writeFiles,listFiles`
Alternatively, you can use the S3 backend to access B2, as described
in the documentation. In this mode, files are also only hidden instead
of being deleted permanently.
https://github.com/restic/restic/issues/2134
https://github.com/restic/restic/pull/2398

View File

@@ -0,0 +1,11 @@
Enhancement: Make `init` open only one connection for the SFTP backend
The `init` command using the SFTP backend used to connect twice to the
repository. This could be inconvenient if the user must enter a password,
or cause `init` to fail if the server does not correctly close the first SFTP
connection.
This has now been fixed by reusing the first/initial SFTP connection opened.
https://github.com/restic/restic/issues/2152
https://github.com/restic/restic/pull/3882

View File

@@ -0,0 +1,13 @@
Enhancement: Handle cache corruption on disk and in downloads
In rare situations, like for example after a system crash, the data stored
in the cache might be corrupted. This could cause restic to fail and required
manually deleting the cache.
Restic now automatically removes broken data from the cache, allowing it
to recover from such a situation without user intervention. In addition,
restic retries downloads which return corrupt data in order to also handle
temporary download problems.
https://github.com/restic/restic/issues/2533
https://github.com/restic/restic/pull/3521

View File

@@ -0,0 +1,17 @@
Bugfix: Don't read password from stdin for `backup --stdin`
The `backup` command when used with `--stdin` previously tried to read first
the password, then the data to be backed up from standard input. This meant
it would often confuse part of the data for the password.
From now on, it will instead exit with the message `Fatal: cannot read both
password and data from stdin` unless the password is passed in some other
way (such as `--restic-password-file`, `RESTIC_PASSWORD`, etc).
To enter the password interactively a password command has to be used. For
example on Linux, `mysqldump somedatabase | restic backup --stdin
--password-command='sh -c "systemd-ask-password < /dev/tty"'` securely reads
the password from the terminal.
https://github.com/restic/restic/issues/2591
https://github.com/restic/restic/pull/4011

View File

@@ -0,0 +1,9 @@
Enhancement: Support restoring symbolic links on Windows
The `restore` command now supports restoring symbolic links on Windows. Because
of Windows specific restrictions this is only possible when running restic with
the `SeCreateSymbolicLinkPrivilege` privilege or as an administrator.
https://github.com/restic/restic/issues/1078
https://github.com/restic/restic/issues/2699
https://github.com/restic/restic/pull/2875

View File

@@ -0,0 +1,20 @@
Enhancement: Stricter repository lock handling
Previously, restic commands kept running even if they failed to refresh their
locks in time. This could be a problem e.g. in case the client system running
a backup entered the standby power mode while the backup was still in progress
(which would prevent the client from refreshing its lock), and after a short
delay another host successfully runs `unlock` and `prune` on the repository,
which would remove all data added by the in-progress backup. If the backup
client later continues its backup, even though its lock had expired in the
meantime, this would lead to an incomplete snapshot.
To address this, lock handling is now much stricter. Commands requiring a lock
are canceled if the lock is not refreshed successfully in time. In addition,
if a lock file is not readable restic will not allow starting a command. It may
be necessary to remove invalid lock files manually or use `unlock --remove-all`.
Please make sure that no other restic processes are running concurrently before
doing this, however.
https://github.com/restic/restic/issues/2715
https://github.com/restic/restic/pull/3569

View File

@@ -0,0 +1,9 @@
Change: Include full snapshot ID in JSON output of `backup`
We have changed the JSON output of the backup command to include the full
snapshot ID instead of just a shortened version, as the latter can be ambiguous
in some rare cases. To derive the short ID, please truncate the full ID down to
eight characters.
https://github.com/restic/restic/issues/2724
https://github.com/restic/restic/pull/3993

View File

@@ -0,0 +1,8 @@
Enhancement: Add support for `credential_process` to S3 backend
Restic now uses a newer library for the S3 backend, which adds support for the
`credential_process` option in the AWS credential configuration.
https://github.com/restic/restic/issues/3029
https://github.com/restic/restic/issues/4034
https://github.com/restic/restic/pull/4025

View File

@@ -0,0 +1,8 @@
Enhancement: Make `mount` command support macOS using macFUSE 4.x
Restic now uses a different FUSE library for mounting snapshots and making them
available as a FUSE filesystem using the `mount` command. This adds support for
macFUSE 4.x which can be used to make this work on recent macOS versions.
https://github.com/restic/restic/issues/3096
https://github.com/restic/restic/pull/4024

View File

@@ -0,0 +1,7 @@
Enhancement: Support JSON output for the `init` command
The `init` command used to ignore the `--json` option, but now outputs a JSON
message if the repository was created successfully.
https://github.com/restic/restic/issues/3124
https://github.com/restic/restic/pull/3132

View File

@@ -0,0 +1,14 @@
Bugfix: Delete files on Backblaze B2 more reliably
Restic used to only delete the latest version of files stored in B2. In most
cases this worked well as there was only a single version of the file. However,
due to retries while uploading it is possible for multiple file versions to be
stored at B2. This could lead to various problems for files that should have
been deleted but still existed.
The implementation has now been changed to delete all versions of files, which
doubles the amount of Class B transactions necessary to delete files, but
assures that no file versions are left behind.
https://github.com/restic/restic/issues/3161
https://github.com/restic/restic/pull/3885

View File

@@ -0,0 +1,12 @@
Bugfix: Make SFTP backend report no space left on device
Backing up to an SFTP backend would spew repeated SSH_FX_FAILURE messages when
the remote disk was full. Restic now reports "sftp: no space left on device"
and exits immediately when it detects this condition.
A fix for this issue was implemented in restic 0.12.1, but unfortunately the
fix itself contained a bug that prevented it from taking effect.
https://github.com/restic/restic/issues/3336
https://github.com/restic/restic/pull/3345
https://github.com/restic/restic/pull/4075

View File

@@ -0,0 +1,10 @@
Bugfix: Improve handling of interrupted syscalls in `mount` command
Accessing restic's FUSE mount could result in "input/output" errors when using
programs in which syscalls can be interrupted. This is for example the case for
Go programs. This has now been fixed by improved error handling of interrupted
syscalls.
https://github.com/restic/restic/issues/3567
https://github.com/restic/restic/issues/3694
https://github.com/restic/restic/pull/3875

View File

@@ -0,0 +1,7 @@
Bugfix: Fix stuck `copy` command when `-o <backend>.connections=1`
When running the `copy` command with `-o <backend>.connections=1` the
command would be infinitely stuck. This has now been fixed.
https://github.com/restic/restic/issues/3897
https://github.com/restic/restic/pull/3898

View File

@@ -0,0 +1,9 @@
Bugfix: Correct prune statistics for partially compressed repositories
In a partially compressed repository, one data blob can exist both in an
uncompressed and a compressed version. This caused the `prune` statistics to
become inaccurate and e.g. report a too high value for the unused size, such
as "unused size after prune: 16777215.991 TiB". This has now been fixed.
https://github.com/restic/restic/issues/3918
https://github.com/restic/restic/pull/3980

View File

@@ -0,0 +1,11 @@
Change: Make `unlock` display message only when locks were actually removed
The `unlock` command used to print the "successfully removed locks" message
whenever it was run, regardless of lock files having been removed or not.
This has now been changed such that it only prints the message if any lock
files were actually removed. In addition, it also reports the number of
removed lock files.
https://github.com/restic/restic/issues/3929
https://github.com/restic/restic/pull/3935

View File

@@ -0,0 +1,15 @@
Enhancement: Improve handling of ErrDot errors in rclone and sftp backends
Since Go 1.19, restic can no longer implicitly run relative executables which
are found in the current directory (e.g. `rclone` if found in `.`). This is a
security feature of Go to prevent against running unintended and possibly
harmful executables.
The error message for this was just "cannot run executable found relative to
current directory". This has now been improved to yield a more specific error
message, informing the user how to explicitly allow running the executable
using the `-o rclone.program` and `-o sftp.command` extended options with `./`.
https://github.com/restic/restic/issues/3932
https://pkg.go.dev/os/exec#hdr-Executables_in_the_current_directory
https://go.dev/blog/path-security

View File

@@ -0,0 +1,8 @@
Bugfix: Make `backup` no longer hang on Solaris when seeing a FIFO file
The `backup` command used to hang on Solaris whenever it encountered a FIFO
file (named pipe), due to a bug in the handling of extended attributes. This
bug has now been fixed.
https://github.com/restic/restic/issues/4003
https://github.com/restic/restic/pull/4053

View File

@@ -0,0 +1,8 @@
Bugfix: Support ExFAT-formatted local backends on macOS Ventura
ExFAT-formatted disks could not be used as local backends starting from macOS
Ventura. Restic commands would fail with an "inappropriate ioctl for device"
error. This has now been fixed.
https://github.com/restic/restic/issues/4016
https://github.com/restic/restic/pull/4021

View File

@@ -0,0 +1,11 @@
Change: Don't print skipped snapshots by default in `copy` command
The `copy` command used to print each snapshot that was skipped because it
already existed in the target repository. The amount of this output could
practically bury the list of snapshots that were actually copied.
From now on, the skipped snapshots are by default not printed at all, but
this can be re-enabled by increasing the verbosity level of the command.
https://github.com/restic/restic/issues/4033
https://github.com/restic/restic/pull/4066

View File

@@ -0,0 +1,10 @@
Bugfix: Make `init` ignore "Access Denied" errors when creating S3 buckets
In restic 0.9.0 through 0.13.0, the `init` command ignored some permission
errors from S3 backends when trying to check for bucket existence, so that
manually created buckets with custom permissions could be used for backups.
This feature became broken in 0.14.0, but has now been restored again.
https://github.com/restic/restic/issues/4085
https://github.com/restic/restic/pull/4086

View File

@@ -0,0 +1,10 @@
Bugfix: Don't generate negative UIDs and GIDs in tar files from `dump`
When using a 32-bit build of restic, the `dump` command could in some cases
create tar files containing negative UIDs and GIDs, which cannot be read by
GNU tar. This corner case especially applies to backups from stdin on Windows.
This is now fixed such that `dump` creates valid tar files in these cases too.
https://github.com/restic/restic/issues/4103
https://github.com/restic/restic/pull/4104

View File

@@ -0,0 +1,17 @@
Enhancement: Restore files with long runs of zeros as sparse files
When using `restore --sparse`, the restorer may now write files containing long
runs of zeros as sparse files (also called files with holes), where the zeros
are not actually written to disk.
How much space is saved by writing sparse files depends on the operating
system, file system and the distribution of zeros in the file.
During backup restic still reads the whole file including sparse regions, but
with optimized processing speed of sparse regions.
https://github.com/restic/restic/issues/79
https://github.com/restic/restic/issues/3903
https://github.com/restic/restic/pull/2601
https://github.com/restic/restic/pull/3854
https://forum.restic.net/t/sparse-file-support/1264

View File

@@ -0,0 +1,7 @@
Enhancement: Make backup file read concurrency configurable
The `backup` command now supports a `--read-concurrency` option which allows
tuning restic for very fast storage like NVMe disks by controlling the number
of concurrent file reads during the backup process.
https://github.com/restic/restic/pull/2750

View File

@@ -0,0 +1,8 @@
Bugfix: Make `restore` replace existing symlinks
When restoring a symlink, restic used to report an error if the target path
already existed. This has now been fixed such that the potentially existing
target path is first removed before the symlink is restored.
https://github.com/restic/restic/issues/2578
https://github.com/restic/restic/pull/3780

View File

@@ -0,0 +1,6 @@
Enhancement: Optimize prune memory usage
The `prune` command needs large amounts of memory in order to determine what to
keep and what to remove. This is now optimized to use up to 30% less memory.
https://github.com/restic/restic/pull/3899

View File

@@ -0,0 +1,6 @@
Enhancement: Improve speed of parent snapshot detection in `backup` command
Backing up a large number of files using `--files-from-verbatim` or `--files-from-raw`
options could require a long time to find the parent snapshot. This has been improved.
https://github.com/restic/restic/pull/3905

View File

@@ -0,0 +1,12 @@
Enhancement: Add compression statistics to the `stats` command
When executed with `--mode raw-data` on a repository that supports compression,
the `stats` command now calculates and displays, for the selected repository or
snapshots: the uncompressed size of the data; the compression progress
(percentage of data that has been compressed); the compression ratio of the
compressed data; the total space saving.
It also takes into account both the compressed and uncompressed data if the
repository is only partially compressed.
https://github.com/restic/restic/pull/3915

View File

@@ -0,0 +1,6 @@
Enhancement: Provide command completion for PowerShell
Restic already provided generation of completion files for bash, fish and zsh.
Now powershell is supported, too.
https://github.com/restic/restic/pull/3925/files

View File

@@ -0,0 +1,10 @@
Enhancement: Allow `backup` file tree scanner to be disabled
The `backup` command walks the file tree in a separate scanner process to find
the total size and file/directory count, and uses this to provide an ETA. This
can slow down backups, especially of network filesystems.
The command now has a new option `--no-scan` which can be used to disable this
scanning in order to speed up backups when needed.
https://github.com/restic/restic/pull/3931

View File

@@ -0,0 +1,9 @@
Enhancement: Ignore additional/unknown files in repository
If a restic repository had additional files in it (not created by restic),
commands like `find` and `restore` could become confused and fail with an
`multiple IDs with prefix "12345678" found` error. These commands now
ignore such additional files.
https://github.com/restic/restic/pull/3943
https://forum.restic.net/t/which-protocol-should-i-choose-for-remote-linux-backups/5446/17

View File

@@ -0,0 +1,7 @@
Bugfix: Make `ls` return exit code 1 if snapshot cannot be loaded
The `ls` command used to show a warning and return exit code 0 when failing
to load a snapshot. This has now been fixed such that it instead returns exit
code 1 (still showing a warning).
https://github.com/restic/restic/pull/3951

View File

@@ -0,0 +1,9 @@
Enhancement: Improve `backup` performance for small files
When backing up small files restic was slower than it could be. In particular
this affected backups using maximum compression.
This has been fixed by reworking the internal parallelism of the backup
command, making it back up small files around two times faster.
https://github.com/restic/restic/pull/3955

View File

@@ -0,0 +1,7 @@
Change: Update dependencies and require Go 1.18 or newer
Most dependencies have been updated. Since some libraries require newer language
features, support for Go 1.15-1.17 has been dropped, which means that restic now
requires at least Go 1.18 to build.
https://github.com/restic/restic/pull/4041
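The requirement surfaces as the module's minimum language version; a sketch of the relevant `go.mod` lines (dependency requirements elided):

module github.com/restic/restic

go 1.18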

View File

@@ -0,0 +1,11 @@
Bugfix: Make `self-update` enabled by default only in release builds
The `self-update` command was previously included in all builds of restic, not
only in official release builds, even if the `selfupdate` tag was not
explicitly enabled when building.
This has now been corrected, and the `self-update` command is only available
if restic was built with `-tags selfupdate` (as done for official release
builds by `build.go`).
https://github.com/restic/restic/pull/4100
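With Go build constraints the guard looks roughly like this (a sketch; restic's actual file layout may differ):

//go:build selfupdate

package main

// This file, and with it the `self-update` command, is only compiled when
// building with `go build -tags selfupdate`, as build.go does for releases.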

View File

@@ -11,7 +11,7 @@ import (
var cleanupHandlers struct {
sync.Mutex
list []func() error
list []func(code int) (int, error)
done bool
ch chan os.Signal
}
@@ -25,7 +25,7 @@ func init() {
// AddCleanupHandler adds the function f to the list of cleanup handlers so
// that it is executed when all the cleanup handlers are run, e.g. when SIGINT
// is received.
func AddCleanupHandler(f func() error) {
func AddCleanupHandler(f func(code int) (int, error)) {
cleanupHandlers.Lock()
defer cleanupHandlers.Unlock()
@@ -36,22 +36,24 @@ func AddCleanupHandler(f func() error) {
}
// RunCleanupHandlers runs all registered cleanup handlers
func RunCleanupHandlers() {
func RunCleanupHandlers(code int) int {
cleanupHandlers.Lock()
defer cleanupHandlers.Unlock()
if cleanupHandlers.done {
return
return code
}
cleanupHandlers.done = true
for _, f := range cleanupHandlers.list {
err := f()
var err error
code, err = f(code)
if err != nil {
Warnf("error in cleanup handler: %v\n", err)
}
}
cleanupHandlers.list = nil
return code
}
// CleanupHandler handles the SIGINT signals.
@@ -75,6 +77,6 @@ func CleanupHandler(c <-chan os.Signal) {
// Exit runs the cleanup handlers and then terminates the process with the
// given exit code.
func Exit(code int) {
RunCleanupHandlers()
code = RunCleanupHandlers(code)
os.Exit(code)
}
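Call sites register handlers with the new signature and may rewrite the exit code; for example, the `check` command's cache cleanup (as seen in the `cmd_check.go` diff later on this page):

AddCleanupHandler(func(code int) (int, error) {
	cleanup()
	return code, nil // handlers may also adjust the exit code
})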

View File

@@ -6,11 +6,11 @@ import (
"context"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"path/filepath"
"runtime"
"strconv"
"strings"
"sync"
"time"
@@ -21,7 +21,6 @@ import (
"github.com/restic/restic/internal/archiver"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/filter"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
@@ -56,8 +55,9 @@ Exit status is 3 if some source data could not be read (incomplete snapshot crea
},
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
var wg sync.WaitGroup
cancelCtx, cancel := context.WithCancel(globalOptions.ctx)
cancelCtx, cancel := context.WithCancel(ctx)
defer func() {
// shutdown termstatus
cancel()
@@ -71,35 +71,35 @@ Exit status is 3 if some source data could not be read (incomplete snapshot crea
term.Run(cancelCtx)
}()
return runBackup(backupOptions, globalOptions, term, args)
return runBackup(ctx, backupOptions, globalOptions, term, args)
},
}
// BackupOptions bundles all options for the backup command.
type BackupOptions struct {
Parent string
Force bool
Excludes []string
InsensitiveExcludes []string
ExcludeFiles []string
InsensitiveExcludeFiles []string
ExcludeOtherFS bool
ExcludeIfPresent []string
ExcludeCaches bool
ExcludeLargerThan string
Stdin bool
StdinFilename string
Tags restic.TagLists
Host string
FilesFrom []string
FilesFromVerbatim []string
FilesFromRaw []string
TimeStamp string
WithAtime bool
IgnoreInode bool
IgnoreCtime bool
UseFsSnapshot bool
DryRun bool
excludePatternOptions
Parent string
Force bool
ExcludeOtherFS bool
ExcludeIfPresent []string
ExcludeCaches bool
ExcludeLargerThan string
Stdin bool
StdinFilename string
Tags restic.TagLists
Host string
FilesFrom []string
FilesFromVerbatim []string
FilesFromRaw []string
TimeStamp string
WithAtime bool
IgnoreInode bool
IgnoreCtime bool
UseFsSnapshot bool
DryRun bool
ReadConcurrency uint
NoScan bool
}
var backupOptions BackupOptions
@@ -113,10 +113,9 @@ func init() {
f := cmdBackup.Flags()
f.StringVar(&backupOptions.Parent, "parent", "", "use this parent `snapshot` (default: last snapshot in the repository that has the same target files/directories, and is not newer than the snapshot time)")
f.BoolVarP(&backupOptions.Force, "force", "f", false, `force re-reading the target files/directories (overrides the "parent" flag)`)
f.StringArrayVarP(&backupOptions.Excludes, "exclude", "e", nil, "exclude a `pattern` (can be specified multiple times)")
f.StringArrayVar(&backupOptions.InsensitiveExcludes, "iexclude", nil, "same as --exclude `pattern` but ignores the casing of filenames")
f.StringArrayVar(&backupOptions.ExcludeFiles, "exclude-file", nil, "read exclude patterns from a `file` (can be specified multiple times)")
f.StringArrayVar(&backupOptions.InsensitiveExcludeFiles, "iexclude-file", nil, "same as --exclude-file but ignores casing of `file`names in patterns")
initExcludePatternOptions(f, &backupOptions.excludePatternOptions)
f.BoolVarP(&backupOptions.ExcludeOtherFS, "one-file-system", "x", false, "exclude other file systems, don't cross filesystem boundaries and subvolumes")
f.StringArrayVar(&backupOptions.ExcludeIfPresent, "exclude-if-present", nil, "takes `filename[:header]`, exclude contents of directories containing filename (except filename itself) if header of that file is as provided (can be specified multiple times)")
f.BoolVar(&backupOptions.ExcludeCaches, "exclude-caches", false, `excludes cache directories that are marked with a CACHEDIR.TAG file. See https://bford.info/cachedir/ for the Cache Directory Tagging Standard`)
@@ -124,7 +123,7 @@ func init() {
f.BoolVar(&backupOptions.Stdin, "stdin", false, "read backup from stdin")
f.StringVar(&backupOptions.StdinFilename, "stdin-filename", "stdin", "`filename` to use when reading from stdin")
f.Var(&backupOptions.Tags, "tag", "add `tags` for the new snapshot in the format `tag[,tag,...]` (can be specified multiple times)")
f.UintVar(&backupOptions.ReadConcurrency, "read-concurrency", 0, "read `n` files concurrently (default: $RESTIC_READ_CONCURRENCY or 2)")
f.StringVarP(&backupOptions.Host, "host", "H", "", "set the `hostname` for the snapshot manually. To prevent an expensive rescan use the \"parent\" flag")
f.StringVar(&backupOptions.Host, "hostname", "", "set the `hostname` for the snapshot manually")
err := f.MarkDeprecated("hostname", "use --host")
@@ -132,7 +131,6 @@ func init() {
// MarkDeprecated only returns an error when the flag could not be found
panic(err)
}
f.StringArrayVar(&backupOptions.FilesFrom, "files-from", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
f.StringArrayVar(&backupOptions.FilesFromVerbatim, "files-from-verbatim", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
f.StringArrayVar(&backupOptions.FilesFromRaw, "files-from-raw", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
@@ -141,9 +139,14 @@ func init() {
f.BoolVar(&backupOptions.IgnoreInode, "ignore-inode", false, "ignore inode number changes when checking for modified files")
f.BoolVar(&backupOptions.IgnoreCtime, "ignore-ctime", false, "ignore ctime changes when checking for modified files")
f.BoolVarP(&backupOptions.DryRun, "dry-run", "n", false, "do not upload or write any data, just show what would be done")
f.BoolVar(&backupOptions.NoScan, "no-scan", false, "do not run scanner to estimate size of backup")
if runtime.GOOS == "windows" {
f.BoolVar(&backupOptions.UseFsSnapshot, "use-fs-snapshot", false, "use filesystem snapshot where possible (currently only Windows VSS)")
}
// parse read concurrency from env, on error the default value will be used
readConcurrency, _ := strconv.ParseUint(os.Getenv("RESTIC_READ_CONCURRENCY"), 10, 32)
backupOptions.ReadConcurrency = uint(readConcurrency)
}
// filterExisting returns a slice of all existing items, or an error if no
@@ -183,7 +186,7 @@ func readLines(filename string) ([]string, error) {
)
if filename == "-" {
data, err = ioutil.ReadAll(os.Stdin)
data, err = io.ReadAll(os.Stdin)
} else {
data, err = textfile.Read(filename)
}
@@ -260,6 +263,10 @@ func readFilenamesRaw(r io.Reader) (names []string, err error) {
// Check returns an error when an invalid combination of options was set.
func (opts BackupOptions) Check(gopts GlobalOptions, args []string) error {
if gopts.password == "" {
if opts.Stdin {
return errors.Fatal("cannot read both password and data from stdin")
}
filesFrom := append(append(opts.FilesFrom, opts.FilesFromVerbatim...), opts.FilesFromRaw...)
for _, filename := range filesFrom {
if filename == "-" {
@@ -300,48 +307,11 @@ func collectRejectByNameFuncs(opts BackupOptions, repo *repository.Repository, t
fs = append(fs, f)
}
// add patterns from file
if len(opts.ExcludeFiles) > 0 {
excludes, err := readExcludePatternsFromFiles(opts.ExcludeFiles)
if err != nil {
return nil, err
}
if valid, invalidPatterns := filter.ValidatePatterns(excludes); !valid {
return nil, errors.Fatalf("--exclude-file: invalid pattern(s) provided:\n%s", strings.Join(invalidPatterns, "\n"))
}
opts.Excludes = append(opts.Excludes, excludes...)
}
if len(opts.InsensitiveExcludeFiles) > 0 {
excludes, err := readExcludePatternsFromFiles(opts.InsensitiveExcludeFiles)
if err != nil {
return nil, err
}
if valid, invalidPatterns := filter.ValidatePatterns(excludes); !valid {
return nil, errors.Fatalf("--iexclude-file: invalid pattern(s) provided:\n%s", strings.Join(invalidPatterns, "\n"))
}
opts.InsensitiveExcludes = append(opts.InsensitiveExcludes, excludes...)
}
if len(opts.InsensitiveExcludes) > 0 {
if valid, invalidPatterns := filter.ValidatePatterns(opts.InsensitiveExcludes); !valid {
return nil, errors.Fatalf("--iexclude: invalid pattern(s) provided:\n%s", strings.Join(invalidPatterns, "\n"))
}
fs = append(fs, rejectByInsensitivePattern(opts.InsensitiveExcludes))
}
if len(opts.Excludes) > 0 {
if valid, invalidPatterns := filter.ValidatePatterns(opts.Excludes); !valid {
return nil, errors.Fatalf("--exclude: invalid pattern(s) provided:\n%s", strings.Join(invalidPatterns, "\n"))
}
fs = append(fs, rejectByPattern(opts.Excludes))
fsPatterns, err := opts.excludePatternOptions.CollectPatterns()
if err != nil {
return nil, err
}
fs = append(fs, fsPatterns...)
if opts.ExcludeCaches {
opts.ExcludeIfPresent = append(opts.ExcludeIfPresent, "CACHEDIR.TAG:Signature: 8a477f597d28d172789f06886806bc55")
@@ -382,53 +352,6 @@ func collectRejectFuncs(opts BackupOptions, repo *repository.Repository, targets
return fs, nil
}
// readExcludePatternsFromFiles reads all exclude files and returns the list of
// exclude patterns. For each line, leading and trailing white space is removed
// and comment lines are ignored. For each remaining pattern, environment
// variables are resolved. For adding a literal dollar sign ($), write $$ to
// the file.
func readExcludePatternsFromFiles(excludeFiles []string) ([]string, error) {
getenvOrDollar := func(s string) string {
if s == "$" {
return "$"
}
return os.Getenv(s)
}
var excludes []string
for _, filename := range excludeFiles {
err := func() (err error) {
data, err := textfile.Read(filename)
if err != nil {
return err
}
scanner := bufio.NewScanner(bytes.NewReader(data))
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
// ignore empty lines
if line == "" {
continue
}
// strip comments
if strings.HasPrefix(line, "#") {
continue
}
line = os.Expand(line, getenvOrDollar)
excludes = append(excludes, line)
}
return scanner.Err()
}()
if err != nil {
return nil, err
}
}
return excludes, nil
}
// collectTargets returns a list of target files/dirs from several sources.
func collectTargets(opts BackupOptions, args []string) (targets []string, err error) {
if opts.Stdin {
@@ -451,7 +374,7 @@ func collectTargets(opts BackupOptions, args []string) (targets []string, err er
var expanded []string
expanded, err := filepath.Glob(line)
if err != nil {
return nil, errors.WithMessage(err, fmt.Sprintf("pattern: %s", line))
return nil, fmt.Errorf("pattern: %s: %w", line, err)
}
if len(expanded) == 0 {
Warnf("pattern %q does not match any files, skipping\n", line)
@@ -498,31 +421,24 @@ func collectTargets(opts BackupOptions, args []string) (targets []string, err er
// parent returns the ID of the parent snapshot. If there is none, nil is
// returned.
func findParentSnapshot(ctx context.Context, repo restic.Repository, opts BackupOptions, targets []string, timeStampLimit time.Time) (parentID *restic.ID, err error) {
// Force using a parent
if !opts.Force && opts.Parent != "" {
id, err := restic.FindSnapshot(ctx, repo.Backend(), opts.Parent)
if err != nil {
return nil, errors.Fatalf("invalid id %q: %v", opts.Parent, err)
}
parentID = &id
func findParentSnapshot(ctx context.Context, repo restic.Repository, opts BackupOptions, targets []string, timeStampLimit time.Time) (*restic.Snapshot, error) {
if opts.Force {
return nil, nil
}
// Find last snapshot to set it as parent, if not already set
if !opts.Force && parentID == nil {
id, err := restic.FindLatestSnapshot(ctx, repo.Backend(), repo, targets, []restic.TagList{}, []string{opts.Host}, &timeStampLimit)
if err == nil {
parentID = &id
} else if err != restic.ErrNoSnapshotFound {
return nil, err
}
snName := opts.Parent
if snName == "" {
snName = "latest"
}
return parentID, nil
sn, err := restic.FindFilteredSnapshot(ctx, repo.Backend(), repo, []string{opts.Host}, []restic.TagList{}, targets, &timeStampLimit, snName)
// Snapshot not found is ok if no explicit parent was set
if opts.Parent == "" && errors.Is(err, restic.ErrNoSnapshotFound) {
err = nil
}
return sn, err
}
func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Terminal, args []string) error {
func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, term *termstatus.Terminal, args []string) error {
err := opts.Check(gopts, args)
if err != nil {
return err
@@ -545,7 +461,7 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
Verbosef("open repository\n")
}
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
@@ -556,11 +472,11 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
} else {
progressPrinter = backup.NewTextProgress(term, gopts.verbosity)
}
progressReporter := backup.NewProgress(progressPrinter)
progressReporter := backup.NewProgress(progressPrinter,
calculateProgressInterval(!gopts.Quiet, gopts.JSON))
if opts.DryRun {
repo.SetDryRun()
progressReporter.SetDryRun()
}
// use the terminal for stdout/stderr
@@ -570,17 +486,15 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
}()
gopts.stdout, gopts.stderr = progressPrinter.Stdout(), progressPrinter.Stderr()
progressReporter.SetMinUpdatePause(calculateProgressInterval(!gopts.Quiet, gopts.JSON))
wg, wgCtx := errgroup.WithContext(gopts.ctx)
wg, wgCtx := errgroup.WithContext(ctx)
cancelCtx, cancel := context.WithCancel(wgCtx)
defer cancel()
wg.Go(func() error { return progressReporter.Run(cancelCtx) })
wg.Go(func() error { progressReporter.Run(cancelCtx); return nil })
if !gopts.JSON {
progressPrinter.V("lock repository")
}
lock, err := lockRepo(gopts.ctx, repo)
lock, ctx, err := lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
@@ -598,16 +512,16 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
return err
}
var parentSnapshotID *restic.ID
var parentSnapshot *restic.Snapshot
if !opts.Stdin {
parentSnapshotID, err = findParentSnapshot(gopts.ctx, repo, opts, targets, timeStamp)
parentSnapshot, err = findParentSnapshot(ctx, repo, opts, targets, timeStamp)
if err != nil {
return err
}
if !gopts.JSON {
if parentSnapshotID != nil {
progressPrinter.P("using parent snapshot %v\n", parentSnapshotID.Str())
if parentSnapshot != nil {
progressPrinter.P("using parent snapshot %v\n", parentSnapshot.ID().Str())
} else {
progressPrinter.P("no parent snapshot found, will read all files\n")
}
@@ -617,7 +531,7 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
if !gopts.JSON {
progressPrinter.V("load index files")
}
err = repo.LoadIndex(gopts.ctx)
err = repo.LoadIndex(ctx)
if err != nil {
return err
}
@@ -674,18 +588,20 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
targets = []string{filename}
}
sc := archiver.NewScanner(targetFS)
sc.SelectByName = selectByNameFilter
sc.Select = selectFilter
sc.Error = progressReporter.ScannerError
sc.Result = progressReporter.ReportTotal
if !opts.NoScan {
sc := archiver.NewScanner(targetFS)
sc.SelectByName = selectByNameFilter
sc.Select = selectFilter
sc.Error = progressPrinter.ScannerError
sc.Result = progressReporter.ReportTotal
if !gopts.JSON {
progressPrinter.V("start scan on %v", targets)
if !gopts.JSON {
progressPrinter.V("start scan on %v", targets)
}
wg.Go(func() error { return sc.Scan(cancelCtx, targets) })
}
wg.Go(func() error { return sc.Scan(cancelCtx, targets) })
arch := archiver.New(repo, targetFS, archiver.Options{})
arch := archiver.New(repo, targetFS, archiver.Options{ReadConcurrency: backupOptions.ReadConcurrency})
arch.SelectByName = selectByNameFilter
arch.Select = selectFilter
arch.WithAtime = opts.WithAtime
@@ -707,22 +623,18 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
arch.ChangeIgnoreFlags |= archiver.ChangeIgnoreCtime
}
if parentSnapshotID == nil {
parentSnapshotID = &restic.ID{}
}
snapshotOpts := archiver.SnapshotOptions{
Excludes: opts.Excludes,
Tags: opts.Tags.Flatten(),
Time: timeStamp,
Hostname: opts.Host,
ParentSnapshot: *parentSnapshotID,
ParentSnapshot: parentSnapshot,
}
if !gopts.JSON {
progressPrinter.V("start backup on %v", targets)
}
_, id, err := arch.Snapshot(gopts.ctx, targets, snapshotOpts)
_, id, err := arch.Snapshot(ctx, targets, snapshotOpts)
// cleanly shutdown all running goroutines
cancel()
@@ -736,7 +648,7 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
}
// Report finished execution
progressReporter.Finish(id)
progressReporter.Finish(id, opts.DryRun)
if !gopts.JSON && !opts.DryRun {
progressPrinter.P("snapshot %s saved\n", id.Str())
}

View File

@@ -14,8 +14,7 @@ import (
)
func TestCollectTargets(t *testing.T) {
dir, cleanup := rtest.TempDir(t)
defer cleanup()
dir := rtest.TempDir(t)
fooSpace := "foo "
barStar := "bar*" // Must sort before the others, below.

View File

@@ -11,6 +11,7 @@ import (
"github.com/restic/restic/internal/cache"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/table"
"github.com/spf13/cobra"
)
@@ -138,7 +139,7 @@ func runCache(opts CacheOptions, gopts GlobalOptions, args []string) error {
if err != nil {
return err
}
size = fmt.Sprintf("%11s", formatBytes(uint64(bytes)))
size = fmt.Sprintf("%11s", ui.FormatBytes(uint64(bytes)))
}
name := entry.Name()

View File

@@ -1,6 +1,7 @@
package main
import (
"context"
"encoding/json"
"github.com/spf13/cobra"
@@ -24,7 +25,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runCat(globalOptions, args)
return runCat(cmd.Context(), globalOptions, args)
},
}
@@ -32,40 +33,32 @@ func init() {
cmdRoot.AddCommand(cmdCat)
}
func runCat(gopts GlobalOptions, args []string) error {
func runCat(ctx context.Context, gopts GlobalOptions, args []string) error {
if len(args) < 1 || (args[0] != "masterkey" && args[0] != "config" && len(args) != 2) {
return errors.Fatal("type or ID not specified")
}
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
if !gopts.NoLock {
lock, err := lockRepo(gopts.ctx, repo)
var lock *restic.Lock
lock, ctx, err = lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
defer unlockRepo(lock)
}
tpe := args[0]
var id restic.ID
if tpe != "masterkey" && tpe != "config" {
if tpe != "masterkey" && tpe != "config" && tpe != "snapshot" {
id, err = restic.ParseID(args[1])
if err != nil {
if tpe != "snapshot" {
return errors.Fatalf("unable to parse ID: %v\n", err)
}
// find snapshot id with prefix
id, err = restic.FindSnapshot(gopts.ctx, repo.Backend(), args[1])
if err != nil {
return errors.Fatalf("could not find snapshot: %v\n", err)
}
return errors.Fatalf("unable to parse ID: %v\n", err)
}
}
@@ -79,7 +72,7 @@ func runCat(gopts GlobalOptions, args []string) error {
Println(string(buf))
return nil
case "index":
buf, err := repo.LoadUnpacked(gopts.ctx, restic.IndexFile, id, nil)
buf, err := repo.LoadUnpacked(ctx, restic.IndexFile, id, nil)
if err != nil {
return err
}
@@ -87,9 +80,9 @@ func runCat(gopts GlobalOptions, args []string) error {
Println(string(buf))
return nil
case "snapshot":
sn, err := restic.LoadSnapshot(gopts.ctx, repo, id)
sn, err := restic.FindSnapshot(ctx, repo.Backend(), repo, args[1])
if err != nil {
return err
return errors.Fatalf("could not find snapshot: %v\n", err)
}
buf, err := json.MarshalIndent(sn, "", " ")
@@ -100,19 +93,12 @@ func runCat(gopts GlobalOptions, args []string) error {
Println(string(buf))
return nil
case "key":
h := restic.Handle{Type: restic.KeyFile, Name: id.String()}
buf, err := backend.LoadAll(gopts.ctx, nil, repo.Backend(), h)
key, err := repository.LoadKey(ctx, repo, id)
if err != nil {
return err
}
key := &repository.Key{}
err = json.Unmarshal(buf, key)
if err != nil {
return err
}
buf, err = json.MarshalIndent(&key, "", " ")
buf, err := json.MarshalIndent(&key, "", " ")
if err != nil {
return err
}
@@ -128,7 +114,7 @@ func runCat(gopts GlobalOptions, args []string) error {
Println(string(buf))
return nil
case "lock":
lock, err := restic.LoadLock(gopts.ctx, repo, id)
lock, err := restic.LoadLock(ctx, repo, id)
if err != nil {
return err
}
@@ -143,7 +129,7 @@ func runCat(gopts GlobalOptions, args []string) error {
case "pack":
h := restic.Handle{Type: restic.PackFile, Name: id.String()}
buf, err := backend.LoadAll(gopts.ctx, nil, repo.Backend(), h)
buf, err := backend.LoadAll(ctx, nil, repo.Backend(), h)
if err != nil {
return err
}
@@ -157,7 +143,7 @@ func runCat(gopts GlobalOptions, args []string) error {
return err
case "blob":
err = repo.LoadIndex(gopts.ctx)
err = repo.LoadIndex(ctx)
if err != nil {
return err
}
@@ -168,7 +154,7 @@ func runCat(gopts GlobalOptions, args []string) error {
continue
}
buf, err := repo.LoadBlob(gopts.ctx, t, id, nil)
buf, err := repo.LoadBlob(ctx, t, id, nil)
if err != nil {
return err
}

View File

@@ -1,8 +1,9 @@
package main
import (
"io/ioutil"
"context"
"math/rand"
"os"
"strconv"
"strings"
"sync"
@@ -34,7 +35,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runCheck(checkOptions, globalOptions, args)
return runCheck(cmd.Context(), checkOptions, globalOptions, args)
},
PreRunE: func(cmd *cobra.Command, args []string) error {
return checkFlags(checkOptions)
@@ -170,7 +171,7 @@ func prepareCheckCache(opts CheckOptions, gopts *GlobalOptions) (cleanup func())
}
// use a cache in a temporary directory
tempdir, err := ioutil.TempDir(cachedir, "restic-check-cache-")
tempdir, err := os.MkdirTemp(cachedir, "restic-check-cache-")
if err != nil {
// if an error occurs, don't use any cache
Warnf("unable to create temporary directory for cache during check, disabling cache: %v\n", err)
@@ -191,25 +192,26 @@ func prepareCheckCache(opts CheckOptions, gopts *GlobalOptions) (cleanup func())
return cleanup
}
func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
func runCheck(ctx context.Context, opts CheckOptions, gopts GlobalOptions, args []string) error {
if len(args) != 0 {
return errors.Fatal("the check command expects no arguments, only options - please see `restic help check` for usage and flags")
}
cleanup := prepareCheckCache(opts, &gopts)
AddCleanupHandler(func() error {
AddCleanupHandler(func(code int) (int, error) {
cleanup()
return nil
return code, nil
})
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
if !gopts.NoLock {
Verbosef("create exclusive lock for repository\n")
lock, err := lockRepoExclusive(gopts.ctx, repo)
var lock *restic.Lock
lock, ctx, err = lockRepoExclusive(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
@@ -217,13 +219,13 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
}
chkr := checker.New(repo, opts.CheckUnused)
err = chkr.LoadSnapshots(gopts.ctx)
err = chkr.LoadSnapshots(ctx)
if err != nil {
return err
}
Verbosef("load indexes\n")
hints, errs := chkr.LoadIndex(gopts.ctx)
hints, errs := chkr.LoadIndex(ctx)
errorsFound := false
suggestIndexRebuild := false
@@ -243,7 +245,7 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
}
if suggestIndexRebuild {
Printf("This is non-critical, you can run `restic rebuild-index' to correct this\n")
Printf("Duplicate packs/old indexes are non-critical, you can run `restic rebuild-index' to correct this.\n")
}
if mixedFound {
Printf("Mixed packs with tree and data blobs are non-critical, you can run `restic prune` to correct this.\n")
@@ -260,13 +262,13 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
errChan := make(chan error)
Verbosef("check all packs\n")
go chkr.Packs(gopts.ctx, errChan)
go chkr.Packs(ctx, errChan)
for err := range errChan {
if checker.IsOrphanedPack(err) {
orphanedPacks++
Verbosef("%v\n", err)
} else if _, ok := err.(*checker.ErrLegacyLayout); ok {
} else if err == checker.ErrLegacyLayout {
Verbosef("repository still uses the S3 legacy layout\nPlease run `restic migrate s3legacy` to correct this.\n")
} else {
errorsFound = true
@@ -287,13 +289,17 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
defer wg.Done()
bar := newProgressMax(!gopts.Quiet, 0, "snapshots")
defer bar.Done()
chkr.Structure(gopts.ctx, bar, errChan)
chkr.Structure(ctx, bar, errChan)
}()
for err := range errChan {
errorsFound = true
if e, ok := err.(*checker.TreeError); ok {
Warnf("error for tree %v:\n", e.ID.Str())
var clean string
if stdoutCanUpdateStatus() {
clean = clearLine(0)
}
Warnf(clean+"error for tree %v:\n", e.ID.Str())
for _, treeErr := range e.Errors {
Warnf(" %v\n", treeErr)
}
@@ -308,7 +314,7 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
wg.Wait()
if opts.CheckUnused {
for _, id := range chkr.UnusedBlobs(gopts.ctx) {
for _, id := range chkr.UnusedBlobs(ctx) {
Verbosef("unused blob %v\n", id)
errorsFound = true
}
@@ -320,7 +326,7 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
p := newProgressMax(!gopts.Quiet, packCount, "packs")
errChan := make(chan error)
go chkr.ReadPacks(gopts.ctx, packs, p, errChan)
go chkr.ReadPacks(ctx, packs, p, errChan)
for err := range errChan {
errorsFound = true

View File

@@ -32,16 +32,14 @@ This can be mitigated by the "--copy-chunker-params" option when initializing a
new destination repository using the "init" command.
`,
RunE: func(cmd *cobra.Command, args []string) error {
return runCopy(copyOptions, globalOptions, args)
return runCopy(cmd.Context(), copyOptions, globalOptions, args)
},
}
// CopyOptions bundles all options for the copy command.
type CopyOptions struct {
secondaryRepoOptions
Hosts []string
Tags restic.TagLists
Paths []string
snapshotFilterOptions
}
var copyOptions CopyOptions
@@ -51,12 +49,10 @@ func init() {
f := cmdCopy.Flags()
initSecondaryRepoOptions(f, &copyOptions.secondaryRepoOptions, "destination", "to copy snapshots from")
f.StringArrayVarP(&copyOptions.Hosts, "host", "H", nil, "only consider snapshots for this `host`, when no snapshot ID is given (can be specified multiple times)")
f.Var(&copyOptions.Tags, "tag", "only consider snapshots which include this `taglist`, when no snapshot ID is given")
f.StringArrayVar(&copyOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`, when no snapshot ID is given")
initMultiSnapshotFilterOptions(f, &copyOptions.snapshotFilterOptions, true)
}
func runCopy(opts CopyOptions, gopts GlobalOptions, args []string) error {
func runCopy(ctx context.Context, opts CopyOptions, gopts GlobalOptions, args []string) error {
secondaryGopts, isFromRepo, err := fillSecondaryGlobalOpts(opts.secondaryRepoOptions, gopts, "destination")
if err != nil {
return err
@@ -66,28 +62,26 @@ func runCopy(opts CopyOptions, gopts GlobalOptions, args []string) error {
gopts, secondaryGopts = secondaryGopts, gopts
}
ctx, cancel := context.WithCancel(gopts.ctx)
defer cancel()
srcRepo, err := OpenRepository(gopts)
srcRepo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
dstRepo, err := OpenRepository(secondaryGopts)
dstRepo, err := OpenRepository(ctx, secondaryGopts)
if err != nil {
return err
}
if !gopts.NoLock {
srcLock, err := lockRepo(ctx, srcRepo)
var srcLock *restic.Lock
srcLock, ctx, err = lockRepo(ctx, srcRepo)
defer unlockRepo(srcLock)
if err != nil {
return err
}
}
dstLock, err := lockRepo(ctx, dstRepo)
dstLock, ctx, err := lockRepo(ctx, dstRepo)
defer unlockRepo(dstLock)
if err != nil {
return err
@@ -126,7 +120,6 @@ func runCopy(opts CopyOptions, gopts GlobalOptions, args []string) error {
visitedTrees := restic.NewIDSet()
for sn := range FindFilteredSnapshots(ctx, srcSnapshotLister, srcRepo, opts.Hosts, opts.Tags, opts.Paths, args) {
Verbosef("\nsnapshot %s of %v at %s)\n", sn.ID().Str(), sn.Paths, sn.Time)
// check whether the destination has a snapshot with the same persistent ID which has similar snapshot fields
srcOriginal := *sn.ID()
@@ -137,7 +130,8 @@ func runCopy(opts CopyOptions, gopts GlobalOptions, args []string) error {
isCopy := false
for _, originalSn := range originalSns {
if similarSnapshots(originalSn, sn) {
Verbosef("skipping source snapshot %s, was already copied to snapshot %s\n", sn.ID().Str(), originalSn.ID().Str())
Verboseff("\nsnapshot %s of %v at %s)\n", sn.ID().Str(), sn.Paths, sn.Time)
Verboseff("skipping source snapshot %s, was already copied to snapshot %s\n", sn.ID().Str(), originalSn.ID().Str())
isCopy = true
break
}
@@ -146,6 +140,7 @@ func runCopy(opts CopyOptions, gopts GlobalOptions, args []string) error {
continue
}
}
Verbosef("\nsnapshot %s of %v at %s)\n", sn.ID().Str(), sn.Paths, sn.Time)
Verbosef(" copy started, this may take a while...\n")
if err := copyTree(ctx, srcRepo, dstRepo, visitedTrees, *sn.Tree, gopts.Quiet); err != nil {
return err

View File

@@ -13,6 +13,7 @@ import (
"os"
"runtime"
"sort"
"sync"
"time"
"github.com/klauspost/compress/zstd"
@@ -22,6 +23,7 @@ import (
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/crypto"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/index"
"github.com/restic/restic/internal/pack"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
@@ -46,7 +48,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runDebugDump(globalOptions, args)
return runDebugDump(cmd.Context(), globalOptions, args)
},
}
@@ -104,10 +106,9 @@ type Blob struct {
func printPacks(ctx context.Context, repo *repository.Repository, wr io.Writer) error {
return repo.List(ctx, restic.PackFile, func(id restic.ID, size int64) error {
h := restic.Handle{Type: restic.PackFile, Name: id.String()}
blobs, _, err := pack.List(repo.Key(), backend.ReaderAt(ctx, repo.Backend(), h), size)
var m sync.Mutex
return restic.ParallelList(ctx, repo.Backend(), restic.PackFile, repo.Connections(), func(ctx context.Context, id restic.ID, size int64) error {
blobs, _, err := repo.ListPack(ctx, id, size)
if err != nil {
Warnf("error for pack %v: %v\n", id.Str(), err)
return nil
@@ -126,12 +127,14 @@ func printPacks(ctx context.Context, repo *repository.Repository, wr io.Writer)
}
}
m.Lock()
defer m.Unlock()
return prettyPrintJSON(wr, p)
})
}
func dumpIndexes(ctx context.Context, repo restic.Repository, wr io.Writer) error {
return repository.ForAllIndexes(ctx, repo, func(id restic.ID, idx *repository.Index, oldFormat bool, err error) error {
return index.ForAllIndexes(ctx, repo, func(id restic.ID, idx *index.Index, oldFormat bool, err error) error {
Printf("index_id: %v\n", id)
if err != nil {
return err
@@ -141,18 +144,19 @@ func dumpIndexes(ctx context.Context, repo restic.Repository, wr io.Writer) erro
})
}
func runDebugDump(gopts GlobalOptions, args []string) error {
func runDebugDump(ctx context.Context, gopts GlobalOptions, args []string) error {
if len(args) != 1 {
return errors.Fatal("type not specified")
}
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
if !gopts.NoLock {
lock, err := lockRepo(gopts.ctx, repo)
var lock *restic.Lock
lock, ctx, err = lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
@@ -163,20 +167,20 @@ func runDebugDump(gopts GlobalOptions, args []string) error {
switch tpe {
case "indexes":
return dumpIndexes(gopts.ctx, repo, gopts.stdout)
return dumpIndexes(ctx, repo, gopts.stdout)
case "snapshots":
return debugPrintSnapshots(gopts.ctx, repo, gopts.stdout)
return debugPrintSnapshots(ctx, repo, gopts.stdout)
case "packs":
return printPacks(gopts.ctx, repo, gopts.stdout)
return printPacks(ctx, repo, gopts.stdout)
case "all":
Printf("snapshots:\n")
err := debugPrintSnapshots(gopts.ctx, repo, gopts.stdout)
err := debugPrintSnapshots(ctx, repo, gopts.stdout)
if err != nil {
return err
}
Printf("\nindexes:\n")
err = dumpIndexes(gopts.ctx, repo, gopts.stdout)
err = dumpIndexes(ctx, repo, gopts.stdout)
if err != nil {
return err
}
@@ -192,7 +196,7 @@ var cmdDebugExamine = &cobra.Command{
Short: "Examine a pack file",
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runDebugExamine(globalOptions, args)
return runDebugExamine(cmd.Context(), globalOptions, args)
},
}
@@ -311,97 +315,104 @@ func decryptUnsigned(ctx context.Context, k *crypto.Key, buf []byte) []byte {
return out
}
func loadBlobs(ctx context.Context, repo restic.Repository, pack restic.ID, list []restic.Blob) error {
func loadBlobs(ctx context.Context, repo restic.Repository, packID restic.ID, list []restic.Blob) error {
dec, err := zstd.NewReader(nil)
if err != nil {
panic(err)
}
be := repo.Backend()
h := restic.Handle{
Name: pack.String(),
Name: packID.String(),
Type: restic.PackFile,
}
for _, blob := range list {
Printf(" loading blob %v at %v (length %v)\n", blob.ID, blob.Offset, blob.Length)
buf := make([]byte, blob.Length)
err := be.Load(ctx, h, int(blob.Length), int64(blob.Offset), func(rd io.Reader) error {
n, err := io.ReadFull(rd, buf)
wg, ctx := errgroup.WithContext(ctx)
if reuploadBlobs {
repo.StartPackUploader(ctx, wg)
}
wg.Go(func() error {
for _, blob := range list {
Printf(" loading blob %v at %v (length %v)\n", blob.ID, blob.Offset, blob.Length)
buf := make([]byte, blob.Length)
err := be.Load(ctx, h, int(blob.Length), int64(blob.Offset), func(rd io.Reader) error {
n, err := io.ReadFull(rd, buf)
if err != nil {
return fmt.Errorf("read error after %d bytes: %v", n, err)
}
return nil
})
if err != nil {
return fmt.Errorf("read error after %d bytes: %v", n, err)
Warnf("error read: %v\n", err)
continue
}
return nil
})
if err != nil {
Warnf("error read: %v\n", err)
continue
}
key := repo.Key()
key := repo.Key()
nonce, plaintext := buf[:key.NonceSize()], buf[key.NonceSize():]
plaintext, err = key.Open(plaintext[:0], nonce, plaintext, nil)
outputPrefix := ""
filePrefix := ""
if err != nil {
Warnf("error decrypting blob: %v\n", err)
if tryRepair || repairByte {
plaintext = tryRepairWithBitflip(ctx, key, buf, repairByte)
nonce, plaintext := buf[:key.NonceSize()], buf[key.NonceSize():]
plaintext, err = key.Open(plaintext[:0], nonce, plaintext, nil)
outputPrefix := ""
filePrefix := ""
if err != nil {
Warnf("error decrypting blob: %v\n", err)
if tryRepair || repairByte {
plaintext = tryRepairWithBitflip(ctx, key, buf, repairByte)
}
if plaintext != nil {
outputPrefix = "repaired "
filePrefix = "repaired-"
} else {
plaintext = decryptUnsigned(ctx, key, buf)
err = storePlainBlob(blob.ID, "damaged-", plaintext)
if err != nil {
return err
}
continue
}
}
if plaintext != nil {
outputPrefix = "repaired "
filePrefix = "repaired-"
if blob.IsCompressed() {
decompressed, err := dec.DecodeAll(plaintext, nil)
if err != nil {
Printf(" failed to decompress blob %v\n", blob.ID)
}
if decompressed != nil {
plaintext = decompressed
}
}
id := restic.Hash(plaintext)
var prefix string
if !id.Equal(blob.ID) {
Printf(" successfully %vdecrypted blob (length %v), hash is %v, ID does not match, wanted %v\n", outputPrefix, len(plaintext), id, blob.ID)
prefix = "wrong-hash-"
} else {
plaintext = decryptUnsigned(ctx, key, buf)
err = storePlainBlob(blob.ID, "damaged-", plaintext)
Printf(" successfully %vdecrypted blob (length %v), hash is %v, ID matches\n", outputPrefix, len(plaintext), id)
prefix = "correct-"
}
if extractPack {
err = storePlainBlob(id, filePrefix+prefix, plaintext)
if err != nil {
return err
}
continue
}
if reuploadBlobs {
_, _, _, err := repo.SaveBlob(ctx, blob.Type, plaintext, id, true)
if err != nil {
return err
}
Printf(" uploaded %v %v\n", blob.Type, id)
}
}
if blob.IsCompressed() {
decompressed, err := dec.DecodeAll(plaintext, nil)
if err != nil {
Printf(" failed to decompress blob %v\n", blob.ID)
}
if decompressed != nil {
plaintext = decompressed
}
}
id := restic.Hash(plaintext)
var prefix string
if !id.Equal(blob.ID) {
Printf(" successfully %vdecrypted blob (length %v), hash is %v, ID does not match, wanted %v\n", outputPrefix, len(plaintext), id, blob.ID)
prefix = "wrong-hash-"
} else {
Printf(" successfully %vdecrypted blob (length %v), hash is %v, ID matches\n", outputPrefix, len(plaintext), id)
prefix = "correct-"
}
if extractPack {
err = storePlainBlob(id, filePrefix+prefix, plaintext)
if err != nil {
return err
}
}
if reuploadBlobs {
_, _, _, err := repo.SaveBlob(ctx, blob.Type, plaintext, id, true)
if err != nil {
return err
}
Printf(" uploaded %v %v\n", blob.Type, id)
return repo.Flush(ctx)
}
}
return nil
})
if reuploadBlobs {
err := repo.Flush(ctx)
if err != nil {
return err
}
}
return nil
return wg.Wait()
}
func storePlainBlob(id restic.ID, prefix string, plain []byte) error {
@@ -426,8 +437,8 @@ func storePlainBlob(id restic.ID, prefix string, plain []byte) error {
return nil
}
func runDebugExamine(gopts GlobalOptions, args []string) error {
repo, err := OpenRepository(gopts)
func runDebugExamine(ctx context.Context, gopts GlobalOptions, args []string) error {
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
@@ -436,10 +447,7 @@ func runDebugExamine(gopts GlobalOptions, args []string) error {
for _, name := range args {
id, err := restic.ParseID(name)
if err != nil {
name, err = restic.Find(gopts.ctx, repo.Backend(), restic.PackFile, name)
if err == nil {
id, err = restic.ParseID(name)
}
id, err = restic.Find(ctx, repo.Backend(), restic.PackFile, name)
if err != nil {
Warnf("error: %v\n", err)
continue
@@ -453,20 +461,21 @@ func runDebugExamine(gopts GlobalOptions, args []string) error {
}
if !gopts.NoLock {
lock, err := lockRepo(gopts.ctx, repo)
var lock *restic.Lock
lock, ctx, err = lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
}
err = repo.LoadIndex(gopts.ctx)
err = repo.LoadIndex(ctx)
if err != nil {
return err
}
for _, id := range ids {
err := examinePack(gopts.ctx, repo, id)
err := examinePack(ctx, repo, id)
if err != nil {
Warnf("error: %v\n", err)
}
@@ -525,7 +534,7 @@ func examinePack(ctx context.Context, repo restic.Repository, id restic.ID) erro
Printf(" ========================================\n")
Printf(" inspect the pack itself\n")
blobs, _, err := pack.List(repo.Key(), backend.ReaderAt(ctx, repo.Backend(), h), fi.Size)
blobs, _, err := repo.ListPack(ctx, id, fi.Size)
if err != nil {
return fmt.Errorf("pack %v: %v", id.Str(), err)
}
@@ -556,7 +565,7 @@ func checkPackSize(blobs []restic.Blob, fileSize int64) {
size += uint64(pack.CalculateHeaderSize(blobs))
if uint64(fileSize) != size {
Printf(" file sizes do not match: computed %v from index, file size is %v\n", size, fileSize)
Printf(" file sizes do not match: computed %v, file size is %v\n", size, fileSize)
} else {
Printf(" file sizes match\n")
}

View File

@@ -11,6 +11,7 @@ import (
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/spf13/cobra"
)
@@ -35,7 +36,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runDiff(diffOptions, globalOptions, args)
return runDiff(cmd.Context(), diffOptions, globalOptions, args)
},
}
@@ -54,11 +55,11 @@ func init() {
}
func loadSnapshot(ctx context.Context, be restic.Lister, repo restic.Repository, desc string) (*restic.Snapshot, error) {
id, err := restic.FindSnapshot(ctx, be, desc)
sn, err := restic.FindSnapshot(ctx, be, repo, desc)
if err != nil {
return nil, errors.Fatal(err.Error())
}
return restic.LoadSnapshot(ctx, repo, id)
return sn, err
}
// Comparer collects all things needed to compare two snapshots.
@@ -321,21 +322,19 @@ func (c *Comparer) diffTree(ctx context.Context, stats *DiffStatsContainer, pref
return nil
}
func runDiff(opts DiffOptions, gopts GlobalOptions, args []string) error {
func runDiff(ctx context.Context, opts DiffOptions, gopts GlobalOptions, args []string) error {
if len(args) != 2 {
return errors.Fatalf("specify two snapshot IDs")
}
ctx, cancel := context.WithCancel(gopts.ctx)
defer cancel()
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
if !gopts.NoLock {
lock, err := lockRepo(ctx, repo)
var lock *restic.Lock
lock, ctx, err = lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
@@ -427,8 +426,8 @@ func runDiff(opts DiffOptions, gopts GlobalOptions, args []string) error {
Printf("Others: %5d new, %5d removed\n", stats.Added.Others, stats.Removed.Others)
Printf("Data Blobs: %5d new, %5d removed\n", stats.Added.DataBlobs, stats.Removed.DataBlobs)
Printf("Tree Blobs: %5d new, %5d removed\n", stats.Added.TreeBlobs, stats.Removed.TreeBlobs)
Printf(" Added: %-5s\n", formatBytes(uint64(stats.Added.Bytes)))
Printf(" Removed: %-5s\n", formatBytes(uint64(stats.Removed.Bytes)))
Printf(" Added: %-5s\n", ui.FormatBytes(uint64(stats.Added.Bytes)))
Printf(" Removed: %-5s\n", ui.FormatBytes(uint64(stats.Removed.Bytes)))
}
return nil

View File

@@ -34,15 +34,13 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runDump(dumpOptions, globalOptions, args)
return runDump(cmd.Context(), dumpOptions, globalOptions, args)
},
}
// DumpOptions collects all options for the dump command.
type DumpOptions struct {
Hosts []string
Paths []string
Tags restic.TagLists
snapshotFilterOptions
Archive string
}
@@ -52,9 +50,7 @@ func init() {
cmdRoot.AddCommand(cmdDump)
flags := cmdDump.Flags()
flags.StringArrayVarP(&dumpOptions.Hosts, "host", "H", nil, `only consider snapshots for this host when the snapshot ID is "latest" (can be specified multiple times)`)
flags.Var(&dumpOptions.Tags, "tag", "only consider snapshots which include this `taglist` for snapshot ID \"latest\"")
flags.StringArrayVar(&dumpOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path` for snapshot ID \"latest\"")
initSingleSnapshotFilterOptions(flags, &dumpOptions.snapshotFilterOptions)
flags.StringVarP(&dumpOptions.Archive, "archive", "a", "tar", "set archive `format` as \"tar\" or \"zip\"")
}
@@ -111,9 +107,7 @@ func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repositor
return fmt.Errorf("path %q not found in snapshot", item)
}
func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
ctx := gopts.ctx
func runDump(ctx context.Context, opts DumpOptions, gopts GlobalOptions, args []string) error {
if len(args) != 2 {
return errors.Fatal("no file and no snapshot ID specified")
}
@@ -131,36 +125,23 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
splittedPath := splitPath(path.Clean(pathToPrint))
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
if !gopts.NoLock {
lock, err := lockRepo(ctx, repo)
var lock *restic.Lock
lock, ctx, err = lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
}
var id restic.ID
if snapshotIDString == "latest" {
id, err = restic.FindLatestSnapshot(ctx, repo.Backend(), repo, opts.Paths, opts.Tags, opts.Hosts, nil)
if err != nil {
Exitf(1, "latest snapshot for criteria not found: %v Paths:%v Hosts:%v", err, opts.Paths, opts.Hosts)
}
} else {
id, err = restic.FindSnapshot(ctx, repo.Backend(), snapshotIDString)
if err != nil {
Exitf(1, "invalid id %q: %v", snapshotIDString, err)
}
}
sn, err := restic.LoadSnapshot(gopts.ctx, repo, id)
sn, err := restic.FindFilteredSnapshot(ctx, repo.Backend(), repo, opts.Paths, opts.Tags, opts.Hosts, nil, snapshotIDString)
if err != nil {
Exitf(2, "loading snapshot %q failed: %v", snapshotIDString, err)
return errors.Fatalf("failed to find snapshot: %v", err)
}
err = repo.LoadIndex(ctx)
@@ -170,13 +151,13 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
tree, err := restic.LoadTree(ctx, repo, *sn.Tree)
if err != nil {
Exitf(2, "loading tree for snapshot %q failed: %v", snapshotIDString, err)
return errors.Fatalf("loading tree for snapshot %q failed: %v", snapshotIDString, err)
}
d := dump.New(opts.Archive, repo, os.Stdout)
err = printFromTree(ctx, tree, repo, "/", splittedPath, d)
if err != nil {
Exitf(2, "cannot dump file: %v", err)
return errors.Fatalf("cannot dump file: %v", err)
}
return nil

View File

@@ -38,7 +38,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runFind(findOptions, globalOptions, args)
return runFind(cmd.Context(), findOptions, globalOptions, args)
},
}
@@ -51,9 +51,7 @@ type FindOptions struct {
PackID, ShowPackID bool
CaseInsensitive bool
ListLong bool
Hosts []string
Paths []string
Tags restic.TagLists
snapshotFilterOptions
}
var findOptions FindOptions
@@ -72,9 +70,7 @@ func init() {
f.BoolVarP(&findOptions.CaseInsensitive, "ignore-case", "i", false, "ignore case for pattern")
f.BoolVarP(&findOptions.ListLong, "long", "l", false, "use a long listing format showing size and mode")
f.StringArrayVarP(&findOptions.Hosts, "host", "H", nil, "only consider snapshots for this `host`, when no snapshot ID is given (can be specified multiple times)")
f.Var(&findOptions.Tags, "tag", "only consider snapshots which include this `taglist`, when no snapshot-ID is given")
f.StringArrayVar(&findOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`, when no snapshot-ID is given")
initMultiSnapshotFilterOptions(f, &findOptions.snapshotFilterOptions, true)
}
type findPattern struct {
@@ -471,7 +467,7 @@ func (f *Finder) indexPacksToBlobs(ctx context.Context, packIDs map[string]struc
// remember which packs were found in the index
indexPackIDs := make(map[string]struct{})
for pb := range f.repo.Index().Each(wctx) {
f.repo.Index().Each(wctx, func(pb restic.PackedBlob) {
idStr := pb.PackID.String()
// keep entry in packIDs as Each() returns individual index entries
matchingID := false
@@ -489,7 +485,7 @@ func (f *Finder) indexPacksToBlobs(ctx context.Context, packIDs map[string]struc
f.blobIDs[pb.ID.String()] = struct{}{}
indexPackIDs[idStr] = struct{}{}
}
}
})
for id := range indexPackIDs {
delete(packIDs, id)
@@ -538,7 +534,7 @@ func (f *Finder) findObjectsPacks(ctx context.Context) {
}
}
func runFind(opts FindOptions, gopts GlobalOptions, args []string) error {
func runFind(ctx context.Context, opts FindOptions, gopts GlobalOptions, args []string) error {
if len(args) == 0 {
return errors.Fatal("wrong number of arguments")
}
@@ -572,31 +568,29 @@ func runFind(opts FindOptions, gopts GlobalOptions, args []string) error {
return errors.Fatal("cannot have several ID types")
}
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
if !gopts.NoLock {
lock, err := lockRepo(gopts.ctx, repo)
var lock *restic.Lock
lock, ctx, err = lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
}
snapshotLister, err := backend.MemorizeList(gopts.ctx, repo.Backend(), restic.SnapshotFile)
snapshotLister, err := backend.MemorizeList(ctx, repo.Backend(), restic.SnapshotFile)
if err != nil {
return err
}
if err = repo.LoadIndex(gopts.ctx); err != nil {
if err = repo.LoadIndex(ctx); err != nil {
return err
}
ctx, cancel := context.WithCancel(gopts.ctx)
defer cancel()
f := &Finder{
repo: repo,
pat: pat,

View File

@@ -32,7 +32,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runForget(forgetOptions, globalOptions, args)
return runForget(cmd.Context(), forgetOptions, globalOptions, args)
},
}
@@ -52,9 +52,7 @@ type ForgetOptions struct {
WithinYearly restic.Duration
KeepTags restic.TagLists
Hosts []string
Tags restic.TagLists
Paths []string
snapshotFilterOptions
Compact bool
// Grouping
@@ -81,9 +79,9 @@ func init() {
f.VarP(&forgetOptions.WithinWeekly, "keep-within-weekly", "", "keep weekly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&forgetOptions.WithinMonthly, "keep-within-monthly", "", "keep monthly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&forgetOptions.WithinYearly, "keep-within-yearly", "", "keep yearly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.Var(&forgetOptions.KeepTags, "keep-tag", "keep snapshots with this `taglist` (can be specified multiple times)")
f.StringArrayVar(&forgetOptions.Hosts, "host", nil, "only consider snapshots with the given `host` (can be specified multiple times)")
initMultiSnapshotFilterOptions(f, &forgetOptions.snapshotFilterOptions, false)
f.StringArrayVar(&forgetOptions.Hosts, "hostname", nil, "only consider snapshots with the given `hostname` (can be specified multiple times)")
err := f.MarkDeprecated("hostname", "use --host")
if err != nil {
@@ -91,9 +89,6 @@ func init() {
panic(err)
}
f.Var(&forgetOptions.Tags, "tag", "only consider snapshots which include this `taglist` in the format `tag[,tag,...]` (can be specified multiple times)")
f.StringArrayVar(&forgetOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path` (can be specified multiple times)")
f.BoolVarP(&forgetOptions.Compact, "compact", "c", false, "use compact output format")
f.StringVarP(&forgetOptions.GroupBy, "group-by", "g", "host,paths", "`group` snapshots by host, paths and/or tags, separated by comma (disable grouping with '')")
@@ -104,13 +99,13 @@ func init() {
addPruneOptions(cmdForget)
}
func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {
func runForget(ctx context.Context, opts ForgetOptions, gopts GlobalOptions, args []string) error {
err := verifyPruneOptions(&pruneOptions)
if err != nil {
return err
}
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
@@ -120,16 +115,14 @@ func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {
}
if !opts.DryRun || !gopts.NoLock {
lock, err := lockRepoExclusive(gopts.ctx, repo)
var lock *restic.Lock
lock, ctx, err = lockRepoExclusive(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
}
ctx, cancel := context.WithCancel(gopts.ctx)
defer cancel()
var snapshots restic.Snapshots
removeSnIDs := restic.NewIDSet()
@@ -224,7 +217,7 @@ func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {
if len(removeSnIDs) > 0 {
if !opts.DryRun {
err := DeleteFilesChecked(gopts, repo, removeSnIDs, restic.SnapshotFile)
err := DeleteFilesChecked(ctx, gopts, repo, removeSnIDs, restic.SnapshotFile)
if err != nil {
return err
}
@@ -244,10 +237,14 @@ func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {
if len(removeSnIDs) > 0 && opts.Prune {
if !gopts.JSON {
Verbosef("%d snapshots have been removed, running prune\n", len(removeSnIDs))
if opts.DryRun {
Verbosef("%d snapshots would be removed, running prune dry run\n", len(removeSnIDs))
} else {
Verbosef("%d snapshots have been removed, running prune\n", len(removeSnIDs))
}
}
pruneOptions.DryRun = opts.DryRun
return runPruneWithRepo(pruneOptions, gopts, repo, removeSnIDs)
return runPruneWithRepo(ctx, pruneOptions, gopts, repo, removeSnIDs)
}
return nil

View File

@@ -10,7 +10,7 @@ import (
var cmdGenerate = &cobra.Command{
Use: "generate [flags]",
Short: "Generate manual pages and auto-completion files (bash, fish, zsh)",
Short: "Generate manual pages and auto-completion files (bash, fish, zsh, powershell)",
Long: `
The "generate" command writes automatically generated files (like the man pages
and the auto-completion files for bash, fish and zsh).
@@ -25,10 +25,11 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
}
type generateOptions struct {
ManDir string
BashCompletionFile string
FishCompletionFile string
ZSHCompletionFile string
ManDir string
BashCompletionFile string
FishCompletionFile string
ZSHCompletionFile string
PowerShellCompletionFile string
}
var genOpts generateOptions
@@ -40,6 +41,7 @@ func init() {
fs.StringVar(&genOpts.BashCompletionFile, "bash-completion", "", "write bash completion `file`")
fs.StringVar(&genOpts.FishCompletionFile, "fish-completion", "", "write fish completion `file`")
fs.StringVar(&genOpts.ZSHCompletionFile, "zsh-completion", "", "write zsh completion `file`")
fs.StringVar(&genOpts.PowerShellCompletionFile, "powershell-completion", "", "write powershell completion `file`")
}
func writeManpages(dir string) error {
@@ -75,6 +77,11 @@ func writeZSHCompletion(file string) error {
return cmdRoot.GenZshCompletionFile(file)
}
func writePowerShellCompletion(file string) error {
Verbosef("writing powershell completion file to %v\n", file)
return cmdRoot.GenPowerShellCompletionFile(file)
}
func runGenerate(cmd *cobra.Command, args []string) error {
if genOpts.ManDir != "" {
err := writeManpages(genOpts.ManDir)
@@ -104,6 +111,13 @@ func runGenerate(cmd *cobra.Command, args []string) error {
}
}
if genOpts.PowerShellCompletionFile != "" {
err := writePowerShellCompletion(genOpts.PowerShellCompletionFile)
if err != nil {
return err
}
}
var empty generateOptions
if genOpts == empty {
return errors.Fatal("nothing to do, please specify at least one output file/dir")

View File

@@ -1,6 +1,8 @@
package main
import (
"context"
"encoding/json"
"strconv"
"github.com/restic/chunker"
@@ -25,7 +27,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runInit(initOptions, globalOptions, args)
return runInit(cmd.Context(), initOptions, globalOptions, args)
},
}
@@ -47,7 +49,7 @@ func init() {
f.StringVar(&initOptions.RepositoryVersion, "repository-version", "stable", "repository format version to use, allowed values are a format version, 'latest' and 'stable'")
}
func runInit(opts InitOptions, gopts GlobalOptions, args []string) error {
func runInit(ctx context.Context, opts InitOptions, gopts GlobalOptions, args []string) error {
var version uint
if opts.RepositoryVersion == "latest" || opts.RepositoryVersion == "" {
version = restic.MaxRepoVersion
@@ -64,7 +66,7 @@ func runInit(opts InitOptions, gopts GlobalOptions, args []string) error {
return errors.Fatalf("only repository versions between %v and %v are allowed", restic.MinRepoVersion, restic.MaxRepoVersion)
}
chunkerPolynomial, err := maybeReadChunkerPolynomial(opts, gopts)
chunkerPolynomial, err := maybeReadChunkerPolynomial(ctx, opts, gopts)
if err != nil {
return err
}
@@ -81,7 +83,7 @@ func runInit(opts InitOptions, gopts GlobalOptions, args []string) error {
return err
}
be, err := create(repo, gopts.extended)
be, err := create(ctx, repo, gopts.extended)
if err != nil {
return errors.Fatalf("create repository at %s failed: %v\n", location.StripPassword(gopts.Repo), err)
}
@@ -94,28 +96,38 @@ func runInit(opts InitOptions, gopts GlobalOptions, args []string) error {
return err
}
err = s.Init(gopts.ctx, version, gopts.password, chunkerPolynomial)
err = s.Init(ctx, version, gopts.password, chunkerPolynomial)
if err != nil {
return errors.Fatalf("create key in repository at %s failed: %v\n", location.StripPassword(gopts.Repo), err)
}
Verbosef("created restic repository %v at %s\n", s.Config().ID[:10], location.StripPassword(gopts.Repo))
Verbosef("\n")
Verbosef("Please note that knowledge of your password is required to access\n")
Verbosef("the repository. Losing your password means that your data is\n")
Verbosef("irrecoverably lost.\n")
if !gopts.JSON {
Verbosef("created restic repository %v at %s\n", s.Config().ID[:10], location.StripPassword(gopts.Repo))
Verbosef("\n")
Verbosef("Please note that knowledge of your password is required to access\n")
Verbosef("the repository. Losing your password means that your data is\n")
Verbosef("irrecoverably lost.\n")
} else {
status := initSuccess{
MessageType: "initialized",
ID: s.Config().ID,
Repository: location.StripPassword(gopts.Repo),
}
return json.NewEncoder(gopts.stdout).Encode(status)
}
return nil
}
func maybeReadChunkerPolynomial(opts InitOptions, gopts GlobalOptions) (*chunker.Pol, error) {
func maybeReadChunkerPolynomial(ctx context.Context, opts InitOptions, gopts GlobalOptions) (*chunker.Pol, error) {
if opts.CopyChunkerParameters {
otherGopts, _, err := fillSecondaryGlobalOpts(opts.secondaryRepoOptions, gopts, "secondary")
if err != nil {
return nil, err
}
otherRepo, err := OpenRepository(otherGopts)
otherRepo, err := OpenRepository(ctx, otherGopts)
if err != nil {
return nil, err
}
@@ -129,3 +141,9 @@ func maybeReadChunkerPolynomial(opts InitOptions, gopts GlobalOptions) (*chunker
}
return nil, nil
}
type initSuccess struct {
MessageType string `json:"message_type"` // "initialized"
ID string `json:"id"`
Repository string `json:"repository"`
}
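The new JSON branch follows restic's message_type convention: with --json set, a single machine-readable status object replaces the human-readable text. A minimal sketch of the encoding, with made-up field values:

package main

import (
	"encoding/json"
	"os"
)

type initSuccess struct {
	MessageType string `json:"message_type"` // "initialized"
	ID          string `json:"id"`
	Repository  string `json:"repository"`
}

func main() {
	status := initSuccess{MessageType: "initialized", ID: "0123456789", Repository: "/srv/repo"}
	// prints: {"message_type":"initialized","id":"0123456789","repository":"/srv/repo"}
	_ = json.NewEncoder(os.Stdout).Encode(status)
}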

View File

@@ -3,9 +3,9 @@ package main
import (
"context"
"encoding/json"
"io/ioutil"
"os"
"strings"
"sync"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/repository"
@@ -28,7 +28,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runKey(globalOptions, args)
return runKey(cmd.Context(), globalOptions, args)
},
}
@@ -56,23 +56,26 @@ func listKeys(ctx context.Context, s *repository.Repository, gopts GlobalOptions
Created string `json:"created"`
}
var m sync.Mutex
var keys []keyInfo
err := s.List(ctx, restic.KeyFile, func(id restic.ID, size int64) error {
k, err := repository.LoadKey(ctx, s, id.String())
err := restic.ParallelList(ctx, s.Backend(), restic.KeyFile, s.Connections(), func(ctx context.Context, id restic.ID, size int64) error {
k, err := repository.LoadKey(ctx, s, id)
if err != nil {
Warnf("LoadKey() failed: %v\n", err)
return nil
}
key := keyInfo{
Current: id.String() == s.KeyName(),
Current: id == s.KeyID(),
ID: id.Str(),
UserName: k.Username,
HostName: k.Hostname,
Created: k.Created.Local().Format(TimeFormat),
}
m.Lock()
defer m.Unlock()
keys = append(keys, key)
return nil
})
@@ -120,18 +123,18 @@ func getNewPassword(gopts GlobalOptions) (string, error) {
"enter password again: ")
}
func addKey(gopts GlobalOptions, repo *repository.Repository) error {
func addKey(ctx context.Context, repo *repository.Repository, gopts GlobalOptions) error {
pw, err := getNewPassword(gopts)
if err != nil {
return err
}
id, err := repository.AddKey(gopts.ctx, repo, pw, keyUsername, keyHostname, repo.Key())
id, err := repository.AddKey(ctx, repo, pw, keyUsername, keyHostname, repo.Key())
if err != nil {
return errors.Fatalf("creating new key failed: %v\n", err)
}
err = switchToNewKeyAndRemoveIfBroken(gopts.ctx, repo, id, pw)
err = switchToNewKeyAndRemoveIfBroken(ctx, repo, id, pw)
if err != nil {
return err
}
@@ -141,40 +144,40 @@ func addKey(gopts GlobalOptions, repo *repository.Repository) error {
return nil
}
func deleteKey(ctx context.Context, repo *repository.Repository, name string) error {
if name == repo.KeyName() {
func deleteKey(ctx context.Context, repo *repository.Repository, id restic.ID) error {
if id == repo.KeyID() {
return errors.Fatal("refusing to remove key currently used to access repository")
}
h := restic.Handle{Type: restic.KeyFile, Name: name}
h := restic.Handle{Type: restic.KeyFile, Name: id.String()}
err := repo.Backend().Remove(ctx, h)
if err != nil {
return err
}
Verbosef("removed key %v\n", name)
Verbosef("removed key %v\n", id)
return nil
}
func changePassword(gopts GlobalOptions, repo *repository.Repository) error {
func changePassword(ctx context.Context, repo *repository.Repository, gopts GlobalOptions) error {
pw, err := getNewPassword(gopts)
if err != nil {
return err
}
id, err := repository.AddKey(gopts.ctx, repo, pw, "", "", repo.Key())
id, err := repository.AddKey(ctx, repo, pw, "", "", repo.Key())
if err != nil {
return errors.Fatalf("creating new key failed: %v\n", err)
}
oldID := repo.KeyName()
oldID := repo.KeyID()
err = switchToNewKeyAndRemoveIfBroken(gopts.ctx, repo, id, pw)
err = switchToNewKeyAndRemoveIfBroken(ctx, repo, id, pw)
if err != nil {
return err
}
h := restic.Handle{Type: restic.KeyFile, Name: oldID}
err = repo.Backend().Remove(gopts.ctx, h)
h := restic.Handle{Type: restic.KeyFile, Name: oldID.String()}
err = repo.Backend().Remove(ctx, h)
if err != nil {
return err
}
@@ -187,32 +190,29 @@ func changePassword(gopts GlobalOptions, repo *repository.Repository) error {
func switchToNewKeyAndRemoveIfBroken(ctx context.Context, repo *repository.Repository, key *repository.Key, pw string) error {
// Verify new key to make sure it really works. A broken key can render the
// whole repository inaccessible
err := repo.SearchKey(ctx, pw, 0, key.Name())
err := repo.SearchKey(ctx, pw, 0, key.ID().String())
if err != nil {
// the key is invalid, try to remove it
h := restic.Handle{Type: restic.KeyFile, Name: key.Name()}
h := restic.Handle{Type: restic.KeyFile, Name: key.ID().String()}
_ = repo.Backend().Remove(ctx, h)
return errors.Fatalf("failed to access repository with new key: %v", err)
}
return nil
}
func runKey(gopts GlobalOptions, args []string) error {
func runKey(ctx context.Context, gopts GlobalOptions, args []string) error {
if len(args) < 1 || (args[0] == "remove" && len(args) != 2) || (args[0] != "remove" && len(args) != 1) {
return errors.Fatal("wrong number of arguments")
}
ctx, cancel := context.WithCancel(gopts.ctx)
defer cancel()
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
switch args[0] {
case "list":
lock, err := lockRepo(ctx, repo)
lock, ctx, err := lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
@@ -220,15 +220,15 @@ func runKey(gopts GlobalOptions, args []string) error {
return listKeys(ctx, repo, gopts)
case "add":
lock, err := lockRepo(ctx, repo)
lock, ctx, err := lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
return addKey(gopts, repo)
return addKey(ctx, repo, gopts)
case "remove":
lock, err := lockRepoExclusive(ctx, repo)
lock, ctx, err := lockRepoExclusive(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
@@ -239,22 +239,22 @@ func runKey(gopts GlobalOptions, args []string) error {
return err
}
return deleteKey(gopts.ctx, repo, id)
return deleteKey(ctx, repo, id)
case "passwd":
lock, err := lockRepoExclusive(ctx, repo)
lock, ctx, err := lockRepoExclusive(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
return changePassword(gopts, repo)
return changePassword(ctx, repo, gopts)
}
return nil
}
func loadPasswordFromFile(pwdFile string) (string, error) {
s, err := ioutil.ReadFile(pwdFile)
s, err := os.ReadFile(pwdFile)
if os.IsNotExist(err) {
return "", errors.Fatalf("%s does not exist", pwdFile)
}
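Because restic.ParallelList invokes its callback from several goroutines at once, the key listing now guards the shared slice with a mutex. A toy version of that pattern built on errgroup (ParallelList itself is restic-internal):

package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

func main() {
	var (
		m    sync.Mutex
		keys []string
		wg   errgroup.Group
	)
	for i := 0; i < 4; i++ {
		i := i
		wg.Go(func() error {
			// callbacks run concurrently, so appends must be serialized
			m.Lock()
			defer m.Unlock()
			keys = append(keys, fmt.Sprintf("key-%d", i))
			return nil
		})
	}
	if err := wg.Wait(); err != nil {
		panic(err)
	}
	fmt.Println(keys)
}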

View File

@@ -1,8 +1,10 @@
package main
import (
"context"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/index"
"github.com/restic/restic/internal/restic"
"github.com/spf13/cobra"
@@ -21,7 +23,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runList(cmd, globalOptions, args)
return runList(cmd.Context(), cmd, globalOptions, args)
},
}
@@ -29,18 +31,19 @@ func init() {
cmdRoot.AddCommand(cmdList)
}
func runList(cmd *cobra.Command, opts GlobalOptions, args []string) error {
func runList(ctx context.Context, cmd *cobra.Command, opts GlobalOptions, args []string) error {
if len(args) != 1 {
return errors.Fatal("type not specified, usage: " + cmd.Use)
}
repo, err := OpenRepository(opts)
repo, err := OpenRepository(ctx, opts)
if err != nil {
return err
}
if !opts.NoLock && args[0] != "locks" {
lock, err := lockRepo(opts.ctx, repo)
var lock *restic.Lock
lock, ctx, err = lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
@@ -60,20 +63,20 @@ func runList(cmd *cobra.Command, opts GlobalOptions, args []string) error {
case "locks":
t = restic.LockFile
case "blobs":
return repository.ForAllIndexes(opts.ctx, repo, func(id restic.ID, idx *repository.Index, oldFormat bool, err error) error {
return index.ForAllIndexes(ctx, repo, func(id restic.ID, idx *index.Index, oldFormat bool, err error) error {
if err != nil {
return err
}
for blobs := range idx.Each(opts.ctx) {
idx.Each(ctx, func(blobs restic.PackedBlob) {
Printf("%v %v\n", blobs.Type, blobs.ID)
}
})
return nil
})
default:
return errors.Fatal("invalid type")
}
return repo.List(opts.ctx, t, func(id restic.ID, size int64) error {
return repo.List(ctx, t, func(id restic.ID, size int64) error {
Printf("%s\n", id)
return nil
})
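idx.Each changes from a channel that had to be drained into a callback, avoiding a goroutine and per-item channel sends on every iteration. The shape of the new API on a toy index type (not restic's actual interface):

package main

import "fmt"

type packedBlob struct{ Type, ID string }

type index struct{ blobs []packedBlob }

// Each calls fn once per blob; no channel or goroutine involved
func (ix index) Each(fn func(packedBlob)) {
	for _, b := range ix.blobs {
		fn(b)
	}
}

func main() {
	ix := index{blobs: []packedBlob{{"data", "4f3a"}, {"tree", "9c21"}}}
	ix.Each(func(b packedBlob) { fmt.Printf("%v %v\n", b.Type, b.ID) })
}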

View File

@@ -42,16 +42,14 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runLs(lsOptions, globalOptions, args)
return runLs(cmd.Context(), lsOptions, globalOptions, args)
},
}
// LsOptions collects all options for the ls command.
type LsOptions struct {
ListLong bool
Hosts []string
Tags restic.TagLists
Paths []string
ListLong bool
snapshotFilterOptions
Recursive bool
}
@@ -61,10 +59,8 @@ func init() {
cmdRoot.AddCommand(cmdLs)
flags := cmdLs.Flags()
initSingleSnapshotFilterOptions(flags, &lsOptions.snapshotFilterOptions)
flags.BoolVarP(&lsOptions.ListLong, "long", "l", false, "use a long listing format showing size and mode")
flags.StringArrayVarP(&lsOptions.Hosts, "host", "H", nil, "only consider snapshots for this `host`, when snapshot ID \"latest\" is given (can be specified multiple times)")
flags.Var(&lsOptions.Tags, "tag", "only consider snapshots which include this `taglist`, when snapshot ID \"latest\" is given (can be specified multiple times)")
flags.StringArrayVar(&lsOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`, when snapshot ID \"latest\" is given (can be specified multiple times)")
flags.BoolVar(&lsOptions.Recursive, "recursive", false, "include files in subfolders of the listed directories")
}
@@ -115,7 +111,7 @@ func lsNodeJSON(enc *json.Encoder, path string, node *restic.Node) error {
return enc.Encode(n)
}
func runLs(opts LsOptions, gopts GlobalOptions, args []string) error {
func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []string) error {
if len(args) == 0 {
return errors.Fatal("no snapshot ID specified, specify snapshot ID or use special ID 'latest'")
}
@@ -165,23 +161,20 @@ func runLs(opts LsOptions, gopts GlobalOptions, args []string) error {
return false
}
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
snapshotLister, err := backend.MemorizeList(gopts.ctx, repo.Backend(), restic.SnapshotFile)
snapshotLister, err := backend.MemorizeList(ctx, repo.Backend(), restic.SnapshotFile)
if err != nil {
return err
}
if err = repo.LoadIndex(gopts.ctx); err != nil {
if err = repo.LoadIndex(ctx); err != nil {
return err
}
ctx, cancel := context.WithCancel(gopts.ctx)
defer cancel()
var (
printSnapshot func(sn *restic.Snapshot)
printNode func(path string, node *restic.Node)
@@ -217,45 +210,48 @@ func runLs(opts LsOptions, gopts GlobalOptions, args []string) error {
}
}
for sn := range FindFilteredSnapshots(ctx, snapshotLister, repo, opts.Hosts, opts.Tags, opts.Paths, args[:1]) {
printSnapshot(sn)
sn, err := restic.FindFilteredSnapshot(ctx, snapshotLister, repo, opts.Hosts, opts.Tags, opts.Paths, nil, args[0])
if err != nil {
return err
}
err := walker.Walk(ctx, repo, *sn.Tree, nil, func(_ restic.ID, nodepath string, node *restic.Node, err error) (bool, error) {
if err != nil {
return false, err
}
if node == nil {
return false, nil
}
if withinDir(nodepath) {
// if we're within a dir, print the node
printNode(nodepath, node)
// if recursive listing is requested, signal the walker that it
// should continue walking recursively
if opts.Recursive {
return false, nil
}
}
// if there's an upcoming match deeper in the tree (but we're not
// there yet), signal the walker to descend into any subdirs
if approachingMatchingTree(nodepath) {
return false, nil
}
// otherwise, signal the walker to not walk recursively into any
// subdirs
if node.Type == "dir" {
return false, walker.ErrSkipNode
}
return false, nil
})
printSnapshot(sn)
err = walker.Walk(ctx, repo, *sn.Tree, nil, func(_ restic.ID, nodepath string, node *restic.Node, err error) (bool, error) {
if err != nil {
return err
return false, err
}
if node == nil {
return false, nil
}
if withinDir(nodepath) {
// if we're within a dir, print the node
printNode(nodepath, node)
// if recursive listing is requested, signal the walker that it
// should continue walking recursively
if opts.Recursive {
return false, nil
}
}
// if there's an upcoming match deeper in the tree (but we're not
// there yet), signal the walker to descend into any subdirs
if approachingMatchingTree(nodepath) {
return false, nil
}
// otherwise, signal the walker to not walk recursively into any
// subdirs
if node.Type == "dir" {
return false, walker.ErrSkipNode
}
return false, nil
})
if err != nil {
return err
}
return nil
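The walker callback above encodes three decisions: print nodes inside a matched directory, keep descending while a match may still lie deeper, and skip unrelated subtrees. A compressed illustration of that decision order, with invented paths and prefix checks standing in for the real matchers:

package main

import (
	"fmt"
	"strings"
)

func decide(nodepath, match string, recursive bool) string {
	within := strings.HasPrefix(nodepath, match)
	approaching := strings.HasPrefix(match, nodepath)
	switch {
	case within && recursive:
		return "print and keep walking"
	case within:
		return "print, then skip subdirs"
	case approaching:
		return "descend, match is deeper"
	default:
		return "skip subtree"
	}
}

func main() {
	for _, p := range []string{"/home", "/home/user", "/home/user/doc.txt", "/var"} {
		fmt.Printf("%-20s %s\n", p, decide(p, "/home/user", false))
	}
}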

View File

@@ -1,6 +1,8 @@
package main
import (
"context"
"github.com/restic/restic/internal/migrations"
"github.com/restic/restic/internal/restic"
@@ -22,7 +24,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runMigrate(migrateOptions, globalOptions, args)
return runMigrate(cmd.Context(), migrateOptions, globalOptions, args)
},
}
@@ -39,13 +41,12 @@ func init() {
f.BoolVarP(&migrateOptions.Force, "force", "f", false, `apply a migration a second time`)
}
func checkMigrations(opts MigrateOptions, gopts GlobalOptions, repo restic.Repository) error {
ctx := gopts.ctx
func checkMigrations(ctx context.Context, repo restic.Repository) error {
Printf("available migrations:\n")
found := false
for _, m := range migrations.All {
ok, err := m.Check(ctx, repo)
ok, _, err := m.Check(ctx, repo)
if err != nil {
return err
}
@@ -57,27 +58,28 @@ func checkMigrations(opts MigrateOptions, gopts GlobalOptions, repo restic.Repos
}
if !found {
Printf("no migrations found")
Printf("no migrations found\n")
}
return nil
}
func applyMigrations(opts MigrateOptions, gopts GlobalOptions, repo restic.Repository, args []string) error {
ctx := gopts.ctx
func applyMigrations(ctx context.Context, opts MigrateOptions, gopts GlobalOptions, repo restic.Repository, args []string) error {
var firsterr error
for _, name := range args {
for _, m := range migrations.All {
if m.Name() == name {
ok, err := m.Check(ctx, repo)
ok, reason, err := m.Check(ctx, repo)
if err != nil {
return err
}
if !ok {
if !opts.Force {
Warnf("migration %v cannot be applied: check failed\nIf you want to apply this migration anyway, re-run with option --force\n", m.Name())
if reason == "" {
reason = "check failed"
}
Warnf("migration %v cannot be applied: %v\nIf you want to apply this migration anyway, re-run with option --force\n", m.Name(), reason)
continue
}
@@ -91,7 +93,7 @@ func applyMigrations(opts MigrateOptions, gopts GlobalOptions, repo restic.Repos
checkGopts := gopts
// the repository is already locked
checkGopts.NoLock = true
err = runCheck(checkOptions, checkGopts, []string{})
err = runCheck(ctx, checkOptions, checkGopts, []string{})
if err != nil {
return err
}
@@ -114,21 +116,21 @@ func applyMigrations(opts MigrateOptions, gopts GlobalOptions, repo restic.Repos
return firsterr
}
func runMigrate(opts MigrateOptions, gopts GlobalOptions, args []string) error {
repo, err := OpenRepository(gopts)
func runMigrate(ctx context.Context, opts MigrateOptions, gopts GlobalOptions, args []string) error {
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
lock, err := lockRepoExclusive(gopts.ctx, repo)
lock, ctx, err := lockRepoExclusive(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
if len(args) == 0 {
return checkMigrations(opts, gopts, repo)
return checkMigrations(ctx, repo)
}
return applyMigrations(opts, gopts, repo, args)
return applyMigrations(ctx, opts, gopts, repo, args)
}
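m.Check now returns a reason string next to the boolean, which is what lets the warning explain why a migration does not apply instead of the generic "check failed". A sketch of that extended signature (the migration type here is hypothetical):

package main

import "fmt"

type migration struct{ repoVersion uint }

// Check reports whether the migration applies, and if not, why
func (m migration) Check() (ok bool, reason string, err error) {
	if m.repoVersion >= 2 {
		return false, "repository already upgraded", nil
	}
	return true, "", nil
}

func main() {
	ok, reason, err := migration{repoVersion: 2}.Check()
	if err != nil {
		panic(err)
	}
	if !ok {
		if reason == "" {
			reason = "check failed" // fall back to the generic message
		}
		fmt.Println("migration cannot be applied:", reason)
	}
}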

View File

@@ -4,6 +4,7 @@
package main
import (
"context"
"os"
"strings"
"time"
@@ -17,8 +18,8 @@ import (
resticfs "github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/fuse"
systemFuse "bazil.org/fuse"
"bazil.org/fuse/fs"
systemFuse "github.com/anacrolix/fuse"
"github.com/anacrolix/fuse/fs"
)
var cmdMount = &cobra.Command{
@@ -67,7 +68,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runMount(mountOptions, globalOptions, args)
return runMount(cmd.Context(), mountOptions, globalOptions, args)
},
}
@@ -76,11 +77,9 @@ type MountOptions struct {
OwnerRoot bool
AllowOther bool
NoDefaultPermissions bool
Hosts []string
Tags restic.TagLists
Paths []string
TimeTemplate string
PathTemplates []string
snapshotFilterOptions
TimeTemplate string
PathTemplates []string
}
var mountOptions MountOptions
@@ -93,9 +92,7 @@ func init() {
mountFlags.BoolVar(&mountOptions.AllowOther, "allow-other", false, "allow other users to access the data in the mounted directory")
mountFlags.BoolVar(&mountOptions.NoDefaultPermissions, "no-default-permissions", false, "for 'allow-other', ignore Unix permissions and allow users to read all snapshot files")
mountFlags.StringArrayVarP(&mountOptions.Hosts, "host", "H", nil, `only consider snapshots for this host (can be specified multiple times)`)
mountFlags.Var(&mountOptions.Tags, "tag", "only consider snapshots which include this `taglist`")
mountFlags.StringArrayVar(&mountOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`")
initMultiSnapshotFilterOptions(mountFlags, &mountOptions.snapshotFilterOptions, true)
mountFlags.StringArrayVar(&mountOptions.PathTemplates, "path-template", nil, "set `template` for path names (can be specified multiple times)")
mountFlags.StringVar(&mountOptions.TimeTemplate, "snapshot-template", time.RFC3339, "set `template` to use for snapshot dirs")
@@ -103,7 +100,7 @@ func init() {
_ = mountFlags.MarkDeprecated("snapshot-template", "use --time-template")
}
func runMount(opts MountOptions, gopts GlobalOptions, args []string) error {
func runMount(ctx context.Context, opts MountOptions, gopts GlobalOptions, args []string) error {
if opts.TimeTemplate == "" {
return errors.Fatal("time template string cannot be empty")
}
@@ -119,20 +116,21 @@ func runMount(opts MountOptions, gopts GlobalOptions, args []string) error {
debug.Log("start mount")
defer debug.Log("finish mount")
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
if !gopts.NoLock {
lock, err := lockRepo(gopts.ctx, repo)
var lock *restic.Lock
lock, ctx, err = lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
}
err = repo.LoadIndex(gopts.ctx)
err = repo.LoadIndex(ctx)
if err != nil {
return err
}
@@ -158,13 +156,17 @@ func runMount(opts MountOptions, gopts GlobalOptions, args []string) error {
}
}
AddCleanupHandler(func() error {
AddCleanupHandler(func(code int) (int, error) {
debug.Log("running umount cleanup handler for mount at %v", mountpoint)
err := umount(mountpoint)
if err != nil {
Warnf("unable to umount (maybe already umounted or still in use?): %v\n", err)
}
return nil
// replace error code of sigint
if code == 130 {
code = 0
}
return code, nil
})
c, err := systemFuse.Mount(mountpoint, mountOptions...)
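Cleanup handlers now receive the pending exit code and may rewrite it, which is how mount turns SIGINT's code 130 into a clean 0 after a successful unmount. The control flow reduced to its core (AddCleanupHandler is restic-internal; this re-implements the idea):

package main

import "fmt"

// a handler may adjust the exit code before the process terminates
type cleanupHandler func(code int) (int, error)

func runCleanup(code int, handlers []cleanupHandler) int {
	for _, h := range handlers {
		var err error
		if code, err = h(code); err != nil {
			fmt.Println("cleanup error:", err)
		}
	}
	return code
}

func main() {
	umountHandler := func(code int) (int, error) {
		if code == 130 { // interrupted by SIGINT: report success after clean unmount
			code = 0
		}
		return code, nil
	}
	fmt.Println(runCleanup(130, []cleanupHandler{umountHandler})) // prints 0
}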

View File

@@ -9,9 +9,11 @@ import (
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/index"
"github.com/restic/restic/internal/pack"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/spf13/cobra"
)
@@ -34,7 +36,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runPrune(pruneOptions, globalOptions)
return runPrune(cmd.Context(), pruneOptions, globalOptions)
},
}
@@ -134,7 +136,7 @@ func verifyPruneOptions(opts *PruneOptions) error {
return nil
}
func runPrune(opts PruneOptions, gopts GlobalOptions) error {
func runPrune(ctx context.Context, opts PruneOptions, gopts GlobalOptions) error {
err := verifyPruneOptions(&opts)
if err != nil {
return err
@@ -144,7 +146,7 @@ func runPrune(opts PruneOptions, gopts GlobalOptions) error {
return errors.Fatal("disabled compression and `--repack-uncompressed` are mutually exclusive")
}
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
@@ -165,16 +167,16 @@ func runPrune(opts PruneOptions, gopts GlobalOptions) error {
opts.unsafeRecovery = true
}
lock, err := lockRepoExclusive(gopts.ctx, repo)
lock, ctx, err := lockRepoExclusive(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
return runPruneWithRepo(opts, gopts, repo, restic.NewIDSet())
return runPruneWithRepo(ctx, opts, gopts, repo, restic.NewIDSet())
}
func runPruneWithRepo(opts PruneOptions, gopts GlobalOptions, repo *repository.Repository, ignoreSnapshots restic.IDSet) error {
func runPruneWithRepo(ctx context.Context, opts PruneOptions, gopts GlobalOptions, repo *repository.Repository, ignoreSnapshots restic.IDSet) error {
// we do not need index updates while pruning!
repo.DisableAutoIndexUpdate()
@@ -184,22 +186,26 @@ func runPruneWithRepo(opts PruneOptions, gopts GlobalOptions, repo *repository.R
Verbosef("loading indexes...\n")
// loading the index before the snapshots is ok, as we use an exclusive lock here
err := repo.LoadIndex(gopts.ctx)
err := repo.LoadIndex(ctx)
if err != nil {
return err
}
plan, stats, err := planPrune(opts, gopts, repo, ignoreSnapshots)
plan, stats, err := planPrune(ctx, opts, repo, ignoreSnapshots, gopts.Quiet)
if err != nil {
return err
}
err = printPruneStats(gopts, stats)
if opts.DryRun {
Verbosef("\nWould have made the following changes:")
}
err = printPruneStats(stats)
if err != nil {
return err
}
return doPrune(opts, gopts, repo, plan)
return doPrune(ctx, opts, gopts, repo, plan)
}
type pruneStats struct {
@@ -212,13 +218,14 @@ type pruneStats struct {
repackrm uint
}
size struct {
used uint64
duplicate uint64
unused uint64
remove uint64
repack uint64
repackrm uint64
unref uint64
used uint64
duplicate uint64
unused uint64
remove uint64
repack uint64
repackrm uint64
unref uint64
uncompressed uint64
}
packs struct {
used uint
@@ -232,11 +239,11 @@ type pruneStats struct {
}
type prunePlan struct {
removePacksFirst restic.IDSet // packs to remove first (unreferenced packs)
repackPacks restic.IDSet // packs to repack
keepBlobs restic.BlobSet // blobs to keep during repacking
removePacks restic.IDSet // packs to remove
ignorePacks restic.IDSet // packs to ignore when rebuilding the index
removePacksFirst restic.IDSet // packs to remove first (unreferenced packs)
repackPacks restic.IDSet // packs to repack
keepBlobs restic.CountedBlobSet // blobs to keep during repacking
removePacks restic.IDSet // packs to remove
ignorePacks restic.IDSet // packs to ignore when rebuilding the index
}
type packInfo struct {
@@ -251,15 +258,15 @@ type packInfo struct {
type packInfoWithID struct {
ID restic.ID
packInfo
mustCompress bool
}
// planPrune selects which files to rewrite, which to delete, and which blobs to keep.
// It also returns summary statistics.
func planPrune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, ignoreSnapshots restic.IDSet) (prunePlan, pruneStats, error) {
ctx := gopts.ctx
func planPrune(ctx context.Context, opts PruneOptions, repo restic.Repository, ignoreSnapshots restic.IDSet, quiet bool) (prunePlan, pruneStats, error) {
var stats pruneStats
usedBlobs, err := getUsedBlobs(gopts, repo, ignoreSnapshots)
usedBlobs, err := getUsedBlobs(ctx, repo, ignoreSnapshots, quiet)
if err != nil {
return prunePlan{}, stats, err
}
@@ -271,19 +278,25 @@ func planPrune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, i
}
Verbosef("collecting packs for deletion and repacking\n")
plan, err := decidePackAction(ctx, opts, gopts, repo, indexPack, &stats)
plan, err := decidePackAction(ctx, opts, repo, indexPack, &stats, quiet)
if err != nil {
return prunePlan{}, stats, err
}
if len(plan.repackPacks) != 0 {
blobCount := keepBlobs.Len()
// when repacking, we do not want to keep blobs which are
// already contained in kept packs, so delete them from keepBlobs
for blob := range repo.Index().Each(ctx) {
repo.Index().Each(ctx, func(blob restic.PackedBlob) {
if plan.removePacks.Has(blob.PackID) || plan.repackPacks.Has(blob.PackID) {
continue
return
}
keepBlobs.Delete(blob.BlobHandle)
})
if keepBlobs.Len() < blobCount/2 {
// replace with copy to shrink map to necessary size if there's a chance to benefit
keepBlobs = keepBlobs.Copy()
}
} else {
// keepBlobs is only needed if packs are repacked
@@ -294,46 +307,40 @@ func planPrune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, i
return plan, stats, nil
}
func packInfoFromIndex(ctx context.Context, idx restic.MasterIndex, usedBlobs restic.BlobSet, stats *pruneStats) (restic.BlobSet, map[restic.ID]packInfo, error) {
keepBlobs := restic.NewBlobSet()
duplicateBlobs := make(map[restic.BlobHandle]uint8)
func packInfoFromIndex(ctx context.Context, idx restic.MasterIndex, usedBlobs restic.CountedBlobSet, stats *pruneStats) (restic.CountedBlobSet, map[restic.ID]packInfo, error) {
// iterate over all blobs in index to find out which blobs are duplicates
for blob := range idx.Each(ctx) {
// The counter in usedBlobs describes how many instances of the blob exist in the repository index
// Thus 0 == blob is missing, 1 == blob exists once, >= 2 == duplicates exist
idx.Each(ctx, func(blob restic.PackedBlob) {
bh := blob.BlobHandle
size := uint64(blob.Length)
switch {
case usedBlobs.Has(bh): // used blob, move to keepBlobs
usedBlobs.Delete(bh)
keepBlobs.Insert(bh)
stats.size.used += size
stats.blobs.used++
case keepBlobs.Has(bh): // duplicate blob
count, ok := duplicateBlobs[bh]
if !ok {
count = 2 // this one is already the second blob!
} else if count < math.MaxUint8 {
count, ok := usedBlobs[bh]
if ok {
if count < math.MaxUint8 {
// don't overflow, but saturate count at 255
// this can lead to a non-optimal pack selection, but won't cause
// problems otherwise
count++
}
duplicateBlobs[bh] = count
stats.size.duplicate += size
stats.blobs.duplicate++
default:
stats.size.unused += size
stats.blobs.unused++
usedBlobs[bh] = count
}
})
// Check if all used blobs have been found in index
missingBlobs := restic.NewBlobSet()
for bh, count := range usedBlobs {
if count == 0 {
// blob does not exist in any pack files
missingBlobs.Insert(bh)
}
}
// Check if all used blobs have been found in index
if len(usedBlobs) != 0 {
if len(missingBlobs) != 0 {
Warnf("%v not found in the index\n\n"+
"Integrity check failed: Data seems to be missing.\n"+
"Will not start prune to prevent (additional) data loss!\n"+
"Please report this error (along with the output of the 'prune' run) at\n"+
"https://github.com/restic/restic/issues/new/choose\n", usedBlobs)
"https://github.com/restic/restic/issues/new/choose\n", missingBlobs)
return nil, nil, errorIndexIncomplete
}
@@ -345,8 +352,9 @@ func packInfoFromIndex(ctx context.Context, idx restic.MasterIndex, usedBlobs re
indexPack[pid] = packInfo{tpe: restic.NumBlobTypes, usedSize: uint64(hdrSize)}
}
hasDuplicates := false
// iterate over all blobs in index to generate packInfo
for blob := range idx.Each(ctx) {
idx.Each(ctx, func(blob restic.PackedBlob) {
ip := indexPack[blob.PackID]
// Set blob type if not yet set
@@ -361,64 +369,95 @@ func packInfoFromIndex(ctx context.Context, idx restic.MasterIndex, usedBlobs re
bh := blob.BlobHandle
size := uint64(blob.Length)
_, isDuplicate := duplicateBlobs[bh]
dupCount := usedBlobs[bh]
switch {
case isDuplicate: // duplicate blobs will be handled later
case keepBlobs.Has(bh): // used blob, not duplicate
case dupCount >= 2:
hasDuplicates = true
// mark as unused for now, we will later on select one copy
ip.unusedSize += size
ip.unusedBlobs++
// count as duplicate, will later on change one copy to be counted as used
stats.size.duplicate += size
stats.blobs.duplicate++
case dupCount == 1: // used blob, not duplicate
ip.usedSize += size
ip.usedBlobs++
stats.size.used += size
stats.blobs.used++
default: // unused blob
ip.unusedSize += size
ip.unusedBlobs++
stats.size.unused += size
stats.blobs.unused++
}
if !blob.IsCompressed() {
ip.uncompressed = true
}
// update indexPack
indexPack[blob.PackID] = ip
}
})
// if duplicate blobs exist, those will be set to either "used" or "unused":
// - mark only one occurrence of duplicate blobs as used
// - if there are already some used blobs in a pack, possibly mark duplicates in this pack as "used"
// - if there are no used blobs in a pack, possibly mark duplicates as "unused"
if len(duplicateBlobs) > 0 {
if hasDuplicates {
// iterate again over all blobs in index (this is pretty cheap, all in-mem)
for blob := range idx.Each(ctx) {
idx.Each(ctx, func(blob restic.PackedBlob) {
bh := blob.BlobHandle
count, isDuplicate := duplicateBlobs[bh]
if !isDuplicate {
continue
count, ok := usedBlobs[bh]
// skip non-duplicate, i.e. normal blobs
// count == 0 is used to mark that this was a duplicate blob with only a single occurrence remaining
if !ok || count == 1 {
return
}
ip := indexPack[blob.PackID]
size := uint64(blob.Length)
switch {
case count == 0:
// used duplicate exists -> mark as unused
ip.unusedSize += size
ip.unusedBlobs++
case ip.usedBlobs > 0, count == 1:
// other used blobs in pack or "last" occurency -> mark as used
case ip.usedBlobs > 0, count == 0:
// other used blobs in pack or "last" occurrence -> transition to used
ip.usedSize += size
ip.usedBlobs++
// let other occurences be marked as unused
duplicateBlobs[bh] = 0
ip.unusedSize -= size
ip.unusedBlobs--
// same for the global statistics
stats.size.used += size
stats.blobs.used++
stats.size.duplicate -= size
stats.blobs.duplicate--
// let other occurrences remain marked as unused
usedBlobs[bh] = 1
default:
// mark as unused and decrease counter
ip.unusedSize += size
ip.unusedBlobs++
duplicateBlobs[bh] = count - 1
// remain unused and decrease counter
count--
if count == 1 {
// setting count to 1 would lead to forgetting that this blob had duplicates
// thus use the special value zero. This will select the last instance of the blob for keeping.
count = 0
}
usedBlobs[bh] = count
}
// update indexPack
indexPack[blob.PackID] = ip
})
}
// Sanity check. If no duplicates exist, all blobs have value 1. After the
// duplicate handling above, the same holds for blobs that had duplicates.
for _, count := range usedBlobs {
if count != 1 {
panic("internal error during blob selection")
}
}
return keepBlobs, indexPack, nil
return usedBlobs, indexPack, nil
}
func decidePackAction(ctx context.Context, opts PruneOptions, gopts GlobalOptions, repo restic.Repository, indexPack map[restic.ID]packInfo, stats *pruneStats) (prunePlan, error) {
func decidePackAction(ctx context.Context, opts PruneOptions, repo restic.Repository, indexPack map[restic.ID]packInfo, stats *pruneStats, quiet bool) (prunePlan, error) {
removePacksFirst := restic.NewIDSet()
removePacks := restic.NewIDSet()
repackPacks := restic.NewIDSet()
@@ -434,7 +473,7 @@ func decidePackAction(ctx context.Context, opts PruneOptions, gopts GlobalOption
}
// loop over all packs and decide what to do
bar := newProgressMax(!gopts.Quiet, uint64(len(indexPack)), "packs processed")
bar := newProgressMax(quiet, uint64(len(indexPack)), "packs processed")
err := repo.List(ctx, restic.PackFile, func(id restic.ID, packSize int64) error {
p, ok := indexPack[id]
if !ok {
@@ -464,14 +503,15 @@ func decidePackAction(ctx context.Context, opts PruneOptions, gopts GlobalOption
stats.packs.partlyUsed++
}
if p.uncompressed {
stats.size.uncompressed += p.unusedSize + p.usedSize
}
mustCompress := false
if repoVersion >= 2 {
// repo v2: always repack tree blobs if uncompressed
// compress data blobs if requested
mustCompress = (p.tpe == restic.TreeBlob || opts.RepackUncompressed) && p.uncompressed
}
// use a flag that pack must be compressed
p.uncompressed = mustCompress
// decide what to do
switch {
@@ -490,12 +530,12 @@ func decidePackAction(ctx context.Context, opts PruneOptions, gopts GlobalOption
// All blobs in pack are used and not mixed => keep pack!
stats.packs.keep++
} else {
repackSmallCandidates = append(repackSmallCandidates, packInfoWithID{ID: id, packInfo: p})
repackSmallCandidates = append(repackSmallCandidates, packInfoWithID{ID: id, packInfo: p, mustCompress: mustCompress})
}
default:
// all other packs are candidates for repacking
repackCandidates = append(repackCandidates, packInfoWithID{ID: id, packInfo: p})
repackCandidates = append(repackCandidates, packInfoWithID{ID: id, packInfo: p, mustCompress: mustCompress})
}
delete(indexPack, id)
@@ -569,6 +609,9 @@ func decidePackAction(ctx context.Context, opts PruneOptions, gopts GlobalOption
stats.size.repack += p.unusedSize + p.usedSize
stats.blobs.repackrm += p.unusedBlobs
stats.size.repackrm += p.unusedSize
if p.uncompressed {
stats.size.uncompressed -= p.unusedSize + p.usedSize
}
}
// calculate limit for number of unused bytes in the repo after repacking
@@ -583,7 +626,7 @@ func decidePackAction(ctx context.Context, opts PruneOptions, gopts GlobalOption
case reachedRepackSize:
stats.packs.keep++
case p.tpe != restic.DataBlob, p.uncompressed:
case p.tpe != restic.DataBlob, p.mustCompress:
// repacking non-data packs / uncompressed-trees is only limited by repackSize
repack(p.ID, p.packInfo)
@@ -600,6 +643,11 @@ func decidePackAction(ctx context.Context, opts PruneOptions, gopts GlobalOption
stats.packs.repack = uint(len(repackPacks))
stats.packs.remove = uint(len(removePacks))
if repo.Config().Version < 2 {
// compression not supported for repository format version 1
stats.size.uncompressed = 0
}
return prunePlan{removePacksFirst: removePacksFirst,
removePacks: removePacks,
repackPacks: repackPacks,
@@ -608,30 +656,33 @@ func decidePackAction(ctx context.Context, opts PruneOptions, gopts GlobalOption
}
// printPruneStats prints out the statistics
func printPruneStats(gopts GlobalOptions, stats pruneStats) error {
Verboseff("\nused: %10d blobs / %s\n", stats.blobs.used, formatBytes(stats.size.used))
func printPruneStats(stats pruneStats) error {
Verboseff("\nused: %10d blobs / %s\n", stats.blobs.used, ui.FormatBytes(stats.size.used))
if stats.blobs.duplicate > 0 {
Verboseff("duplicates: %10d blobs / %s\n", stats.blobs.duplicate, formatBytes(stats.size.duplicate))
Verboseff("duplicates: %10d blobs / %s\n", stats.blobs.duplicate, ui.FormatBytes(stats.size.duplicate))
}
Verboseff("unused: %10d blobs / %s\n", stats.blobs.unused, formatBytes(stats.size.unused))
Verboseff("unused: %10d blobs / %s\n", stats.blobs.unused, ui.FormatBytes(stats.size.unused))
if stats.size.unref > 0 {
Verboseff("unreferenced: %s\n", formatBytes(stats.size.unref))
Verboseff("unreferenced: %s\n", ui.FormatBytes(stats.size.unref))
}
totalBlobs := stats.blobs.used + stats.blobs.unused + stats.blobs.duplicate
totalSize := stats.size.used + stats.size.duplicate + stats.size.unused + stats.size.unref
unusedSize := stats.size.duplicate + stats.size.unused
Verboseff("total: %10d blobs / %s\n", totalBlobs, formatBytes(totalSize))
Verboseff("unused size: %s of total size\n", formatPercent(unusedSize, totalSize))
Verboseff("total: %10d blobs / %s\n", totalBlobs, ui.FormatBytes(totalSize))
Verboseff("unused size: %s of total size\n", ui.FormatPercent(unusedSize, totalSize))
Verbosef("\nto repack: %10d blobs / %s\n", stats.blobs.repack, formatBytes(stats.size.repack))
Verbosef("this removes: %10d blobs / %s\n", stats.blobs.repackrm, formatBytes(stats.size.repackrm))
Verbosef("to delete: %10d blobs / %s\n", stats.blobs.remove, formatBytes(stats.size.remove+stats.size.unref))
Verbosef("\nto repack: %10d blobs / %s\n", stats.blobs.repack, ui.FormatBytes(stats.size.repack))
Verbosef("this removes: %10d blobs / %s\n", stats.blobs.repackrm, ui.FormatBytes(stats.size.repackrm))
Verbosef("to delete: %10d blobs / %s\n", stats.blobs.remove, ui.FormatBytes(stats.size.remove+stats.size.unref))
totalPruneSize := stats.size.remove + stats.size.repackrm + stats.size.unref
Verbosef("total prune: %10d blobs / %s\n", stats.blobs.remove+stats.blobs.repackrm, formatBytes(totalPruneSize))
Verbosef("remaining: %10d blobs / %s\n", totalBlobs-(stats.blobs.remove+stats.blobs.repackrm), formatBytes(totalSize-totalPruneSize))
Verbosef("total prune: %10d blobs / %s\n", stats.blobs.remove+stats.blobs.repackrm, ui.FormatBytes(totalPruneSize))
if stats.size.uncompressed > 0 {
Verbosef("not yet compressed: %s\n", ui.FormatBytes(stats.size.uncompressed))
}
Verbosef("remaining: %10d blobs / %s\n", totalBlobs-(stats.blobs.remove+stats.blobs.repackrm), ui.FormatBytes(totalSize-totalPruneSize))
unusedAfter := unusedSize - stats.size.remove - stats.size.repackrm
Verbosef("unused size after prune: %s (%s of remaining size)\n",
formatBytes(unusedAfter), formatPercent(unusedAfter, totalSize-totalPruneSize))
ui.FormatBytes(unusedAfter), ui.FormatPercent(unusedAfter, totalSize-totalPruneSize))
Verbosef("\n")
Verboseff("totally used packs: %10d\n", stats.packs.used)
Verboseff("partly used packs: %10d\n", stats.packs.partlyUsed)
@@ -652,11 +703,10 @@ func printPruneStats(gopts GlobalOptions, stats pruneStats) error {
// - rebuild the index while ignoring all files that will be deleted
// - delete the files
// plan.removePacks and plan.ignorePacks are modified in this function.
func doPrune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, plan prunePlan) (err error) {
ctx := gopts.ctx
func doPrune(ctx context.Context, opts PruneOptions, gopts GlobalOptions, repo restic.Repository, plan prunePlan) (err error) {
if opts.DryRun {
if !gopts.JSON && gopts.verbosity >= 2 {
Printf("Repeated prune dry-runs can report slightly different amounts of data to keep or repack. This is expected behavior.\n\n")
if len(plan.removePacksFirst) > 0 {
Printf("Would have removed the following unreferenced packs:\n%v\n\n", plan.removePacksFirst)
}
@@ -670,7 +720,7 @@ func doPrune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, pla
// unreferenced packs can be safely deleted first
if len(plan.removePacksFirst) != 0 {
Verbosef("deleting unreferenced packs\n")
DeleteFiles(gopts, repo, plan.removePacksFirst, restic.PackFile)
DeleteFiles(ctx, gopts, repo, plan.removePacksFirst, restic.PackFile)
}
if len(plan.repackPacks) != 0 {
@@ -692,6 +742,9 @@ func doPrune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, pla
"https://github.com/restic/restic/issues/new/choose\n", plan.keepBlobs)
return errors.Fatal("internal error: blobs were not repacked")
}
// allow GC of the blob set
plan.keepBlobs = nil
}
if len(plan.ignorePacks) == 0 {
@@ -702,13 +755,13 @@ func doPrune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, pla
if opts.unsafeRecovery {
Verbosef("deleting index files\n")
indexFiles := repo.Index().(*repository.MasterIndex).IDs()
err = DeleteFilesChecked(gopts, repo, indexFiles, restic.IndexFile)
indexFiles := repo.Index().(*index.MasterIndex).IDs()
err = DeleteFilesChecked(ctx, gopts, repo, indexFiles, restic.IndexFile)
if err != nil {
return errors.Fatalf("%s", err)
}
} else if len(plan.ignorePacks) != 0 {
err = rebuildIndexFiles(gopts, repo, plan.ignorePacks, nil)
err = rebuildIndexFiles(ctx, gopts, repo, plan.ignorePacks, nil)
if err != nil {
return errors.Fatalf("%s", err)
}
@@ -716,11 +769,11 @@ func doPrune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, pla
if len(plan.removePacks) != 0 {
Verbosef("removing %d old packs\n", len(plan.removePacks))
DeleteFiles(gopts, repo, plan.removePacks, restic.PackFile)
DeleteFiles(ctx, gopts, repo, plan.removePacks, restic.PackFile)
}
if opts.unsafeRecovery {
_, err = writeIndexFiles(gopts, repo, plan.ignorePacks, nil)
_, err = writeIndexFiles(ctx, gopts, repo, plan.ignorePacks, nil)
if err != nil {
return errors.Fatalf("%s", err)
}
@@ -730,31 +783,29 @@ func doPrune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, pla
return nil
}
func writeIndexFiles(gopts GlobalOptions, repo restic.Repository, removePacks restic.IDSet, extraObsolete restic.IDs) (restic.IDSet, error) {
func writeIndexFiles(ctx context.Context, gopts GlobalOptions, repo restic.Repository, removePacks restic.IDSet, extraObsolete restic.IDs) (restic.IDSet, error) {
Verbosef("rebuilding index\n")
bar := newProgressMax(!gopts.Quiet, 0, "packs processed")
obsoleteIndexes, err := repo.Index().Save(gopts.ctx, repo, removePacks, extraObsolete, bar)
obsoleteIndexes, err := repo.Index().Save(ctx, repo, removePacks, extraObsolete, bar)
bar.Done()
return obsoleteIndexes, err
}
func rebuildIndexFiles(gopts GlobalOptions, repo restic.Repository, removePacks restic.IDSet, extraObsolete restic.IDs) error {
obsoleteIndexes, err := writeIndexFiles(gopts, repo, removePacks, extraObsolete)
func rebuildIndexFiles(ctx context.Context, gopts GlobalOptions, repo restic.Repository, removePacks restic.IDSet, extraObsolete restic.IDs) error {
obsoleteIndexes, err := writeIndexFiles(ctx, gopts, repo, removePacks, extraObsolete)
if err != nil {
return err
}
Verbosef("deleting obsolete index files\n")
return DeleteFilesChecked(gopts, repo, obsoleteIndexes, restic.IndexFile)
return DeleteFilesChecked(ctx, gopts, repo, obsoleteIndexes, restic.IndexFile)
}
func getUsedBlobs(gopts GlobalOptions, repo restic.Repository, ignoreSnapshots restic.IDSet) (usedBlobs restic.BlobSet, err error) {
ctx := gopts.ctx
func getUsedBlobs(ctx context.Context, repo restic.Repository, ignoreSnapshots restic.IDSet, quiet bool) (usedBlobs restic.CountedBlobSet, err error) {
var snapshotTrees restic.IDs
Verbosef("loading all snapshots...\n")
err = restic.ForAllSnapshots(gopts.ctx, repo.Backend(), repo, ignoreSnapshots,
err = restic.ForAllSnapshots(ctx, repo.Backend(), repo, ignoreSnapshots,
func(id restic.ID, sn *restic.Snapshot, err error) error {
if err != nil {
debug.Log("failed to load snapshot %v (error %v)", id, err)
@@ -770,9 +821,9 @@ func getUsedBlobs(gopts GlobalOptions, repo restic.Repository, ignoreSnapshots r
Verbosef("finding data that is still in use for %d snapshots\n", len(snapshotTrees))
usedBlobs = restic.NewBlobSet()
usedBlobs = restic.NewCountedBlobSet()
bar := newProgressMax(!gopts.Quiet, uint64(len(snapshotTrees)), "snapshots")
bar := newProgressMax(!quiet, uint64(len(snapshotTrees)), "snapshots")
defer bar.Done()
err = restic.FindUsedBlobs(ctx, repo, snapshotTrees, usedBlobs, bar)
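CountedBlobSet replaces the old BlobSet-plus-duplicate-map combination: every used blob carries a uint8 count of how many copies the index knows about, saturated at 255 so the counter cannot overflow. A small model of that bookkeeping, with a plain map standing in for restic.CountedBlobSet:

package main

import (
	"fmt"
	"math"
)

type blobHandle struct{ ID string }

func main() {
	// count semantics: 0 = missing from index, 1 = exists once, >= 2 = duplicates
	used := map[blobHandle]uint8{{"aa"}: 0, {"bb"}: 0}
	indexed := []blobHandle{{"aa"}, {"bb"}, {"bb"}, {"cc"}}

	for _, bh := range indexed {
		if count, ok := used[bh]; ok {
			if count < math.MaxUint8 {
				count++ // saturate; a capped count only degrades pack selection
			}
			used[bh] = count
		}
	}
	for bh, count := range used {
		if count == 0 {
			fmt.Println("missing from index:", bh.ID)
		} else {
			fmt.Printf("%s: %d copies\n", bh.ID, count)
		}
	}
}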

View File

@@ -1,6 +1,9 @@
package main
import (
"context"
"github.com/restic/restic/internal/index"
"github.com/restic/restic/internal/pack"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
@@ -22,7 +25,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runRebuildIndex(rebuildIndexOptions, globalOptions)
return runRebuildIndex(cmd.Context(), rebuildIndexOptions, globalOptions)
},
}
@@ -40,24 +43,22 @@ func init() {
}
func runRebuildIndex(opts RebuildIndexOptions, gopts GlobalOptions) error {
repo, err := OpenRepository(gopts)
func runRebuildIndex(ctx context.Context, opts RebuildIndexOptions, gopts GlobalOptions) error {
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
lock, err := lockRepoExclusive(gopts.ctx, repo)
lock, ctx, err := lockRepoExclusive(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
return rebuildIndex(opts, gopts, repo, restic.NewIDSet())
return rebuildIndex(ctx, opts, gopts, repo, restic.NewIDSet())
}
func rebuildIndex(opts RebuildIndexOptions, gopts GlobalOptions, repo *repository.Repository, ignorePacks restic.IDSet) error {
ctx := gopts.ctx
func rebuildIndex(ctx context.Context, opts RebuildIndexOptions, gopts GlobalOptions, repo *repository.Repository, ignorePacks restic.IDSet) error {
var obsoleteIndexes restic.IDs
packSizeFromList := make(map[restic.ID]int64)
packSizeFromIndex := make(map[restic.ID]int64)
@@ -74,8 +75,8 @@ func rebuildIndex(opts RebuildIndexOptions, gopts GlobalOptions, repo *repositor
}
} else {
Verbosef("loading indexes...\n")
mi := repository.NewMasterIndex()
err := repository.ForAllIndexes(ctx, repo, func(id restic.ID, idx *repository.Index, oldFormat bool, err error) error {
mi := index.NewMasterIndex()
err := index.ForAllIndexes(ctx, repo, func(id restic.ID, idx *index.Index, oldFormat bool, err error) error {
if err != nil {
Warnf("removing invalid index %v: %v\n", id, err)
obsoleteIndexes = append(obsoleteIndexes, id)
@@ -141,7 +142,7 @@ func rebuildIndex(opts RebuildIndexOptions, gopts GlobalOptions, repo *repositor
}
}
err = rebuildIndexFiles(gopts, repo, removePacks, obsoleteIndexes)
err = rebuildIndexFiles(ctx, gopts, repo, removePacks, obsoleteIndexes)
if err != nil {
return err
}

View File

@@ -27,7 +27,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runRecover(globalOptions)
return runRecover(cmd.Context(), globalOptions)
},
}
@@ -35,30 +35,30 @@ func init() {
cmdRoot.AddCommand(cmdRecover)
}
func runRecover(gopts GlobalOptions) error {
func runRecover(ctx context.Context, gopts GlobalOptions) error {
hostname, err := os.Hostname()
if err != nil {
return err
}
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
lock, err := lockRepo(gopts.ctx, repo)
lock, ctx, err := lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
snapshotLister, err := backend.MemorizeList(gopts.ctx, repo.Backend(), restic.SnapshotFile)
snapshotLister, err := backend.MemorizeList(ctx, repo.Backend(), restic.SnapshotFile)
if err != nil {
return err
}
Verbosef("load index files\n")
if err = repo.LoadIndex(gopts.ctx); err != nil {
if err = repo.LoadIndex(ctx); err != nil {
return err
}
@@ -66,16 +66,16 @@ func runRecover(gopts GlobalOptions) error {
// tree. If it is not referenced, we have a root tree.
trees := make(map[restic.ID]bool)
for blob := range repo.Index().Each(gopts.ctx) {
repo.Index().Each(ctx, func(blob restic.PackedBlob) {
if blob.Type == restic.TreeBlob {
trees[blob.Blob.ID] = false
}
}
})
Verbosef("load %d trees\n", len(trees))
bar := newProgressMax(!gopts.Quiet, uint64(len(trees)), "trees loaded")
for id := range trees {
tree, err := restic.LoadTree(gopts.ctx, repo, id)
tree, err := restic.LoadTree(ctx, repo, id)
if err != nil {
Warnf("unable to load tree %v: %v\n", id.Str(), err)
continue
@@ -91,7 +91,7 @@ func runRecover(gopts GlobalOptions) error {
bar.Done()
Verbosef("load snapshots\n")
err = restic.ForAllSnapshots(gopts.ctx, snapshotLister, repo, nil, func(id restic.ID, sn *restic.Snapshot, err error) error {
err = restic.ForAllSnapshots(ctx, snapshotLister, repo, nil, func(id restic.ID, sn *restic.Snapshot, err error) error {
trees[*sn.Tree] = true
return nil
})
@@ -132,18 +132,18 @@ func runRecover(gopts GlobalOptions) error {
}
}
wg, ctx := errgroup.WithContext(gopts.ctx)
repo.StartPackUploader(ctx, wg)
wg, wgCtx := errgroup.WithContext(ctx)
repo.StartPackUploader(wgCtx, wg)
var treeID restic.ID
wg.Go(func() error {
var err error
treeID, err = restic.SaveTree(ctx, repo, tree)
treeID, err = restic.SaveTree(wgCtx, repo, tree)
if err != nil {
return errors.Fatalf("unable to save new tree to the repository: %v", err)
}
err = repo.Flush(ctx)
err = repo.Flush(wgCtx)
if err != nil {
return errors.Fatalf("unable to save blobs to the repository: %v", err)
}
@@ -154,7 +154,7 @@ func runRecover(gopts GlobalOptions) error {
return err
}
return createSnapshot(gopts.ctx, "/recover", hostname, []string{"recovered"}, repo, &treeID)
return createSnapshot(ctx, "/recover", hostname, []string{"recovered"}, repo, &treeID)
}
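Renaming the derived context to wgCtx stops it from shadowing the caller's ctx, so group cancellation only affects the goroutines that belong to the group. The pattern in isolation:

package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	ctx := context.Background()

	// derive the group-scoped context under a distinct name instead of
	// shadowing ctx; it is canceled as soon as any group member fails
	wg, wgCtx := errgroup.WithContext(ctx)
	wg.Go(func() error {
		<-wgCtx.Done() // unblocks when the goroutine below returns an error
		return nil
	})
	wg.Go(func() error { return fmt.Errorf("upload failed") })

	fmt.Println(wg.Wait()) // prints: upload failed
}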

View File

@@ -1,6 +1,7 @@
package main
import (
"context"
"strings"
"time"
@@ -30,7 +31,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runRestore(restoreOptions, globalOptions, args)
return runRestore(cmd.Context(), restoreOptions, globalOptions, args)
},
}
@@ -41,10 +42,9 @@ type RestoreOptions struct {
Include []string
InsensitiveInclude []string
Target string
Hosts []string
Paths []string
Tags restic.TagLists
Verify bool
snapshotFilterOptions
Sparse bool
Verify bool
}
var restoreOptions RestoreOptions
@@ -59,36 +59,34 @@ func init() {
flags.StringArrayVar(&restoreOptions.InsensitiveInclude, "iinclude", nil, "same as `--include` but ignores the casing of filenames")
flags.StringVarP(&restoreOptions.Target, "target", "t", "", "directory to extract data to")
flags.StringArrayVarP(&restoreOptions.Hosts, "host", "H", nil, `only consider snapshots for this host when the snapshot ID is "latest" (can be specified multiple times)`)
flags.Var(&restoreOptions.Tags, "tag", "only consider snapshots which include this `taglist` for snapshot ID \"latest\"")
flags.StringArrayVar(&restoreOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path` for snapshot ID \"latest\"")
initSingleSnapshotFilterOptions(flags, &restoreOptions.snapshotFilterOptions)
flags.BoolVar(&restoreOptions.Sparse, "sparse", false, "restore files as sparse")
flags.BoolVar(&restoreOptions.Verify, "verify", false, "verify restored files content")
}
func runRestore(opts RestoreOptions, gopts GlobalOptions, args []string) error {
ctx := gopts.ctx
func runRestore(ctx context.Context, opts RestoreOptions, gopts GlobalOptions, args []string) error {
hasExcludes := len(opts.Exclude) > 0 || len(opts.InsensitiveExclude) > 0
hasIncludes := len(opts.Include) > 0 || len(opts.InsensitiveInclude) > 0
// Validate provided patterns
if len(opts.Exclude) > 0 {
if valid, invalidPatterns := filter.ValidatePatterns(opts.Exclude); !valid {
return errors.Fatalf("--exclude: invalid pattern(s) provided:\n%s", strings.Join(invalidPatterns, "\n"))
if err := filter.ValidatePatterns(opts.Exclude); err != nil {
return errors.Fatalf("--exclude: %s", err)
}
}
if len(opts.InsensitiveExclude) > 0 {
if valid, invalidPatterns := filter.ValidatePatterns(opts.InsensitiveExclude); !valid {
return errors.Fatalf("--iexclude: invalid pattern(s) provided:\n%s", strings.Join(invalidPatterns, "\n"))
if err := filter.ValidatePatterns(opts.InsensitiveExclude); err != nil {
return errors.Fatalf("--iexclude: %s", err)
}
}
if len(opts.Include) > 0 {
if valid, invalidPatterns := filter.ValidatePatterns(opts.Include); !valid {
return errors.Fatalf("--include: invalid pattern(s) provided:\n%s", strings.Join(invalidPatterns, "\n"))
if err := filter.ValidatePatterns(opts.Include); err != nil {
return errors.Fatalf("--include: %s", err)
}
}
if len(opts.InsensitiveInclude) > 0 {
if valid, invalidPatterns := filter.ValidatePatterns(opts.InsensitiveInclude); !valid {
return errors.Fatalf("--iinclude: invalid pattern(s) provided:\n%s", strings.Join(invalidPatterns, "\n"))
if err := filter.ValidatePatterns(opts.InsensitiveInclude); err != nil {
return errors.Fatalf("--iinclude: %s", err)
}
}
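filter.ValidatePatterns now returns a single error instead of a (bool, []string) pair, collapsing all four call sites into a uniform if err := ... check. A sketch of a validator with that shape, using filepath.Match as a simplified stand-in for restic's pattern rules:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// validatePatterns returns one error describing all invalid patterns
func validatePatterns(patterns []string) error {
	var invalid []string
	for _, p := range patterns {
		if _, err := filepath.Match(p, ""); err != nil {
			invalid = append(invalid, p)
		}
	}
	if len(invalid) > 0 {
		return fmt.Errorf("invalid pattern(s) provided:\n%s", strings.Join(invalid, "\n"))
	}
	return nil
}

func main() {
	if err := validatePatterns([]string{"*.go", "[unclosed"}); err != nil {
		fmt.Println("--exclude:", err)
	}
}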
@@ -119,31 +117,23 @@ func runRestore(opts RestoreOptions, gopts GlobalOptions, args []string) error {
debug.Log("restore %v to %v", snapshotIDString, opts.Target)
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
if !gopts.NoLock {
lock, err := lockRepo(ctx, repo)
var lock *restic.Lock
lock, ctx, err = lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
}
var id restic.ID
if snapshotIDString == "latest" {
id, err = restic.FindLatestSnapshot(ctx, repo.Backend(), repo, opts.Paths, opts.Tags, opts.Hosts, nil)
if err != nil {
Exitf(1, "latest snapshot for criteria not found: %v Paths:%v Hosts:%v", err, opts.Paths, opts.Hosts)
}
} else {
id, err = restic.FindSnapshot(ctx, repo.Backend(), snapshotIDString)
if err != nil {
Exitf(1, "invalid id %q: %v", snapshotIDString, err)
}
sn, err := restic.FindFilteredSnapshot(ctx, repo.Backend(), repo, opts.Hosts, opts.Tags, opts.Paths, nil, snapshotIDString)
if err != nil {
return errors.Fatalf("failed to find snapshot: %v", err)
}
err = repo.LoadIndex(ctx)
@@ -151,10 +141,7 @@ func runRestore(opts RestoreOptions, gopts GlobalOptions, args []string) error {
return err
}
res, err := restorer.NewRestorer(ctx, repo, id)
if err != nil {
Exitf(2, "creating restorer failed: %v\n", err)
}
res := restorer.NewRestorer(ctx, repo, sn, opts.Sparse)
totalErrors := 0
res.Error = func(location string, err error) error {

cmd/restic/cmd_rewrite.go (new file, 216 lines)
View File

@@ -0,0 +1,216 @@
package main
import (
"context"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/walker"
)
var cmdRewrite = &cobra.Command{
Use: "rewrite [flags] [snapshotID ...]",
Short: "Rewrite snapshots to exclude unwanted files",
Long: `
The "rewrite" command excludes files from existing snapshots. It creates new
snapshots containing the same data as the original ones, but without the files
you specify to exclude. All metadata (time, host, tags) will be preserved.
The snapshots to rewrite are specified using the --host, --tag and --path options,
or by providing a list of snapshot IDs. Note that if neither these options nor a
snapshot ID is specified, the command will rewrite all snapshots.
The special tag 'rewrite' will be added to the new snapshots to distinguish
them from the original ones, unless --forget is used. If the --forget option is
used, the original snapshots will instead be directly removed from the repository.
Please note that the --forget option only removes the snapshots and not the actual
data stored in the repository. In order to delete the no longer referenced data,
use the "prune" command.
EXIT STATUS
===========
Exit status is 0 if the command was successful, and non-zero if there was any error.
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runRewrite(cmd.Context(), rewriteOptions, globalOptions, args)
},
}
// RewriteOptions collects all options for the rewrite command.
type RewriteOptions struct {
Forget bool
DryRun bool
snapshotFilterOptions
excludePatternOptions
}
var rewriteOptions RewriteOptions
func init() {
cmdRoot.AddCommand(cmdRewrite)
f := cmdRewrite.Flags()
f.BoolVarP(&rewriteOptions.Forget, "forget", "", false, "remove original snapshots after creating new ones")
f.BoolVarP(&rewriteOptions.DryRun, "dry-run", "n", false, "do not do anything, just print what would be done")
initMultiSnapshotFilterOptions(f, &rewriteOptions.snapshotFilterOptions, true)
initExcludePatternOptions(f, &rewriteOptions.excludePatternOptions)
}
func rewriteSnapshot(ctx context.Context, repo *repository.Repository, sn *restic.Snapshot, opts RewriteOptions) (bool, error) {
if sn.Tree == nil {
return false, errors.Errorf("snapshot %v has nil tree", sn.ID().Str())
}
rejectByNameFuncs, err := opts.excludePatternOptions.CollectPatterns()
if err != nil {
return false, err
}
selectByName := func(nodepath string) bool {
for _, reject := range rejectByNameFuncs {
if reject(nodepath) {
return false
}
}
return true
}
wg, wgCtx := errgroup.WithContext(ctx)
repo.StartPackUploader(wgCtx, wg)
var filteredTree restic.ID
wg.Go(func() error {
filteredTree, err = walker.FilterTree(wgCtx, repo, "/", *sn.Tree, &walker.TreeFilterVisitor{
SelectByName: selectByName,
PrintExclude: func(path string) { Verbosef(fmt.Sprintf("excluding %s\n", path)) },
})
if err != nil {
return err
}
return repo.Flush(wgCtx)
})
err = wg.Wait()
if err != nil {
return false, err
}
if filteredTree == *sn.Tree {
debug.Log("Snapshot %v not modified", sn)
return false, nil
}
debug.Log("Snapshot %v modified", sn)
if opts.DryRun {
Verbosef("would save new snapshot\n")
if opts.Forget {
Verbosef("would remove old snapshot\n")
}
return true, nil
}
// Always set the original snapshot id as this is essentially a new snapshot.
sn.Original = sn.ID()
*sn.Tree = filteredTree
if !opts.Forget {
sn.AddTags([]string{"rewrite"})
}
// Save the new snapshot.
id, err := restic.SaveSnapshot(ctx, repo, sn)
if err != nil {
return false, err
}
if opts.Forget {
h := restic.Handle{Type: restic.SnapshotFile, Name: sn.ID().String()}
if err = repo.Backend().Remove(ctx, h); err != nil {
return false, err
}
debug.Log("removed old snapshot %v", sn.ID())
Verbosef("removed old snapshot %v\n", sn.ID().Str())
}
Verbosef("saved new snapshot %v\n", id.Str())
return true, nil
}
func runRewrite(ctx context.Context, opts RewriteOptions, gopts GlobalOptions, args []string) error {
if opts.excludePatternOptions.Empty() {
return errors.Fatal("Nothing to do: no excludes provided")
}
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
if !opts.DryRun {
var lock *restic.Lock
var err error
if opts.Forget {
Verbosef("create exclusive lock for repository\n")
lock, ctx, err = lockRepoExclusive(ctx, repo)
} else {
lock, ctx, err = lockRepo(ctx, repo)
}
defer unlockRepo(lock)
if err != nil {
return err
}
} else {
repo.SetDryRun()
}
snapshotLister, err := backend.MemorizeList(ctx, repo.Backend(), restic.SnapshotFile)
if err != nil {
return err
}
if err = repo.LoadIndex(ctx); err != nil {
return err
}
changedCount := 0
for sn := range FindFilteredSnapshots(ctx, snapshotLister, repo, opts.Hosts, opts.Tags, opts.Paths, args) {
Verbosef("\nsnapshot %s of %v at %s)\n", sn.ID().Str(), sn.Paths, sn.Time)
changed, err := rewriteSnapshot(ctx, repo, sn, opts)
if err != nil {
return errors.Fatalf("unable to rewrite snapshot ID %q: %v", sn.ID().Str(), err)
}
if changed {
changedCount++
}
}
Verbosef("\n")
if changedCount == 0 {
if !opts.DryRun {
Verbosef("no snapshots were modified\n")
} else {
Verbosef("no snapshots would be modified\n")
}
} else {
if !opts.DryRun {
Verbosef("modified %v snapshots\n", changedCount)
} else {
Verbosef("would modify %v snapshots\n", changedCount)
}
}
return nil
}

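To make the exclude handling in rewriteSnapshot concrete: the loop over rejectByNameFuncs composes all exclude patterns into a single keep/drop decision. A self-contained sketch of that composition (toy reject functions and hypothetical paths, not restic's real pattern matching):

package main

import (
	"fmt"
	"strings"
)

// rejectFunc mirrors the shape of a RejectByNameFunc: true means "exclude".
type rejectFunc func(nodepath string) bool

func main() {
	// Two toy reject functions standing in for compiled exclude patterns.
	rejects := []rejectFunc{
		func(p string) bool { return strings.HasSuffix(p, ".log") },
		func(p string) bool { return strings.Contains(p, "/cache/") },
	}

	// A node is kept only if no reject function matches, exactly like the
	// selectByName closure in rewriteSnapshot above.
	selectByName := func(nodepath string) bool {
		for _, reject := range rejects {
			if reject(nodepath) {
				return false
			}
		}
		return true
	}

	for _, p := range []string{"/home/user/notes.txt", "/home/user/app.log", "/var/cache/pkg"} {
		fmt.Printf("%-25s keep=%v\n", p, selectByName(p))
	}
}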

@@ -1,8 +1,9 @@
// xbuild selfupdate
//go:build selfupdate
package main
import (
"context"
"os"
"path/filepath"
@@ -27,7 +28,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runSelfUpdate(selfUpdateOptions, globalOptions, args)
return runSelfUpdate(cmd.Context(), selfUpdateOptions, globalOptions, args)
},
}
@@ -45,7 +46,7 @@ func init() {
flags.StringVar(&selfUpdateOptions.Output, "output", "", "Save the downloaded file as `filename` (default: running binary itself)")
}
func runSelfUpdate(opts SelfUpdateOptions, gopts GlobalOptions, args []string) error {
func runSelfUpdate(ctx context.Context, opts SelfUpdateOptions, gopts GlobalOptions, args []string) error {
if opts.Output == "" {
file, err := os.Executable()
if err != nil {
@@ -73,7 +74,7 @@ func runSelfUpdate(opts SelfUpdateOptions, gopts GlobalOptions, args []string) e
Verbosef("writing restic to %v\n", opts.Output)
v, err := selfupdate.DownloadLatestStableRelease(gopts.ctx, opts.Output, version, Verbosef)
v, err := selfupdate.DownloadLatestStableRelease(ctx, opts.Output, version, Verbosef)
if err != nil {
return errors.Fatalf("unable to update restic: %v", err)
}


@@ -26,15 +26,13 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runSnapshots(snapshotOptions, globalOptions, args)
return runSnapshots(cmd.Context(), snapshotOptions, globalOptions, args)
},
}
// SnapshotOptions bundles all options for the snapshots command.
type SnapshotOptions struct {
Hosts []string
Tags restic.TagLists
Paths []string
snapshotFilterOptions
Compact bool
Last bool // This option should be removed in favour of Latest.
Latest int
@@ -47,9 +45,7 @@ func init() {
cmdRoot.AddCommand(cmdSnapshots)
f := cmdSnapshots.Flags()
f.StringArrayVarP(&snapshotOptions.Hosts, "host", "H", nil, "only consider snapshots for this `host` (can be specified multiple times)")
f.Var(&snapshotOptions.Tags, "tag", "only consider snapshots which include this `taglist` in the format `tag[,tag,...]` (can be specified multiple times)")
f.StringArrayVar(&snapshotOptions.Paths, "path", nil, "only consider snapshots for this `path` (can be specified multiple times)")
initMultiSnapshotFilterOptions(f, &snapshotOptions.snapshotFilterOptions, true)
f.BoolVarP(&snapshotOptions.Compact, "compact", "c", false, "use compact output format")
f.BoolVar(&snapshotOptions.Last, "last", false, "only show the last snapshot for each host and path")
err := f.MarkDeprecated("last", "use --latest 1")
@@ -61,23 +57,21 @@ func init() {
f.StringVarP(&snapshotOptions.GroupBy, "group-by", "g", "", "`group` snapshots by host, paths and/or tags, separated by comma")
}
func runSnapshots(opts SnapshotOptions, gopts GlobalOptions, args []string) error {
repo, err := OpenRepository(gopts)
func runSnapshots(ctx context.Context, opts SnapshotOptions, gopts GlobalOptions, args []string) error {
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
if !gopts.NoLock {
lock, err := lockRepo(gopts.ctx, repo)
var lock *restic.Lock
lock, ctx, err = lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
}
ctx, cancel := context.WithCancel(gopts.ctx)
defer cancel()
var snapshots restic.Snapshots
for sn := range FindFilteredSnapshots(ctx, repo.Backend(), repo, opts.Hosts, opts.Tags, opts.Paths, args) {
snapshots = append(snapshots, sn)


@@ -7,7 +7,9 @@ import (
"path/filepath"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/crypto"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/walker"
"github.com/minio/sha256-simd"
@@ -47,7 +49,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runStats(globalOptions, args)
return runStats(cmd.Context(), globalOptions, args)
},
}
@@ -56,10 +58,7 @@ type StatsOptions struct {
// the mode of counting to perform (see consts for available modes)
countMode string
// filter snapshots by, if given by user
Hosts []string
Tags restic.TagLists
Paths []string
snapshotFilterOptions
}
var statsOptions StatsOptions
@@ -68,34 +67,30 @@ func init() {
cmdRoot.AddCommand(cmdStats)
f := cmdStats.Flags()
f.StringVar(&statsOptions.countMode, "mode", countModeRestoreSize, "counting mode: restore-size (default), files-by-contents, blobs-per-file or raw-data")
f.StringArrayVarP(&statsOptions.Hosts, "host", "H", nil, "only consider snapshots with the given `host` (can be specified multiple times)")
f.Var(&statsOptions.Tags, "tag", "only consider snapshots which include this `taglist` in the format `tag[,tag,...]` (can be specified multiple times)")
f.StringArrayVar(&statsOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path` (can be specified multiple times)")
initMultiSnapshotFilterOptions(f, &statsOptions.snapshotFilterOptions, true)
}
func runStats(gopts GlobalOptions, args []string) error {
func runStats(ctx context.Context, gopts GlobalOptions, args []string) error {
err := verifyStatsInput(gopts, args)
if err != nil {
return err
}
ctx, cancel := context.WithCancel(gopts.ctx)
defer cancel()
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
if !gopts.NoLock {
lock, err := lockRepo(ctx, repo)
var lock *restic.Lock
lock, ctx, err = lockRepo(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
}
}
snapshotLister, err := backend.MemorizeList(gopts.ctx, repo.Backend(), restic.SnapshotFile)
snapshotLister, err := backend.MemorizeList(ctx, repo.Backend(), restic.SnapshotFile)
if err != nil {
return err
}
@@ -135,8 +130,22 @@ func runStats(gopts GlobalOptions, args []string) error {
return fmt.Errorf("blob %v not found", blobHandle)
}
stats.TotalSize += uint64(pbs[0].Length)
if repo.Config().Version >= 2 {
stats.TotalUncompressedSize += uint64(crypto.CiphertextLength(int(pbs[0].DataLength())))
if pbs[0].IsCompressed() {
stats.TotalCompressedBlobsSize += uint64(pbs[0].Length)
stats.TotalCompressedBlobsUncompressedSize += uint64(crypto.CiphertextLength(int(pbs[0].DataLength())))
}
}
stats.TotalBlobCount++
}
if stats.TotalCompressedBlobsSize > 0 {
stats.CompressionRatio = float64(stats.TotalCompressedBlobsUncompressedSize) / float64(stats.TotalCompressedBlobsSize)
}
if stats.TotalUncompressedSize > 0 {
stats.CompressionProgress = float64(stats.TotalCompressedBlobsUncompressedSize) / float64(stats.TotalUncompressedSize) * 100
stats.CompressionSpaceSaving = (1 - float64(stats.TotalSize)/float64(stats.TotalUncompressedSize)) * 100
}
}
if gopts.JSON {
@@ -148,15 +157,26 @@ func runStats(gopts GlobalOptions, args []string) error {
}
Printf("Stats in %s mode:\n", statsOptions.countMode)
Printf("Snapshots processed: %d\n", stats.SnapshotsCount)
Printf(" Snapshots processed: %d\n", stats.SnapshotsCount)
if stats.TotalBlobCount > 0 {
Printf(" Total Blob Count: %d\n", stats.TotalBlobCount)
Printf(" Total Blob Count: %d\n", stats.TotalBlobCount)
}
if stats.TotalFileCount > 0 {
Printf(" Total File Count: %d\n", stats.TotalFileCount)
Printf(" Total File Count: %d\n", stats.TotalFileCount)
}
if stats.TotalUncompressedSize > 0 {
Printf(" Total Uncompressed Size: %-5s\n", ui.FormatBytes(stats.TotalUncompressedSize))
}
Printf(" Total Size: %-5s\n", ui.FormatBytes(stats.TotalSize))
if stats.CompressionProgress > 0 {
Printf(" Compression Progress: %.2f%%\n", stats.CompressionProgress)
}
if stats.CompressionRatio > 0 {
Printf(" Compression Ratio: %.2fx\n", stats.CompressionRatio)
}
if stats.CompressionSpaceSaving > 0 {
Printf("Compression Space Saving: %.2f%%\n", stats.CompressionSpaceSaving)
}
Printf(" Total Size: %-5s\n", formatBytes(stats.TotalSize))
return nil
}
@@ -282,9 +302,15 @@ func verifyStatsInput(gopts GlobalOptions, args []string) error {
// to collect information about it, as well as state needed
// for a successful and efficient walk.
type statsContainer struct {
TotalSize uint64 `json:"total_size"`
TotalFileCount uint64 `json:"total_file_count"`
TotalBlobCount uint64 `json:"total_blob_count,omitempty"`
TotalSize uint64 `json:"total_size"`
TotalUncompressedSize uint64 `json:"total_uncompressed_size,omitempty"`
TotalCompressedBlobsSize uint64 `json:"-"`
TotalCompressedBlobsUncompressedSize uint64 `json:"-"`
CompressionRatio float64 `json:"compression_ratio,omitempty"`
CompressionProgress float64 `json:"compression_progress,omitempty"`
CompressionSpaceSaving float64 `json:"compression_space_saving,omitempty"`
TotalFileCount uint64 `json:"total_file_count,omitempty"`
TotalBlobCount uint64 `json:"total_blob_count,omitempty"`
// holds count of all considered snapshots
SnapshotsCount int `json:"snapshots_count"`

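As a worked example of the three derived compression figures, using made-up sizes rather than real repository output:

package main

import "fmt"

func main() {
	// Hypothetical byte counts, for illustration only.
	var (
		totalSize                   uint64 = 80  // on-disk size of all blobs
		totalUncompressed           uint64 = 200 // size if nothing were compressed
		compressedBlobsSize         uint64 = 50  // on-disk size of the compressed blobs
		compressedBlobsUncompressed uint64 = 170 // pre-compression size of those blobs
	)

	// Same formulas as in runStats above.
	ratio := float64(compressedBlobsUncompressed) / float64(compressedBlobsSize)
	progress := float64(compressedBlobsUncompressed) / float64(totalUncompressed) * 100
	saving := (1 - float64(totalSize)/float64(totalUncompressed)) * 100

	fmt.Printf("Compression Ratio: %.2fx\n", ratio)        // 3.40x
	fmt.Printf("Compression Progress: %.2f%%\n", progress) // 85.00%
	fmt.Printf("Space Saving: %.2f%%\n", saving)           // 60.00%
}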

@@ -29,15 +29,13 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runTag(tagOptions, globalOptions, args)
return runTag(cmd.Context(), tagOptions, globalOptions, args)
},
}
// TagOptions bundles all options for the 'tag' command.
type TagOptions struct {
Hosts []string
Paths []string
Tags restic.TagLists
snapshotFilterOptions
SetTags restic.TagLists
AddTags restic.TagLists
RemoveTags restic.TagLists
@@ -52,10 +50,7 @@ func init() {
tagFlags.Var(&tagOptions.SetTags, "set", "`tags` which will replace the existing tags in the format `tag[,tag,...]` (can be given multiple times)")
tagFlags.Var(&tagOptions.AddTags, "add", "`tags` which will be added to the existing tags in the format `tag[,tag,...]` (can be given multiple times)")
tagFlags.Var(&tagOptions.RemoveTags, "remove", "`tags` which will be removed from the existing tags in the format `tag[,tag,...]` (can be given multiple times)")
tagFlags.StringArrayVarP(&tagOptions.Hosts, "host", "H", nil, "only consider snapshots for this `host`, when no snapshot ID is given (can be specified multiple times)")
tagFlags.Var(&tagOptions.Tags, "tag", "only consider snapshots which include this `taglist`, when no snapshot-ID is given")
tagFlags.StringArrayVar(&tagOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`, when no snapshot-ID is given")
initMultiSnapshotFilterOptions(tagFlags, &tagOptions.snapshotFilterOptions, true)
}
func changeTags(ctx context.Context, repo *repository.Repository, sn *restic.Snapshot, setTags, addTags, removeTags []string) (bool, error) {
@@ -100,7 +95,7 @@ func changeTags(ctx context.Context, repo *repository.Repository, sn *restic.Sna
return changed, nil
}
func runTag(opts TagOptions, gopts GlobalOptions, args []string) error {
func runTag(ctx context.Context, opts TagOptions, gopts GlobalOptions, args []string) error {
if len(opts.SetTags) == 0 && len(opts.AddTags) == 0 && len(opts.RemoveTags) == 0 {
return errors.Fatal("nothing to do!")
}
@@ -108,14 +103,15 @@ func runTag(opts TagOptions, gopts GlobalOptions, args []string) error {
return errors.Fatal("--set and --add/--remove cannot be given at the same time")
}
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
if !gopts.NoLock {
Verbosef("create exclusive lock for repository\n")
lock, err := lockRepoExclusive(gopts.ctx, repo)
var lock *restic.Lock
lock, ctx, err = lockRepoExclusive(ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
@@ -123,8 +119,6 @@ func runTag(opts TagOptions, gopts GlobalOptions, args []string) error {
}
changeCnt := 0
ctx, cancel := context.WithCancel(gopts.ctx)
defer cancel()
for sn := range FindFilteredSnapshots(ctx, repo.Backend(), repo, opts.Hosts, opts.Tags, opts.Paths, args) {
changed, err := changeTags(ctx, repo, sn, opts.SetTags.Flatten(), opts.AddTags.Flatten(), opts.RemoveTags.Flatten())
if err != nil {


@@ -1,6 +1,8 @@
package main
import (
"context"
"github.com/restic/restic/internal/restic"
"github.com/spf13/cobra"
)
@@ -18,7 +20,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runUnlock(unlockOptions, globalOptions)
return runUnlock(cmd.Context(), unlockOptions, globalOptions)
},
}
@@ -35,8 +37,8 @@ func init() {
unlockCmd.Flags().BoolVar(&unlockOptions.RemoveAll, "remove-all", false, "remove all locks, even non-stale ones")
}
func runUnlock(opts UnlockOptions, gopts GlobalOptions) error {
repo, err := OpenRepository(gopts)
func runUnlock(ctx context.Context, opts UnlockOptions, gopts GlobalOptions) error {
repo, err := OpenRepository(ctx, gopts)
if err != nil {
return err
}
@@ -46,11 +48,13 @@ func runUnlock(opts UnlockOptions, gopts GlobalOptions) error {
fn = restic.RemoveAllLocks
}
err = fn(gopts.ctx, repo)
processed, err := fn(ctx, repo)
if err != nil {
return err
}
Verbosef("successfully removed locks\n")
if processed > 0 {
Verbosef("successfully removed %d locks\n", processed)
}
return nil
}


@@ -1,6 +1,8 @@
package main
import (
"context"
"golang.org/x/sync/errgroup"
"github.com/restic/restic/internal/restic"
@@ -8,22 +10,22 @@ import (
// DeleteFiles deletes the given fileList of fileType in parallel
// it will print a warning if there is an error, but continue deleting the remaining files
func DeleteFiles(gopts GlobalOptions, repo restic.Repository, fileList restic.IDSet, fileType restic.FileType) {
_ = deleteFiles(gopts, true, repo, fileList, fileType)
func DeleteFiles(ctx context.Context, gopts GlobalOptions, repo restic.Repository, fileList restic.IDSet, fileType restic.FileType) {
_ = deleteFiles(ctx, gopts, true, repo, fileList, fileType)
}
// DeleteFilesChecked deletes the given fileList of fileType in parallel
// if an error occurs, it will cancel and return this error
func DeleteFilesChecked(gopts GlobalOptions, repo restic.Repository, fileList restic.IDSet, fileType restic.FileType) error {
return deleteFiles(gopts, false, repo, fileList, fileType)
func DeleteFilesChecked(ctx context.Context, gopts GlobalOptions, repo restic.Repository, fileList restic.IDSet, fileType restic.FileType) error {
return deleteFiles(ctx, gopts, false, repo, fileList, fileType)
}
// deleteFiles deletes the given fileList of fileType in parallel
// if ignoreError=true, it will print a warning if there was an error, else it will abort.
func deleteFiles(gopts GlobalOptions, ignoreError bool, repo restic.Repository, fileList restic.IDSet, fileType restic.FileType) error {
func deleteFiles(ctx context.Context, gopts GlobalOptions, ignoreError bool, repo restic.Repository, fileList restic.IDSet, fileType restic.FileType) error {
totalCount := len(fileList)
fileChan := make(chan restic.ID)
wg, ctx := errgroup.WithContext(gopts.ctx)
wg, ctx := errgroup.WithContext(ctx)
wg.Go(func() error {
defer close(fileChan)
for id := range fileList {

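deleteFiles uses a common errgroup layout: a producer goroutine feeds IDs into a channel, workers drain it, and the context derived by errgroup.WithContext cancels everything on the first error. A self-contained sketch of that layout with toy work items instead of backend deletions:

package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	items := []string{"a", "b", "c", "d"}

	wg, ctx := errgroup.WithContext(context.Background())
	ch := make(chan string)

	// Producer: close the channel once all items are queued or ctx is cancelled.
	wg.Go(func() error {
		defer close(ch)
		for _, it := range items {
			select {
			case ch <- it:
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		return nil
	})

	// Two workers, standing in for the parallel deletions.
	for i := 0; i < 2; i++ {
		wg.Go(func() error {
			for it := range ch {
				fmt.Println("processing", it)
			}
			return nil
		})
	}

	if err := wg.Wait(); err != nil {
		fmt.Println("error:", err)
	}
}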

@@ -1,6 +1,7 @@
package main
import (
"bufio"
"bytes"
"fmt"
"io"
@@ -15,6 +16,8 @@ import (
"github.com/restic/restic/internal/filter"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/textfile"
"github.com/spf13/pflag"
)
type rejectionCache struct {
@@ -410,3 +413,115 @@ func parseSizeStr(sizeStr string) (int64, error) {
}
return value * unit, nil
}
// readExcludePatternsFromFiles reads all exclude files and returns the list of
// exclude patterns. For each line, leading and trailing white space is removed
// and comment lines are ignored. For each remaining pattern, environment
// variables are resolved. For adding a literal dollar sign ($), write $$ to
// the file.
func readExcludePatternsFromFiles(excludeFiles []string) ([]string, error) {
getenvOrDollar := func(s string) string {
if s == "$" {
return "$"
}
return os.Getenv(s)
}
var excludes []string
for _, filename := range excludeFiles {
err := func() (err error) {
data, err := textfile.Read(filename)
if err != nil {
return err
}
scanner := bufio.NewScanner(bytes.NewReader(data))
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
// ignore empty lines
if line == "" {
continue
}
// strip comments
if strings.HasPrefix(line, "#") {
continue
}
line = os.Expand(line, getenvOrDollar)
excludes = append(excludes, line)
}
return scanner.Err()
}()
if err != nil {
return nil, err
}
}
return excludes, nil
}
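The $-resolution above works because os.Expand treats "$" itself as a shell special variable name, so "$$" reaches getenvOrDollar as the name "$" and comes back as a literal dollar sign. A quick standalone check with hypothetical pattern lines:

package main

import (
	"fmt"
	"os"
)

func main() {
	getenvOrDollar := func(s string) string {
		if s == "$" {
			return "$" // "$$" in a pattern becomes a literal "$"
		}
		return os.Getenv(s)
	}

	os.Setenv("BACKUP_TMP", "/tmp/backup") // example variable, illustration only

	for _, line := range []string{"$BACKUP_TMP/*.tmp", "price$$list.txt"} {
		fmt.Printf("%-20s -> %s\n", line, os.Expand(line, getenvOrDollar))
	}
	// $BACKUP_TMP/*.tmp    -> /tmp/backup/*.tmp
	// price$$list.txt      -> price$list.txt
}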
type excludePatternOptions struct {
Excludes []string
InsensitiveExcludes []string
ExcludeFiles []string
InsensitiveExcludeFiles []string
}
func initExcludePatternOptions(f *pflag.FlagSet, opts *excludePatternOptions) {
f.StringArrayVarP(&opts.Excludes, "exclude", "e", nil, "exclude a `pattern` (can be specified multiple times)")
f.StringArrayVar(&opts.InsensitiveExcludes, "iexclude", nil, "same as --exclude `pattern` but ignores the casing of filenames")
f.StringArrayVar(&opts.ExcludeFiles, "exclude-file", nil, "read exclude patterns from a `file` (can be specified multiple times)")
f.StringArrayVar(&opts.InsensitiveExcludeFiles, "iexclude-file", nil, "same as --exclude-file but ignores casing of `file`names in patterns")
}
func (opts *excludePatternOptions) Empty() bool {
return len(opts.Excludes) == 0 && len(opts.InsensitiveExcludes) == 0 && len(opts.ExcludeFiles) == 0 && len(opts.InsensitiveExcludeFiles) == 0
}
func (opts excludePatternOptions) CollectPatterns() ([]RejectByNameFunc, error) {
var fs []RejectByNameFunc
// add patterns from file
if len(opts.ExcludeFiles) > 0 {
excludePatterns, err := readExcludePatternsFromFiles(opts.ExcludeFiles)
if err != nil {
return nil, err
}
if err := filter.ValidatePatterns(excludePatterns); err != nil {
return nil, errors.Fatalf("--exclude-file: %s", err)
}
opts.Excludes = append(opts.Excludes, excludePatterns...)
}
if len(opts.InsensitiveExcludeFiles) > 0 {
excludes, err := readExcludePatternsFromFiles(opts.InsensitiveExcludeFiles)
if err != nil {
return nil, err
}
if err := filter.ValidatePatterns(excludes); err != nil {
return nil, errors.Fatalf("--iexclude-file: %s", err)
}
opts.InsensitiveExcludes = append(opts.InsensitiveExcludes, excludes...)
}
if len(opts.InsensitiveExcludes) > 0 {
if err := filter.ValidatePatterns(opts.InsensitiveExcludes); err != nil {
return nil, errors.Fatalf("--iexclude: %s", err)
}
fs = append(fs, rejectByInsensitivePattern(opts.InsensitiveExcludes))
}
if len(opts.Excludes) > 0 {
if err := filter.ValidatePatterns(opts.Excludes); err != nil {
return nil, errors.Fatalf("--exclude: %s", err)
}
fs = append(fs, rejectByPattern(opts.Excludes))
}
return fs, nil
}


@@ -1,7 +1,6 @@
package main
import (
"io/ioutil"
"os"
"path/filepath"
"testing"
@@ -85,17 +84,16 @@ func TestIsExcludedByFile(t *testing.T) {
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
tempDir, cleanup := test.TempDir(t)
defer cleanup()
tempDir := test.TempDir(t)
foo := filepath.Join(tempDir, "foo")
err := ioutil.WriteFile(foo, []byte("foo"), 0666)
err := os.WriteFile(foo, []byte("foo"), 0666)
if err != nil {
t.Fatalf("could not write file: %v", err)
}
if tc.tagFile != "" {
tagFile := filepath.Join(tempDir, tc.tagFile)
err = ioutil.WriteFile(tagFile, []byte(tc.content), 0666)
err = os.WriteFile(tagFile, []byte(tc.content), 0666)
if err != nil {
t.Fatalf("could not write tagfile: %v", err)
}
@@ -116,8 +114,7 @@ func TestIsExcludedByFile(t *testing.T) {
// cancel each other out. It was initially written to demonstrate a bug in
// rejectIfPresent.
func TestMultipleIsExcludedByFile(t *testing.T) {
tempDir, cleanup := test.TempDir(t)
defer cleanup()
tempDir := test.TempDir(t)
// Create some files in a temporary directory.
// Files in UPPERCASE will be used as exclusion triggers later on.
@@ -150,7 +147,7 @@ func TestMultipleIsExcludedByFile(t *testing.T) {
// create directories first, then the file
p := filepath.Join(tempDir, filepath.FromSlash(f.path))
errs = append(errs, os.MkdirAll(filepath.Dir(p), 0700))
errs = append(errs, ioutil.WriteFile(p, []byte(f.path), 0600))
errs = append(errs, os.WriteFile(p, []byte(f.path), 0600))
}
test.OKs(t, errs) // see if anything went wrong during the creation
@@ -241,8 +238,7 @@ func TestParseInvalidSizeStr(t *testing.T) {
// TestIsExcludedByFileSize is for testing the instance of
// --exclude-larger-than parameters
func TestIsExcludedByFileSize(t *testing.T) {
tempDir, cleanup := test.TempDir(t)
defer cleanup()
tempDir := test.TempDir(t)
// Max size of file is set to be 1k
maxSizeStr := "1k"


@@ -5,77 +5,60 @@ import (
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/restic"
"github.com/spf13/pflag"
)
type snapshotFilterOptions struct {
Hosts []string
Tags restic.TagLists
Paths []string
}
// initMultiSnapshotFilterOptions is used for commands that work on multiple snapshots
// MUST be combined with restic.FindFilteredSnapshots or FindFilteredSnapshots
func initMultiSnapshotFilterOptions(flags *pflag.FlagSet, options *snapshotFilterOptions, addHostShorthand bool) {
hostShorthand := "H"
if !addHostShorthand {
hostShorthand = ""
}
flags.StringArrayVarP(&options.Hosts, "host", hostShorthand, nil, "only consider snapshots for this `host` (can be specified multiple times)")
flags.Var(&options.Tags, "tag", "only consider snapshots including `tag[,tag,...]` (can be specified multiple times)")
flags.StringArrayVar(&options.Paths, "path", nil, "only consider snapshots including this (absolute) `path` (can be specified multiple times)")
}
// initSingleSnapshotFilterOptions is used for commands that work on a single snapshot
// MUST be combined with restic.FindFilteredSnapshot
func initSingleSnapshotFilterOptions(flags *pflag.FlagSet, options *snapshotFilterOptions) {
flags.StringArrayVarP(&options.Hosts, "host", "H", nil, "only consider snapshots for this `host`, when snapshot ID \"latest\" is given (can be specified multiple times)")
flags.Var(&options.Tags, "tag", "only consider snapshots including `tag[,tag,...]`, when snapshot ID \"latest\" is given (can be specified multiple times)")
flags.StringArrayVar(&options.Paths, "path", nil, "only consider snapshots including this (absolute) `path`, when snapshot ID \"latest\" is given (can be specified multiple times)")
}
// FindFilteredSnapshots yields Snapshots, either given explicitly by `snapshotIDs` or filtered from the list of all snapshots.
func FindFilteredSnapshots(ctx context.Context, be restic.Lister, loader restic.LoaderUnpacked, hosts []string, tags []restic.TagList, paths []string, snapshotIDs []string) <-chan *restic.Snapshot {
out := make(chan *restic.Snapshot)
go func() {
defer close(out)
if len(snapshotIDs) != 0 {
// memorize snapshots list to prevent repeated backend listings
be, err := backend.MemorizeList(ctx, be, restic.SnapshotFile)
if err != nil {
Warnf("could not load snapshots: %v\n", err)
return
}
var (
id restic.ID
usedFilter bool
)
ids := make(restic.IDs, 0, len(snapshotIDs))
// Process all snapshot IDs given as arguments.
for _, s := range snapshotIDs {
if s == "latest" {
usedFilter = true
id, err = restic.FindLatestSnapshot(ctx, be, loader, paths, tags, hosts, nil)
if err != nil {
Warnf("Ignoring %q, no snapshot matched given filter (Paths:%v Tags:%v Hosts:%v)\n", s, paths, tags, hosts)
continue
}
} else {
id, err = restic.FindSnapshot(ctx, be, s)
if err != nil {
Warnf("Ignoring %q: %v\n", s, err)
continue
}
}
ids = append(ids, id)
}
// Give the user some indication their filters are not used.
if !usedFilter && (len(hosts) != 0 || len(tags) != 0 || len(paths) != 0) {
Warnf("Ignoring filters as there are explicit snapshot ids given\n")
}
for _, id := range ids.Uniq() {
sn, err := restic.LoadSnapshot(ctx, loader, id)
if err != nil {
Warnf("Ignoring %q, could not load snapshot: %v\n", id, err)
continue
}
select {
case <-ctx.Done():
return
case out <- sn:
}
}
return
}
snapshots, err := restic.FindFilteredSnapshots(ctx, be, loader, hosts, tags, paths)
be, err := backend.MemorizeList(ctx, be, restic.SnapshotFile)
if err != nil {
Warnf("could not load snapshots: %v\n", err)
return
}
for _, sn := range snapshots {
select {
case <-ctx.Done():
return
case out <- sn:
err = restic.FindFilteredSnapshots(ctx, be, loader, hosts, tags, paths, snapshotIDs, func(id string, sn *restic.Snapshot, err error) error {
if err != nil {
Warnf("Ignoring %q: %v\n", id, err)
} else {
select {
case <-ctx.Done():
return ctx.Err()
case out <- sn:
}
}
return nil
})
if err != nil {
Warnf("could not load snapshots: %v\n", err)
}
}()
return out

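The refactored FindFilteredSnapshots bridges restic's callback-based finder into a channel while staying cancellable: the callback selects between sending the snapshot and ctx.Done(), so a consumer that stops reading also stops the producer. Stripped of the restic types, the pattern looks roughly like this (toy finder, illustration only):

package main

import (
	"context"
	"fmt"
)

// findAll stands in for the callback-based finder: it invokes fn once per result.
func findAll(ctx context.Context, fn func(id string) error) error {
	for _, id := range []string{"snap-1", "snap-2", "snap-3"} {
		if err := fn(id); err != nil {
			return err
		}
	}
	return nil
}

func stream(ctx context.Context) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		err := findAll(ctx, func(id string) error {
			select {
			case <-ctx.Done():
				return ctx.Err() // stop producing once the consumer is gone
			case out <- id:
				return nil
			}
		})
		if err != nil {
			fmt.Println("could not load snapshots:", err)
		}
	}()
	return out
}

func main() {
	for id := range stream(context.Background()) {
		fmt.Println("got", id)
	}
}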

@@ -1,7 +1,7 @@
package main
import (
"io/ioutil"
"io"
"testing"
)
@@ -10,7 +10,7 @@ import (
func TestFlags(t *testing.T) {
for _, cmd := range cmdRoot.Commands() {
t.Run(cmd.Name(), func(t *testing.T) {
cmd.Flags().SetOutput(ioutil.Discard)
cmd.Flags().SetOutput(io.Discard)
err := cmd.ParseFlags([]string{"--help"})
if err.Error() == "pflag: help requested" {
err = nil


@@ -3,59 +3,10 @@ package main
import (
"fmt"
"os"
"time"
"github.com/restic/restic/internal/restic"
)
func formatBytes(c uint64) string {
b := float64(c)
switch {
case c > 1<<40:
return fmt.Sprintf("%.3f TiB", b/(1<<40))
case c > 1<<30:
return fmt.Sprintf("%.3f GiB", b/(1<<30))
case c > 1<<20:
return fmt.Sprintf("%.3f MiB", b/(1<<20))
case c > 1<<10:
return fmt.Sprintf("%.3f KiB", b/(1<<10))
default:
return fmt.Sprintf("%d B", c)
}
}
func formatSeconds(sec uint64) string {
hours := sec / 3600
sec -= hours * 3600
min := sec / 60
sec -= min * 60
if hours > 0 {
return fmt.Sprintf("%d:%02d:%02d", hours, min, sec)
}
return fmt.Sprintf("%d:%02d", min, sec)
}
func formatPercent(numerator uint64, denominator uint64) string {
if denominator == 0 {
return ""
}
percent := 100.0 * float64(numerator) / float64(denominator)
if percent > 100 {
percent = 100
}
return fmt.Sprintf("%3.2f%%", percent)
}
func formatDuration(d time.Duration) string {
sec := uint64(d / time.Second)
return formatSeconds(sec)
}
func formatNode(path string, n *restic.Node, long bool) string {
if !long {
return path


@@ -22,6 +22,7 @@ import (
"github.com/restic/restic/internal/backend/location"
"github.com/restic/restic/internal/backend/rclone"
"github.com/restic/restic/internal/backend/rest"
"github.com/restic/restic/internal/backend/retry"
"github.com/restic/restic/internal/backend/s3"
"github.com/restic/restic/internal/backend/sftp"
"github.com/restic/restic/internal/backend/swift"
@@ -41,7 +42,7 @@ import (
"golang.org/x/term"
)
var version = "0.14.0"
var version = "0.15.0"
// TimeFormat is the format used for all timestamps printed by restic.
const TimeFormat = "2006-01-02 15:04:05"
@@ -68,7 +69,6 @@ type GlobalOptions struct {
backend.TransportOptions
limiter.Limits
ctx context.Context
password string
stdout io.Writer
stderr io.Writer
@@ -93,28 +93,26 @@ var globalOptions = GlobalOptions{
}
var isReadingPassword bool
var internalGlobalCtx context.Context
func init() {
var cancel context.CancelFunc
globalOptions.ctx, cancel = context.WithCancel(context.Background())
AddCleanupHandler(func() error {
internalGlobalCtx, cancel = context.WithCancel(context.Background())
AddCleanupHandler(func(code int) (int, error) {
// Must be called before the unlock cleanup handler to ensure that the latter is
// not blocked due to limited number of backend connections, see #1434
cancel()
return nil
return code, nil
})
// parse target pack size from env, on error the default value will be used
targetPackSize, _ := strconv.ParseUint(os.Getenv("RESTIC_PACK_SIZE"), 10, 32)
f := cmdRoot.PersistentFlags()
f.StringVarP(&globalOptions.Repo, "repo", "r", os.Getenv("RESTIC_REPOSITORY"), "`repository` to backup to or restore from (default: $RESTIC_REPOSITORY)")
f.StringVarP(&globalOptions.RepositoryFile, "repository-file", "", os.Getenv("RESTIC_REPOSITORY_FILE"), "`file` to read the repository location from (default: $RESTIC_REPOSITORY_FILE)")
f.StringVarP(&globalOptions.PasswordFile, "password-file", "p", os.Getenv("RESTIC_PASSWORD_FILE"), "`file` to read the repository password from (default: $RESTIC_PASSWORD_FILE)")
f.StringVarP(&globalOptions.KeyHint, "key-hint", "", os.Getenv("RESTIC_KEY_HINT"), "`key` ID of key to try decrypting first (default: $RESTIC_KEY_HINT)")
f.StringVarP(&globalOptions.PasswordCommand, "password-command", "", os.Getenv("RESTIC_PASSWORD_COMMAND"), "shell `command` to obtain the repository password from (default: $RESTIC_PASSWORD_COMMAND)")
f.StringVarP(&globalOptions.Repo, "repo", "r", "", "`repository` to backup to or restore from (default: $RESTIC_REPOSITORY)")
f.StringVarP(&globalOptions.RepositoryFile, "repository-file", "", "", "`file` to read the repository location from (default: $RESTIC_REPOSITORY_FILE)")
f.StringVarP(&globalOptions.PasswordFile, "password-file", "p", "", "`file` to read the repository password from (default: $RESTIC_PASSWORD_FILE)")
f.StringVarP(&globalOptions.KeyHint, "key-hint", "", "", "`key` ID of key to try decrypting first (default: $RESTIC_KEY_HINT)")
f.StringVarP(&globalOptions.PasswordCommand, "password-command", "", "", "shell `command` to obtain the repository password from (default: $RESTIC_PASSWORD_COMMAND)")
f.BoolVarP(&globalOptions.Quiet, "quiet", "q", false, "do not output comprehensive progress report")
f.CountVarP(&globalOptions.Verbose, "verbose", "v", "be verbose (specify multiple times or a level using --verbose=`n`, max level/times is 3)")
f.CountVarP(&globalOptions.Verbose, "verbose", "v", "be verbose (specify multiple times or a level using --verbose=`n`, max level/times is 2)")
f.BoolVar(&globalOptions.NoLock, "no-lock", false, "do not lock the repository, this allows some operations on read-only repositories")
f.BoolVarP(&globalOptions.JSON, "json", "", false, "set output mode to JSON for commands that support it")
f.StringVar(&globalOptions.CacheDir, "cache-dir", "", "set the cache `directory`. (default: use system default cache directory)")
@@ -124,18 +122,26 @@ func init() {
f.BoolVar(&globalOptions.InsecureTLS, "insecure-tls", false, "skip TLS certificate verification when connecting to the repository (insecure)")
f.BoolVar(&globalOptions.CleanupCache, "cleanup-cache", false, "auto remove old cache directories")
f.Var(&globalOptions.Compression, "compression", "compression mode (only available for repository format version 2), one of (auto|off|max)")
f.IntVar(&globalOptions.Limits.UploadKb, "limit-upload", 0, "limits uploads to a maximum rate in KiB/s. (default: unlimited)")
f.IntVar(&globalOptions.Limits.DownloadKb, "limit-download", 0, "limits downloads to a maximum rate in KiB/s. (default: unlimited)")
f.UintVar(&globalOptions.PackSize, "pack-size", uint(targetPackSize), "set target pack size in MiB, created pack files may be larger (default: $RESTIC_PACK_SIZE)")
f.IntVar(&globalOptions.Limits.UploadKb, "limit-upload", 0, "limits uploads to a maximum `rate` in KiB/s. (default: unlimited)")
f.IntVar(&globalOptions.Limits.DownloadKb, "limit-download", 0, "limits downloads to a maximum `rate` in KiB/s. (default: unlimited)")
f.UintVar(&globalOptions.PackSize, "pack-size", 0, "set target pack `size` in MiB, created pack files may be larger (default: $RESTIC_PACK_SIZE)")
f.StringSliceVarP(&globalOptions.Options, "option", "o", []string{}, "set extended option (`key=value`, can be specified multiple times)")
// Use our "generate" command instead of the cobra provided "completion" command
cmdRoot.CompletionOptions.DisableDefaultCmd = true
globalOptions.Repo = os.Getenv("RESTIC_REPOSITORY")
globalOptions.RepositoryFile = os.Getenv("RESTIC_REPOSITORY_FILE")
globalOptions.PasswordFile = os.Getenv("RESTIC_PASSWORD_FILE")
globalOptions.KeyHint = os.Getenv("RESTIC_KEY_HINT")
globalOptions.PasswordCommand = os.Getenv("RESTIC_PASSWORD_COMMAND")
comp := os.Getenv("RESTIC_COMPRESSION")
if comp != "" {
// ignore error as there's no good way to handle it
_ = globalOptions.Compression.Set(comp)
}
// parse target pack size from env, on error the default value will be used
targetPackSize, _ := strconv.ParseUint(os.Getenv("RESTIC_PACK_SIZE"), 10, 32)
globalOptions.PackSize = uint(targetPackSize)
restoreTerminal()
}
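The corrected --verbose help text matches how pflag count flags behave: each -v increments the counter and --verbose=n sets it directly, so two occurrences are the documented maximum. A minimal demonstration with the same pflag API:

package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("demo", pflag.ContinueOnError)
	verbose := fs.CountP("verbose", "v", "be verbose")

	// "-v -v" and "--verbose=2" are equivalent.
	if err := fs.Parse([]string{"-v", "-v"}); err != nil {
		panic(err)
	}
	fmt.Println("verbosity:", *verbose) // verbosity: 2
}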
@@ -194,20 +200,20 @@ func restoreTerminal() {
return
}
AddCleanupHandler(func() error {
AddCleanupHandler(func(code int) (int, error) {
// Restoring the terminal configuration while restic runs in the
// background, causes restic to get stopped on unix systems with
// a SIGTTOU signal. Thus only restore the terminal settings if
// they might have been modified, which is the case while reading
// a password.
if !isReadingPassword {
return nil
return code, nil
}
err := checkErrno(term.Restore(fd, state))
if err != nil {
fmt.Fprintf(os.Stderr, "unable to restore terminal state: %v\n", err)
}
return err
return code, err
})
}
@@ -275,17 +281,6 @@ func Warnf(format string, args ...interface{}) {
}
}
// Exitf uses Warnf to write the message and then terminates the process with
// the given exit code.
func Exitf(exitcode int, format string, args ...interface{}) {
if !(strings.HasSuffix(format, "\n")) {
format += "\n"
}
Warnf(format, args...)
Exit(exitcode)
}
// resolvePassword determines the password to be used for opening the repository.
func resolvePassword(opts GlobalOptions, envStr string) (string, error) {
if opts.PasswordFile != "" && opts.PasswordCommand != "" {
@@ -423,20 +418,24 @@ func ReadRepo(opts GlobalOptions) (string, error) {
const maxKeys = 20
// OpenRepository reads the password and opens the repository.
func OpenRepository(opts GlobalOptions) (*repository.Repository, error) {
func OpenRepository(ctx context.Context, opts GlobalOptions) (*repository.Repository, error) {
repo, err := ReadRepo(opts)
if err != nil {
return nil, err
}
be, err := open(repo, opts, opts.extended)
be, err := open(ctx, repo, opts, opts.extended)
if err != nil {
return nil, err
}
be = backend.NewRetryBackend(be, 10, func(msg string, err error, d time.Duration) {
report := func(msg string, err error, d time.Duration) {
Warnf("%v returned error, retrying after %v: %v\n", msg, d, err)
})
}
success := func(msg string, retries int) {
Warnf("%v operation successful after %d retries\n", msg, retries)
}
be = retry.New(be, 10, report, success)
// wrap backend if a test specified a hook
if opts.backendTestHook != nil {
@@ -469,7 +468,7 @@ func OpenRepository(opts GlobalOptions) (*repository.Repository, error) {
continue
}
err = s.SearchKey(opts.ctx, opts.password, maxKeys, opts.KeyHint)
err = s.SearchKey(ctx, opts.password, maxKeys, opts.KeyHint)
if err != nil && passwordTriesLeft > 1 {
opts.password = ""
fmt.Fprintf(os.Stderr, "%s. Try again\n", err)
@@ -488,7 +487,11 @@ func OpenRepository(opts GlobalOptions) (*repository.Repository, error) {
id = id[:8]
}
if !opts.JSON {
Verbosef("repository %v opened (repository version %v) successfully, password is correct\n", id, s.Config().Version)
extra := ""
if s.Config().Version >= 2 {
extra = ", compression level " + opts.Compression.String()
}
Verbosef("repository %v opened (version %v%s)\n", id, s.Config().Version, extra)
}
}
@@ -686,7 +689,7 @@ func parseConfig(loc location.Location, opts options.Options) (interface{}, erro
}
// Open the backend specified by a location config.
func open(s string, gopts GlobalOptions, opts options.Options) (restic.Backend, error) {
func open(ctx context.Context, s string, gopts GlobalOptions, opts options.Options) (restic.Backend, error) {
debug.Log("parsing location %v", location.StripPassword(s))
loc, err := location.Parse(s)
if err != nil {
@@ -711,19 +714,19 @@ func open(s string, gopts GlobalOptions, opts options.Options) (restic.Backend,
switch loc.Scheme {
case "local":
be, err = local.Open(globalOptions.ctx, cfg.(local.Config))
be, err = local.Open(ctx, cfg.(local.Config))
case "sftp":
be, err = sftp.Open(globalOptions.ctx, cfg.(sftp.Config))
be, err = sftp.Open(ctx, cfg.(sftp.Config))
case "s3":
be, err = s3.Open(globalOptions.ctx, cfg.(s3.Config), rt)
be, err = s3.Open(ctx, cfg.(s3.Config), rt)
case "gs":
be, err = gs.Open(cfg.(gs.Config), rt)
case "azure":
be, err = azure.Open(cfg.(azure.Config), rt)
be, err = azure.Open(ctx, cfg.(azure.Config), rt)
case "swift":
be, err = swift.Open(globalOptions.ctx, cfg.(swift.Config), rt)
be, err = swift.Open(ctx, cfg.(swift.Config), rt)
case "b2":
be, err = b2.Open(globalOptions.ctx, cfg.(b2.Config), rt)
be, err = b2.Open(ctx, cfg.(b2.Config), rt)
case "rest":
be, err = rest.Open(cfg.(rest.Config), rt)
case "rclone":
@@ -751,7 +754,7 @@ func open(s string, gopts GlobalOptions, opts options.Options) (restic.Backend,
}
// check if config is there
fi, err := be.Stat(globalOptions.ctx, restic.Handle{Type: restic.ConfigFile})
fi, err := be.Stat(ctx, restic.Handle{Type: restic.ConfigFile})
if err != nil {
return nil, errors.Fatalf("unable to open config file: %v\nIs there a repository at the following location?\n%v", err, location.StripPassword(s))
}
@@ -764,7 +767,7 @@ func open(s string, gopts GlobalOptions, opts options.Options) (restic.Backend,
}
// Create the backend specified by URI.
func create(s string, opts options.Options) (restic.Backend, error) {
func create(ctx context.Context, s string, opts options.Options) (restic.Backend, error) {
debug.Log("parsing location %v", s)
loc, err := location.Parse(s)
if err != nil {
@@ -783,23 +786,23 @@ func create(s string, opts options.Options) (restic.Backend, error) {
switch loc.Scheme {
case "local":
return local.Create(globalOptions.ctx, cfg.(local.Config))
return local.Create(ctx, cfg.(local.Config))
case "sftp":
return sftp.Create(globalOptions.ctx, cfg.(sftp.Config))
return sftp.Create(ctx, cfg.(sftp.Config))
case "s3":
return s3.Create(globalOptions.ctx, cfg.(s3.Config), rt)
return s3.Create(ctx, cfg.(s3.Config), rt)
case "gs":
return gs.Create(cfg.(gs.Config), rt)
case "azure":
return azure.Create(cfg.(azure.Config), rt)
return azure.Create(ctx, cfg.(azure.Config), rt)
case "swift":
return swift.Open(globalOptions.ctx, cfg.(swift.Config), rt)
return swift.Open(ctx, cfg.(swift.Config), rt)
case "b2":
return b2.Create(globalOptions.ctx, cfg.(b2.Config), rt)
return b2.Create(ctx, cfg.(b2.Config), rt)
case "rest":
return rest.Create(globalOptions.ctx, cfg.(rest.Config), rt)
return rest.Create(ctx, cfg.(rest.Config), rt)
case "rclone":
return rclone.Create(globalOptions.ctx, cfg.(rclone.Config))
return rclone.Create(ctx, cfg.(rclone.Config))
}
debug.Log("invalid repository scheme: %v", s)

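OpenRepository now passes two callbacks to the retry backend: report fires on every failed attempt, and success fires once an operation finally goes through after retrying. A simplified sketch of that callback shape (a generic retry loop, not the internal/backend/retry implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryOp retries op up to maxRetries times, calling the callbacks the same
// way the retry backend calls the ones handed to retry.New above.
func retryOp(msg string, maxRetries int, op func() error,
	report func(msg string, err error, d time.Duration),
	success func(msg string, retries int)) error {
	var err error
	for i := 0; i <= maxRetries; i++ {
		if err = op(); err == nil {
			if i > 0 {
				success(msg, i)
			}
			return nil
		}
		report(msg, err, 10*time.Millisecond)
		time.Sleep(10 * time.Millisecond)
	}
	return err
}

func main() {
	attempts := 0
	err := retryOp("Load", 10,
		func() error {
			attempts++
			if attempts < 3 {
				return errors.New("connection reset") // simulated transient failure
			}
			return nil
		},
		func(msg string, err error, d time.Duration) {
			fmt.Printf("%v returned error, retrying after %v: %v\n", msg, d, err)
		},
		func(msg string, retries int) {
			fmt.Printf("%v operation successful after %d retries\n", msg, retries)
		},
	)
	fmt.Println("final error:", err)
}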

@@ -84,9 +84,9 @@ func runDebug() error {
}
if prof != nil {
AddCleanupHandler(func() error {
AddCleanupHandler(func(code int) (int, error) {
prof.Stop()
return nil
return code, nil
})
}


@@ -2,7 +2,7 @@ package main
import (
"bytes"
"io/ioutil"
"os"
"path/filepath"
"testing"
@@ -31,8 +31,7 @@ func Test_PrintFunctionsRespectsGlobalStdout(t *testing.T) {
}
func TestReadRepo(t *testing.T) {
tempDir, cleanup := test.TempDir(t)
defer cleanup()
tempDir := test.TempDir(t)
// test --repo option
var opts GlobalOptions
@@ -43,7 +42,7 @@ func TestReadRepo(t *testing.T) {
// test --repository-file option
foo := filepath.Join(tempDir, "foo")
err = ioutil.WriteFile(foo, []byte(tempDir+"\n"), 0666)
err = os.WriteFile(foo, []byte(tempDir+"\n"), 0666)
rtest.OK(t, err)
var opts2 GlobalOptions


@@ -1,14 +1,7 @@
//go:build go1.16
// +build go1.16
// Before Go 1.16 filepath.Match returned early on a failed match,
// and thus did not report any later syntax error in the pattern.
// https://go.dev/doc/go1.16#path/filepath
package main
import (
"io/ioutil"
"os"
"path/filepath"
"testing"
@@ -24,14 +17,14 @@ func TestBackupFailsWhenUsingInvalidPatterns(t *testing.T) {
var err error
// Test --exclude
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{Excludes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}, env.gopts)
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{excludePatternOptions: excludePatternOptions{Excludes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}}, env.gopts)
rtest.Equals(t, `Fatal: --exclude: invalid pattern(s) provided:
*[._]log[.-][0-9]
!*[._]log[.-][0-9]`, err.Error())
// Test --iexclude
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{InsensitiveExcludes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}, env.gopts)
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{excludePatternOptions: excludePatternOptions{InsensitiveExcludes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}}, env.gopts)
rtest.Equals(t, `Fatal: --iexclude: invalid pattern(s) provided:
*[._]log[.-][0-9]
@@ -46,7 +39,7 @@ func TestBackupFailsWhenUsingInvalidPatternsFromFile(t *testing.T) {
// Create an exclude file with some invalid patterns
excludeFile := env.base + "/excludefile"
fileErr := ioutil.WriteFile(excludeFile, []byte("*.go\n*[._]log[.-][0-9]\n!*[._]log[.-][0-9]"), 0644)
fileErr := os.WriteFile(excludeFile, []byte("*.go\n*[._]log[.-][0-9]\n!*[._]log[.-][0-9]"), 0644)
if fileErr != nil {
t.Fatalf("Could not write exclude file: %v", fileErr)
}
@@ -54,14 +47,14 @@ func TestBackupFailsWhenUsingInvalidPatternsFromFile(t *testing.T) {
var err error
// Test --exclude-file:
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{ExcludeFiles: []string{excludeFile}}, env.gopts)
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{excludePatternOptions: excludePatternOptions{ExcludeFiles: []string{excludeFile}}}, env.gopts)
rtest.Equals(t, `Fatal: --exclude-file: invalid pattern(s) provided:
*[._]log[.-][0-9]
!*[._]log[.-][0-9]`, err.Error())
// Test --iexclude-file
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{InsensitiveExcludeFiles: []string{excludeFile}}, env.gopts)
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{excludePatternOptions: excludePatternOptions{InsensitiveExcludeFiles: []string{excludeFile}}}, env.gopts)
rtest.Equals(t, `Fatal: --iexclude-file: invalid pattern(s) provided:
*[._]log[.-][0-9]


@@ -4,9 +4,11 @@
package main
import (
"context"
"fmt"
"os"
"path/filepath"
"sync"
"testing"
"time"
@@ -53,11 +55,12 @@ func waitForMount(t testing.TB, dir string) {
t.Errorf("subdir %q of dir %s never appeared", mountTestSubdir, dir)
}
func testRunMount(t testing.TB, gopts GlobalOptions, dir string) {
func testRunMount(t testing.TB, gopts GlobalOptions, dir string, wg *sync.WaitGroup) {
defer wg.Done()
opts := MountOptions{
TimeTemplate: time.RFC3339,
}
rtest.OK(t, runMount(opts, gopts, []string{dir}))
rtest.OK(t, runMount(context.TODO(), opts, gopts, []string{dir}))
}
func testRunUmount(t testing.TB, gopts GlobalOptions, dir string) {
@@ -86,8 +89,11 @@ func listSnapshots(t testing.TB, dir string) []string {
func checkSnapshots(t testing.TB, global GlobalOptions, repo *repository.Repository, mountpoint, repodir string, snapshotIDs restic.IDs, expectedSnapshotsInFuseDir int) {
t.Logf("checking for %d snapshots: %v", len(snapshotIDs), snapshotIDs)
go testRunMount(t, global, mountpoint)
var wg sync.WaitGroup
wg.Add(1)
go testRunMount(t, global, mountpoint, &wg)
waitForMount(t, mountpoint)
defer wg.Wait()
defer testRunUmount(t, global, mountpoint)
if !snapshotsDirExists(t, mountpoint) {
@@ -119,7 +125,7 @@ func checkSnapshots(t testing.TB, global GlobalOptions, repo *repository.Reposit
}
for _, id := range snapshotIDs {
snapshot, err := restic.LoadSnapshot(global.ctx, repo, id)
snapshot, err := restic.LoadSnapshot(context.TODO(), repo, id)
rtest.OK(t, err)
ts := snapshot.Time.Format(time.RFC3339)
@@ -160,7 +166,7 @@ func TestMount(t *testing.T) {
testRunInit(t, env.gopts)
repo, err := OpenRepository(env.gopts)
repo, err := OpenRepository(context.TODO(), env.gopts)
rtest.OK(t, err)
checkSnapshots(t, env.gopts, repo, env.mountpoint, env.repo, []restic.ID{}, 0)
@@ -205,7 +211,7 @@ func TestMountSameTimestamps(t *testing.T) {
rtest.SetupTarTestFixture(t, env.base, filepath.Join("testdata", "repo-same-timestamps.tar.gz"))
repo, err := OpenRepository(env.gopts)
repo, err := OpenRepository(context.TODO(), env.gopts)
rtest.OK(t, err)
ids := []restic.ID{


@@ -2,14 +2,13 @@ package main
import (
"bytes"
"context"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"runtime"
"testing"
"github.com/restic/restic/internal/backend/retry"
"github.com/restic/restic/internal/options"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
@@ -172,8 +171,9 @@ func withTestEnvironment(t testing.TB) (env *testEnvironment, cleanup func()) {
repository.TestUseLowSecurityKDFParameters(t)
restic.TestDisableCheckPolynomial(t)
retry.TestFastRetries(t)
tempdir, err := ioutil.TempDir(rtest.TestTempDir, "restic-test-")
tempdir, err := os.MkdirTemp(rtest.TestTempDir, "restic-test-")
rtest.OK(t, err)
env = &testEnvironment{
@@ -193,7 +193,6 @@ func withTestEnvironment(t testing.TB) (env *testEnvironment, cleanup func()) {
Repo: env.repo,
Quiet: true,
CacheDir: env.cache,
ctx: context.Background(),
password: rtest.TestPassword,
stdout: os.Stdout,
stderr: os.Stderr,


@@ -6,7 +6,6 @@ package main
import (
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"syscall"
@@ -57,7 +56,7 @@ func nlink(info os.FileInfo) uint64 {
func createFileSetPerHardlink(dir string) map[uint64][]string {
var stat syscall.Stat_t
linkTests := make(map[uint64][]string)
files, err := ioutil.ReadDir(dir)
files, err := os.ReadDir(dir)
if err != nil {
return nil
}


@@ -6,7 +6,6 @@ package main
import (
"fmt"
"io"
"io/ioutil"
"os"
)
@@ -39,7 +38,7 @@ func inode(info os.FileInfo) uint64 {
func createFileSetPerHardlink(dir string) map[uint64][]string {
linkTests := make(map[uint64][]string)
files, err := ioutil.ReadDir(dir)
files, err := os.ReadDir(dir)
if err != nil {
return nil
}


@@ -0,0 +1,73 @@
package main
import (
"context"
"path/filepath"
"testing"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
)
func testRunRewriteExclude(t testing.TB, gopts GlobalOptions, excludes []string, forget bool) {
opts := RewriteOptions{
excludePatternOptions: excludePatternOptions{
Excludes: excludes,
},
Forget: forget,
}
rtest.OK(t, runRewrite(context.TODO(), opts, gopts, nil))
}
func createBasicRewriteRepo(t testing.TB, env *testEnvironment) restic.ID {
testSetupBackupData(t, env)
// create backup
testRunBackup(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{}, env.gopts)
snapshotIDs := testRunList(t, "snapshots", env.gopts)
rtest.Assert(t, len(snapshotIDs) == 1, "expected one snapshot, got %v", snapshotIDs)
testRunCheck(t, env.gopts)
return snapshotIDs[0]
}
func TestRewrite(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
createBasicRewriteRepo(t, env)
// exclude some data
testRunRewriteExclude(t, env.gopts, []string{"3"}, false)
snapshotIDs := testRunList(t, "snapshots", env.gopts)
rtest.Assert(t, len(snapshotIDs) == 2, "expected two snapshots, got %v", snapshotIDs)
testRunCheck(t, env.gopts)
}
func TestRewriteUnchanged(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
snapshotID := createBasicRewriteRepo(t, env)
// use an exclude that will not exclude anything
testRunRewriteExclude(t, env.gopts, []string{"3dflkhjgdflhkjetrlkhjgfdlhkj"}, false)
newSnapshotIDs := testRunList(t, "snapshots", env.gopts)
rtest.Assert(t, len(newSnapshotIDs) == 1, "expected one snapshot, got %v", newSnapshotIDs)
rtest.Assert(t, snapshotID == newSnapshotIDs[0], "snapshot id changed unexpectedly")
testRunCheck(t, env.gopts)
}
func TestRewriteReplace(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
snapshotID := createBasicRewriteRepo(t, env)
// exclude some data
testRunRewriteExclude(t, env.gopts, []string{"3"}, true)
newSnapshotIDs := testRunList(t, "snapshots", env.gopts)
rtest.Assert(t, len(newSnapshotIDs) == 1, "expected one snapshot, got %v", newSnapshotIDs)
rtest.Assert(t, snapshotID != newSnapshotIDs[0], "snapshot id should have changed")
// check forbids unused blobs, thus remove them first
testRunPrune(t, env.gopts, PruneOptions{MaxUnused: "0"})
testRunCheck(t, env.gopts)
}


@@ -8,7 +8,6 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
mrand "math/rand"
"os"
"path/filepath"
@@ -23,6 +22,7 @@ import (
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/filter"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/index"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
@@ -52,26 +52,26 @@ func testRunInit(t testing.TB, opts GlobalOptions) {
restic.TestDisableCheckPolynomial(t)
restic.TestSetLockTimeout(t, 0)
rtest.OK(t, runInit(InitOptions{}, opts, nil))
rtest.OK(t, runInit(context.TODO(), InitOptions{}, opts, nil))
t.Logf("repository initialized at %v", opts.Repo)
}
func testRunBackupAssumeFailure(t testing.TB, dir string, target []string, opts BackupOptions, gopts GlobalOptions) error {
ctx, cancel := context.WithCancel(gopts.ctx)
ctx, cancel := context.WithCancel(context.TODO())
defer cancel()
var wg errgroup.Group
term := termstatus.New(gopts.stdout, gopts.stderr, gopts.Quiet)
wg.Go(func() error { term.Run(ctx); return nil })
gopts.stdout = ioutil.Discard
gopts.stdout = io.Discard
t.Logf("backing up %v in %v", target, dir)
if dir != "" {
cleanup := rtest.Chdir(t, dir)
defer cleanup()
}
backupErr := runBackup(opts, gopts, term, target)
backupErr := runBackup(ctx, opts, gopts, term, target)
cancel()
@@ -95,7 +95,7 @@ func testRunList(t testing.TB, tpe string, opts GlobalOptions) restic.IDs {
globalOptions.stdout = os.Stdout
}()
rtest.OK(t, runList(cmdList, opts, []string{tpe}))
rtest.OK(t, runList(context.TODO(), cmdList, opts, []string{tpe}))
return parseIDsFromReader(t, buf)
}
@@ -106,11 +106,13 @@ func testRunRestore(t testing.TB, opts GlobalOptions, dir string, snapshotID res
func testRunRestoreLatest(t testing.TB, gopts GlobalOptions, dir string, paths []string, hosts []string) {
opts := RestoreOptions{
Target: dir,
Hosts: hosts,
Paths: paths,
snapshotFilterOptions: snapshotFilterOptions{
Hosts: hosts,
Paths: paths,
},
}
rtest.OK(t, runRestore(opts, gopts, []string{"latest"}))
rtest.OK(t, runRestore(context.TODO(), opts, gopts, []string{"latest"}))
}
func testRunRestoreExcludes(t testing.TB, gopts GlobalOptions, dir string, snapshotID restic.ID, excludes []string) {
@@ -119,7 +121,7 @@ func testRunRestoreExcludes(t testing.TB, gopts GlobalOptions, dir string, snaps
Exclude: excludes,
}
rtest.OK(t, runRestore(opts, gopts, []string{snapshotID.String()}))
rtest.OK(t, runRestore(context.TODO(), opts, gopts, []string{snapshotID.String()}))
}
func testRunRestoreIncludes(t testing.TB, gopts GlobalOptions, dir string, snapshotID restic.ID, includes []string) {
@@ -128,11 +130,11 @@ func testRunRestoreIncludes(t testing.TB, gopts GlobalOptions, dir string, snaps
Include: includes,
}
rtest.OK(t, runRestore(opts, gopts, []string{snapshotID.String()}))
rtest.OK(t, runRestore(context.TODO(), opts, gopts, []string{snapshotID.String()}))
}
func testRunRestoreAssumeFailure(t testing.TB, snapshotID string, opts RestoreOptions, gopts GlobalOptions) error {
err := runRestore(opts, gopts, []string{snapshotID})
err := runRestore(context.TODO(), opts, gopts, []string{snapshotID})
return err
}
@@ -142,7 +144,7 @@ func testRunCheck(t testing.TB, gopts GlobalOptions) {
ReadData: true,
CheckUnused: true,
}
rtest.OK(t, runCheck(opts, gopts, nil))
rtest.OK(t, runCheck(context.TODO(), opts, gopts, nil))
}
func testRunCheckOutput(gopts GlobalOptions) (string, error) {
@@ -157,7 +159,7 @@ func testRunCheckOutput(gopts GlobalOptions) (string, error) {
ReadData: true,
}
err := runCheck(opts, gopts, nil)
err := runCheck(context.TODO(), opts, gopts, nil)
return buf.String(), err
}
@@ -175,17 +177,17 @@ func testRunDiffOutput(gopts GlobalOptions, firstSnapshotID string, secondSnapsh
opts := DiffOptions{
ShowMetadata: false,
}
err := runDiff(opts, gopts, []string{firstSnapshotID, secondSnapshotID})
err := runDiff(context.TODO(), opts, gopts, []string{firstSnapshotID, secondSnapshotID})
return buf.String(), err
}
func testRunRebuildIndex(t testing.TB, gopts GlobalOptions) {
globalOptions.stdout = ioutil.Discard
globalOptions.stdout = io.Discard
defer func() {
globalOptions.stdout = os.Stdout
}()
rtest.OK(t, runRebuildIndex(RebuildIndexOptions{}, gopts))
rtest.OK(t, runRebuildIndex(context.TODO(), RebuildIndexOptions{}, gopts))
}
func testRunLs(t testing.TB, gopts GlobalOptions, snapshotID string) []string {
@@ -200,7 +202,7 @@ func testRunLs(t testing.TB, gopts GlobalOptions, snapshotID string) []string {
opts := LsOptions{}
rtest.OK(t, runLs(opts, gopts, []string{snapshotID}))
rtest.OK(t, runLs(context.TODO(), opts, gopts, []string{snapshotID}))
return strings.Split(buf.String(), "\n")
}
@@ -216,7 +218,7 @@ func testRunFind(t testing.TB, wantJSON bool, gopts GlobalOptions, pattern strin
opts := FindOptions{}
rtest.OK(t, runFind(opts, gopts, []string{pattern}))
rtest.OK(t, runFind(context.TODO(), opts, gopts, []string{pattern}))
return buf.Bytes()
}
@@ -232,7 +234,7 @@ func testRunSnapshots(t testing.TB, gopts GlobalOptions) (newest *Snapshot, snap
opts := SnapshotOptions{}
rtest.OK(t, runSnapshots(opts, globalOptions, []string{}))
rtest.OK(t, runSnapshots(context.TODO(), opts, globalOptions, []string{}))
snapshots := []Snapshot{}
rtest.OK(t, json.Unmarshal(buf.Bytes(), &snapshots))
@@ -249,7 +251,7 @@ func testRunSnapshots(t testing.TB, gopts GlobalOptions) (newest *Snapshot, snap
func testRunForget(t testing.TB, gopts GlobalOptions, args ...string) {
opts := ForgetOptions{}
rtest.OK(t, runForget(opts, gopts, args))
rtest.OK(t, runForget(context.TODO(), opts, gopts, args))
}
func testRunForgetJSON(t testing.TB, gopts GlobalOptions, args ...string) {
@@ -267,7 +269,7 @@ func testRunForgetJSON(t testing.TB, gopts GlobalOptions, args ...string) {
Last: 1,
}
rtest.OK(t, runForget(opts, gopts, args))
rtest.OK(t, runForget(context.TODO(), opts, gopts, args))
var forgets []*ForgetGroup
rtest.OK(t, json.Unmarshal(buf.Bytes(), &forgets))
@@ -286,7 +288,7 @@ func testRunPrune(t testing.TB, gopts GlobalOptions, opts PruneOptions) {
defer func() {
gopts.backendTestHook = oldHook
}()
rtest.OK(t, runPrune(opts, gopts))
rtest.OK(t, runPrune(context.TODO(), opts, gopts))
}
func testSetupBackupData(t testing.TB, env *testEnvironment) string {
@@ -416,7 +418,7 @@ func TestBackupNonExistingFile(t *testing.T) {
defer cleanup()
testSetupBackupData(t, env)
globalOptions.stderr = ioutil.Discard
globalOptions.stderr = io.Discard
defer func() {
globalOptions.stderr = os.Stderr
}()
@@ -435,25 +437,25 @@ func TestBackupNonExistingFile(t *testing.T) {
}
func removePacksExcept(gopts GlobalOptions, t *testing.T, keep restic.IDSet, removeTreePacks bool) {
r, err := OpenRepository(gopts)
r, err := OpenRepository(context.TODO(), gopts)
rtest.OK(t, err)
// Get all tree packs
rtest.OK(t, r.LoadIndex(gopts.ctx))
rtest.OK(t, r.LoadIndex(context.TODO()))
treePacks := restic.NewIDSet()
for pb := range r.Index().Each(context.TODO()) {
r.Index().Each(context.TODO(), func(pb restic.PackedBlob) {
if pb.Type == restic.TreeBlob {
treePacks.Insert(pb.PackID)
}
}
})
// remove all packs containing data blobs
rtest.OK(t, r.List(gopts.ctx, restic.PackFile, func(id restic.ID, size int64) error {
rtest.OK(t, r.List(context.TODO(), restic.PackFile, func(id restic.ID, size int64) error {
if treePacks.Has(id) != removeTreePacks || keep.Has(id) {
return nil
}
return r.Backend().Remove(gopts.ctx, restic.Handle{Type: restic.PackFile, Name: id.String()})
return r.Backend().Remove(context.TODO(), restic.Handle{Type: restic.PackFile, Name: id.String()})
}))
}
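// The loop above reflects an API change in this diff: Index().Each no longer
// returns a channel to range over, it now takes a callback. A minimal sketch
// of the new pattern (illustrative only; assumes an open repository as in the
// tests above):
func collectTreePacks(ctx context.Context, r restic.Repository) restic.IDSet {
	treePacks := restic.NewIDSet()
	r.Index().Each(ctx, func(pb restic.PackedBlob) {
		if pb.Type == restic.TreeBlob {
			treePacks.Insert(pb.PackID)
		}
	})
	return treePacks
}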
@@ -477,7 +479,7 @@ func TestBackupSelfHealing(t *testing.T) {
testRunRebuildIndex(t, env.gopts)
// now the repo is also missing the data blob in the index; check should report this
rtest.Assert(t, runCheck(CheckOptions{}, env.gopts, nil) != nil,
rtest.Assert(t, runCheck(context.TODO(), CheckOptions{}, env.gopts, nil) != nil,
"check should have reported an error")
// second backup should report an error but "heal" this situation
@@ -500,26 +502,26 @@ func TestBackupTreeLoadError(t *testing.T) {
// Backup a subdirectory first, such that we can remove the tree pack for the subdirectory
testRunBackup(t, env.testdata, []string{"test"}, opts, env.gopts)
r, err := OpenRepository(env.gopts)
r, err := OpenRepository(context.TODO(), env.gopts)
rtest.OK(t, err)
rtest.OK(t, r.LoadIndex(env.gopts.ctx))
rtest.OK(t, r.LoadIndex(context.TODO()))
treePacks := restic.NewIDSet()
for pb := range r.Index().Each(context.TODO()) {
r.Index().Each(context.TODO(), func(pb restic.PackedBlob) {
if pb.Type == restic.TreeBlob {
treePacks.Insert(pb.PackID)
}
}
})
testRunBackup(t, filepath.Dir(env.testdata), []string{filepath.Base(env.testdata)}, opts, env.gopts)
testRunCheck(t, env.gopts)
// delete the subdirectory pack first
for id := range treePacks {
rtest.OK(t, r.Backend().Remove(env.gopts.ctx, restic.Handle{Type: restic.PackFile, Name: id.String()}))
rtest.OK(t, r.Backend().Remove(context.TODO(), restic.Handle{Type: restic.PackFile, Name: id.String()}))
}
testRunRebuildIndex(t, env.gopts)
// now the repo is missing the tree blob in the index; check should report this
rtest.Assert(t, runCheck(CheckOptions{}, env.gopts, nil) != nil, "check should have reported an error")
rtest.Assert(t, runCheck(context.TODO(), CheckOptions{}, env.gopts, nil) != nil, "check should have reported an error")
// second backup should report an error but "heal" this situation
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{filepath.Base(env.testdata)}, opts, env.gopts)
rtest.Assert(t, err != nil, "backup should have reported an error for the subdirectory")
@@ -529,7 +531,7 @@ func TestBackupTreeLoadError(t *testing.T) {
removePacksExcept(env.gopts, t, restic.NewIDSet(), true)
testRunRebuildIndex(t, env.gopts)
// now the repo is also missing the data blob in the index; check should report this
rtest.Assert(t, runCheck(CheckOptions{}, env.gopts, nil) != nil, "check should have reported an error")
rtest.Assert(t, runCheck(context.TODO(), CheckOptions{}, env.gopts, nil) != nil, "check should have reported an error")
// second backup should report an error but "heal" this situation
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{filepath.Base(env.testdata)}, opts, env.gopts)
rtest.Assert(t, err != nil, "backup should have reported an error")
@@ -638,7 +640,7 @@ func TestBackupErrors(t *testing.T) {
}()
opts := BackupOptions{}
gopts := env.gopts
gopts.stderr = ioutil.Discard
gopts.stderr = io.Discard
err := testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, opts, gopts)
rtest.Assert(t, err != nil, "Assumed failure, but no error occurred.")
rtest.Assert(t, err == ErrInvalidSourceData, "Wrong error returned")
@@ -759,7 +761,7 @@ func testRunCopy(t testing.TB, srcGopts GlobalOptions, dstGopts GlobalOptions) {
},
}
rtest.OK(t, runCopy(copyOpts, gopts, nil))
rtest.OK(t, runCopy(context.TODO(), copyOpts, gopts, nil))
}
func TestCopy(t *testing.T) {
@@ -901,15 +903,15 @@ func TestInitCopyChunkerParams(t *testing.T) {
password: env2.gopts.password,
},
}
rtest.Assert(t, runInit(initOpts, env.gopts, nil) != nil, "expected invalid init options to fail")
rtest.Assert(t, runInit(context.TODO(), initOpts, env.gopts, nil) != nil, "expected invalid init options to fail")
initOpts.CopyChunkerParameters = true
rtest.OK(t, runInit(initOpts, env.gopts, nil))
rtest.OK(t, runInit(context.TODO(), initOpts, env.gopts, nil))
repo, err := OpenRepository(env.gopts)
repo, err := OpenRepository(context.TODO(), env.gopts)
rtest.OK(t, err)
otherRepo, err := OpenRepository(env2.gopts)
otherRepo, err := OpenRepository(context.TODO(), env2.gopts)
rtest.OK(t, err)
rtest.Assert(t, repo.Config().ChunkerPolynomial == otherRepo.Config().ChunkerPolynomial,
@@ -918,7 +920,7 @@ func TestInitCopyChunkerParams(t *testing.T) {
}
func testRunTag(t testing.TB, opts TagOptions, gopts GlobalOptions) {
rtest.OK(t, runTag(opts, gopts, []string{}))
rtest.OK(t, runTag(context.TODO(), opts, gopts, []string{}))
}
func TestTag(t *testing.T) {
@@ -1010,7 +1012,7 @@ func testRunKeyListOtherIDs(t testing.TB, gopts GlobalOptions) []string {
globalOptions.stdout = os.Stdout
}()
rtest.OK(t, runKey(gopts, []string{"list"}))
rtest.OK(t, runKey(context.TODO(), gopts, []string{"list"}))
scanner := bufio.NewScanner(buf)
exp := regexp.MustCompile(`^ ([a-f0-9]+) `)
@@ -1031,7 +1033,7 @@ func testRunKeyAddNewKey(t testing.TB, newPassword string, gopts GlobalOptions)
testKeyNewPassword = ""
}()
rtest.OK(t, runKey(gopts, []string{"add"}))
rtest.OK(t, runKey(context.TODO(), gopts, []string{"add"}))
}
func testRunKeyAddNewKeyUserHost(t testing.TB, gopts GlobalOptions) {
@@ -1045,11 +1047,11 @@ func testRunKeyAddNewKeyUserHost(t testing.TB, gopts GlobalOptions) {
rtest.OK(t, cmdKey.Flags().Parse([]string{"--user=john", "--host=example.com"}))
t.Log("adding key for john@example.com")
rtest.OK(t, runKey(gopts, []string{"add"}))
rtest.OK(t, runKey(context.TODO(), gopts, []string{"add"}))
repo, err := OpenRepository(gopts)
repo, err := OpenRepository(context.TODO(), gopts)
rtest.OK(t, err)
key, err := repository.SearchKey(gopts.ctx, repo, testKeyNewPassword, 2, "")
key, err := repository.SearchKey(context.TODO(), repo, testKeyNewPassword, 2, "")
rtest.OK(t, err)
rtest.Equals(t, "john", key.Username)
@@ -1062,13 +1064,13 @@ func testRunKeyPasswd(t testing.TB, newPassword string, gopts GlobalOptions) {
testKeyNewPassword = ""
}()
rtest.OK(t, runKey(gopts, []string{"passwd"}))
rtest.OK(t, runKey(context.TODO(), gopts, []string{"passwd"}))
}
func testRunKeyRemove(t testing.TB, gopts GlobalOptions, IDs []string) {
t.Logf("remove %d keys: %q\n", len(IDs), IDs)
for _, id := range IDs {
rtest.OK(t, runKey(gopts, []string{"remove", id}))
rtest.OK(t, runKey(context.TODO(), gopts, []string{"remove", id}))
}
}
@@ -1098,7 +1100,7 @@ func TestKeyAddRemove(t *testing.T) {
env.gopts.password = passwordList[len(passwordList)-1]
t.Logf("testing access with last password %q\n", env.gopts.password)
rtest.OK(t, runKey(env.gopts, []string{"list"}))
rtest.OK(t, runKey(context.TODO(), env.gopts, []string{"list"}))
testRunCheck(t, env.gopts)
testRunKeyAddNewKeyUserHost(t, env.gopts)
@@ -1126,16 +1128,16 @@ func TestKeyProblems(t *testing.T) {
testKeyNewPassword = ""
}()
err := runKey(env.gopts, []string{"passwd"})
err := runKey(context.TODO(), env.gopts, []string{"passwd"})
t.Log(err)
rtest.Assert(t, err != nil, "expected passwd change to fail")
err = runKey(env.gopts, []string{"add"})
err = runKey(context.TODO(), env.gopts, []string{"add"})
t.Log(err)
rtest.Assert(t, err != nil, "expected key adding to fail")
t.Logf("testing access with initial password %q\n", env.gopts.password)
rtest.OK(t, runKey(env.gopts, []string{"list"}))
rtest.OK(t, runKey(context.TODO(), env.gopts, []string{"list"}))
testRunCheck(t, env.gopts)
}
@@ -1195,7 +1197,7 @@ func TestRestoreFilter(t *testing.T) {
if ok, _ := filter.Match(pat, filepath.Base(testFile.name)); !ok {
rtest.OK(t, err)
} else {
rtest.Assert(t, os.IsNotExist(errors.Cause(err)),
rtest.Assert(t, os.IsNotExist(err),
"expected %v to not exist in restore step %v, but it exists, err %v", testFile.name, i+1, err)
}
}
@@ -1240,7 +1242,7 @@ func TestRestoreLatest(t *testing.T) {
opts := BackupOptions{}
// chdir manually here so we can get the current directory. This is not the
// same as the temp dir returned by ioutil.TempDir() on darwin.
// same as the temp dir returned by os.MkdirTemp() on darwin.
back := rtest.Chdir(t, filepath.Dir(env.testdata))
defer back()
@@ -1281,15 +1283,15 @@ func TestRestoreLatest(t *testing.T) {
testRunRestoreLatest(t, env.gopts, filepath.Join(env.base, "restore1"), []string{filepath.Dir(p1)}, nil)
rtest.OK(t, testFileSize(p1rAbs, int64(102)))
if _, err := os.Stat(p2rAbs); os.IsNotExist(errors.Cause(err)) {
rtest.Assert(t, os.IsNotExist(errors.Cause(err)),
if _, err := os.Stat(p2rAbs); os.IsNotExist(err) {
rtest.Assert(t, os.IsNotExist(err),
"expected %v to not exist in restore, but it exists, err %v", p2rAbs, err)
}
testRunRestoreLatest(t, env.gopts, filepath.Join(env.base, "restore2"), []string{filepath.Dir(p2)}, nil)
rtest.OK(t, testFileSize(p2rAbs, int64(103)))
if _, err := os.Stat(p1rAbs); os.IsNotExist(errors.Cause(err)) {
rtest.Assert(t, os.IsNotExist(errors.Cause(err)),
if _, err := os.Stat(p1rAbs); os.IsNotExist(err) {
rtest.Assert(t, os.IsNotExist(err),
"expected %v to not exist in restore, but it exists, err %v", p1rAbs, err)
}
}
@@ -1305,7 +1307,7 @@ func TestRestoreWithPermissionFailure(t *testing.T) {
rtest.Assert(t, len(snapshots) > 0,
"no snapshots found in repo (%v)", datafile)
globalOptions.stderr = ioutil.Discard
globalOptions.stderr = io.Discard
defer func() {
globalOptions.stderr = os.Stderr
}()
@@ -1475,11 +1477,11 @@ func TestRebuildIndex(t *testing.T) {
}
func TestRebuildIndexAlwaysFull(t *testing.T) {
indexFull := repository.IndexFull
indexFull := index.IndexFull
defer func() {
repository.IndexFull = indexFull
index.IndexFull = indexFull
}()
repository.IndexFull = func(*repository.Index, bool) bool { return true }
index.IndexFull = func(*index.Index, bool) bool { return true }
testRebuildIndex(t, nil)
}
@@ -1539,7 +1541,7 @@ func TestRebuildIndexFailsOnAppendOnly(t *testing.T) {
datafile := filepath.Join("..", "..", "internal", "checker", "testdata", "duplicate-packs-in-index-test-repo.tar.gz")
rtest.SetupTarTestFixture(t, env.base, datafile)
globalOptions.stdout = ioutil.Discard
globalOptions.stdout = io.Discard
defer func() {
globalOptions.stdout = os.Stdout
}()
@@ -1547,7 +1549,7 @@ func TestRebuildIndexFailsOnAppendOnly(t *testing.T) {
env.gopts.backendTestHook = func(r restic.Backend) (restic.Backend, error) {
return &appendOnlyBackend{r}, nil
}
err := runRebuildIndex(RebuildIndexOptions{}, env.gopts)
err := runRebuildIndex(context.TODO(), RebuildIndexOptions{}, env.gopts)
if err == nil {
t.Error("expected rebuildIndex to fail")
}
@@ -1643,18 +1645,18 @@ func testPrune(t *testing.T, pruneOpts PruneOptions, checkOpts CheckOptions) {
testRunForgetJSON(t, env.gopts)
testRunForget(t, env.gopts, firstSnapshot[0].String())
testRunPrune(t, env.gopts, pruneOpts)
rtest.OK(t, runCheck(checkOpts, env.gopts, nil))
rtest.OK(t, runCheck(context.TODO(), checkOpts, env.gopts, nil))
}
var pruneDefaultOptions = PruneOptions{MaxUnused: "5%"}
func listPacks(gopts GlobalOptions, t *testing.T) restic.IDSet {
r, err := OpenRepository(gopts)
r, err := OpenRepository(context.TODO(), gopts)
rtest.OK(t, err)
packs := restic.NewIDSet()
rtest.OK(t, r.List(gopts.ctx, restic.PackFile, func(id restic.ID, size int64) error {
rtest.OK(t, r.List(context.TODO(), restic.PackFile, func(id restic.ID, size int64) error {
packs.Insert(id)
return nil
}))
@@ -1695,7 +1697,7 @@ func TestPruneWithDamagedRepository(t *testing.T) {
env.gopts.backendTestHook = oldHook
}()
// prune should fail
rtest.Assert(t, runPrune(pruneDefaultOptions, env.gopts) == errorPacksMissing,
rtest.Assert(t, runPrune(context.TODO(), pruneDefaultOptions, env.gopts) == errorPacksMissing,
"prune should have reported index not complete error")
}
@@ -1767,7 +1769,7 @@ func testEdgeCaseRepo(t *testing.T, tarfile string, optionsCheck CheckOptions, o
if checkOK {
testRunCheck(t, env.gopts)
} else {
rtest.Assert(t, runCheck(optionsCheck, env.gopts, nil) != nil,
rtest.Assert(t, runCheck(context.TODO(), optionsCheck, env.gopts, nil) != nil,
"check should have reported an error")
}
@@ -1775,7 +1777,7 @@ func testEdgeCaseRepo(t *testing.T, tarfile string, optionsCheck CheckOptions, o
testRunPrune(t, env.gopts, optionsPrune)
testRunCheck(t, env.gopts)
} else {
rtest.Assert(t, runPrune(optionsPrune, env.gopts) != nil,
rtest.Assert(t, runPrune(context.TODO(), optionsPrune, env.gopts) != nil,
"prune should have reported an error")
}
}
@@ -1846,10 +1848,10 @@ func TestListOnce(t *testing.T) {
testRunForgetJSON(t, env.gopts)
testRunForget(t, env.gopts, firstSnapshot[0].String())
testRunPrune(t, env.gopts, pruneOpts)
rtest.OK(t, runCheck(checkOpts, env.gopts, nil))
rtest.OK(t, runCheck(context.TODO(), checkOpts, env.gopts, nil))
rtest.OK(t, runRebuildIndex(RebuildIndexOptions{}, env.gopts))
rtest.OK(t, runRebuildIndex(RebuildIndexOptions{ReadAllPacks: true}, env.gopts))
rtest.OK(t, runRebuildIndex(context.TODO(), RebuildIndexOptions{}, env.gopts))
rtest.OK(t, runRebuildIndex(context.TODO(), RebuildIndexOptions{ReadAllPacks: true}, env.gopts))
}
func TestHardLink(t *testing.T) {
@@ -1859,7 +1861,7 @@ func TestHardLink(t *testing.T) {
datafile := filepath.Join("testdata", "test.hl.tar.gz")
fd, err := os.Open(datafile)
if os.IsNotExist(errors.Cause(err)) {
if os.IsNotExist(err) {
t.Skipf("unable to find data file %q, skipping", datafile)
return
}
@@ -2202,7 +2204,7 @@ func TestFindListOnce(t *testing.T) {
testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9", "3")}, opts, env.gopts)
thirdSnapshot := restic.NewIDSet(testRunList(t, "snapshots", env.gopts)...)
repo, err := OpenRepository(env.gopts)
repo, err := OpenRepository(context.TODO(), env.gopts)
rtest.OK(t, err)
snapshotIDs := restic.NewIDSet()


@@ -2,32 +2,36 @@ package main
import (
"context"
"fmt"
"sync"
"time"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
)
type lockContext struct {
cancel context.CancelFunc
refreshWG sync.WaitGroup
}
var globalLocks struct {
locks []*restic.Lock
cancelRefresh chan struct{}
refreshWG sync.WaitGroup
locks map[*restic.Lock]*lockContext
sync.Mutex
sync.Once
}
func lockRepo(ctx context.Context, repo *repository.Repository) (*restic.Lock, error) {
func lockRepo(ctx context.Context, repo restic.Repository) (*restic.Lock, context.Context, error) {
return lockRepository(ctx, repo, false)
}
func lockRepoExclusive(ctx context.Context, repo *repository.Repository) (*restic.Lock, error) {
func lockRepoExclusive(ctx context.Context, repo restic.Repository) (*restic.Lock, context.Context, error) {
return lockRepository(ctx, repo, true)
}
func lockRepository(ctx context.Context, repo *repository.Repository, exclusive bool) (*restic.Lock, error) {
// lockRepository wraps the ctx such that it is cancelled when the repository is unlocked;
// cancelling the original context also stops the lock refresh.
func lockRepository(ctx context.Context, repo restic.Repository, exclusive bool) (*restic.Lock, context.Context, error) {
// make sure that a repository is unlocked properly and after cancel() was
// called by the cleanup handler in global.go
globalLocks.Do(func() {
@@ -41,53 +45,114 @@ func lockRepository(ctx context.Context, repo *repository.Repository, exclusive
lock, err := lockFn(ctx, repo)
if err != nil {
return nil, errors.WithMessage(err, "unable to create lock in backend")
return nil, ctx, fmt.Errorf("unable to create lock in backend: %w", err)
}
debug.Log("create lock %p (exclusive %v)", lock, exclusive)
globalLocks.Lock()
if globalLocks.cancelRefresh == nil {
debug.Log("start goroutine for lock refresh")
globalLocks.cancelRefresh = make(chan struct{})
globalLocks.refreshWG = sync.WaitGroup{}
globalLocks.refreshWG.Add(1)
go refreshLocks(&globalLocks.refreshWG, globalLocks.cancelRefresh)
ctx, cancel := context.WithCancel(ctx)
lockInfo := &lockContext{
cancel: cancel,
}
lockInfo.refreshWG.Add(2)
refreshChan := make(chan struct{})
globalLocks.locks = append(globalLocks.locks, lock)
globalLocks.Lock()
globalLocks.locks[lock] = lockInfo
go refreshLocks(ctx, lock, lockInfo, refreshChan)
go monitorLockRefresh(ctx, lock, lockInfo, refreshChan)
globalLocks.Unlock()
return lock, err
return lock, ctx, err
}
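// A minimal caller-side sketch of the new signature (illustrative only; the
// helper name is hypothetical): the returned context is derived from the
// caller's and is cancelled once the lock can no longer be refreshed, so all
// long-running work guarded by the lock should use it.
func withExclusiveLock(ctx context.Context, repo restic.Repository) error {
	lock, ctx, err := lockRepoExclusive(ctx, repo)
	if err != nil {
		return err
	}
	defer unlockRepo(lock)
	// the wrapped ctx expires if the lock refresh fails or the repo is unlocked
	return repo.List(ctx, restic.PackFile, func(id restic.ID, size int64) error {
		return nil
	})
}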
var refreshInterval = 5 * time.Minute
func refreshLocks(wg *sync.WaitGroup, done <-chan struct{}) {
debug.Log("start")
defer func() {
wg.Done()
globalLocks.Lock()
globalLocks.cancelRefresh = nil
globalLocks.Unlock()
}()
// consider a lock refresh failed a bit before the lock actually becomes stale;
// the difference compensates for a small time drift between clients.
var refreshabilityTimeout = restic.StaleLockTimeout - refreshInterval*3/2
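// For a sense of scale: assuming restic.StaleLockTimeout is 30 minutes, with
// refreshInterval at 5 minutes this works out to 30m - 5m*3/2 = 22.5 minutes.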
func refreshLocks(ctx context.Context, lock *restic.Lock, lockInfo *lockContext, refreshed chan<- struct{}) {
debug.Log("start")
ticker := time.NewTicker(refreshInterval)
lastRefresh := lock.Time
defer func() {
ticker.Stop()
// ensure that the context was cancelled before removing the lock
lockInfo.cancel()
// remove the lock from the repo
debug.Log("unlocking repository with lock %v", lock)
if err := lock.Unlock(); err != nil {
debug.Log("error while unlocking: %v", err)
Warnf("error while unlocking: %v", err)
}
lockInfo.refreshWG.Done()
}()
for {
select {
case <-done:
case <-ctx.Done():
debug.Log("terminate")
return
case <-ticker.C:
if time.Since(lastRefresh) > refreshabilityTimeout {
// the lock is too old, wait until the expiry monitor cancels the context
continue
}
debug.Log("refreshing locks")
globalLocks.Lock()
for _, lock := range globalLocks.locks {
err := lock.Refresh(context.TODO())
if err != nil {
Warnf("unable to refresh lock: %v\n", err)
err := lock.Refresh(context.TODO())
if err != nil {
Warnf("unable to refresh lock: %v\n", err)
} else {
lastRefresh = lock.Time
// inform the monitor goroutine about the successful refresh
select {
case <-ctx.Done():
case refreshed <- struct{}{}:
}
}
globalLocks.Unlock()
}
}
}
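// refreshLocks and monitorLockRefresh cooperate through the refreshed
// channel: every successful Refresh is signalled to the monitor, which
// records the wall-clock time of the last refresh and cancels the lock's
// context if refreshabilityTimeout elapses without a signal.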
func monitorLockRefresh(ctx context.Context, lock *restic.Lock, lockInfo *lockContext, refreshed <-chan struct{}) {
// time.Now() might use a monotonic timer which is paused during standby
// convert to unix time to ensure we compare real time values
lastRefresh := time.Now().UnixNano()
pollDuration := 1 * time.Second
if refreshInterval < pollDuration {
// required for TestLockFailedRefresh
pollDuration = refreshInterval / 5
}
// timers are paused during standby, which is a problem as the refresh timeout
// _must_ expire if the host was in standby for too long. Thus fall back to periodic checks
// https://github.com/golang/go/issues/35012
timer := time.NewTimer(pollDuration)
defer func() {
timer.Stop()
lockInfo.cancel()
lockInfo.refreshWG.Done()
}()
for {
select {
case <-ctx.Done():
debug.Log("terminate expiry monitoring")
return
case <-refreshed:
lastRefresh = time.Now().UnixNano()
case <-timer.C:
if time.Now().UnixNano()-lastRefresh < refreshabilityTimeout.Nanoseconds() {
// restart timer
timer.Reset(pollDuration)
continue
}
Warnf("Fatal: failed to refresh lock in time\n")
return
}
}
}
@@ -98,40 +163,35 @@ func unlockRepo(lock *restic.Lock) {
}
globalLocks.Lock()
defer globalLocks.Unlock()
lockInfo, exists := globalLocks.locks[lock]
delete(globalLocks.locks, lock)
globalLocks.Unlock()
for i := 0; i < len(globalLocks.locks); i++ {
if lock == globalLocks.locks[i] {
// remove the lock from the repo
debug.Log("unlocking repository with lock %v", lock)
if err := lock.Unlock(); err != nil {
debug.Log("error while unlocking: %v", err)
Warnf("error while unlocking: %v", err)
return
}
// remove the lock from the list of locks
globalLocks.locks = append(globalLocks.locks[:i], globalLocks.locks[i+1:]...)
return
}
if !exists {
debug.Log("unable to find lock %v in the global list of locks, ignoring", lock)
return
}
debug.Log("unable to find lock %v in the global list of locks, ignoring", lock)
lockInfo.cancel()
lockInfo.refreshWG.Wait()
}
func unlockAll() error {
func unlockAll(code int) (int, error) {
globalLocks.Lock()
defer globalLocks.Unlock()
locks := globalLocks.locks
debug.Log("unlocking %d locks", len(globalLocks.locks))
for _, lock := range globalLocks.locks {
if err := lock.Unlock(); err != nil {
debug.Log("error while unlocking: %v", err)
return err
}
debug.Log("successfully removed lock")
for _, lockInfo := range globalLocks.locks {
lockInfo.cancel()
}
globalLocks.locks = globalLocks.locks[:0]
globalLocks.locks = make(map[*restic.Lock]*lockContext)
globalLocks.Unlock()
return nil
for _, lockInfo := range locks {
lockInfo.refreshWG.Wait()
}
return code, nil
}
func init() {
globalLocks.locks = make(map[*restic.Lock]*lockContext)
}

cmd/restic/lock_test.go (new file, 170 lines)

@@ -0,0 +1,170 @@
package main
import (
"context"
"fmt"
"testing"
"time"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
)
func openTestRepo(t *testing.T, wrapper backendWrapper) (*repository.Repository, func(), *testEnvironment) {
env, cleanup := withTestEnvironment(t)
if wrapper != nil {
env.gopts.backendTestHook = wrapper
}
testRunInit(t, env.gopts)
repo, err := OpenRepository(context.TODO(), env.gopts)
rtest.OK(t, err)
return repo, cleanup, env
}
func checkedLockRepo(ctx context.Context, t *testing.T, repo restic.Repository) (*restic.Lock, context.Context) {
lock, wrappedCtx, err := lockRepo(ctx, repo)
rtest.OK(t, err)
rtest.OK(t, wrappedCtx.Err())
if lock.Stale() {
t.Fatal("lock returned stale lock")
}
return lock, wrappedCtx
}
func TestLock(t *testing.T) {
repo, cleanup, _ := openTestRepo(t, nil)
defer cleanup()
lock, wrappedCtx := checkedLockRepo(context.Background(), t, repo)
unlockRepo(lock)
if wrappedCtx.Err() == nil {
t.Fatal("unlock did not cancel context")
}
}
func TestLockCancel(t *testing.T) {
repo, cleanup, _ := openTestRepo(t, nil)
defer cleanup()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
lock, wrappedCtx := checkedLockRepo(ctx, t, repo)
cancel()
if wrappedCtx.Err() == nil {
t.Fatal("canceled parent context did not cancel context")
}
// unlockRepo should not crash
unlockRepo(lock)
}
func TestLockUnlockAll(t *testing.T) {
repo, cleanup, _ := openTestRepo(t, nil)
defer cleanup()
lock, wrappedCtx := checkedLockRepo(context.Background(), t, repo)
_, err := unlockAll(0)
rtest.OK(t, err)
if wrappedCtx.Err() == nil {
t.Fatal("canceled parent context did not cancel context")
}
// unlockRepo should not crash
unlockRepo(lock)
}
func TestLockConflict(t *testing.T) {
repo, cleanup, env := openTestRepo(t, nil)
defer cleanup()
repo2, err := OpenRepository(context.TODO(), env.gopts)
rtest.OK(t, err)
lock, _, err := lockRepoExclusive(context.Background(), repo)
rtest.OK(t, err)
defer unlockRepo(lock)
_, _, err = lockRepo(context.Background(), repo2)
if err == nil {
t.Fatal("second lock should have failed")
}
}
type writeOnceBackend struct {
restic.Backend
written bool
}
func (b *writeOnceBackend) Save(ctx context.Context, h restic.Handle, rd restic.RewindReader) error {
if b.written {
return fmt.Errorf("fail after first write")
}
b.written = true
return b.Backend.Save(ctx, h, rd)
}
func TestLockFailedRefresh(t *testing.T) {
repo, cleanup, _ := openTestRepo(t, func(r restic.Backend) (restic.Backend, error) {
return &writeOnceBackend{Backend: r}, nil
})
defer cleanup()
// reduce locking intervals to be suitable for testing
ri, rt := refreshInterval, refreshabilityTimeout
refreshInterval = 20 * time.Millisecond
refreshabilityTimeout = 100 * time.Millisecond
defer func() {
refreshInterval, refreshabilityTimeout = ri, rt
}()
lock, wrappedCtx := checkedLockRepo(context.Background(), t, repo)
select {
case <-wrappedCtx.Done():
// expected lock refresh failure
case <-time.After(time.Second):
t.Fatal("failed lock refresh did not cause context cancellation")
}
// unlockRepo should not crash
unlockRepo(lock)
}
type loggingBackend struct {
restic.Backend
t *testing.T
}
func (b *loggingBackend) Save(ctx context.Context, h restic.Handle, rd restic.RewindReader) error {
b.t.Logf("save %v @ %v", h, time.Now())
return b.Backend.Save(ctx, h, rd)
}
func TestLockSuccessfulRefresh(t *testing.T) {
repo, cleanup, _ := openTestRepo(t, func(r restic.Backend) (restic.Backend, error) {
return &loggingBackend{
Backend: r,
t: t,
}, nil
})
defer cleanup()
t.Logf("test for successful lock refresh %v", time.Now())
// reduce locking intervals to be suitable for testing
ri, rt := refreshInterval, refreshabilityTimeout
refreshInterval = 40 * time.Millisecond
refreshabilityTimeout = 200 * time.Millisecond
defer func() {
refreshInterval, refreshabilityTimeout = ri, rt
}()
lock, wrappedCtx := checkedLockRepo(context.Background(), t, repo)
select {
case <-wrappedCtx.Done():
t.Fatal("lock refresh failed")
case <-time.After(2 * refreshabilityTimeout):
// expected lock refresh to work
}
// unlockRepo should not crash
unlockRepo(lock)
}


@@ -85,17 +85,15 @@ func needsPassword(cmd string) bool {
var logBuffer = bytes.NewBuffer(nil)
func init() {
func main() {
// install custom global logger into a buffer, if an error occurs
// we can show the logs
log.SetOutput(logBuffer)
}
func main() {
debug.Log("main %#v", os.Args)
debug.Log("restic %s compiled with %v on %v/%v",
version, runtime.Version(), runtime.GOOS, runtime.GOARCH)
err := cmdRoot.Execute()
err := cmdRoot.ExecuteContext(internalGlobalCtx)
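// ExecuteContext installs internalGlobalCtx as the root context of the
// command tree; cobra hands it to each command via cmd.Context(). A sketch
// of the assumed call-site pattern in a command's RunE (names illustrative):
//
//	RunE: func(cmd *cobra.Command, args []string) error {
//		return runRestore(cmd.Context(), restoreOptions, globalOptions, args)
//	},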
switch {
case restic.IsAlreadyLocked(err):


@@ -7,6 +7,7 @@ import (
"strings"
"time"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/progress"
"github.com/restic/restic/internal/ui/termstatus"
)
@@ -39,10 +40,11 @@ func newProgressMax(show bool, max uint64, description string) *progress.Counter
return progress.New(interval, max, func(v uint64, max uint64, d time.Duration, final bool) {
var status string
if max == 0 {
status = fmt.Sprintf("[%s] %d %s", formatDuration(d), v, description)
status = fmt.Sprintf("[%s] %d %s",
ui.FormatDuration(d), v, description)
} else {
status = fmt.Sprintf("[%s] %s %d / %d %s",
formatDuration(d), formatPercent(v, max), v, max, description)
ui.FormatDuration(d), ui.FormatPercent(v, max), v, max, description)
}
printProgress(status, canUpdateStatus)


@@ -24,11 +24,11 @@ type secondaryRepoOptions struct {
}
func initSecondaryRepoOptions(f *pflag.FlagSet, opts *secondaryRepoOptions, repoPrefix string, repoUsage string) {
f.StringVarP(&opts.LegacyRepo, "repo2", "", os.Getenv("RESTIC_REPOSITORY2"), repoPrefix+" `repository` "+repoUsage+" (default: $RESTIC_REPOSITORY2)")
f.StringVarP(&opts.LegacyRepositoryFile, "repository-file2", "", os.Getenv("RESTIC_REPOSITORY_FILE2"), "`file` from which to read the "+repoPrefix+" repository location "+repoUsage+" (default: $RESTIC_REPOSITORY_FILE2)")
f.StringVarP(&opts.LegacyPasswordFile, "password-file2", "", os.Getenv("RESTIC_PASSWORD_FILE2"), "`file` to read the "+repoPrefix+" repository password from (default: $RESTIC_PASSWORD_FILE2)")
f.StringVarP(&opts.LegacyKeyHint, "key-hint2", "", os.Getenv("RESTIC_KEY_HINT2"), "key ID of key to try decrypting the "+repoPrefix+" repository first (default: $RESTIC_KEY_HINT2)")
f.StringVarP(&opts.LegacyPasswordCommand, "password-command2", "", os.Getenv("RESTIC_PASSWORD_COMMAND2"), "shell `command` to obtain the "+repoPrefix+" repository password from (default: $RESTIC_PASSWORD_COMMAND2)")
f.StringVarP(&opts.LegacyRepo, "repo2", "", "", repoPrefix+" `repository` "+repoUsage+" (default: $RESTIC_REPOSITORY2)")
f.StringVarP(&opts.LegacyRepositoryFile, "repository-file2", "", "", "`file` from which to read the "+repoPrefix+" repository location "+repoUsage+" (default: $RESTIC_REPOSITORY_FILE2)")
f.StringVarP(&opts.LegacyPasswordFile, "password-file2", "", "", "`file` to read the "+repoPrefix+" repository password from (default: $RESTIC_PASSWORD_FILE2)")
f.StringVarP(&opts.LegacyKeyHint, "key-hint2", "", "", "key ID of key to try decrypting the "+repoPrefix+" repository first (default: $RESTIC_KEY_HINT2)")
f.StringVarP(&opts.LegacyPasswordCommand, "password-command2", "", "", "shell `command` to obtain the "+repoPrefix+" repository password from (default: $RESTIC_PASSWORD_COMMAND2)")
// hide repo2 options
_ = f.MarkDeprecated("repo2", "use --repo or --from-repo instead")
@@ -37,11 +37,23 @@ func initSecondaryRepoOptions(f *pflag.FlagSet, opts *secondaryRepoOptions, repo
_ = f.MarkHidden("key-hint2")
_ = f.MarkHidden("password-command2")
f.StringVarP(&opts.Repo, "from-repo", "", os.Getenv("RESTIC_FROM_REPOSITORY"), "source `repository` "+repoUsage+" (default: $RESTIC_FROM_REPOSITORY)")
f.StringVarP(&opts.RepositoryFile, "from-repository-file", "", os.Getenv("RESTIC_FROM_REPOSITORY_FILE"), "`file` from which to read the source repository location "+repoUsage+" (default: $RESTIC_FROM_REPOSITORY_FILE)")
f.StringVarP(&opts.PasswordFile, "from-password-file", "", os.Getenv("RESTIC_FROM_PASSWORD_FILE"), "`file` to read the source repository password from (default: $RESTIC_FROM_PASSWORD_FILE)")
f.StringVarP(&opts.KeyHint, "from-key-hint", "", os.Getenv("RESTIC_FROM_KEY_HINT"), "key ID of key to try decrypting the source repository first (default: $RESTIC_FROM_KEY_HINT)")
f.StringVarP(&opts.PasswordCommand, "from-password-command", "", os.Getenv("RESTIC_FROM_PASSWORD_COMMAND"), "shell `command` to obtain the source repository password from (default: $RESTIC_FROM_PASSWORD_COMMAND)")
opts.LegacyRepo = os.Getenv("RESTIC_REPOSITORY2")
opts.LegacyRepositoryFile = os.Getenv("RESTIC_REPOSITORY_FILE2")
opts.LegacyPasswordFile = os.Getenv("RESTIC_PASSWORD_FILE2")
opts.LegacyKeyHint = os.Getenv("RESTIC_KEY_HINT2")
opts.LegacyPasswordCommand = os.Getenv("RESTIC_PASSWORD_COMMAND2")
f.StringVarP(&opts.Repo, "from-repo", "", "", "source `repository` "+repoUsage+" (default: $RESTIC_FROM_REPOSITORY)")
f.StringVarP(&opts.RepositoryFile, "from-repository-file", "", "", "`file` from which to read the source repository location "+repoUsage+" (default: $RESTIC_FROM_REPOSITORY_FILE)")
f.StringVarP(&opts.PasswordFile, "from-password-file", "", "", "`file` to read the source repository password from (default: $RESTIC_FROM_PASSWORD_FILE)")
f.StringVarP(&opts.KeyHint, "from-key-hint", "", "", "key ID of key to try decrypting the source repository first (default: $RESTIC_FROM_KEY_HINT)")
f.StringVarP(&opts.PasswordCommand, "from-password-command", "", "", "shell `command` to obtain the source repository password from (default: $RESTIC_FROM_PASSWORD_COMMAND)")
opts.Repo = os.Getenv("RESTIC_FROM_REPOSITORY")
opts.RepositoryFile = os.Getenv("RESTIC_FROM_REPOSITORY_FILE")
opts.PasswordFile = os.Getenv("RESTIC_FROM_PASSWORD_FILE")
opts.KeyHint = os.Getenv("RESTIC_FROM_KEY_HINT")
opts.PasswordCommand = os.Getenv("RESTIC_FROM_PASSWORD_COMMAND")
}
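// The flags above are now registered with an empty default and filled from
// the environment afterwards, so --help no longer echoes the contents of the
// corresponding variables. A minimal sketch of the same pattern (flag and
// variable names are hypothetical):
func addRepoFlag(f *pflag.FlagSet, target *string) {
	f.StringVarP(target, "example-repo", "", "", "source `repository` (default: $EXAMPLE_REPOSITORY)")
	*target = os.Getenv("EXAMPLE_REPOSITORY")
}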
func fillSecondaryGlobalOpts(opts secondaryRepoOptions, gopts GlobalOptions, repoPrefix string) (GlobalOptions, bool, error) {


@@ -1,7 +1,7 @@
package main
import (
"io/ioutil"
"os"
"path/filepath"
"testing"
@@ -160,14 +160,12 @@ func TestFillSecondaryGlobalOpts(t *testing.T) {
}
// Create temp dir to create password file.
dir, cleanup := rtest.TempDir(t)
defer cleanup()
cleanup = rtest.Chdir(t, dir)
dir := rtest.TempDir(t)
cleanup := rtest.Chdir(t, dir)
defer cleanup()
// Create temporary password file
err := ioutil.WriteFile(filepath.Join(dir, "passwordFileDst"), []byte("secretDst"), 0666)
err := os.WriteFile(filepath.Join(dir, "passwordFileDst"), []byte("secretDst"), 0666)
rtest.OK(t, err)
// Test all valid cases


@@ -265,16 +265,11 @@ binary, you can get it with `docker pull` like this:
$ docker pull restic/restic
.. note::
| Another docker container which offers more configuration options is
| available as a contribution (Thank you!). You can find it at
| https://github.com/Lobaro/restic-backup-docker
From Source
***********
restic is written in the Go programming language and you need at least
Go version 1.15. Building restic may also work with older versions of Go,
Go version 1.18. Building restic may also work with older versions of Go,
but that's not supported. See the `Getting
started <https://golang.org/doc/install>`__ guide of the Go project for
instructions on how to install Go.
@@ -313,14 +308,14 @@ compiler. Building restic with gccgo may work, but is not supported.
Autocompletion
**************
Restic can write out man pages and bash/fish/zsh compatible autocompletion scripts:
Restic can write out man pages and bash/fish/zsh/powershell compatible autocompletion scripts:
.. code-block:: console
$ ./restic generate --help
The "generate" command writes automatically generated files (like the man pages
and the auto-completion files for bash, fish and zsh).
and the auto-completion files for bash, fish, zsh and powershell).
Usage:
restic generate [flags] [command]
@@ -330,6 +325,7 @@ Restic can write out man pages and bash/fish/zsh compatible autocompletion scrip
--fish-completion file write fish completion file
-h, --help help for generate
--man directory write man pages to directory
--powershell-completion write powershell completion file
--zsh-completion file write zsh completion file
Example for using sudo to write a bash completion script directly to the system-wide location:


@@ -58,9 +58,9 @@ versions.
+--------------------+-------------------------+---------------------+------------------+
| Repository version | Required restic version | Major new features | Comment |
+====================+=========================+=====================+==================+
| ``1`` | Any | | Current default |
| ``1`` | Any | | |
+--------------------+-------------------------+---------------------+------------------+
| ``2`` | 0.14.0 or newer | Compression support | |
| ``2`` | 0.14.0 or newer | Compression support | Current default |
+--------------------+-------------------------+---------------------+------------------+
@@ -86,10 +86,11 @@ command and enter the same password twice:
.. warning::
On Linux, storing the backup repository on a CIFS (SMB) share is not
recommended due to compatibility issues. Either use another backend
or set the environment variable `GODEBUG` to `asyncpreemptoff=1`.
Refer to GitHub issue `#2659 <https://github.com/restic/restic/issues/2659>`_ for further explanations.
On Linux, storing the backup repository on a CIFS (SMB) share or backing up
data from a CIFS share is not recommended due to compatibility issues in
older Linux kernels. Either use another backend or set the environment
variable `GODEBUG` to `asyncpreemptoff=1`. Refer to GitHub issue
`#2659 <https://github.com/restic/restic/issues/2659>`_ for further explanations.
SFTP
****
@@ -221,6 +222,8 @@ REST server uses exactly the same directory structure as local backend,
so you should be able to access it both locally and via HTTP, even
simultaneously.
.. _Amazon S3:
Amazon S3
*********
@@ -301,7 +304,7 @@ credentials of your Minio Server.
.. code-block:: console
$ export AWS_ACCESS_KEY_ID=<YOUR-MINIO-ACCESS-KEY-ID>
$ export AWS_SECRET_ACCESS_KEY= <YOUR-MINIO-SECRET-ACCESS-KEY>
$ export AWS_SECRET_ACCESS_KEY=<YOUR-MINIO-SECRET-ACCESS-KEY>
Now you can easily initialize restic to use Minio server as a backend with
this command.
@@ -464,6 +467,19 @@ The policy of the new container created by restic can be changed using environme
Backblaze B2
************
.. warning::
Due to issues with error handling in the current B2 library that restic uses,
the recommended way to utilize Backblaze B2 is by using its S3-compatible API.
Follow the documentation to `generate S3-compatible access keys`_ and then
set up restic as described at :ref:`Amazon S3`. This is expected to work better
than using the Backblaze B2 backend directly.
Unlike the B2 backend, restic's S3 backend will only hide files that are no
longer necessary. Thus, make sure to set up lifecycle rules that eventually
delete hidden files.
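With the S3-compatible keys generated, such a bucket can then be used through
restic's S3 backend. A sketch of the setup (the endpoint, bucket and key
values are placeholders for your own):

.. code-block:: console

    $ export AWS_ACCESS_KEY_ID=<YOUR-B2-S3-KEY-ID>
    $ export AWS_SECRET_ACCESS_KEY=<YOUR-B2-S3-APPLICATION-KEY>
    $ restic -r s3:s3.<REGION>.backblazeb2.com/<YOUR-BUCKET> init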
Restic can backup data to any Backblaze B2 bucket. You need to first setup the
following environment variables with the credentials you can find in the
dashboard on the "Buckets" page when signed into your B2 account:
@@ -502,11 +518,13 @@ The number of concurrent connections to the B2 service can be set with the ``-o
b2.connections=10`` switch. By default, at most five parallel connections are
established.
.. _generate S3-compatible access keys: https://help.backblaze.com/hc/en-us/articles/360047425453-Getting-Started-with-the-S3-Compatible-API
Microsoft Azure Blob Storage
****************************
You can also store backups on Microsoft Azure Blob Storage. Export the Azure
account name and key as follows:
Blob Storage account name and key as follows:
.. code-block:: console
@@ -618,6 +636,13 @@ initiate a new repository in the path ``bar`` in the remote ``foo``:
Restic takes care of starting and stopping rclone.
.. note:: If you get an error message saying "cannot implicitly run relative
executable rclone found in current directory", this means that an
rclone executable was found in the current directory. For security
reasons restic will not run this implicitly; instead you have to
use the ``-o rclone.program=./rclone`` extended option to override
this security check and explicitly tell restic to use the executable.
As a more concrete example, suppose you have configured a remote named
``b2prod`` for Backblaze B2 with rclone, with a bucket called ``yggdrasil``.
You can then use rclone to list files in the bucket like this:


@@ -204,6 +204,7 @@ Combined with ``--verbose``, you can see a list of changes:
modified /archive.tar.gz, saved in 0.140s (25.542 MiB added)
Would be added to the repository: 25.551 MiB
.. _backup-excluding-files:
Excluding Files
***************
@@ -299,7 +300,7 @@ directory, then selectively add back some of them.
::
$HOME/**/*
$HOME/*
!$HOME/Documents
!$HOME/code
!$HOME/.emacs.d
@@ -555,6 +556,7 @@ environment variables. The following lists these environment variables:
RESTIC_COMPRESSION Compression mode (only available for repository format version 2)
RESTIC_PROGRESS_FPS Frames per second by which the progress bar is updated
RESTIC_PACK_SIZE Target size for pack files
RESTIC_READ_CONCURRENCY Concurrency for file reads
TMPDIR Location for temporary files
