Users would find it helpful to know how to tune the "local" backend, and they might not realize that the local backend is simply called `local`. This in turn leads them to think restic is slow, as they cannot adjust away from the default of 2 threads for restore and backup.
* bugfix: write pprof file for `--{cpu,mem,...}-profile` even when exiting with an error code
Before this, if `restic backup --cpu-profile dir/ backup-dir/` couldn't
read some of the input files (e.g. they weren't readable by the user
restic was running under), the `cpu.pprof` file it outputs would be
empty.
https://github.com/spf13/cobra/issues/1893
* drop changelog as it's not relevant for end users
---------
Co-authored-by: Michael Eischer <michael.eischer@fau.de>
This adds proper support for filenames that include directories. For
example, `/foo/bar` would result in an error when trying to open `/foo`.
The directory tree is now built upfront. This ensures that the directory
tree is constructed only once. All accessors then only have to look up
the constructed directory entries.
Since cloud.google.com/go/storage v1.44.0 the gRPC API is enabled by
default. However, this increases the restic binary size by 20MB. So just
disable it again.
The timeout for all blobs starts to run after the delete calls have been
issued. Thus, use the same start time for all blobs instead of individual
timeouts.
* cmd_ls: one more test: run `ls --json` to check the JSON lines
Validate that the individual JSON lines are valid JSON documents.
Check for the snapshot ID and the path names in the backup.
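A minimal sketch of that kind of check (illustrative only, not restic's actual test code):

```go
import (
	"bufio"
	"bytes"
	"encoding/json"
	"testing"
)

// assertJSONLines checks that every line of output parses as a JSON document.
func assertJSONLines(t *testing.T, output []byte) {
	sc := bufio.NewScanner(bytes.NewReader(output))
	for sc.Scan() {
		var v map[string]any
		if err := json.Unmarshal(sc.Bytes(), &v); err != nil {
			t.Fatalf("invalid JSON line %q: %v", sc.Text(), err)
		}
	}
	if err := sc.Err(); err != nil {
		t.Fatal(err)
	}
}
```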
Refactor AtLeast method to correctly handle version comparisons by:
- Checking major version first
- Handling minor version comparisons
- Ensuring correct comparison of patch versions
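A sketch of that comparison order (type and field names are assumed, not restic's actual code):

```go
type version struct {
	major, minor, patch uint64
}

// AtLeast reports whether v is at least other: major is compared first,
// minor only when the majors are equal, patch only when the minors are too.
func (v version) AtLeast(other version) bool {
	if v.major != other.major {
		return v.major > other.major
	}
	if v.minor != other.minor {
		return v.minor > other.minor
	}
	return v.patch >= other.patch
}
```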
cmd_prune.go: added option `--repack-smaller-than`
prune.go: added field `SmallPackBytes` to `PruneOptions`, including checking and processing
prune_test.go: added test `TestPruneSmall`
doc/060_forget.rst: added description of enhancement
changelog/unreleased/issue-5109: description of enhancement
Enhancement: add the ability to sort the output of `restic ls -l` by
name, size, atime, ctime, mtime, time(=mtime), X(=extension), extension
---------
Co-authored-by: Michael Eischer <michael.eischer@fau.de>
* Sometimes the report function may absorb the error and return nil; in those cases `bar.Add(1)` would execute even though the file deletion had failed.
* feat(backends/s3): add warmup support before repacks and restores
This commit introduces basic support for transitioning pack files stored
in cold storage to hot storage on S3 and S3-compatible providers.
To prevent unexpected behavior for existing users, the feature is gated
behind new flags:
- `s3.enable-restore`: opt-in flag (defaults to false)
- `s3.restore-days`: number of days for the restored objects to remain
in hot storage (defaults to `7`)
- `s3.restore-timeout`: maximum time to wait for a single restoration
(defaults to `1 day`)
- `s3.restore-tier`: retrieval tier at which the restore will be
processed (defaults to `Standard`)
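For illustration, a restore using these options might look as follows (assuming restic's extended `-o key=value` option syntax and the `RESTIC_FEATURES` environment variable for the alpha feature flag described below):

```
RESTIC_FEATURES=s3-restore restic -r s3:s3.amazonaws.com/bucket/repo \
    -o s3.enable-restore=true -o s3.restore-days=7 \
    -o s3.restore-tier=Standard restore latest --target /tmp/restore
```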
As restoration times can be lengthy, this implementation preemptively
restores selected packs to avoid incessant restore delays during
downloads. This is slightly sub-optimal, as we could process packs
out of order (as soon as they're transitioned), but that would add too
much complexity for a marginal gain in speed.
To maintain simplicity and prevent resource exhaustion with lots of
packs, no new concurrency mechanisms or goroutines were added. This just
hooks gracefully into the existing routines.
**Limitations:**
- Tests against the backend were not written due to the lack of cold
storage class support in MinIO. Testing was done manually on
Scaleway's S3-compatible object storage. If necessary, we could
explore testing with LocalStack or mocks, though this requires further
discussion.
- Currently, this feature only warms up before restores and repacks
(prune/copy), as those are the two main use-cases I came across.
Support for other commands may be added in future iterations, as long
as affected packs can be calculated in advance.
- The feature is gated behind a new alpha `s3-restore` feature flag to
make it explicit that the feature is still wet behind the ears.
- There is no explicit user notification for ongoing pack restorations.
While I think it is not necessary because of the opt-in flag, showing
some notice may improve usability (but would probably require major
refactoring in the progress bar which I didn't want to start). Another
possibility would be to add a flag to send restore requests and fail
early.
See https://github.com/restic/restic/issues/3202
* ui: warn user when files are warming up from cold storage
* refactor: remove the PacksWarmer struct
It's easier to handle multiple handles in the backend directly, and it
may open the door to reducing the number of requests made to the backend
in the future.
The old sorting behaviour was to sort snapshots from oldest to newest.
The new sorting order is from newest to oldest. If one wants to revert to the
old behaviour, use the option --reverse.
---------
Co-authored-by: Michael Eischer <michael.eischer@fau.de>
The size comparison for `--max-unused` only accounted for unused but not
for duplicate data. For repositories with a large amount of duplicates
this can result in a situation where no data gets pruned even though
the amount of unused data is much higher than specified.
* tag: output the original ID and new snapshotID
tag: print changed snapshot information immediately
* print changed snapshot immediately after it has been saved
* add message type to the changedSnapshot
* add a summary type which will share the JSON output of the number of changed snapshots
* update verbosity of the changed snapshot in text mode so it is only shown when verbosity > 2
* also use the terminal status printer for a standard handling for stdout messages
Those methods now only allow modifying snapshots. Internal data types
used by the repository are now read-only. The repository-internal code
can bypass the restrictions by wrapping the repository in an
`internalRepository` type.
The restriction itself is implemented by using a new datatype
WriteableFileType in the SaveUnpacked and RemoveUnpacked methods. This
statically ensures that code cannot bypass the access restrictions.
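A self-contained sketch of the pattern (types simplified; restic's actual definitions differ):

```go
package repo

type fileType int

const (
	snapshotFile fileType = iota
	indexFile // repository-internal, must stay read-only for callers
)

// WriteableFileType wraps an unexported field, so values can only be
// constructed inside this package.
type WriteableFileType struct{ ft fileType }

// SnapshotFile is the only writeable file type exported to callers.
var SnapshotFile = WriteableFileType{ft: snapshotFile}

// SaveUnpacked accepts only WriteableFileType. Since there is no exported
// way to wrap indexFile, the compiler rejects any attempt to write
// repository-internal files from outside this package.
func SaveUnpacked(t WriteableFileType, data []byte) error {
	_, _ = t, data
	return nil // placeholder for the actual write
}
```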
The test changes are somewhat noisy as some of them modify repository
internals and therefore require some way to bypass the access
restrictions. This works by capturing an `internalRepository` or
`Backend` when creating the Repository using a test helper function.
This fixes an issue where restic cannot find the tree when trying to find the
tree id of a snapshot.
---------
Co-authored-by: Albin Vass <albinvass@gmail.com>
Co-authored-by: Michael Eischer <michael.eischer@fau.de>
* On Linux restore only user.* xattrs by default
* restore all for other OSs
* Update docs and changelog about the new restore
flags --exclude-xattr and --include-xattr
Signed-off-by: Tesshu Flower <tflower@redhat.com>
* Refactor ea and sd helpers to use go-winio
Import go-winio and instead of copying the functions to encode/decode extended attributes and enable process privileges for security descriptors, call the functions defined in go-winio.
* the id-token of the GitHub Actions workflow will be used for image signing
* replace branch-based tagging with SHA-based tagging, since branch names are mutable and SLSA provenance requires immutable tagging
* use the official SLSA framework GitHub reusable workflow
docker: fix incorrect registry name in image output step
* use REGISTRY environment variable instead of IMAGE_REGISTRY
docker: revert change to remove branch tag
* ui: restore --delete indicates number of deleted files
* adds new field `FilesDeleted` to the State struct, JSON and text progress updaters
* increment the FilesDeleted count when a deleted file is reported
* ui: collect the files to be deleted, delete them, then update the count post deletion
* docs: update scripting output fields for restore command
ui: report deleted directories and refactor function name to ReportDeletion
Only the `Sys()` value from os.FileInfo is kept as the field `sys` to
support Windows. Removing os.FileInfo ensures that for values like
`ModTime`, which existed in both data structures, there is no longer any
confusion about which value is actually used.
Depending on the changed packages, the VSS tests from ./cmd/restic and
the fs package may overlap in time. This causes the snapshot creation to
fail. Add retries in that case.
The actual implementation still relies on file paths, but with the
abstraction layer in place, an FS implementation can ensure atomic file
accesses in the future.
The basics around how to format commits and PR settings are primarily
relevant when opening a PR for the first time. But for repeat
contributors it is tedious to always tick those checkboxes.
Co-authored-by: rawtaz <rawtaz@users.noreply.github.com>
A particular node should always be represented by a single instance.
This is necessary to allow the fuse library to assign a stable nodeId to
a node. macOS Sonoma trips over the previous, unstable behavior when
using fuse-t.
By now, missing files are no longer endlessly retried by the retry
backend, so it can be enabled right from the start.
In addition, this change also enables the retry backend for the `init`
command.
Extended attributes and security descriptors apparently cannot be
retrieved from a vss volume. Fix the volume check to correctly detect
vss volumes and just completely disable extended attributes for volumes.
Paths that only contain the volume shadow copy snapshot name require
special treatment. These paths must end with a slash for regular file
operations to work.
Previously, nodeRestoreTimestamps would do something like
    if node.Type == restic.NodeTypeSymlink {
        return nodeRestoreSymlinkTimestamps(...)
    }
    return syscall.UtimesNano(...)
where nodeRestoreSymlinkTimestamps was either a no-op or a
reimplementation of syscall.UtimesNano that handles symlinks, with some
repeated converting between timestamp types. The Linux implementation
was a bit clumsy, requiring three syscalls to set the timestamps.
In this new setup, there is a function utimesNano that has three
implementations:
* on Linux, it's a modified syscall.UtimesNano that uses
AT_SYMLINK_NOFOLLOW and AT_FDCWD so it can handle any type in a single
call;
* on other Unix platforms, it just calls the syscall function after
skipping symlinks;
* on Windows, it's the modified UtimesNano that was previously called
nodeRestoreSymlinkTimestamps, except with different arguments.
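A sketch of the Linux variant (golang.org/x/sys/unix provides UtimesNanoAt; the function name and signature here are illustrative):

```go
//go:build linux

package fs

import "golang.org/x/sys/unix"

// utimesNano sets atime and mtime with a single utimensat(2) call.
// AT_SYMLINK_NOFOLLOW makes the call apply to a symlink itself rather than
// its target, so no per-type special casing is needed.
func utimesNano(path string, atime, mtime unix.Timespec) error {
	ts := []unix.Timespec{atime, mtime}
	return unix.UtimesNanoAt(unix.AT_FDCWD, path, ts, unix.AT_SYMLINK_NOFOLLOW)
}
```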
Previously, NodeFromFileInfo used the original file path to create the
node, which also meant that extended metadata was read from there
instead of within the vss snapshot.
This change is a temporary solution for restic 0.17.2 and will be
replaced with a clean fix in restic 0.18.0.
Add two new test cases, TestBackendAzureAccountToken and
TestBackendAzureContainerToken, that ensure that the authorization using
both types of token works.
This introduces two new environment variables,
RESTIC_TEST_AZURE_ACCOUNT_SAS and RESTIC_TEST_AZURE_CONTAINER_SAS, that
contain the tokens to use when testing restic. If an environment
variable is missing, the related test is skipped.
Ignore AuthorizationFailure caused by using a container level SAS/SAT
token when calling GetProperties during the Create() call. This is because the
GetProperties call expects an Account Level token, and the container
level token simply lacks the appropriate permissions. Suppressing the
Authorization Failure is OK, because if the token is actually invalid,
this is caught elsewhere when we try to actually use the token to do
work.
This does not produce exactly the same messages, as it inserts newlines
instead of "; ". But given how long our error messages can be, that
might be a good thing.
One place where IDSet.Clone is useful was reinventing it, using a
conversion to list, a sort, and a conversion back to map.
Also, use the stdlib "maps" package to implement as much of IDSet as
possible. This requires changing one caller, which assumed that cloning
nil would return a non-nil IDSet.
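A sketch of how far the stdlib "maps" package gets you (ID simplified to a fixed-size array):

```go
package restic

import "maps"

type ID [32]byte

type IDSet map[ID]struct{}

func (s IDSet) Insert(id ID) { s[id] = struct{}{} }

func (s IDSet) Has(id ID) bool { _, ok := s[id]; return ok }

// Clone returns a copy of the set. Like maps.Clone, cloning a nil set
// returns nil, which is exactly the caller assumption that had to change.
func (s IDSet) Clone() IDSet { return maps.Clone(s) }
```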
This changes Dumper.writeNode to spawn loader goroutines as needed
instead of as a pool. The code is shorter, fewer goroutines are spawned
for small files, and crash dumps (also for unrelated errors) should be
smaller.
When adding a new feature, the problem description often just says that
feature Y was missing, followed by saying that feature Y is now
supported.
This duplication just makes the changelog entries unnecessarily verbose.
Now, a snapshot is only marked as oldest if it's the last in the list AND its value matches the last seen value for that bucket.
Also, updated the corresponding golden files for the tests.
Depending on parameters the paths in a snapshot do not directly
correspond to real paths on the filesystem. Therefore, reject funcs must
use the FS interface to work correctly.
The temp files used by the packer manager are either deleted after
creation (unix) or marked as delete-on-close (windows). Thus, no explicit
cleanup is necessary.
The retry code path did not filter `ERROR_NOT_SUPPORTED`. Just call the
original function a second time to correctly follow the low privilege
code path.
Calling `Load()` twice for an atomic variable can return different
values each time. This resulted in trying to read the security
descriptor with high privileges, but then not entering the code path to
switch to low privileges when another thread has already done so
concurrently.
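A sketch of the fixed pattern (readSecurityInfo is a hypothetical stand-in for the Windows security-descriptor query):

```go
package fs

import "sync/atomic"

var lowPrivileges atomic.Bool

// readSecurityInfo is a hypothetical stand-in for the actual query.
func readSecurityInfo(path string, lowPriv bool) ([]byte, error) {
	_, _ = path, lowPriv
	return nil, nil
}

// getSecurityDescriptor loads the flag exactly once and reuses that value,
// so a concurrent Store by another goroutine cannot make two reads disagree.
func getSecurityDescriptor(path string) ([]byte, error) {
	lowPriv := lowPrivileges.Load()
	sd, err := readSecurityInfo(path, lowPriv)
	if err != nil && !lowPriv {
		// Fall back based on the value read above, not on a second Load().
		lowPrivileges.Store(true)
		return readSecurityInfo(path, true)
	}
	return sd, err
}
```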
The HTTP client can only retry HTTP2 requests after receiving a GOAWAY
response if it can rewind the body. As we use a custom data type,
explicitly provide an implementation of `GetBody`.
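A sketch of the fix for a byte-slice-backed body (restic's actual body type differs):

```go
package rest

import (
	"bytes"
	"io"
	"net/http"
)

// byteReader stands in for restic's custom request-body type. net/http only
// derives GetBody automatically for *bytes.Reader, *bytes.Buffer and
// *strings.Reader, so a custom type must provide it explicitly.
type byteReader struct{ *bytes.Reader }

// newUploadRequest sets GetBody so the HTTP/2 transport can rewind the body
// and retry the request after a graceful GOAWAY.
func newUploadRequest(url string, data []byte) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPost, url, byteReader{bytes.NewReader(data)})
	if err != nil {
		return nil, err
	}
	req.GetBody = func() (io.ReadCloser, error) {
		return io.NopCloser(bytes.NewReader(data)), nil
	}
	return req, nil
}
```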
Split description for non-Amazon S3 providers into separate section. The
section now also includes the `s3.bucket-lookup` extended option. Switch
to using regional URLs for Amazon S3 to replace the need for setting the
region.
Failed locking attempts were immediately retried up to three times
without any delay between the retries. If a lock file is not found while
checking for other locks, with the reworked backend retries there is no
delay between those retries. This is a problem if a backend requires a
few seconds to reflect file deletions in the file listings. To work
around this problem, introduce a short exponentially increasing delay
between the retries. The number of retries is now increased to 4. This
results in delays of 5, 10 and 20 seconds between the retries.
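A sketch of that schedule, with four attempts and 5, 10 and 20 second waits in between (names illustrative):

```go
package lock

import (
	"context"
	"time"
)

// retryLock retries attempt up to four times, doubling the delay between
// tries starting at five seconds, i.e. waiting 5s, 10s and 20s.
func retryLock(ctx context.Context, attempt func() error) error {
	const attempts = 4
	delay := 5 * time.Second
	var err error
	for i := 0; i < attempts; i++ {
		if err = attempt(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		select {
		case <-time.After(delay):
		case <-ctx.Done():
			return ctx.Err()
		}
		delay *= 2
	}
	return err
}
```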
When the context used for a load operation is canceled, then the result
is always an error independent of whether the file could be retrieved
from the backend. Do not false positively trip the circuit breaker in
this case.
The old behavior was problematic when trying to lock a repository. When
`Lock.checkForOtherLocks` listed multiple lock files in parallel and one
of them fails to load, then all other loads were canceled. This
cancelation was remembered by the circuit breaker, such that locking
retries would fail.
* removes files which are no longer in the repository, including index files, snapshot files and pack files, from the cache.
cache: fix ids set initialisation with NewIDSet()
Files were not included in the backup if the extended metadata for the
file could not be read. That is rather drastic; instead, settle on
returning a warning but still including the file in the backup.
This keeps backwards compatibility with the previous empty structs.
And maybe we'd want to put other fields into the inner struct later,
rather than the outer message.
Previously, an error JSON fragment would look like:
{"message_type": "error", "error": {}}
This is because encoding/json cannot marshal an error interface.
Instead, we now call .Error() to get the string value.
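A sketch of the difference (the inner `message` key is an assumption following the fragment above):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

type errorUpdate struct {
	MessageType string `json:"message_type"`
	Error       struct {
		Message string `json:"message"`
	} `json:"error"`
}

func main() {
	err := errors.New("at least one source file could not be read")

	u := errorUpdate{MessageType: "error"}
	// Marshaling err directly yields {}, since the error interface exposes
	// no exported fields; storing err.Error() preserves the text.
	u.Error.Message = err.Error()

	out, _ := json.Marshal(u)
	fmt.Println(string(out))
	// {"message_type":"error","error":{"message":"at least one source file could not be read"}}
}
```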
- Fix a logic error: instead of reporting the *first* metadata-setting
error that appears, we were reporting the *last* error (and only if the
lchown call failed!).
- Don't show any errors when setting metadata for files in non-root
mode (things like timestamps, attributes). Previously, only lchown
errors were skipped. But other kinds of attribute errors make sense
to skip as well. The code path happened to work correctly before
because of the above logic error. But once that was fixed, this
change needed to happen too.
- [ ] I have read the [contribution guidelines](https://github.com/restic/restic/blob/master/CONTRIBUTING.md#providing-patches).
- [ ] I have [enabled maintainer edits](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/allowing-changes-to-a-pull-request-branch-created-from-a-fork).
- [ ] I have added tests for all code changes.
- [ ] I have added documentation for relevant changes (in the manual).
- [ ] There's a new file in `changelog/unreleased/` that describes the changes for our users (see [template](https://github.com/restic/restic/blob/master/changelog/TEMPLATE)).
- [ ] I have run `gofmt` on the code in all commits.
- [ ] All commit messages are formatted in the same style as [the other commits in the repo](https://github.com/restic/restic/blob/master/CONTRIBUTING.md#git-commits).
- [ ] I'm done! This pull request is ready for review.
* Bugfix #5107: Fix metadata error on Windows for backups using VSS
Since restic 0.17.2, when creating a backup on Windows using
`--use-fs-snapshot`, restic would report an error like the following:
```
error: incomplete metadata for C:\: get EA failed while opening file handle for path \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopyXX\, with: The process cannot access the file because it is being used by another process.
```
This has now been fixed by correctly handling paths that refer to volume shadow
copy snapshots.
https://github.com/restic/restic/issues/5107
https://github.com/restic/restic/pull/5110
https://github.com/restic/restic/pull/5112
* Enhancement #5096: Allow `prune --dry-run` without lock
Running `prune --dry-run --no-lock` now performs a dry-run without locking
the repository. Note that if the repository is modified concurrently, `prune`
may return inaccurate statistics or errors.
https://github.com/restic/restic/pull/5096
# Changelog for restic 0.17.2 (2024-10-27)
The following sections list the changes in restic 0.17.2 relevant to
restic users. The changes are ordered by importance.
## Summary
* Fix #4004: Support container-level SAS/SAT tokens for Azure backend
* Fix #5047: Resolve potential error during concurrent cache cleanup
* Fix #5050: Return error if `tag` fails to lock repository
* Fix #5057: Exclude irregular files from backups
* Fix #5063: Correctly `backup` extended metadata when using VSS on Windows
## Details
* Bugfix #4004: Support container-level SAS/SAT tokens for Azure backend
Restic previously expected SAS/SAT tokens to be generated at the account level,
which prevented tokens created at the container level from being used to
initialize a repository. This caused an error when attempting to initialize a
repository with container-level tokens.
Restic now supports both account-level and container-level SAS/SAT tokens for
initializing a repository.
https://github.com/restic/restic/issues/4004
https://github.com/restic/restic/pull/5093
* Bugfix #5047: Resolve potential error during concurrent cache cleanup
When multiple restic processes ran concurrently, they could compete to remove
obsolete snapshots from the local backend cache, sometimes leading to a "no such
file or directory" error. Restic now suppresses this error to prevent issues
during cache cleanup.
https://github.com/restic/restic/pull/5047
* Bugfix #5050: Return error if `tag` fails to lock repository
Since restic 0.17.0, the `tag` command did not return an error when it failed to
open or lock the repository. This issue has now been fixed.
https://github.com/restic/restic/issues/5050
https://github.com/restic/restic/pull/5056
* Bugfix #5057: Exclude irregular files from backups
Since restic 0.17.1, files with the type `irregular` could mistakenly be
included in snapshots, especially when backing up special file types on Windows
that restic cannot process. This issue has now been fixed.
Previously, this bug caused the `check` command to report errors like the
following one:
```
tree 12345678[...]: node "example.zip" with invalid type "irregular"
```
To repair affected snapshots, upgrade to restic 0.17.2 and run:
```
restic repair snapshots --forget
```
This will remove the `irregular` files from the snapshots (creating a new
snapshot for each one affected).
* Bugfix #4945: Include missing backup error text with `--json`
Previously, when running a backup with the `--json` option, restic failed to
include the actual error message in the output, resulting in `"error": {}` being
displayed.
This has now been fixed, and restic now includes the error text in JSON output.
https://github.com/restic/restic/issues/4945
https://github.com/restic/restic/pull/4946
* Bugfix #4953: Correctly handle long paths on older Windows versions
On older Windows versions, like Windows Server 2012, restic 0.17.0 failed to
back up files with long paths. This problem has now been resolved.
https://github.com/restic/restic/issues/4953
https://github.com/restic/restic/pull/4954
* Bugfix #4957: Fix delayed cancellation of certain commands
Since restic 0.17.0, some commands did not immediately respond to cancellation
via Ctrl-C (SIGINT) and continued running for a short period. The most affected
commands were `diff`, `find`, `ls`, `stats` and `rewrite`. This is now resolved.
https://github.com/restic/restic/issues/4957
https://github.com/restic/restic/pull/4960
* Bugfix #4958: Don't ignore metadata-setting errors during restore
Previously, restic ignored errors when setting timestamps, attributes, or
file modes during a restore. It now reports those errors, except for
permission-related errors when running without root privileges.
https://github.com/restic/restic/pull/4958
* Bugfix #4969: Correctly restore timestamp for files with resource forks on macOS
On macOS, timestamps were not restored for files with resource forks. This has
now been fixed.
https://github.com/restic/restic/issues/4969
https://github.com/restic/restic/pull/5006
* Bugfix #4975: Prevent `backup --stdin-from-command` from panicking
Restic would previously crash if `--stdin-from-command` was specified without
providing a command. This issue has now been fixed.
https://github.com/restic/restic/issues/4975
https://github.com/restic/restic/pull/4976
* Bugfix #4980: Skip extended attribute processing on unsupported Windows volumes
With restic 0.17.0, backups of certain Windows paths, such as network drives,
failed due to errors while fetching extended attributes.
Restic now skips extended attribute processing for volumes where they are not
supported.
https://github.com/restic/restic/issues/4955
https://github.com/restic/restic/issues/4950
https://github.com/restic/restic/pull/4980
https://github.com/restic/restic/pull/4998
* Bugfix #5004: Fix spurious "A Required Privilege Is Not Held by the Client" error
On Windows, creating a backup could sometimes trigger the following error:
```
error: nodeFromFileInfo [...]: get named security info failed with: a required privilege is not held by the client.
```
This has now been fixed.
https://github.com/restic/restic/issues/5004
https://github.com/restic/restic/pull/5019
* Bugfix #5005: Fix rare failures to retry locking a repository
Restic 0.17.0 could in rare cases fail to retry locking a repository if one of
the lock files failed to load, resulting in the error:
```
unable to create lock in backend: circuit breaker open for file <lock/1234567890>
```
This issue has now been addressed. The error handling now properly retries the
locking operation. In addition, restic waits a few seconds between locking
retries to increase the chances of successful locking.
https://github.com/restic/restic/issues/5005
https://github.com/restic/restic/pull/5011
https://github.com/restic/restic/pull/5012
* Bugfix #5018: Improve HTTP/2 support for REST backend
If `rest-server` tried to gracefully shut down an HTTP/2 connection still in use
by the client, it could result in the following error:
```
http2: Transport: cannot retry err [http2: Transport received Server's graceful shutdown GOAWAY] after Request.Body was written; define Request.GetBody to avoid this error
```
f.StringVar(&opts.Parent, "parent", "", "use this parent `snapshot` (default: latest snapshot in the group determined by --group-by and not newer than the timestamp determined by --time)")
f.VarP(&opts.GroupBy, "group-by", "g", "`group` snapshots by host, paths and/or tags, separated by comma (disable grouping with '')")
f.BoolVarP(&opts.Force, "force", "f", false, `force re-reading the source files/directories (overrides the "parent" flag)`)
// ErrInvalidSourceData is used to report an incomplete backup
var ErrInvalidSourceData = errors.New("at least one source file could not be read")
opts.ExcludePatternOptions.Add(f)
func init() {
cmdRoot.AddCommand(cmdBackup)
f := cmdBackup.Flags()
f.StringVar(&backupOptions.Parent, "parent", "", "use this parent `snapshot` (default: latest snapshot in the group determined by --group-by and not newer than the timestamp determined by --time)")
f.BoolVarP(&backupOptions.ExcludeOtherFS, "one-file-system", "x", false, "exclude other file systems, don't cross filesystem boundaries and subvolumes")
f.StringArrayVar(&backupOptions.ExcludeIfPresent, "exclude-if-present", nil, "takes `filename[:header]`, exclude contents of directories containing filename (except filename itself) if header of that file is as provided (can be specified multiple times)")
f.BoolVar(&backupOptions.ExcludeCaches, "exclude-caches", false, `excludes cache directories that are marked with a CACHEDIR.TAG file. See https://bford.info/cachedir/ for the Cache Directory Tagging Standard`)
f.StringVar(&backupOptions.ExcludeLargerThan, "exclude-larger-than", "", "max `size` of the files to be backed up (allowed suffixes: k/K, m/M, g/G, t/T)")
f.BoolVar(&backupOptions.Stdin, "stdin", false, "read backup from stdin")
f.StringVar(&backupOptions.StdinFilename, "stdin-filename", "stdin", "`filename` to use when reading from stdin")
f.BoolVar(&backupOptions.StdinCommand, "stdin-from-command", false, "interpret arguments as command to execute and store its stdout")
f.Var(&backupOptions.Tags, "tag", "add `tags` for the new snapshot in the format `tag[,tag,...]` (can be specified multiple times)")
f.UintVar(&backupOptions.ReadConcurrency, "read-concurrency", 0, "read `n` files concurrently (default: $RESTIC_READ_CONCURRENCY or 2)")
f.StringVarP(&backupOptions.Host, "host", "H", "", "set the `hostname` for the snapshot manually (default: $RESTIC_HOST). To prevent an expensive rescan use the \"parent\" flag")
f.StringVar(&backupOptions.Host, "hostname", "", "set the `hostname` for the snapshot manually")
f.BoolVarP(&opts.ExcludeOtherFS, "one-file-system", "x", false, "exclude other file systems, don't cross filesystem boundaries and subvolumes")
f.StringArrayVar(&opts.ExcludeIfPresent, "exclude-if-present", nil, "takes `filename[:header]`, exclude contents of directories containing filename (except filename itself) if header of that file is as provided (can be specified multiple times)")
f.BoolVar(&opts.ExcludeCaches, "exclude-caches", false, `excludes cache directories that are marked with a CACHEDIR.TAG file. See https://bford.info/cachedir/ for the Cache Directory Tagging Standard`)
f.StringVar(&opts.ExcludeLargerThan, "exclude-larger-than", "", "max `size` of the files to be backed up (allowed suffixes: k/K, m/M, g/G, t/T)")
f.BoolVar(&opts.Stdin, "stdin", false, "read backup from stdin")
f.StringVar(&opts.StdinFilename, "stdin-filename", "stdin", "`filename` to use when reading from stdin")
f.BoolVar(&opts.StdinCommand, "stdin-from-command", false, "interpret arguments as command to execute and store its stdout")
f.Var(&opts.Tags, "tag", "add `tags` for the new snapshot in the format `tag[,tag,...]` (can be specified multiple times)")
f.UintVar(&opts.ReadConcurrency, "read-concurrency", 0, "read `n` files concurrently (default: $RESTIC_READ_CONCURRENCY or 2)")
f.StringVarP(&opts.Host, "host", "H", "", "set the `hostname` for the snapshot manually (default: $RESTIC_HOST). To prevent an expensive rescan use the \"parent\" flag")
f.StringVar(&opts.Host, "hostname", "", "set the `hostname` for the snapshot manually")
err := f.MarkDeprecated("hostname", "use --host")
if err != nil {
// MarkDeprecated only returns an error when the flag could not be found
panic(err)
}
f.StringArrayVar(&backupOptions.FilesFrom, "files-from", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
f.StringArrayVar(&backupOptions.FilesFromVerbatim, "files-from-verbatim", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
f.StringArrayVar(&backupOptions.FilesFromRaw, "files-from-raw", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
f.StringVar(&backupOptions.TimeStamp, "time", "", "`time` of the backup (ex. '2012-11-01 22:08:41') (default: now)")
f.BoolVar(&backupOptions.WithAtime, "with-atime", false, "store the atime for all files and directories")
f.BoolVar(&backupOptions.IgnoreInode, "ignore-inode", false, "ignore inode number and ctime changes when checking for modified files")
f.BoolVar(&backupOptions.IgnoreCtime, "ignore-ctime", false, "ignore ctime changes when checking for modified files")
f.BoolVarP(&backupOptions.DryRun, "dry-run", "n", false, "do not upload or write any data, just show what would be done")
f.BoolVar(&backupOptions.NoScan, "no-scan", false, "do not run scanner to estimate size of backup")
f.StringArrayVar(&opts.FilesFrom, "files-from", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
f.StringArrayVar(&opts.FilesFromVerbatim, "files-from-verbatim", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
f.StringArrayVar(&opts.FilesFromRaw, "files-from-raw", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
f.StringVar(&opts.TimeStamp, "time", "", "`time` of the backup (ex. '2012-11-01 22:08:41') (default: now)")
f.BoolVar(&opts.WithAtime, "with-atime", false, "store the atime for all files and directories")
f.BoolVar(&opts.IgnoreInode, "ignore-inode", false, "ignore inode number and ctime changes when checking for modified files")
f.BoolVar(&opts.IgnoreCtime, "ignore-ctime", false, "ignore ctime changes when checking for modified files")
f.BoolVarP(&opts.DryRun, "dry-run", "n", false, "do not upload or write any data, just show what would be done")
f.BoolVar(&opts.NoScan, "no-scan", false, "do not run scanner to estimate size of backup")
if runtime.GOOS == "windows" {
f.BoolVar(&backupOptions.UseFsSnapshot, "use-fs-snapshot", false, "use filesystem snapshot where possible (currently only Windows VSS)")
f.BoolVar(&opts.UseFsSnapshot, "use-fs-snapshot", false, "use filesystem snapshot where possible (currently only Windows VSS)")
f.BoolVar(&opts.ExcludeCloudFiles, "exclude-cloud-files", false, "excludes online-only cloud files (such as OneDrive Files On-Demand)")
}
f.BoolVar(&backupOptions.SkipIfUnchanged, "skip-if-unchanged", false, "skip snapshot creation if identical to parent snapshot")
f.BoolVar(&opts.SkipIfUnchanged, "skip-if-unchanged", false, "skip snapshot creation if identical to parent snapshot")
// parse read concurrency from env, on error the default value will be used
// CheckOptions bundles all options for the 'check' command.
@@ -59,14 +75,9 @@ type CheckOptions struct {
WithCache bool
}
var checkOptions CheckOptions
func init() {
cmdRoot.AddCommand(cmdCheck)
f := cmdCheck.Flags()
f.BoolVar(&checkOptions.ReadData, "read-data", false, "read all data blobs")
f.StringVar(&checkOptions.ReadDataSubset, "read-data-subset", "", "read a `subset` of data packs, specified as 'n/t' for specific part, or either 'x%' or 'x.y%' or a size in bytes with suffixes k/K, m/M, g/G, t/T for a random subset")
func (opts *CheckOptions) AddFlags(f *pflag.FlagSet) {
f.BoolVar(&opts.ReadData, "read-data", false, "read all data blobs")
f.StringVar(&opts.ReadDataSubset, "read-data-subset", "", "read a `subset` of data packs, specified as 'n/t' for specific part, or either 'x%' or 'x.y%' or a size in bytes with suffixes k/K, m/M, g/G, t/T for a random subset")
printer.E("error: repository still uses the S3 legacy layout\nYou must run `restic migrate s3legacy` to correct this.\n")
} else {
errorsFound = true
printer.E("%v\n", err)
}
}
if orphanedPacks > 0 && !errorsFound {
// hide notice if repository is damaged
printer.P("%d additional files were found in the repo, which likely contain duplicate data.\nThis is non-critical, you can run `restic prune` to correct this.\n", orphanedPacks)
if orphanedPacks > 0 {
summary.HintPrune = true
if !errorsFound {
// hide notice if repository is damaged
printer.P("%d additional files were found in the repo, which likely contain duplicate data.\nThis is non-critical, you can run `restic prune` to correct this.\n", orphanedPacks)
printer.P("read %d bytes (%.1f%%) of data packs\n", subsetSize, percentage)
}
if packs == nil {
return errors.Fatal("internal error: failed to select packs to check")
return summary, errors.Fatal("internal error: failed to select packs to check")
}
doReadData(packs)
}
if len(salvagePacks) > 0 {
printer.E("\nThe repository contains damaged pack files. These damaged files must be removed to repair the repository. This can be done using the following commands. Please read the troubleshooting guide at https://restic.readthedocs.io/en/stable/077_troubleshooting.html first.\n\n")
printer.E("Damaged pack files can be caused by backend problems, hardware problems or bugs in restic. Please open an issue at https://github.com/restic/restic/issues/new/choose for further troubleshooting!\n")
}
if ctx.Err() != nil {
return ctx.Err()
return summary, ctx.Err()
}
if errorsFound {
if len(salvagePacks) == 0 {
printer.E("\nThe repository is damaged and must be repaired. Please follow the troubleshooting guide at https://restic.readthedocs.io/en/stable/077_troubleshooting.html .\n\n")
f.VarP(&opts.Last, "keep-last", "l", "keep the last `n` snapshots (use 'unlimited' to keep all snapshots)")
f.VarP(&opts.Hourly, "keep-hourly", "H", "keep the last `n` hourly snapshots (use 'unlimited' to keep all hourly snapshots)")
f.VarP(&opts.Daily, "keep-daily", "d", "keep the last `n` daily snapshots (use 'unlimited' to keep all daily snapshots)")
f.VarP(&opts.Weekly, "keep-weekly", "w", "keep the last `n` weekly snapshots (use 'unlimited' to keep all weekly snapshots)")
f.VarP(&opts.Monthly, "keep-monthly", "m", "keep the last `n` monthly snapshots (use 'unlimited' to keep all monthly snapshots)")
f.VarP(&opts.Yearly, "keep-yearly", "y", "keep the last `n` yearly snapshots (use 'unlimited' to keep all yearly snapshots)")
f.VarP(&opts.Within, "keep-within", "", "keep snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&opts.WithinHourly, "keep-within-hourly", "", "keep hourly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&opts.WithinDaily, "keep-within-daily", "", "keep daily snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&opts.WithinWeekly, "keep-within-weekly", "", "keep weekly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&opts.WithinMonthly, "keep-within-monthly", "", "keep monthly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&opts.WithinYearly, "keep-within-yearly", "", "keep yearly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.Var(&opts.KeepTags, "keep-tag", "keep snapshots with this `taglist` (can be specified multiple times)")
f.BoolVar(&opts.UnsafeAllowRemoveAll, "unsafe-allow-remove-all", false, "allow deleting all snapshots of a snapshot group")
func init() {
cmdRoot.AddCommand(cmdForget)
f := cmdForget.Flags()
f.VarP(&forgetOptions.Last, "keep-last", "l", "keep the last `n` snapshots (use 'unlimited' to keep all snapshots)")
f.VarP(&forgetOptions.Hourly, "keep-hourly", "H", "keep the last `n` hourly snapshots (use 'unlimited' to keep all hourly snapshots)")
f.VarP(&forgetOptions.Daily, "keep-daily", "d", "keep the last `n` daily snapshots (use 'unlimited' to keep all daily snapshots)")
f.VarP(&forgetOptions.Weekly, "keep-weekly", "w", "keep the last `n` weekly snapshots (use 'unlimited' to keep all weekly snapshots)")
f.VarP(&forgetOptions.Monthly, "keep-monthly", "m", "keep the last `n` monthly snapshots (use 'unlimited' to keep all monthly snapshots)")
f.VarP(&forgetOptions.Yearly, "keep-yearly", "y", "keep the last `n` yearly snapshots (use 'unlimited' to keep all yearly snapshots)")
f.VarP(&forgetOptions.Within, "keep-within", "", "keep snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&forgetOptions.WithinHourly, "keep-within-hourly", "", "keep hourly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&forgetOptions.WithinDaily, "keep-within-daily", "", "keep daily snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&forgetOptions.WithinWeekly, "keep-within-weekly", "", "keep weekly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&forgetOptions.WithinMonthly, "keep-within-monthly", "", "keep monthly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.VarP(&forgetOptions.WithinYearly, "keep-within-yearly", "", "keep yearly snapshots that are newer than `duration` (eg. 1y5m7d2h) relative to the latest snapshot")
f.Var(&forgetOptions.KeepTags, "keep-tag", "keep snapshots with this `taglist` (can be specified multiple times)")
f.BoolVar(&forgetOptions.UnsafeAllowRemoveAll, "unsafe-allow-remove-all", false, "allow deleting all snapshots of a snapshot group")
f.BoolVar(&initOptions.CopyChunkerParameters, "copy-chunker-params", false, "copy chunker parameters from the secondary repository (useful with the copy command)")
f.StringVar(&initOptions.RepositoryVersion, "repository-version", "stable", "repository format version to use, allowed values are a format version, 'latest' and 'stable'")
f.BoolVar(&opts.CopyChunkerParameters, "copy-chunker-params", false, "copy chunker parameters from the secondary repository (useful with the copy command)")
f.StringVar(&opts.RepositoryVersion, "repository-version", "stable", "repository format version to use, allowed values are a format version, 'latest' and 'stable'")