* Sometimes the report function may absorb the error and return nil; in those cases, the bar.Add(1) method would execute even if the file deletion had failed.
* feat(backends/s3): add warmup support before repacks and restores
This commit introduces basic support for transitioning pack files stored
in cold storage to hot storage on S3 and S3-compatible providers.
To prevent unexpected behavior for existing users, the feature is gated
behind new flags:
- `s3.enable-restore`: opt-in flag (defaults to false)
- `s3.restore-days`: number of days for the restored objects to remain
in hot storage (defaults to `7`)
- `s3.restore-timeout`: maximum time to wait for a single restoration
(defaults to `1 day`)
- `s3.restore-tier`: retrieval tier at which the restore will be
processed (defaults to `Standard`)
As restoration times can be lengthy, this implementation preemptively
restores the selected packs to avoid repeated restore delays during
downloads. This is slightly sub-optimal, since we could process packs
out of order (as soon as each one has been transitioned), but doing so
would add too much complexity for a marginal gain in speed.
To keep things simple and avoid resource exhaustion with large numbers
of packs, no new concurrency mechanisms or goroutines were added; the
feature simply hooks into the existing routines.
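As an illustration only, here is a minimal sketch of how such a cold-to-hot transition could be issued and awaited through the minio-go client that the S3 backend builds on. The function and parameter names are placeholders, and the minio-go calls (`RestoreObject`, `StatObject`) are used as documented by that library rather than taken from this patch:

```go
// Sketch: request that S3 restore one pack from cold storage and wait until
// it becomes readable or the timeout expires. All names are illustrative.
package warmup

import (
	"context"
	"fmt"
	"time"

	"github.com/minio/minio-go/v7"
)

func warmupPack(ctx context.Context, client *minio.Client, bucket, key string,
	days int, tier minio.TierType, timeout time.Duration) error {
	// Issue the restore request (maps to the S3 RestoreObject API).
	req := minio.RestoreRequest{}
	req.SetDays(days)
	req.SetGlacierJobParameters(minio.GlacierJobParameters{Tier: tier})
	if err := client.RestoreObject(ctx, bucket, key, "", req); err != nil {
		return fmt.Errorf("restore %v: %w", key, err)
	}

	// Poll the object until the ongoing restore has finished.
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		info, err := client.StatObject(ctx, bucket, key, minio.StatObjectOptions{})
		if err != nil {
			return err
		}
		if info.Restore != nil && !info.Restore.OngoingRestore {
			return nil // pack is available in hot storage
		}
		time.Sleep(time.Minute)
	}
	return fmt.Errorf("restore of %v did not finish within %v", key, timeout)
}
```

With the defaults above, `s3.restore-days=7` and `s3.restore-tier=Standard` would correspond to `days=7` and `minio.TierStandard` in this sketch.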
**Limitations:**
- Tests against the backend were not written due to the lack of cold
storage class support in MinIO. Testing was done manually on
Scaleway's S3-compatible object storage. If necessary, we could
explore testing with LocalStack or mocks, though this requires further
discussion.
- Currently, this feature only warms up packs before restores and repacks
(prune/copy), as those are the two main use cases I came across.
Support for other commands may be added in future iterations, as long
as affected packs can be calculated in advance.
- The feature is gated behind a new alpha `s3-restore` feature flag to
make it explicit that the feature is still wet behind the ears.
- There is no explicit user notification for ongoing pack restorations.
While I think it is not necessary because of the opt-in flag, showing
some notice may improve usability (but would probably require major
refactoring of the progress bar, which I didn't want to start). Another
possibility would be to add a flag to send restore requests and fail
early.
See https://github.com/restic/restic/issues/3202
* ui: warn user when files are warming up from cold storage
* refactor: remove the PacksWarmer struct
Handling multiple handles directly in the backend is easier, and it
may open the door to reducing the number of requests made to the backend
in the future.
The size comparison for `--max-unused` only accounted for unused data,
but not for duplicate data. For repositories with a large amount of
duplicate data, this can result in a situation where no data gets pruned
even though the amount of unused data is much higher than specified.
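A minimal sketch of the corrected check, using hypothetical variable names rather than the actual prune statistics structure: duplicate bytes now count towards the unused total when comparing against `--max-unused`.

```go
// Sketch with hypothetical names: duplicates must count as unused data when
// deciding whether the --max-unused limit is already satisfied.
func pruneNeeded(unusedBytes, duplicateBytes, maxUnusedBytes uint64) bool {
	// Previously only unusedBytes was compared against the limit, so a
	// repository full of duplicates could stay above the intended threshold
	// without ever being pruned.
	return unusedBytes+duplicateBytes > maxUnusedBytes
}
```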
Those methods now only allow modifying snapshots. Internal data types
used by the repository are now read-only. The repository-internal code
can bypass the restrictions by wrapping the repository in an
`internalRepository` type.
The restriction itself is implemented via a new data type
`WriteableFileType` used in the SaveUnpacked and RemoveUnpacked methods.
This statically ensures that code cannot bypass the access restrictions.
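A simplified sketch of the idea; the details here are assumed for illustration and are not the exact restic code:

```go
package repository

import "context"

// FileType enumerates the repository file types (snapshots, indexes, packs, ...).
type FileType uint8

const (
	SnapshotFile FileType = iota
	IndexFile
	PackFile
)

// WriteableFileType only admits the types that callers outside the
// repository package may modify; currently that is just snapshots.
type WriteableFileType FileType

const WriteableSnapshotFile = WriteableFileType(SnapshotFile)

type Repository struct{ /* backend, index, ... */ }

// SaveUnpacked is the public entry point: the parameter type statically
// prevents callers from writing index or pack files.
func (r *Repository) SaveUnpacked(ctx context.Context, t WriteableFileType, buf []byte) error {
	return r.saveUnpacked(ctx, FileType(t), buf)
}

func (r *Repository) saveUnpacked(ctx context.Context, t FileType, buf []byte) error {
	// actual write to the backend elided
	return nil
}

// internalRepository gives repository-internal code (and tests, via a helper)
// access to the unrestricted variant.
type internalRepository struct{ *Repository }

func (r *internalRepository) SaveUnpacked(ctx context.Context, t FileType, buf []byte) error {
	return r.Repository.saveUnpacked(ctx, t, buf)
}
```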
The test changes are somewhat noisy as some of them modify repository
internals and therefore require some way to bypass the access
restrictions. This works by capturing an `internalRepository` or
`Backend` when creating the Repository using a test helper function.
* Refactor ea and sd helpers to use go-winio
Import go-winio and, instead of keeping copies of the functions that encode/decode extended attributes and enable process privileges for security descriptors, call the functions defined in go-winio.
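For illustration, a sketch of the kind of go-winio calls this refers to; the wrapper names are hypothetical, and only the `winio.*` identifiers come from the library:

```go
//go:build windows

package fs

import "github.com/Microsoft/go-winio"

// Sketch: encode/decode extended attributes via go-winio instead of local copies.
func encodeEAs(attrs []winio.ExtendedAttribute) ([]byte, error) {
	return winio.EncodeExtendedAttributes(attrs)
}

func decodeEAs(buf []byte) ([]winio.ExtendedAttribute, error) {
	return winio.DecodeExtendedAttributes(buf)
}

// Sketch: enable the privileges needed to read/write security descriptors
// while fn runs, again relying on go-winio instead of a hand-rolled helper.
func withSecurityPrivileges(fn func() error) error {
	return winio.RunWithPrivileges([]string{winio.SeBackupPrivilege, winio.SeRestorePrivilege}, fn)
}
```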
* ui: restore --delete indicates number of deleted files
* add a new field `FilesDeleted` to the State struct and to the JSON and text progress updaters
* increment the FilesDeleted count when a deleted file is reported
* ui: collect the files to be deleted, delete them, then update the count after deletion
* docs: update scripting output fields for restore command
ui: report deleted directories and rename the reporting function to ReportDeletion
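A rough sketch, with a hypothetical struct layout, of how the counter fits together based on the bullets above:

```go
package restoreui

// Sketch with hypothetical layout: the progress state gains a FilesDeleted
// counter that is emitted by both the text and JSON progress updaters.
type State struct {
	FilesFinished uint64
	FilesTotal    uint64
	FilesSkipped  uint64
	FilesDeleted  uint64 // new: files and directories removed by --delete
}

type Progress struct {
	s State
}

// ReportDeletion is called once a file or directory has actually been
// removed, so the counter only reflects successful deletions.
func (p *Progress) ReportDeletion() {
	p.s.FilesDeleted++
}
```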
Only the `Sys()` value from os.FileInfo is kept, as field `sys`, to
support Windows. Removing os.FileInfo ensures that, for values like
`ModTime` that existed in both data structures, there is no longer any
confusion about which value is actually used.
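An illustrative sketch (hypothetical struct, not the actual restic type) of what keeping only the `Sys()` value looks like:

```go
package fs

import (
	"os"
	"time"
)

// Sketch (hypothetical struct): instead of embedding the full os.FileInfo,
// only the platform-specific Sys() value is retained, so fields such as
// ModTime have exactly one source of truth.
type ExtendedFileInfo struct {
	Name    string
	Mode    os.FileMode
	ModTime time.Time
	Size    int64

	// sys keeps the raw os.FileInfo.Sys() value; it is only needed on
	// Windows (e.g. *syscall.Win32FileAttributeData).
	sys any
}

func newExtendedFileInfo(fi os.FileInfo) *ExtendedFileInfo {
	return &ExtendedFileInfo{
		Name:    fi.Name(),
		Mode:    fi.Mode(),
		ModTime: fi.ModTime(),
		Size:    fi.Size(),
		sys:     fi.Sys(),
	}
}
```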
Depending on which packages were changed, the VSS tests from ./cmd/restic
and the fs package may overlap in time. This causes snapshot creation to
fail. Add retries in that case.
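A small sketch of the kind of retry wrapper this describes (hypothetical helper, not the actual test code):

```go
package fs_test

import "time"

// createSnapshotWithRetry retries VSS snapshot creation a few times when a
// concurrently running test causes it to fail, instead of failing immediately.
func createSnapshotWithRetry(create func() error) error {
	var err error
	for attempt := 0; attempt < 5; attempt++ {
		if err = create(); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return err
}
```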
The actual implementation still relies on file paths, but with the
abstraction layer in place, an FS implementation can ensure atomic file
accesses in the future.
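A sketch of such an abstraction layer, with hypothetical interfaces chosen only to illustrate the description above:

```go
package fs

import (
	"io"
	"os"
)

// Sketch (hypothetical interfaces): file accesses go through an FS
// abstraction instead of raw paths, so a future implementation can open a
// handle once and serve metadata and content reads from it atomically.
type File interface {
	io.ReadCloser
	Stat() (os.FileInfo, error)
}

type FS interface {
	OpenFile(name string, flag int, metadataOnly bool) (File, error)
	Lstat(name string) (os.FileInfo, error)
}
```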