Compare commits


973 Commits

Author SHA1 Message Date
Alexander Neumann
4d93da9f68 Add VERSION file for 0.3.3 2017-01-08 10:46:43 +01:00
Alexander Neumann
4a6086a14b Merge pull request #718 from mholt/flag-priority
CLI options now override env vars
2017-01-02 20:31:20 +01:00
Matthew Holt
0a34a2d5d8 Consider the environment 2017-01-02 12:21:30 -07:00
Matthew Holt
a394b675b0 CLI options now override env vars 2017-01-02 11:14:22 -07:00
Alexander Neumann
04846b10bc Merge pull request #717 from restic/fix-367
Only add entries to indexes inside PackerManager
2017-01-02 17:18:59 +01:00
Alexander Neumann
f9501e97a2 Only add entries to indexes inside PackerManager
This was a nasty bug. Users reported that restic aborts with panic:

    panic: store new item in finalized index

The code calling panic() is in the Store() method of an index and guards
the failure case that an index is to be modified while it has already
been saved in the repo.

What happens here (at least that's what I suspect): PackerManager calls
Current() on a MasterIndex, which yields one index A. Concurrently,
another goroutine calls Repository.SaveFullIndex(), which in turn calls
MasterIndex.FullIndexes(), which (among others) yields the index A. Then
all indexes are marked as final. Then the first goroutine (the one in
PackerManager) is scheduled again and adds an entry to index A, which is
by now marked as final, so the panic occurs.

The commit solves this by removing MasterIndex.Current() and adding a
Store() method that stores the entry in one non-finalized index. This
method uses the same RWMutex as the other methods (e.g. FullIndexes()),
thereby ensuring that the full indexes can only be processed before or
after Store() is called.

Closes #367
2017-01-02 14:14:51 +01:00
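A minimal sketch of the locking scheme described above, with made-up types
(Entry, Index and the MasterIndex fields are assumptions, not restic's actual
code): the new Store() method picks a non-finalized index while holding the
same lock that the finalizing code takes, so finalization and storing can no
longer interleave.

    package index

    import "sync"

    type Entry struct{}

    type Index struct{ final bool }

    // Store would panic if idx.final were true, as in the message above.
    func (idx *Index) Store(e Entry) {}

    type MasterIndex struct {
        mu      sync.RWMutex
        indexes []*Index
    }

    // Store adds the entry to one non-finalized index. It holds the same
    // lock that listing/finalizing the full indexes takes, so an index
    // cannot be finalized between choosing it and storing into it.
    func (m *MasterIndex) Store(e Entry) {
        m.mu.Lock()
        defer m.mu.Unlock()

        for _, idx := range m.indexes {
            if !idx.final {
                idx.Store(e)
                return
            }
        }

        idx := &Index{}
        idx.Store(e)
        m.indexes = append(m.indexes, idx)
    }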
Alexander Neumann
3ef788765a Merge pull request #715 from zcalusic/master
Document REST backend
2017-01-02 11:13:35 +01:00
Alexander Neumann
8e16931949 Merge pull request #716 from zcalusic/rest-server-new-location
Rest server moved to https://github.com/restic/rest-server
2017-01-02 11:12:37 +01:00
Zlatko Čalušić
2267aca296 Rest server moved to https://github.com/restic/rest-server 2017-01-01 16:22:46 +01:00
Zlatko Čalušić
c70bc7ed0b Document REST backend
Closes #644
2016-12-31 13:14:44 +01:00
Alexander Neumann
8e3b81c5ec Merge pull request #713 from restic/update-travis
Update .travis.yml
2016-12-30 17:21:27 +01:00
Alexander Neumann
30975f7116 Update appveyor configuration 2016-12-30 17:07:42 +01:00
Alexander Neumann
0ef463d56a Update .travis.yml 2016-12-30 15:21:49 +01:00
Alexander Neumann
5132f5bfe6 Merge pull request #709 from restic/fix-708
Make sure cleanup is executed before exiting
2016-12-28 18:28:07 +01:00
Alexander Neumann
80457018d7 Make sure cleanup is executed before exiting
Closes #708
2016-12-28 10:53:31 +01:00
Alexander Neumann
b0997d05fb Merge pull request #704 from restic/remove-timestamp
Remove timestamp from `version` command
2016-12-19 22:22:43 +01:00
Alexander Neumann
3add2f0acb Merge pull request #703 from sjoerdsimons/master
Avoid duplicate backup paths
2016-12-19 22:21:18 +01:00
Alexander Neumann
166d1811a1 Remove timestamp from version command
This enables reproducible builds, for details see
https://reproducible-builds.org/docs/timestamps/
2016-12-19 21:14:12 +01:00
Sjoerd Simons
e1fc455079 Avoid duplicate backup paths
Target directories from the from-files argument get added to the command
line args, after which all command line args were appended to the same
variable again, causing duplicates. Split the used variables to avoid
this.

Signed-off-by: Sjoerd Simons <sjoerd@luon.net>
2016-12-18 23:23:57 +01:00
Alexander Neumann
98237bf942 Add VERSION file for 0.3.2 2016-12-18 18:53:03 +01:00
Alexander Neumann
75f21f23ff Merge pull request #700 from restic/debug-panic
Make sure SaveFile always returns a node
2016-12-14 21:29:04 +01:00
Alexander Neumann
9885aeac3b Make sure SaveFile always returns a node 2016-12-14 18:56:11 +01:00
Alexander Neumann
85c87b9ab9 Add VERSION file for 0.3.1 2016-12-13 21:36:22 +01:00
Alexander Neumann
51cd78e16c Merge pull request #691 from restic/fix-604
Correctly save modified files
2016-12-10 17:31:20 +01:00
Alexander Neumann
e6a40af06d Treat changed files as a warning, not an error 2016-12-10 17:14:13 +01:00
Alexander Neumann
3fcbb4ac25 Use new Node if file has changed
Closes #604
2016-12-10 16:54:20 +01:00
Alexander Neumann
7d71bad4eb Test if modified files are correctly saved 2016-12-10 16:36:58 +01:00
Alexander Neumann
dbdfed6343 Merge pull request #690 from zcalusic/master
Even if file changes size during backup, still save it
2016-12-10 12:36:56 +01:00
Alexander Neumann
5e48c1fadc Merge pull request #688 from restic/fix-686
Save snapshot after saving all pack files
2016-12-10 12:33:58 +01:00
Zlatko Čalušić
deb6dd7f72 Even if file changes size during backup, still save it
Previously such files (typically log files) wouldn't be backed up at
all!

The proper behaviour is to back up what we can, and warn the operator
that the file may not be complete. But it is a warning, not an error.

Closes #689
2016-12-10 12:24:45 +01:00
Alexander Neumann
c265673c8e Save snapshot after saving all pack files
Closes #686
2016-12-10 11:49:09 +01:00
Alexander Neumann
0fceeb20f1 Merge pull request #685 from jannic/patch-1
Update debug message
2016-12-06 08:16:33 +01:00
Jan Niehusmann
c5897e0d62 Update debug message
Since client.BucketExists was changed to return a separate 'found' value instead of reporting an error when the bucket doesn't exist, the error code path no longer implies a call to client.MakeBucket. So the second part of the debug message, "...trying to create the bucket", doesn't apply any more.
Also changed the name of the return value from 'ok' to 'found', matching the API documentation at https://docs.minio.io/docs/golang-client-api-reference#BucketExists.
2016-12-05 23:12:30 +01:00
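For illustration, the control flow implied by the new return value; the
bucketAPI interface below is a hypothetical stand-in for the minio client,
not its real API:

    package s3

    import "log"

    type bucketAPI interface {
        BucketExists(name string) (bool, error)
        MakeBucket(name string) error
    }

    // ensureBucket only tries to create the bucket when found is false,
    // so an error from BucketExists no longer implies "...trying to
    // create the bucket".
    func ensureBucket(c bucketAPI, name string) error {
        found, err := c.BucketExists(name)
        if err != nil {
            return err
        }
        if found {
            return nil
        }
        log.Printf("bucket %v not found, creating it", name)
        return c.MakeBucket(name)
    }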
Alexander Neumann
8d13f22c50 Merge pull request #683 from jannic/pr1
Omit "archived as %v" messages in quiet mode.
2016-12-03 11:15:24 +01:00
Alexander Neumann
1815536534 Update build.go 2016-12-03 11:14:30 +01:00
Jan Niehusmann
9267c25aa0 Omit "archived as %v" messages in quiet mode. 2016-12-03 10:28:49 +01:00
Alexander Neumann
281cbbdf2e Merge pull request #682 from jpmens/patch-1
Small typo in dry-run of remove snapshot
2016-12-03 10:11:58 +01:00
JP Mens
5996d671a0 Small typo in dry-run of remove snapshot 2016-12-02 17:33:05 +01:00
Alexander Neumann
ef9b974bcd Merge pull request #681 from zcalusic/master
Stop trying to detect Go version
2016-12-02 11:15:29 +01:00
Zlatko Čalušić
7e66b73ce0 Stop trying to detect Go version
It fails on pre-release versions, anyway.  It's enough to mention the oldest
supported version in README.md.  Anything older than the two latest Go releases
is a bad idea anyway, because it's unsupported by the Go development team.

Closes #680
2016-12-01 20:06:23 +01:00
Alexander Neumann
505a2097ad Manual: Add note about s3 bucket locations 2016-11-27 20:18:57 +01:00
Alexander Neumann
07380878fb Merge pull request #678 from restic/fix-676
Update github.com/elithrar/simple-scrypt
2016-11-19 19:22:44 +01:00
Alexander Neumann
3b29ae3c99 Update github.com/elithrar/simple-scrypt
Closes #676
2016-11-19 17:13:13 +01:00
Alexander Neumann
e5617b5fd1 Merge pull request #675 from restic/parent-check-hostname
Use the hostname filter to find a parent snapshot
2016-11-19 12:42:40 +01:00
Alexander Neumann
11f23ae663 Merge pull request #673 from Novex/restore-directory-metadata-for-existing-directories
Don't consider a pre-existing directory in the restore path to be a failure
2016-11-19 12:42:31 +01:00
Alexander Neumann
2828003d60 Test that existing files and dirs are restored 2016-11-15 21:41:41 +01:00
Alexander Neumann
16cef3b4c6 Use the hostname filter to find a parent snapshot
Closes #674
2016-11-15 21:04:51 +01:00
Alexander Neumann
699f39e3cf FindLatestSnapshot: Rename parameter to clarify meaning 2016-11-15 21:03:54 +01:00
Seb Patane
33b6a7381b Don't consider a pre-existing directory in the restore path to be a failure
* When a directory already exists, CreateDirAt returns an error stating so
  * This means that the restoreMetadata step is skipped, so no file permissions, owners, groups, etc. are restored for directories which already exist
* Not returning the error if it's a "directory exists" error means the metadata will get restored
  * It also removes the superfluous "error for ...: mkdir ...: file exists" messages
* This makes the behaviour of directories consistent with that of files (which always have their content & metadata restored, regardless of whether they existed or not)
2016-11-14 17:53:09 +10:00
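A minimal sketch of the behaviour described above (function names are
assumptions): an "already exists" error is not treated as a failure, so the
metadata restore step still runs for pre-existing directories.

    package main

    import (
        "fmt"
        "os"
    )

    // restoreDir creates the directory and then restores its metadata; a
    // directory that already exists is not treated as a failure.
    func restoreDir(path string, mode os.FileMode) error {
        if err := os.Mkdir(path, mode); err != nil && !os.IsExist(err) {
            return err
        }
        // permissions, owner, group and timestamps are restored in both
        // cases, matching how existing files are handled
        return restoreMetadata(path)
    }

    func restoreMetadata(path string) error {
        fmt.Println("restoring metadata for", path)
        return nil
    }

    func main() {
        _ = restoreDir("/tmp/restore-example", 0755)
    }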
Alexander Neumann
190673b24a Merge pull request #657 from AlexanderThaller/read_backup_files_from_file
Read files to backup from a file
2016-11-12 21:47:11 +01:00
Alexander Thaller
b7b03dbd4a Added a new flag to the backup subcommand that reads the files to back up from a file 2016-11-12 15:45:32 +01:00
Alexander Neumann
56009dd16e Merge pull request #670 from restic/remove-fadvise
Remove fadvise
2016-11-10 23:42:21 +01:00
Alexander Neumann
b56bde3f61 Remove fadvise
This commit removes the use of FADV_DONTNEED, which also purges active
cached pages for other processes.
2016-11-10 22:21:22 +01:00
Alexander Neumann
b1ed74eb43 Merge pull request #669 from zcalusic/master
Fix REST backend HTTP keepalive
2016-11-10 21:05:14 +01:00
Zlatko Čalušić
d8f0e7cbd1 Fix REST backend HTTP keepalive
This is subtle.  A combination of fast client disk (read: SSD) with lots
of files and a fast network connection to restic-server would suddenly
start getting lots of "dial tcp: connect: cannot assign requested
address" errors during the backup stage.  Further inspection revealed that
the client machine was plagued with TCP sockets in TIME_WAIT state.  When
the ephemeral port range was finally exhausted, no more sockets could be
opened, so restic would freak out.

To understand the magnitude of this problem: with ~18k ports and a default
timeout of 60 seconds, it means more than 300 HTTP connections per
second were created and torn down.  Yeah, restic-server is that
fast. :)

As it turns out, this behavior was the product of two subtle issues:

1) The body of HTTP response wasn't read completely with io.ReadFull()
   at the end of the Load() function.  This deactivated HTTP keepalive,
   so already open connections were not reused, but closed instead, and
   new ones opened for every new request.  io.Copy(ioutil.Discard,
   resp.Body) before resp.Body.Close() remedies this.

2) Even with the above fix, somehow having MaxIdleConnsPerHost at its
   default value of 2 wasn't enough to stop reconnecting.  It is hard to
   understand why this would be so detrimental; it could even be some
   subtle Go runtime bug.  Anyhow, setting this value to match the
   connection limit, as set by the connLimit global variable, finally nails
   this ugly bug.

I fixed several other places where the response body wasn't read in
full (or at all).  For example, json.NewDecoder() is also known not to
read the whole body of the response.

Unfortunately, this is not over yet. :( The check command is firing up
to 40 simultaneous connections to the restic-server.  Then, once again,
MaxIdleConnsPerHost is too low to support keepalive, and sockets in the
TIME_WAIT state pile up.  But, as this kind of concurrency absolutely
kills the poor disk on the server side, that is a completely different
bug.
2016-11-10 09:32:07 +01:00
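A sketch of the two fixes in plain Go; connLimit and the URL are placeholders:

    package main

    import (
        "io"
        "io/ioutil"
        "net/http"
    )

    const connLimit = 40 // placeholder for restic's connection limit

    func main() {
        // 2) allow as many idle connections per host as are opened in
        //    parallel, otherwise most of them are closed instead of reused
        client := &http.Client{
            Transport: &http.Transport{MaxIdleConnsPerHost: connLimit},
        }

        resp, err := client.Get("http://localhost:8000/config") // placeholder URL
        if err != nil {
            return
        }

        // ... use resp.Body ...

        // 1) drain the body completely before closing it, so the
        //    underlying TCP connection is eligible for keepalive reuse
        io.Copy(ioutil.Discard, resp.Body)
        resp.Body.Close()
    }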
Alexander Neumann
5e721afb5d doc/mkdocs: Improve code highlighting
Additionally, refresh the restic sample output.
2016-11-08 20:23:39 +01:00
Alexander Neumann
149c01a86a Merge pull request #659 from restic/device-freebsd
fs.DeviceID(): Return errors when fi is nil
2016-11-05 13:35:16 +01:00
Alexander Neumann
51322a1055 selectFunc: handle nil 2016-11-05 12:38:33 +01:00
Alexander Neumann
c5bc802ff0 fs.DeviceID(): Return errors when fi is nil 2016-11-05 12:38:17 +01:00
Alexander Neumann
6b88d3b5d0 Merge pull request #651 from justinclift/issue649v1
Remove redundant check of error var e
2016-10-26 16:06:14 +02:00
Justin Clift
ecc1f92787 Remove redundant check of error var e
As per #649
2016-10-25 18:10:53 +01:00
Alexander Neumann
d4f76fbe26 Merge pull request #650 from restic/forget-remove-index-load
forget: do not load index
2016-10-24 14:33:11 +02:00
Alexander Neumann
1dd72693f9 forget: Remove unneeded index loading 2016-10-24 14:01:23 +02:00
Alexander Neumann
fe1013e779 cmds/ls: Format timestamp 2016-10-19 22:11:37 +02:00
Alexander Neumann
84ca5172f0 Remove Debian UID from GPG key printout 2016-10-17 13:10:16 +02:00
Alexander Neumann
7c49255c2a Add hints how to use the go tool and direnv 2016-10-17 13:09:56 +02:00
Alexander Neumann
a5a9c42185 Merge pull request #646 from stakewinner00/master
don't print status info when running in the background
2016-10-15 20:09:42 +02:00
David
5f8a6cea6f don't print status info if running in the background
clean

fix OS issues & format code

fix issues
2016-10-15 18:12:19 +00:00
Alexander Neumann
50212805aa Merge pull request #643 from restic/update-poly1305
Update golang.org/x/crypto/poly1305
2016-10-14 15:51:57 +02:00
Alexander Neumann
cd7feb0148 Update golang.org/x/crypto/poly1305 2016-10-14 12:44:06 +02:00
Alexander Neumann
974f2f78a9 Merge pull request #641 from restic/fix-640
Improve error message for 'forget'
2016-10-13 20:41:29 +02:00
Alexander Neumann
250b36eeb1 Improve error message for 'forget'
$ bin/restic forget /d 7 /w 4 /m 12
    argument "/d" is not a snapshot ID, ignoring
    argument "7" is not a snapshot ID, ignoring
    argument "/w" is not a snapshot ID, ignoring
    argument "4" is not a snapshot ID, ignoring
    argument "/m" is not a snapshot ID, ignoring
    could not find a snapshot for ID "12", ignoring
2016-10-10 20:55:02 +02:00
Alexander Neumann
6f72164bbe Merge pull request #638 from hmsdao/patch-fixpath
Added long paths fix for samba network shares
2016-10-05 17:07:53 +02:00
Daniel Örn
ba8d960c8f using backticks instead of double quotes 2016-10-05 08:26:32 +02:00
Daniel Örn
84421a7c68 structured file with gofmt 2016-10-05 07:30:46 +02:00
Daniel Örn
5c7325f44a Added long paths fix for samba network shares 2016-10-05 07:09:56 +02:00
Alexander Neumann
c45b498a8b Merge pull request #637 from ckemper67/s3-join
Use path.Join to create the s3 object name within the bucket.
2016-10-03 22:38:28 +02:00
Christian Kemper
a4261dcc9c Use path.Join to create the s3 object name within the bucket.
path.Join already automatically skips empty path segments when
joining, so this simplifies the s3Path code.
2016-10-02 16:56:07 -07:00
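For illustration, path.Join drops empty path segments, which is what allows
the s3Path code to be simplified:

    package main

    import (
        "fmt"
        "path"
    )

    func main() {
        // an empty prefix simply disappears from the joined object name
        fmt.Println(path.Join("", "data", "id"))       // data/id
        fmt.Println(path.Join("prefix", "data", "id")) // prefix/data/id
    }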
Alexander Neumann
d1ecdf7441 Add VERSION file for 0.3.0 2016-10-02 18:24:14 +02:00
Alexander Neumann
088ca033f8 Reword README 2016-10-02 16:08:05 +02:00
Alexander Neumann
5b7dd32c20 Manual: Reword section about fuse support 2016-10-02 16:03:02 +02:00
Alexander Neumann
eb94395f3d Merge pull request #635 from restic/fix-633
Fix short-hand option clash
2016-09-29 21:39:05 +02:00
Alexander Neumann
22f5fc5739 Improve help text for slice options 2016-09-29 20:39:55 +02:00
Alexander Neumann
e994cacbfe Fix short-hand option clash 2016-09-29 20:37:45 +02:00
Alexander Neumann
3114d41cb7 Merge pull request #632 from restic/rework-debug
Rework debug message printing
2016-09-28 21:11:02 +02:00
Alexander Neumann
968b2ece43 Add section to the manual about debug message filters 2016-09-28 20:22:22 +02:00
Alexander Neumann
feed54caef Remove timing, simplify function matching 2016-09-28 20:10:40 +02:00
Alexander Neumann
4eddcb344e Update calls to debug.Log() 2016-09-28 19:56:03 +02:00
Alexander Neumann
2ae06a7a01 Rework debug log function 2016-09-28 19:56:03 +02:00
Alexander Neumann
25945718a1 Fix recursive call to debug.Log 2016-09-28 19:56:03 +02:00
Alexander Neumann
254188f38f Merge pull request #631 from restic/switch-to-cobra
Switch to cobra/pflag for CLI
2016-09-28 19:54:59 +02:00
Alexander Neumann
3601c39177 Add comments 2016-09-27 20:22:01 +02:00
Alexander Neumann
02f7bb0d4c Add mousetrap library for Windows 2016-09-27 20:13:22 +02:00
Alexander Neumann
565d72ef36 Use cobra for all commands 2016-09-27 19:53:03 +02:00
Alexander Neumann
3806623c23 Vendor cobra and pflag 2016-09-27 19:52:48 +02:00
Alexander Neumann
0fa12839a5 Remove go-flags 2016-09-27 19:52:48 +02:00
Alexander Neumann
a257a613d7 Fix debug log 2016-09-27 19:52:48 +02:00
Alexander Neumann
0a752b9fab test helpers: Always print stack trace 2016-09-27 19:50:26 +02:00
Alexander Neumann
eeec0d63c2 Merge pull request #630 from restic/remove-unused
Remove unused bits and pieces
2016-09-24 12:03:26 +02:00
Alexander Neumann
04d6b5da2f Remove more unused bits 2016-09-21 20:45:18 +02:00
Alexander Neumann
1dfd3b8aa3 Remove unused bits and pieces
Reported by https://github.com/dominikh/go-unused
2016-09-21 20:22:32 +02:00
Alexander Neumann
0873821b98 Add section about --one-file-system to manual 2016-09-18 20:18:52 +02:00
Alexander Neumann
0a9cbd47c7 Merge pull request #626 from rfjakob/master
Add "-x", "--one-file-system" option
2016-09-18 20:03:58 +02:00
Alexander Neumann
b61027b48d Merge pull request #627 from restic/fix-fuse-test
fuse: fix tests for snapshots with same timestamps
2016-09-18 19:55:12 +02:00
Jakob Unterwurzacher
53701891a1 Add "-x", "--one-file-system" option
Equivalent to rsync's "-x" option.

Notes to the naming:

"--exclude-other-filesystems"
is used by Duplicity,

"--one-file-system"
is used by rsync and tar.

The latter should be more familiar to the user.
2016-09-18 18:52:30 +02:00
Alexander Neumann
68b462d057 fuse: Add test for same timestamps 2016-09-18 18:30:25 +02:00
Alexander Neumann
649f789190 fuse: Fix test for timestamps with same second 2016-09-18 18:13:39 +02:00
Alexander Neumann
7b3e319398 Merge pull request #625 from restic/fix-624
fuse: correctly handle snapshots
2016-09-18 15:35:50 +02:00
Alexander Neumann
5494c1858e fuse: correctly handle snapshots
The fuse code kept adding snapshots to the top-level dir "snapshots". In
addition, snapshots with the same timestamp (same second) were not added
correctly; they are now suffixed with an incrementing counter, e.g.:

    dr-xr-xr-x 1 fd0 users 0 Sep 18 15:01 2016-09-18T15:01:44+02:00
    dr-xr-xr-x 1 fd0 users 0 Sep 18 15:01 2016-09-18T15:01:48+02:00
    dr-xr-xr-x 1 fd0 users 0 Sep 18 15:01 2016-09-18T15:01:48+02:00-1

Closes #624
2016-09-18 15:04:39 +02:00
Alexander Neumann
c5763e59d5 Merge pull request #623 from restic/fix-622
Improve error messages for open repo
2016-09-18 14:04:30 +02:00
Alexander Neumann
b090c73bd4 Remove wrapper functions in errors package
This way, our own errors package does not appear in the stack traces.
2016-09-18 13:28:59 +02:00
Alexander Neumann
2b9a408ccc Return a fatal for location.Parse 2016-09-18 13:28:41 +02:00
Alexander Neumann
83c35bd6b5 Do not print stack trace when open repo failed
Closes #622
2016-09-18 13:24:46 +02:00
Alexander Neumann
98b012a04e Merge pull request #620 from restic/watch-529
Add verbose error when marshalling a node fails
2016-09-17 11:05:00 +02:00
Alexander Neumann
a9af896ddd Add verbose error when marshalling a node fails
This code is introduced to watch for issue #529, in which two users
describe that restic failed to encode a time in a node to JSON with the
error message:

    panic: json: error calling MarshalJSON for type *restic.Node: json: error calling MarshalJSON for type time.Time: Time.MarshalJSON: year outside of range [0,9999]

The error message now is:

    panic: Marshal: json: error calling MarshalJSON for type *restic.Node: node /home/fd0/shared/work/restic/restic/.git/hooks/applypatch-msg.sample has invalid ModTime year -1: -0001-01-02 03:04:05.000000006 +0053 LMT
2016-09-17 10:43:04 +02:00
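A sketch of the kind of check that produces the more helpful message; the
Node type and its fields here are assumptions guided by the message above:

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    type Node struct {
        Path    string
        ModTime time.Time
    }

    // marshalNode rejects timestamps that time.Time cannot encode as JSON
    // (year outside [0, 9999]) with an error naming the offending file.
    func marshalNode(n *Node) ([]byte, error) {
        if y := n.ModTime.Year(); y < 0 || y > 9999 {
            return nil, fmt.Errorf("node %s has invalid ModTime year %d: %v", n.Path, y, n.ModTime)
        }
        return json.Marshal(n)
    }

    func main() {
        n := &Node{Path: "/tmp/file", ModTime: time.Now()}
        buf, err := marshalNode(n)
        fmt.Println(string(buf), err)
    }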
Alexander Neumann
309dca8179 Merge pull request #619 from restic/update-deps
Update all vendored dependencies
2016-09-15 22:50:44 +02:00
Alexander Neumann
8144cd24d6 Add golang.org/x/crypto/ed25519 2016-09-15 22:36:49 +02:00
Alexander Neumann
0ce8191be5 Add golang.org/x/crypto/curve25519 2016-09-15 22:36:29 +02:00
Alexander Neumann
595f2582fa Update golang.org/x/sys/unix 2016-09-15 22:35:45 +02:00
Alexander Neumann
da83bd8265 Update golang.org/x/net/context 2016-09-15 22:34:06 +02:00
Alexander Neumann
799cc37c22 Update golang.org/x/crypto/ssh 2016-09-15 22:33:32 +02:00
Alexander Neumann
35ba817128 Update golang.org/x/crypto/scrypt 2016-09-15 22:32:38 +02:00
Alexander Neumann
29a61950dd Update golang.org/x/crypto/poly1305 2016-09-15 22:32:17 +02:00
Alexander Neumann
acd39eaab5 Update golang.org/x/crypto/pbkdf2 2016-09-15 22:31:49 +02:00
Alexander Neumann
3d55b54f3d Update github.com/pkg/sftp 2016-09-15 22:31:18 +02:00
Alexander Neumann
daae3500dd Update branch for github.com/kr/fs 2016-09-15 22:30:27 +02:00
Alexander Neumann
64fe9ec048 Update github.com/jessevdk/go-flags 2016-09-15 22:29:49 +02:00
Alexander Neumann
cb80a70aca Update bazil.org/fuse 2016-09-15 22:26:23 +02:00
Alexander Neumann
24398d2b9d Merge pull request #618 from restic/rework-ci-fuse-tests
Cleanup CI tests for fuse
2016-09-15 21:53:44 +02:00
Alexander Neumann
d4a2d70089 Retry umount for integration tests 2016-09-15 21:37:50 +02:00
Alexander Neumann
9add72e9d6 Exclude unneeded test run without fuse tests 2016-09-15 21:37:50 +02:00
Alexander Neumann
e7fc908ff1 Run fuse tests on Linux 2016-09-15 21:25:59 +02:00
Alexander Neumann
4ffca0f4b4 Improve integration tests for fuse 2016-09-15 21:17:20 +02:00
Alexander Neumann
a0f3e94655 fuse: handle duplicate timestamps for snapshots
This closes #606, which fails because several snapshots are created with
exactly the same timestamp, and the code checks that for each snapshot
there is a dir in the fuse mount. This fails for colliding timestamps,
so we now add a suffix "-1", "-2" etc for each duplicate timestamp.
2016-09-15 21:15:49 +02:00
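A minimal sketch of the suffixing described above (names assumed):

    package main

    import (
        "fmt"
        "time"
    )

    // uniqueName appends "-1", "-2", ... when a snapshot timestamp has
    // been seen before, so every directory in the fuse mount is unique.
    func uniqueName(seen map[string]int, t time.Time) string {
        base := t.Format(time.RFC3339)
        n := seen[base]
        seen[base]++
        if n == 0 {
            return base
        }
        return fmt.Sprintf("%s-%d", base, n)
    }

    func main() {
        seen := make(map[string]int)
        ts := time.Date(2016, 9, 18, 15, 1, 48, 0, time.FixedZone("CEST", 2*3600))
        fmt.Println(uniqueName(seen, ts)) // 2016-09-18T15:01:48+02:00
        fmt.Println(uniqueName(seen, ts)) // 2016-09-18T15:01:48+02:00-1
    }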
Alexander Neumann
6485a6cdc0 Simplify mount logic 2016-09-15 19:59:07 +02:00
Alexander Neumann
931f5cdd33 Merge pull request #616 from restic/add-snapshot-tags
Add tags to snapshots
2016-09-14 20:58:12 +02:00
Alexander Neumann
3975d76f23 Correct filenames for expire policy tests 2016-09-13 21:20:41 +02:00
Alexander Neumann
bf6602bc1b Update golden file 2016-09-13 21:19:57 +02:00
Alexander Neumann
a85ffc66ae Add documentation for tags 2016-09-13 21:09:55 +02:00
Alexander Neumann
828267aaa3 Fix status for stdin archiver 2016-09-13 21:01:29 +02:00
Alexander Neumann
a77c615909 Fix 'forget' command with tags 2016-09-13 20:56:18 +02:00
Alexander Neumann
cfdf4c92f7 Add --keep-tag to forget command 2016-09-13 20:37:11 +02:00
Alexander Neumann
0f9fb37c78 Add tags to forget command 2016-09-13 20:20:55 +02:00
Alexander Neumann
673bce936e Add tags to 'backup' and 'snapshots' commands 2016-09-13 20:20:52 +02:00
Alexander Neumann
1f83635267 Add tags to snapshots and filter 2016-09-13 20:12:55 +02:00
Alexander Neumann
2d7e1b5804 Merge pull request #615 from kerel-fs/fix/manual
doc/Manual: Update usage help listing
2016-09-12 21:20:18 +02:00
Fabian P. Schmidt
085cf36199 doc/Manual: Update usage help listing 2016-09-12 20:42:38 +02:00
Alexander Neumann
ceb4a3ecc0 Merge pull request #613 from restic/read-password-from-file
Read password from file
2016-09-12 20:37:08 +02:00
Alexander Neumann
cf7795ce64 Merge pull request #614 from restic/improve-prune-stats
Improve statistics for `prune`
2016-09-12 20:37:00 +02:00
Alexander Neumann
223dc78acb Improve statistics for prune
Sample:

    counting files in repo
    building new index for repo
    [0:00] 100.00%  22 / 22 packs
    repository contains 22 packs (1377 blobs) with 90.610 MiB bytes
    processed 1377 blobs: 0 duplicate blobs, 0B duplicate
    load all snapshots
    find data that is still in use for 1 snapshots
    [0:00] 100.00%  1 / 1 snapshots
    found 409 of 1377 data blobs still in use, removing 968 blobs
    will delete 10 packs and rewrite 10 packs, this frees 64.232 MiB
    creating new index
    [0:00] 100.00%  7 / 7 packs
    saved new index as df467c6e
    done

Closes #581
2016-09-12 14:26:47 +02:00
Alexander Neumann
f63cd12569 Document new option 2016-09-12 14:10:36 +02:00
Alexander Neumann
65afeba19a Add option to read the password from a file 2016-09-12 14:09:22 +02:00
Alexander Neumann
791f73e0db Merge pull request #608 from rosetree/patch-1
Fix a small typo in `stdin` example
2016-09-05 19:27:26 +02:00
Micha Rosenbaum
8ded453ab0 Fix a small typo in stdin example
s/--stdin-filenam/--stdin-filename/
2016-09-05 19:01:53 +02:00
Alexander Neumann
e443454c4b Add OS and Arch to 'version' output 2016-09-04 15:46:50 +02:00
Alexander Neumann
1dd9a58e5a Merge pull request #600 from restic/restructure
WIP: restructure code
2016-09-04 15:36:26 +02:00
Alexander Neumann
b628bcee27 Remove redundant ParseID 2016-09-04 14:38:18 +02:00
Alexander Neumann
dfc0cbf3a8 Use one test password 2016-09-04 14:30:14 +02:00
Alexander Neumann
512a92895f Rename WithTestEnvironment -> Env 2016-09-04 14:29:04 +02:00
Alexander Neumann
6ab425f130 Remove SetupRepo 2016-09-04 13:24:51 +02:00
Alexander Neumann
f5b9ee53a3 Fix mock.Repository 2016-09-04 13:18:25 +02:00
Alexander Neumann
ea073f58cf Correct comment 2016-09-04 13:08:09 +02:00
Alexander Neumann
bef5c4acb8 Add mock.Repository, Rework SetupRepo 2016-09-04 12:52:43 +02:00
Alexander Neumann
b5b3c0eaf8 Add repository.SaveTree 2016-09-03 21:10:25 +02:00
Alexander Neumann
1fb80bf0e2 Fix fuse mount 2016-09-03 21:10:25 +02:00
Alexander Neumann
436332d5f2 LoadDataBlob -> LoadBlob 2016-09-03 21:10:25 +02:00
Alexander Neumann
fe8c12c798 Replace repository.SaveAndEncrypt with SaveBlob() 2016-09-03 21:10:25 +02:00
Alexander Neumann
1cc59010f5 Remove LoadJSONPack, un-export loadBlob 2016-09-03 21:10:25 +02:00
Alexander Neumann
878c1cd936 Add more comments 2016-09-03 21:10:25 +02:00
Alexander Neumann
5170c4898a Address hound comments 2016-09-03 21:10:25 +02:00
Alexander Neumann
2054e3c026 Fix tests 2016-09-03 21:10:25 +02:00
Alexander Neumann
ffbe05af9b Rework crypto, use restic.Repository everywhere 2016-09-03 21:10:25 +02:00
Alexander Neumann
84f95a09d7 Introduce LoadTreeBlob and LoadDataBlob 2016-09-03 21:10:25 +02:00
Alexander Neumann
573410afab Fix archiver test 2016-09-03 21:10:25 +02:00
Alexander Neumann
619939ccd9 Reorder methods in interface Repository 2016-09-03 21:10:25 +02:00
Alexander Neumann
714a5d1dc4 Move tree walker to restic/walk 2016-09-03 21:10:25 +02:00
Alexander Neumann
bc42dbdf87 Create package restic/errors 2016-09-03 21:10:24 +02:00
Alexander Neumann
765b5437bd Fix command 'dump' 2016-09-03 21:10:24 +02:00
Alexander Neumann
5d7b38cabf Remove sentinel errors 2016-09-03 21:10:24 +02:00
Alexander Neumann
debf1fce54 Remove IDSize, TestRandomID -> NewRandomID 2016-09-03 21:10:24 +02:00
Alexander Neumann
0045f2fb61 Remove functions 2016-09-03 21:10:24 +02:00
Alexander Neumann
5764b55aee Rename Node.FileType -> Type 2016-09-03 21:10:24 +02:00
Alexander Neumann
5e3a41dbd2 Rename struct member FileType -> Type 2016-09-03 21:10:24 +02:00
Alexander Neumann
88d0f24ce7 Reduce lock timeout to zero 2016-09-03 21:10:24 +02:00
Alexander Neumann
eb6e3ba8b3 Fix imported package 2016-09-03 21:10:24 +02:00
Alexander Neumann
528c301891 Last fixes for integration tests 2016-09-03 21:10:24 +02:00
Alexander Neumann
f7ae0cb78f Fix cmds/restic for new structure 2016-09-03 21:10:24 +02:00
Alexander Neumann
3695ba5882 Tests pass for restic/ 2016-09-03 21:10:24 +02:00
Alexander Neumann
4c95d2cfdc wip 2016-09-03 21:10:24 +02:00
Alexander Neumann
cc6a8b6e15 wip 2016-09-03 21:10:24 +02:00
Alexander Neumann
51d8e6aa28 wip 2016-09-03 21:10:24 +02:00
Alexander Neumann
f0600c1d5f wip 2016-09-03 21:10:24 +02:00
Alexander Neumann
90da66261a Copy ID from backend to restic 2016-09-03 21:10:24 +02:00
Alexander Neumann
82c2dafb23 Copy interfaces and basic types to package restic/ 2016-09-03 21:10:24 +02:00
Alexander Neumann
bfdd26c541 Remove (unused) cache implementation 2016-09-03 21:10:24 +02:00
Alexander Neumann
e699f6d1bd Update doc comment 2016-09-03 21:10:24 +02:00
Alexander Neumann
fae65ebc61 Merge pull request #602 from restic/update-chunker
Update chunker
2016-09-03 21:10:18 +02:00
Alexander Neumann
f744c3534d Update chunker 2016-09-03 20:56:21 +02:00
Alexander Neumann
9ce40761c8 Remove coveralls.io 2016-09-03 11:06:09 +02:00
Alexander Neumann
48924009fe Add codecov.io 2016-09-03 10:44:37 +02:00
Alexander Neumann
d497fb6966 Merge pull request #599 from restic/remove-lowlevel-syscall
Replace lowlevel syscall to restore symlink times
2016-08-31 19:28:23 +02:00
Alexander Neumann
5bc7f150f8 Merge pull request #598 from restic/update-minio-go
Update minio-go
2016-08-31 19:28:17 +02:00
Alexander Neumann
a6eda344a4 Update minio-go 2016-08-31 18:08:43 +02:00
Alexander Neumann
1aa52e5e1e Replace lowlevel syscall to restore symlink times 2016-08-30 21:45:16 +02:00
Alexander Neumann
769f06cea2 Merge pull request #580 from restic/remove-juju-errors
Change errors library
2016-08-30 21:23:53 +02:00
Alexander Neumann
8d90588020 Add better error message for 'cat' 2016-08-30 21:19:04 +02:00
Alexander Neumann
9cf63c99cf Wrap errors #3 2016-08-29 22:16:58 +02:00
Alexander Neumann
4a0f77650b Wrap errors #2 2016-08-29 21:54:50 +02:00
Alexander Neumann
b53679a24d Wrap errors 2016-08-29 21:38:34 +02:00
Alexander Neumann
b06845c545 Always use errors.Cause() for testing error values 2016-08-29 19:52:03 +02:00
Alexander Neumann
c55b6ee544 Add restic.Fatal/f
This is a new error which implements the restic.Fataler interface.
Errors of this type are written to stderr, then restic exits. For all
other errors, restic prints the stack trace (if available).
2016-08-29 19:52:00 +02:00
Alexander Neumann
045f545085 repository: Handle errors correctly 2016-08-29 19:23:50 +02:00
Alexander Neumann
038b63f7f7 CI: Check for packages importing "errors" from stdlib 2016-08-29 19:23:50 +02:00
Alexander Neumann
d3f4c816c7 Print error stack if available 2016-08-29 19:23:50 +02:00
Alexander Neumann
72aa6be38d Replace fmt.Errorf() by errors.Errorf() 2016-08-29 19:23:50 +02:00
Alexander Neumann
444a268ce0 Replace stdlib errors with github.com/pkg/errors 2016-08-29 19:23:50 +02:00
Alexander Neumann
17a38faa43 Drop dependency github.com/juju/errors 2016-08-29 19:23:50 +02:00
Alexander Neumann
24385ff56e Merge pull request #597 from restic/fix-panic-596
Fix panic for debug.Log() with empty string
2016-08-29 17:13:37 +02:00
Alexander Neumann
f51bc8e9b9 Fix panic for debug.Log() with empty string 2016-08-28 22:43:05 +02:00
Alexander Neumann
6f5bf45212 Merge pull request #595 from restic/fix-cat
Fix the cat command
2016-08-28 22:28:25 +02:00
Alexander Neumann
3af8f53097 Allow 'cat' for tree blobs 2016-08-28 21:23:46 +02:00
Alexander Neumann
6c6b0e2395 cat: Add warning when pack was modified 2016-08-28 21:21:04 +02:00
Alexander Neumann
26351522c5 Merge pull request #594 from restic/fix-checker
Remove check for filemode 0
2016-08-28 21:09:02 +02:00
Alexander Neumann
dec2e4788e Remove flaky test 2016-08-28 21:06:27 +02:00
Alexander Neumann
f9cd736b33 Fix flaky test 2016-08-28 21:04:35 +02:00
Alexander Neumann
553dd00741 Merge pull request #592 from restic/fix-587
Fix panic when parsing sftp URIs
2016-08-28 20:14:17 +02:00
Alexander Neumann
88634dac3a Remove check for filemode 0 2016-08-28 20:04:09 +02:00
Alexander Neumann
83924d0864 Improve error message when sftp fails
Also add a prefix for all errors written to stderr by the client
2016-08-28 19:56:46 +02:00
Alexander Neumann
22bde5b277 sftp: Add debug log messages 2016-08-28 19:47:12 +02:00
Alexander Neumann
cdbdf74811 Remove debug output for tests 2016-08-28 19:30:56 +02:00
Alexander Neumann
db16702263 Report errors to stderr for tests 2016-08-28 19:30:56 +02:00
Alexander Neumann
5dd137d53e Improve error handling with the ssh subprocess 2016-08-28 19:30:56 +02:00
Alexander Neumann
8de06bd453 Vendor github.com/pkg/errors 2016-08-28 19:30:56 +02:00
Alexander Neumann
a7e64afc0d Update sftp library 2016-08-28 19:30:56 +02:00
Alexander Neumann
ed09887d9e Fix panic when parsing sftp URIs
Closes #587
2016-08-28 19:30:56 +02:00
Alexander Neumann
d097d40237 Merge pull request #593 from restic/correct-backend-errors
local/sftp: Fix broken error handling
2016-08-28 19:30:50 +02:00
Alexander Neumann
196bbbd25b local/sftp: Fix broken error handling
This yields the following error message when the backup location is full:

    panic: write /home/fd0/mnt/temp/tmp/temp-987810174: no space left on device

Closes #540

Also connected to #574
2016-08-28 18:54:58 +02:00
Alexander Neumann
93e62c6f18 Merge pull request #591 from viric/packs-not-files
On prune report, packs instead of files + fix counter
2016-08-27 21:57:09 +02:00
Lluís Batlle i Rossell
3acf03986a On prune report, packs instead of files + fix counter 2016-08-27 20:04:35 +02:00
Alexander Neumann
12a904eb4b Fix reading password from stdin
This fixes a bug introduced in #585: whether stdin and stdout are
terminals must be checked separately.
2016-08-27 18:31:46 +02:00
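A sketch of checking stdin and stdout independently, using
golang.org/x/crypto/ssh/terminal; the actual call sites in restic are not
shown here:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh/terminal"
    )

    func main() {
        // stdin may be a pipe (password or backup data from stdin) while
        // stdout is still a terminal, and vice versa, so both file
        // descriptors have to be checked on their own.
        stdinTerm := terminal.IsTerminal(int(os.Stdin.Fd()))
        stdoutTerm := terminal.IsTerminal(int(os.Stdout.Fd()))
        fmt.Println("stdin is a terminal:", stdinTerm)
        fmt.Println("stdout is a terminal:", stdoutTerm)
    }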
Alexander Neumann
7f06ec98b8 Merge pull request #585 from trbs/progress_without_terminal
show progress every second when run non interactively
2016-08-27 10:10:18 +02:00
Alexander Neumann
d62264c837 Merge pull request #584 from restic/fix-panic
Add more safety checks for Unpacker
2016-08-27 10:09:57 +02:00
Alexander Neumann
b2a67d458c Remove unneeded packs without repacking 2016-08-25 22:35:22 +02:00
Alexander Neumann
de88fb2022 Simplify pack.List 2016-08-25 22:25:55 +02:00
trbs
71263b5090 show progress every second when run non interactively 2016-08-25 22:13:47 +02:00
Alexander Neumann
3fd1e4a992 Add backend.ReaderAt 2016-08-25 21:49:00 +02:00
Alexander Neumann
9f752b8306 Rework function for listing packs 2016-08-25 21:08:16 +02:00
Alexander Neumann
e07ae7631c Add more safety checks for Unpacker 2016-08-23 22:21:29 +02:00
Alexander Neumann
9fd941f6fc Merge pull request #583 from stuertz/windowsoutput
Fix progress output on Windows
2016-08-23 21:18:09 +02:00
Jan Stürtz
91c458bf74 Fixed gofmt 2016-08-22 22:07:10 +02:00
Jan Stürtz
374b1144de Don't guess the max width, get it from the terminal 2016-08-22 17:27:58 +02:00
Jan Stürtz
f05b0871e9 fixed maxlen computation (off by one) on small terminals 2016-08-22 17:27:03 +02:00
Jan Stürtz
4cb8fe3210 Fixed style hints from hound
- no else, when if has a return
- Improve Comment on Function
2016-08-21 23:10:28 +02:00
Jan Stürtz
08eb5b42eb Fix progress output on Windows
The Windows cmd shell is not aware of ANSI escape sequences and
prints them uninterpreted to the console. This is ugly.
Added a function to generate a platform-specific string for the escape
sequence. On Windows this will be 79 spaces with a trailing \r.
2016-08-21 22:38:22 +02:00
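A sketch of the platform-specific clear-line string described above; the
helper name is an assumption:

    package main

    import (
        "fmt"
        "runtime"
        "strings"
    )

    // clearLine returns whatever the local console understands as "wipe
    // the current line": spaces plus a carriage return on Windows (cmd.exe
    // does not interpret ANSI escapes), an ANSI erase-line sequence
    // everywhere else.
    func clearLine() string {
        if runtime.GOOS == "windows" {
            return strings.Repeat(" ", 79) + "\r"
        }
        return "\x1b[2K"
    }

    func main() {
        fmt.Print("working...")
        fmt.Print("\r" + clearLine())
        fmt.Println("done")
    }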
Alexander Neumann
1c703e4161 Merge pull request #579 from restic/debug-544
Properly close connections to s3 backend on Stat()
2016-08-21 17:10:07 +02:00
Alexander Neumann
ebd3723a06 Properly close the minio object on Stat()
Closes #544
2016-08-21 16:15:41 +02:00
Alexander Neumann
06b23edb39 Fix code for newer minio-go 2016-08-21 16:14:58 +02:00
Alexander Neumann
e893be3dec Update minio-go 2016-08-21 16:14:22 +02:00
Alexander Neumann
ca14942c80 Merge pull request #578 from restic/fix-build-on-arm
Fix build on linux/arm
2016-08-21 15:09:46 +02:00
Alexander Neumann
11d01fcd32 Merge pull request #577 from restic/dynamic-scrypt
Dynamically calibrate scrypt parameters
2016-08-21 15:00:24 +02:00
Alexander Neumann
5061607e77 x/sys/unix: Manually add FADV_* constants for Linux/arm 2016-08-21 14:59:15 +02:00
Alexander Neumann
69d8fe5b4f Add check for cross-compilation 2016-08-21 14:21:19 +02:00
Alexander Neumann
916efa4e1a Merge pull request #576 from restic/fix-documentation-forget
Improve documentation, add explanation and weekly
2016-08-21 13:51:38 +02:00
Alexander Neumann
a3492d69dd Use low-security scrypt KDF parameters for testing 2016-08-21 13:42:04 +02:00
Alexander Neumann
8e24c51233 Fix comments for constants 2016-08-21 13:13:05 +02:00
Alexander Neumann
d8107f77aa Limit the number of key files checked on SearchKey 2016-08-21 13:10:16 +02:00
Alexander Neumann
79e950b710 Remove dead code 2016-08-21 13:10:15 +02:00
Alexander Neumann
f0d7f3f1bd Calibrate scrypt for the current hardware
Closes #17
2016-08-21 13:10:08 +02:00
Alexander Neumann
9afec53c55 Remove crypto reader/writer (unused) 2016-08-21 13:10:08 +02:00
Alexander Neumann
11098d6eb0 Move KDF() to kdf.go 2016-08-21 13:10:08 +02:00
Alexander Neumann
7e6fc15ece Vendor github.com/elithrar/simple-scrypt 2016-08-21 13:10:08 +02:00
Alexander Neumann
78c0995853 Improve documentation, add explanation and weekly 2016-08-21 11:53:05 +02:00
Alexander Neumann
84c14e623d Merge pull request #575 from restic/remove-constants
Remove POSIX constants, reduce code duplication
2016-08-21 11:02:06 +02:00
Alexander Neumann
d965d703d1 Reduce duplicate code in wrappers for os 2016-08-21 10:42:07 +02:00
Alexander Neumann
b20921d836 Use constants from /x/sys/unix 2016-08-21 10:36:20 +02:00
Alexander Neumann
a78493f549 Update golang.org/x/sys/unix 2016-08-21 10:35:12 +02:00
Alexander Neumann
2be0aa9dbc Merge pull request #518 from restic/implement-prune
Implement prune
2016-08-21 09:22:22 +02:00
Alexander Neumann
aa29c68189 Fix progress for new index 2016-08-20 20:44:57 +02:00
Alexander Neumann
d3da30e8fb Use UTC for snapshot time based tests 2016-08-20 18:49:02 +02:00
Alexander Neumann
3337b5d3c4 Add prune/forget to the manual 2016-08-20 18:38:16 +02:00
Alexander Neumann
458448357c Add help texts which cross-link prune/forget 2016-08-20 18:33:24 +02:00
Alexander Neumann
27d0909302 forget: Remove message when no policy is specified 2016-08-20 18:15:36 +02:00
Alexander Neumann
5f0ebb71b2 forget: Allow filtering for a hostname 2016-08-20 17:59:47 +02:00
Alexander Neumann
00f647dc92 forget: Join paths by ":" 2016-08-20 17:59:10 +02:00
Alexander Neumann
8e7202bd6a Rename function in debug 'dump' command 2016-08-20 17:54:27 +02:00
Alexander Neumann
5cf7c827b8 forget: Do nothing if no policy is configured 2016-08-20 17:53:03 +02:00
Alexander Neumann
71f7f4f543 Add ExpirePolicy.Empty() 2016-08-20 17:51:48 +02:00
Alexander Neumann
bf47dba1c4 Add 'forget' command 2016-08-20 17:43:25 +02:00
Alexander Neumann
cbd457e557 Add Hourly expire functions 2016-08-20 15:55:23 +02:00
Alexander Neumann
6cf4b81558 Add functions to filter snapshots 2016-08-20 15:22:40 +02:00
Alexander Neumann
bb84d351f1 Revert "ID: move Str() to non-pointer receiver"
This reverts commit f102406cd7.
2016-08-19 20:45:19 +02:00
Alexander Neumann
a107e3cc84 Correct comment 2016-08-19 20:36:24 +02:00
Alexander Neumann
e934966b54 Merge pull request #573 from restic/fix-osxfuse-travis
Fix osxfuse on Travis/darwin
2016-08-19 19:23:24 +02:00
Alexander Neumann
bd9f23f1d2 Fix osxfuse on Travis/darwin 2016-08-19 19:04:02 +02:00
Alexander Neumann
2a2fb74ba8 Merge pull request #569 from restic/fix-568
Use the platform-independent function for joining
2016-08-19 17:53:09 +02:00
Alexander Neumann
bd819a5e81 Fix panic 2016-08-16 21:59:43 +02:00
Alexander Neumann
162629571d Add BenchmarkFindUsedBlobs 2016-08-16 21:30:14 +02:00
Alexander Neumann
2c04ad3c29 TestCreateSnapshot: free buffer 2016-08-16 21:30:14 +02:00
Alexander Neumann
238d3807e9 prune: Format duplicate bytes properly 2016-08-16 21:30:14 +02:00
Alexander Neumann
7f9d227725 Use progress in prune command 2016-08-16 21:30:14 +02:00
Alexander Neumann
8de6e5a627 Add progress option to index 2016-08-16 21:30:14 +02:00
Alexander Neumann
8d735cf6a9 Explicitly specify supersedes for new index 2016-08-16 21:30:14 +02:00
Alexander Neumann
29bb845f0e Rebuild index at the end of prune 2016-08-16 21:30:14 +02:00
Alexander Neumann
1bb2d59e38 Add Save() method to Index 2016-08-16 21:30:14 +02:00
Alexander Neumann
3ceb2ad3cf Progress: Call OnUpdate before OnDone 2016-08-16 21:30:14 +02:00
Alexander Neumann
009c803c8a prune: Use new Index 2016-08-16 21:30:14 +02:00
Alexander Neumann
c0ef1ec6fd Add RemovePack for index 2016-08-16 21:30:14 +02:00
Alexander Neumann
69c2e8ce7e Add first version of the prune command 2016-08-16 21:30:14 +02:00
Alexander Neumann
f102406cd7 ID: move Str() to non-pointer receiver 2016-08-16 21:30:14 +02:00
Alexander Neumann
302619a11a Move interfaces to package restic/types 2016-08-16 21:30:14 +02:00
Alexander Neumann
80bcae44e2 Decouple ListAllPacks from repository 2016-08-16 21:30:14 +02:00
Alexander Neumann
1f263a7683 Decouple index/ and repository/ 2016-08-16 21:30:14 +02:00
Alexander Neumann
3b57075109 Add global interface Repository 2016-08-16 21:30:14 +02:00
Alexander Neumann
3fa7304e94 Add interfaces to ListAllPacks 2016-08-16 21:30:14 +02:00
Alexander Neumann
47950b82a0 Add test for loading index from documentation 2016-08-16 21:30:14 +02:00
Alexander Neumann
9ecf7070af Implement Lookup() and Save() for new Index 2016-08-16 21:30:14 +02:00
Alexander Neumann
2310773798 Compute negative offsets ourselves in the s3 backend 2016-08-16 21:30:14 +02:00
Alexander Neumann
a60e3b5030 Make backend tests less verbose 2016-08-16 21:30:14 +02:00
Alexander Neumann
b350b443d0 Stop backend tests early on failure 2016-08-16 21:30:14 +02:00
Alexander Neumann
2c517e4a33 Add Index structures for Blobs 2016-08-16 21:30:14 +02:00
Alexander Neumann
4bdd59b4ad Index: Add DuplicateBlobs() 2016-08-16 21:30:14 +02:00
Alexander Neumann
f5daf33322 Add pack size to ListAllPacks 2016-08-16 21:30:14 +02:00
Alexander Neumann
1058a91b39 Add option to create duplicate blobs in TestCreateSnapshot 2016-08-16 21:30:14 +02:00
Alexander Neumann
240b8f273a Add more index tests 2016-08-16 21:30:14 +02:00
Alexander Neumann
6808523d34 Add String() for Blob 2016-08-16 21:30:14 +02:00
Alexander Neumann
bad6184ab5 Add new Index data structure 2016-08-16 21:30:14 +02:00
Alexander Neumann
6b384287f3 Return error when it occurs 2016-08-16 21:30:14 +02:00
Alexander Neumann
ef33cf12ca Fix Unpacker for packs < 2048 byte 2016-08-16 21:30:14 +02:00
Alexander Neumann
a5cbbb8b5a Fix BufferLoader for negative offset 2016-08-16 21:30:14 +02:00
Alexander Neumann
71924fb7c0 Add tests for Load() with negative offset 2016-08-16 21:30:14 +02:00
Alexander Neumann
b0565015cc Remove ReadSeeker 2016-08-16 21:30:14 +02:00
Alexander Neumann
fa283c6ecd Remove unused GetReader() 2016-08-16 21:30:14 +02:00
Alexander Neumann
94d157d97a Introduce interface pack.Loader 2016-08-16 21:30:14 +02:00
Alexander Neumann
f72f3dbc6a Buffer last 2048 bytes of a file for unpack 2016-08-16 21:28:55 +02:00
Alexander Neumann
3c3a180417 Move RandomID() to backend package 2016-08-16 21:28:55 +02:00
Alexander Neumann
fd6c854a21 Add TestResetRepository and BenchmarkCreateSnapshot 2016-08-16 21:28:55 +02:00
Alexander Neumann
e9cddc0be5 Fix TestFindUsedBlobs 2016-08-16 21:28:55 +02:00
Alexander Neumann
d7e5f11b78 Export FindUsedBlobs 2016-08-16 21:28:55 +02:00
Alexander Neumann
2b1b6d8c2a Export ListAllPacks 2016-08-16 21:28:55 +02:00
Alexander Neumann
acc2fa5816 Fix TestRepack
* Decrease number of blobs for use in test
* Fail the test when there's a duplicate blob
2016-08-16 21:28:54 +02:00
Alexander Neumann
6285f31604 Use pack.BlobSet instead of backend.IDSet 2016-08-16 21:28:54 +02:00
Alexander Neumann
3cca831b2e Fix invalid type in newly created packs 2016-08-16 21:28:54 +02:00
Alexander Neumann
cff6fea32a Fix 'cat' command 2016-08-16 21:28:54 +02:00
Alexander Neumann
17e1872544 Switch order of parameters to repo.LoadBlob() 2016-08-16 21:28:54 +02:00
Alexander Neumann
246302375d Index: Add multiple packs per blob, pack.Type
Change the index so that a blob can be contained in multiple packs.

Require passing the blob type to all lookup functions.
2016-08-16 21:28:54 +02:00
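A toy sketch of the data-structure change (all names are assumptions, not
restic's types): the lookup key now includes the blob type, and the value is
a list of packs:

    package main

    import "fmt"

    type BlobType uint8

    const (
        DataBlob BlobType = iota
        TreeBlob
    )

    // BlobHandle identifies a blob; lookups must now pass the type, too.
    type BlobHandle struct {
        ID   string
        Type BlobType
    }

    // Index maps a blob to every pack that contains it.
    type Index struct {
        packs map[BlobHandle][]string
    }

    func (idx *Index) Store(h BlobHandle, packID string) {
        idx.packs[h] = append(idx.packs[h], packID)
    }

    func (idx *Index) Lookup(h BlobHandle) []string {
        return idx.packs[h]
    }

    func main() {
        idx := &Index{packs: make(map[BlobHandle][]string)}
        h := BlobHandle{ID: "abcd", Type: DataBlob}
        idx.Store(h, "pack1")
        idx.Store(h, "pack2")
        fmt.Println(idx.Lookup(h)) // [pack1 pack2]
    }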
Alexander Neumann
231da4ff80 Remove old repacking code 2016-08-16 21:28:54 +02:00
Alexander Neumann
1b4b469440 Add pack.Handle and pack.Handles 2016-08-16 21:28:54 +02:00
Alexander Neumann
35e3762e37 Remove dead code 2016-08-16 21:28:54 +02:00
Alexander Neumann
7e732dbd2d Allow multiple entries in the index 2016-08-16 21:28:54 +02:00
Alexander Neumann
8b4d4ec25f Fix TestCreateSnapshot, do not store duplicate data 2016-08-16 21:28:54 +02:00
Alexander Neumann
035d0aeb31 Do not create duplicate content for tests 2016-08-16 21:28:54 +02:00
Alexander Neumann
f1bc181c5b Add more checks for tests 2016-08-16 21:28:54 +02:00
Alexander Neumann
50b724ca23 Fix stylistic issues with FindUsedBlobs 2016-08-16 21:28:54 +02:00
Alexander Neumann
6227821b4e Move functions to correct file 2016-08-16 21:28:54 +02:00
Alexander Neumann
810056c2bc Correct packages for tests 2016-08-16 21:28:54 +02:00
Alexander Neumann
34b3e3a095 Split index/repack functions to different files 2016-08-16 21:28:54 +02:00
Alexander Neumann
bdd085e9f1 Prevent loops when finding used blobs 2016-08-16 21:28:54 +02:00
Alexander Neumann
ffc3503e6f Add first version of FindUsedBlobs 2016-08-16 21:28:54 +02:00
Alexander Neumann
51b16ad57d Add handy functions to backend.IDSet 2016-08-16 21:28:54 +02:00
Alexander Neumann
723592d923 Move FindUsedBlobs to package restic 2016-08-16 21:28:54 +02:00
Alexander Neumann
22aa17091b Add test for FindUsedBlobs 2016-08-16 21:28:54 +02:00
Alexander Neumann
4720a7d807 Allow specifying chunker polynomial for tests 2016-08-16 21:28:54 +02:00
Alexander Neumann
d5323223f4 Change repository Init() function to allow better testing 2016-08-16 21:28:54 +02:00
Alexander Neumann
fe79177b40 Make TestCreateSnapshot return the snapshot itself 2016-08-16 21:28:54 +02:00
Alexander Neumann
5c32ae15c2 Move test checking repo code to checker package 2016-08-16 21:28:54 +02:00
Alexander Neumann
6c2334f505 Make TestCreateSnapshot less verbose 2016-08-16 21:28:54 +02:00
Alexander Neumann
b55ac2afd6 Make test files in test repo less random 2016-08-16 21:28:54 +02:00
Alexander Neumann
d9012b4a64 Add trees recursively to test snapshot 2016-08-16 21:28:54 +02:00
Alexander Neumann
952f124238 Use RandReader instead of rand directly
This is a fix to be backwards-compatible with Go < 1.6.
2016-08-16 21:28:54 +02:00
Alexander Neumann
14db71d3fa Move RandReader to repository package 2016-08-16 21:28:54 +02:00
Alexander Neumann
f59ffcaeae Correct comment 2016-08-16 21:28:54 +02:00
Alexander Neumann
d609e4a986 Extend plaintext buffer if necessary 2016-08-16 21:28:54 +02:00
Alexander Neumann
0e6c72ad1d Implement Repack() 2016-08-16 21:28:54 +02:00
Alexander Neumann
d5f42201c5 Fix test for Repack 2016-08-16 21:28:54 +02:00
Alexander Neumann
122a0944a6 Do not repack blobs that shouldn't be kept 2016-08-16 21:28:54 +02:00
Alexander Neumann
fa26ecc8f9 Make rebuild-index use the code in package repository 2016-08-16 21:28:54 +02:00
Alexander Neumann
00139648a0 Implement Repack() 2016-08-16 21:28:54 +02:00
Alexander Neumann
6ba38e9a38 Add tests for Repack() 2016-08-16 21:28:54 +02:00
Alexander Neumann
812cb0ba77 Update Go version in manual 2016-08-16 21:24:48 +02:00
Alexander Neumann
b5c397435c Merge pull request #571 from restic/raise-go-version
Require Go 1.6 or greater
2016-08-16 21:20:59 +02:00
Alexander Neumann
043424824c Only test cross-compilation on Go 1.7 2016-08-16 21:02:30 +02:00
Alexander Neumann
c88c48a29f Do not build toolchain with gox for Go >= 1.5 2016-08-16 20:51:46 +02:00
Alexander Neumann
2fa93b291a Update default Go version in Dockerfile 2016-08-16 20:51:31 +02:00
Alexander Neumann
1ad4d1aafd Require Go 1.6 or greater 2016-08-16 20:32:58 +02:00
Jan Stürtz
b108966b12 Fix 567 (#570)
* Patch for https://github.com/restic/restic/issues/567
Also back up files on Windows with pathnames longer than 255 chars (e.g. from node).

as fd0 says "So, as far as I can see, we need to have custom methods for all functions that accept a path, so that on Windows we can substitute the normal (possibly relative) path used within restic by an (absolute) UNC path, and only then call the underlying functions like os.Stat(), os.Lstat(), os.Open() and so on.

I've already thought about adding a generic abstraction for the file system (so we can mock this easier in tests), and this looks like a good opportunity to build it."

* fixed building tests

* Restructured patches
Add Wrapper for filepath.Walk

* using \\?\ requires absolute paths to be used.
Now all tests run

* used gofmt on the code

* Restructured Code. No patches dir, integrate the file functions into restic/fs/

There is still an issue, because restic.fs.Open has a different API than os.Open; it returns the result of OpenFile, but takes only a string

* Changed the last os.Open() calls to fs.Open() after extending the File interface

* fixed name-clash of restic.fs and fuse.fs detected by travis

* fixed fmt with gofmt

* c&p failure: removed fixpath() call.

* missing include

* fixed includes in linux variant

* Fix for Linux. Fd() is required on File interface

* done gofmt
2016-08-15 21:59:13 +02:00
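A minimal sketch of the long-path workaround; the helper name fixpath comes
from the commit, everything else is assumed. Note that UNC shares
(\\server\share) need the \\?\UNC\ prefix instead, which the separate samba
network shares fix in this list addresses.

    package main

    import (
        "fmt"
        "path/filepath"
        "runtime"
    )

    // fixpath turns a (possibly relative) path into an extended-length
    // \\?\ path on Windows, lifting the usual path-length limit; on other
    // systems it returns the path unchanged.
    func fixpath(name string) string {
        if runtime.GOOS != "windows" {
            return name
        }
        abs, err := filepath.Abs(name)
        if err != nil {
            return name
        }
        return `\\?\` + abs
    }

    func main() {
        fmt.Println(fixpath("some/deeply/nested/file"))
    }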
Alexander Neumann
1fe8deeb6e Use new URI syntax in documentation 2016-08-11 20:37:49 +02:00
Alexander Neumann
fa4570bde8 Always use forward slashes in file names 2016-08-11 19:41:47 +02:00
Alexander Neumann
f6c2787d80 Use the platform-independent function for joining 2016-08-11 19:37:22 +02:00
Alexander Neumann
4b8b625b90 Merge pull request #562 from damekr/implement-gomaxprocs-env
Issue-535: restic respects GOMAXPROCS env variable depending on Go version
2016-08-11 19:09:54 +02:00
damekr
be00d91967 Respect GOMAXPROCS variable
Closes #535
2016-08-08 21:37:20 +02:00
Alexander Neumann
e4a9905d6f Merge pull request #563 from restic/fix-build-script
Invert go version test for ldflags
2016-08-04 19:37:30 +02:00
Alexander Neumann
68ec29e7ec Invert go version test for ldflags 2016-08-03 22:04:03 +02:00
Alexander Neumann
d860ce0570 Merge pull request #559 from vrischmann/master
Fix the debug environment variable name in the manual
2016-08-02 22:12:18 +02:00
Alexander Neumann
fc9b27c533 Revert "Fix TestCreateSnapshot, do not generate duplicate data"
This reverts commit 628fb0fb72.
2016-08-02 22:11:55 +02:00
Vincent Rischmann
d4a9b546c1 Fix the debug environment variable name in the manual 2016-08-01 22:23:42 +02:00
Alexander Neumann
628fb0fb72 Fix TestCreateSnapshot, do not generate duplicate data 2016-08-01 22:01:34 +02:00
Alexander Neumann
2de233fe8b Merge pull request #558 from vrischmann/master
Detect a devel version correctly in LDFlags()
2016-08-01 21:49:20 +02:00
Vincent Rischmann
d2834b61fb Detect a devel version correctly in LDFlags() 2016-08-01 20:47:33 +02:00
Alexander Neumann
c7f5ac22eb add VERSION file for 0.2.0 2016-07-30 11:24:52 +02:00
Alexander Neumann
959df5cc14 Merge pull request #554 from restic/debug-fuse-panic
Fix fuse panic with empty files
2016-07-29 21:51:58 +02:00
Alexander Neumann
e575494353 Correct goreportcard badge URLs 2016-07-29 21:35:06 +02:00
Alexander Neumann
c0fb2c306d Merge pull request #553 from restic/update-minio-go
Update minio-go
2016-07-29 21:23:39 +02:00
Alexander Neumann
8418fed18e Handle empty files correctly 2016-07-29 21:18:32 +02:00
Alexander Neumann
3de989b7bb Fix panic with empty files 2016-07-29 21:05:36 +02:00
Alexander Neumann
5afda94a3c Handle reads with large offsets 2016-07-29 20:55:09 +02:00
Alexander Neumann
56dd4c0595 Update minio-go 2016-07-29 20:28:44 +02:00
Alexander Neumann
a9729eeb1b Merge pull request #552 from benagricola/fix-connection-leak
Explicitly Close() obj after ReadFull()
2016-07-29 20:28:22 +02:00
Ben Agricola
edb1843f24 Explicitly Close() obj after ReadFull()
Signed-off-by: Ben Agricola <bagricola@squiz.co.uk>
2016-07-29 14:18:02 +01:00
Alexander Neumann
e1960cadb2 Merge pull request #548 from mappu/patch-1
idset.go: micro-optimise away redundant scan
2016-07-28 20:05:23 +02:00
mappu
32985f7904 idset.go: micro-optimise away redundant scan 2016-07-26 09:36:31 +12:00
Alexander Neumann
e8e45fe2e3 Merge pull request #545 from restic/fix-528
Don't report valid types as invalid
2016-07-20 21:17:43 +02:00
Alexander Neumann
6b7ddf1b03 Don't report valid types as invalid
Closes #528
2016-07-20 20:46:57 +02:00
Alexander Neumann
b8c7622a8a Merge pull request #543 from mappu/master
Updates for build.go
2016-07-17 21:32:13 +02:00
mappu
983e509388 build.go: gofmt 2016-07-16 10:14:41 +12:00
mappu
c9400d5c61 build.go: Support cross-compilation via new --goos and --goarch flags 2016-07-16 09:42:41 +12:00
mappu
f967e90a96 build.go: Strip harder (add -w flag) 2016-07-16 09:42:22 +12:00
mappu
38d4522ea5 build.go: Add go1.7 to list of linkers requiring -Xfoo=bar syntax 2016-07-16 09:42:02 +12:00
Alexander Neumann
c9ab75a44c Fix build tag in run_integration_tests.go 2016-06-29 09:36:40 +02:00
Alexander Neumann
8b47ca5f98 Merge pull request #537 from xet7/master
Fix typo in Manual.md
2016-06-26 13:58:12 +02:00
Lauri Ojansivu
4e98c951e0 Fix typo in Manual.md 2016-06-26 14:35:48 +03:00
Alexander Neumann
814424fa6e Merge pull request #531 from restic/update-minio-go
Update minio-go
2016-06-08 22:00:16 +02:00
Alexander Neumann
902f619a06 Fix call to minio.New()
The last parameter changed semantics from `insecure` to `secure`.
2016-06-08 21:33:18 +02:00
Alexander Neumann
d66a98c2db Update minio-go
This fixes #520.
2016-06-08 21:11:48 +02:00
Alexander Neumann
e1b5593e07 Merge pull request #525 from koolhead17/miniorestic
Added Minio.io configuration steps to run as backend for restic.
2016-05-31 22:32:33 +02:00
koolhead17
789b8c8b49 Added minor modification to the doc as suggested by the author. 2016-05-25 02:26:00 +05:30
Alexander Neumann
795e3d5b6c Merge pull request #503 from gerdus/restore-latest
Add option to restore latest snapshot with optional path and source filters
2016-05-11 20:48:56 +02:00
Gerdus van Zyl
73e9cac5c4 gofmt + small doc fix 2016-05-10 22:20:03 +02:00
Gerdus van Zyl
8010a0d90c fix 2016-05-10 21:57:30 +02:00
Gerdus van Zyl
3cb68ddb0d Add option to restore latest snapshot with optional path and source filters
eg restic -r r1 restore latest --target restore2 --path "D:\dev\restic\bin\s1"
path and source filters also added to snapshot cmd
eg restic -r r1 snapshots --source nucore --path="D:\dev\restic\bin\s1"

2016-05-10 21:41:26 +02:00
Gerdus van Zyl
49f82f54b0 rebase, change source to host and add description to manual 2016-05-10 21:40:32 +02:00
Alexander Neumann
6bc7a71e55 Merge pull request #516 from restic/fix-flaky-test
Fix flaky worker cancel test
2016-05-09 22:12:39 +02:00
Alexander Neumann
2c1e590e47 Merge pull request #509 from restic/read-from-stdin
Allow reading data from stdin
2016-05-09 22:12:22 +02:00
Alexander Neumann
84f7d28abf Merge pull request #515 from viric/fix_traverse_order
Traverse paths in the same order as parent snapshot
2016-05-09 22:10:36 +02:00
Alexander Neumann
cb75737770 Merge pull request #514 from viric/fix_parent_search
Better backup parent snapshot search. Part of #513
2016-05-09 22:10:32 +02:00
Alexander Neumann
fb45ea139d Add barrier 2016-05-09 21:29:13 +02:00
Alexander Neumann
bce0bbeda2 Add Benchmark for ArchiveReader 2016-05-09 21:16:59 +02:00
Alexander Neumann
c6d934a685 Fix flaky worker cancel test 2016-05-09 20:41:55 +02:00
Alexander Neumann
143fde66bc Add documentation 2016-05-09 20:11:32 +02:00
Alexander Neumann
4146c09a04 Add test for ArchiveReader() 2016-05-09 20:11:32 +02:00
Alexander Neumann
43f9c2d36e backup: Save file size when reading from stdin 2016-05-09 20:11:32 +02:00
Alexander Neumann
5e0813ca04 fuse: Use correct file size in case it's zero 2016-05-09 20:11:32 +02:00
Alexander Neumann
6ee9baa9c5 fuse: Add debug logs 2016-05-09 20:11:32 +02:00
Alexander Neumann
7c76ff3aaf Allow reading backups from stdin 2016-05-09 20:11:32 +02:00
Alexander Neumann
deae1e7e29 Merge pull request #511 from restic/cleanups
Cleanups and test functions
2016-05-09 20:10:39 +02:00
Lluís Batlle i Rossell
4818a8e356 Fix gofmt 2016-05-09 16:31:59 +02:00
Lluís Batlle i Rossell
83aa63365a Not exporting baseNameSlice. No one else wants it. 2016-05-09 14:46:14 +02:00
Lluís Batlle i Rossell
aed73be93d Improve comment according to hound guidelines 2016-05-09 14:44:03 +02:00
Lluís Batlle i Rossell
4ea62ecbcc Traverse paths in the same order as parent snapshot
This is the 2nd partial fix to #513.

The archivepipe requires the snapshot paths and the backup paths to be
traversed in the same order, but they were sorted differently: the backup paths
by full path, and the snapshot paths by basename.
2016-05-09 14:32:17 +02:00
Lluís Batlle i Rossell
60c8c90d35 Better backup parent snapshot search. Part of #513
I look for the newest snapshot that contains all supplied paths to back up.
2016-05-09 12:42:12 +02:00
Alexander Neumann
c523b38abb Remove darwin/arm 2016-05-08 23:42:13 +02:00
Alexander Neumann
f928b30caa CI: Use gox with -osarch
This allows skipping cross-compilation to darwin/arm, which fails at the
moment.
2016-05-08 23:25:30 +02:00
Alexander Neumann
20afed4058 Checker: handle symlinks 2016-05-08 23:16:17 +02:00
Alexander Neumann
a2224e380b Address style issues identified by Hound 2016-05-08 22:38:38 +02:00
Alexander Neumann
31030baca3 Add comment 2016-05-08 13:51:33 +02:00
Alexander Neumann
173940cbdf Add repository.ListPack 2016-05-08 13:51:21 +02:00
Alexander Neumann
6fc3590838 Remove repository.SaveFrom() 2016-05-08 13:13:29 +02:00
Alexander Neumann
43f7a1fcd9 Correct log statement 2016-05-08 13:09:36 +02:00
Alexander Neumann
7faf272996 Progress: Use reference to sync.Once 2016-05-08 13:04:58 +02:00
Alexander Neumann
514a43f74b Add more tests 2016-05-08 12:25:01 +02:00
Alexander Neumann
6655511ab8 checker: test file mode 2016-05-08 12:25:01 +02:00
Alexander Neumann
168cfc2f6d Add testing helper functions 2016-05-08 12:25:01 +02:00
Alexander Neumann
6cfa0d502d Add LoadAllSnapshots() 2016-05-08 12:25:01 +02:00
Alexander Neumann
a996dbb9d6 check: Add more checks for nodes 2016-05-08 12:25:01 +02:00
Alexander Neumann
7572586ded Fix image URL for readthedocs.org 2016-05-08 12:24:09 +02:00
Alexander Neumann
a0ab9f2fdf Merge pull request #507 from restic/debug-minio-on-darwin
Update minio-go
2016-05-08 12:20:25 +02:00
Alexander Neumann
1b50d55d0c Update minio-go 2016-05-08 11:24:24 +02:00
Alexander Neumann
7dc7f0d295 Vagrantfile: Fix network for darwin 2016-05-07 23:38:41 +02:00
Alexander Neumann
3f8da47a0c Fix restic s3 backend for new minio-go version 2016-05-07 23:38:41 +02:00
Alexander Neumann
72fdd0bc09 Update minio-go 2016-05-07 23:38:41 +02:00
Alexander Neumann
be04a3b683 Travis: Update Go version, set ulimit 2016-05-07 23:38:30 +02:00
Alexander Neumann
60f1fbe35b Update domain for readthedocs 2016-05-05 13:25:06 +02:00
Alexander Neumann
4531456be5 Merge pull request #497 from Thor77/excludeFileExpandEnv
Expand environment-variables in exclude-files
2016-04-18 21:40:30 +02:00
Alexander Neumann
59ec393be1 Merge pull request #502 from restic/update-poly1305
Update crypto library
2016-04-18 21:40:16 +02:00
Alexander Neumann
306c0fea16 Update crypto library
This allows building restic with Go 1.3 on ARM.

Closes #501
2016-04-18 21:23:00 +02:00
Alexander Neumann
039019689a Merge pull request #500 from restic/fix-499
Fix exclude filters with trailing slash
2016-04-18 21:02:27 +02:00
Alexander Neumann
1eb896ae6f Merge pull request #498 from restic/fix-travis-drawin
Fix CI tests
2016-04-18 20:58:45 +02:00
Alexander Neumann
22338903bf Fix golint 2016-04-18 20:33:45 +02:00
Alexander Neumann
baece5eeb3 Add error checking for CI tests 2016-04-18 20:24:12 +02:00
Alexander Neumann
b2846ea49d Add error handling 2016-04-18 20:24:12 +02:00
Alexander Neumann
aa43b69651 Better error reporting for CI tests 2016-04-18 20:24:12 +02:00
Alexander Neumann
6fe25548bd Add another filter test 2016-04-17 22:04:42 +02:00
Alexander Neumann
9002eaa259 Fix exclude filters with trailing slash 2016-04-17 21:54:12 +02:00
Alexander Neumann
2e3c541237 Rework Go program to run CI tests 2016-04-17 17:55:13 +02:00
Alexander Neumann
ead6d11ecf Backend tests: remove debug 2016-04-17 17:39:14 +02:00
Alexander Neumann
41e3e12f4b Travis: correct exclude for darwin 2016-04-17 13:08:57 +02:00
Alexander Neumann
d4b202243a Update Go versions 2016-04-16 22:50:36 +02:00
Alexander Neumann
87250c4489 Call minio server with env variables 2016-04-16 22:49:23 +02:00
Alexander Neumann
4a576af855 Fix CI tests on darwin 2016-04-16 22:27:34 +02:00
Thor77
4fb6669196 add documentation for environment-var expanding in exclude-files 2016-04-16 22:07:36 +02:00
Thor77
9644399074 add environment-var expanding for exclude-files 2016-04-16 22:04:29 +02:00
Alexander Neumann
5b33a7a903 Merge pull request #496 from jzacsh/jzacsh-nit-on-doc-goget
doc: point to correct `gb` arg for `go get`
2016-04-14 08:58:12 +02:00
Jonathan Zacsh
1fe0e30d71 doc: point to correct gb arg for go get
per side-chat in https://github.com/restic/restic/issues/494#issuecomment-209591073
2016-04-13 15:52:08 -04:00
Alexander Neumann
75dd9e0fee Merge pull request #495 from restic/fix-494
Umount fuse in tests
2016-04-13 20:48:50 +02:00
Alexander Neumann
23d7464306 Umount fuse in tests
This corrects the order when the fuse mount is terminated by closing the
done channel: before, restic would close the fuse connection and only
afterwards try to remove the mount, which does not work.

Closes #494
2016-04-13 20:18:54 +02:00
Alexander Neumann
32a5778602 Merge pull request #490 from Thor77/backupExcludeFile
add backup --exclude-file
2016-04-06 00:09:46 +02:00
Thor77
b4493b4640 add documentation for --exclude[-file] and patterns 2016-04-01 15:56:52 +02:00
Thor77
1c1eacfc94 add backup --exclude-file 2016-04-01 13:53:22 +02:00
Alexander Neumann
aac2405e95 Merge pull request #489 from xmaka/xmaka-patch-1
add debian stable help
2016-03-31 22:01:14 +02:00
xmaka
90765a7dac add debian stable help
add a hint for Debian stable users to install 'go' from the repositories
2016-03-31 21:02:05 +02:00
Alexander Neumann
008337aad4 Merge pull request #487 from restic/use-fadvise
Purge read and written data from OS cache
2016-03-31 19:27:24 +02:00
Alexander Neumann
380e9b8119 Fix Makefile 2016-03-31 19:20:57 +02:00
Alexander Neumann
ddfadae6f6 Fix compilation for Go 1.3 2016-03-28 16:09:28 +02:00
Alexander Neumann
c8f46ce81d fs: Require Go1.4 for Linux 2016-03-28 15:48:18 +02:00
Alexander Neumann
c30f4a9134 fs: remove unneeded code 2016-03-28 15:33:10 +02:00
Alexander Neumann
b7713d2d34 local backend: Drop file content from cache after write 2016-03-28 15:31:25 +02:00
Alexander Neumann
5b5bb070b9 fs: Split out ClearCache from File 2016-03-28 15:31:25 +02:00
Alexander Neumann
feb664620a Use fadvise() to not cache the content of files read 2016-03-28 15:26:46 +02:00
Alexander Neumann
d2df2ad92d Add dep: golang.org/x/sys/unix 2016-03-28 15:25:45 +02:00
Alexander Neumann
9e81b158bf Add hints on how to get started 2016-03-12 11:28:10 +01:00
Alexander Neumann
49eb55c457 Merge pull request #484 from mholt/patch-1
Change ErrNoKeyFound message
2016-03-11 21:26:19 +01:00
Matt Holt
e6ba9e5849 Change ErrNoKeyFound message
For #438. I was just going to change it to "wrong password" but then I saw that it might actually be the case that no key could be found, so I changed it to what I did. Let me know if you'd like something different!
2016-03-11 08:21:01 -07:00
Alexander Neumann
a747cf994e Merge pull request #482 from restic/update-chunker
Update chunker
2016-03-10 11:02:19 +01:00
Alexander Neumann
afd0eb7f67 Merge pull request #481 from restic/fix-memory-usage
Use tempfiles for not-yet-uploaded pack files
2016-03-06 16:28:44 +01:00
Alexander Neumann
e4a6dd8c8c Use newRandReader instead of rand.New()
This needs to be done since for Go < 1.6 rand.Rand does not implement
io.Reader.
2016-03-06 14:21:02 +01:00
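
A minimal sketch of such a wrapper, assuming nothing about restic's helper beyond what the message states: it adapts a *rand.Rand to io.Reader so it can be used as a source of test data on older Go versions.

    package sketch

    import (
        "io"
        "math/rand"
    )

    // randReader adapts a *rand.Rand to io.Reader for Go versions where
    // rand.Rand does not implement io.Reader itself.
    type randReader struct {
        rnd *rand.Rand
    }

    func newRandReader(rnd *rand.Rand) io.Reader {
        return &randReader{rnd: rnd}
    }

    // Read fills p with pseudo-random bytes and never fails.
    func (r *randReader) Read(p []byte) (int, error) {
        for i := range p {
            p[i] = byte(r.rnd.Intn(256))
        }
        return len(p), nil
    }
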
Alexander Neumann
18c3024171 Unexport NewPackerManager 2016-03-06 14:20:48 +01:00
Alexander Neumann
1e1368eea3 Add randReader for tests
This can be removed once we require Go 1.6.
2016-03-06 13:59:06 +01:00
Alexander Neumann
cda7616c82 Remove tempdir for packerManager 2016-03-06 13:14:06 +01:00
Alexander Neumann
015cea0c50 PackerManager: Remove debug comment 2016-03-06 12:35:21 +01:00
Alexander Neumann
c0b5f5a8af Fix all code which uses repository.New() 2016-03-06 12:34:23 +01:00
Alexander Neumann
f956f60f9f PackerManager: use tempfiles instead of memory buffers 2016-03-06 12:26:25 +01:00
Alexander Neumann
f893ec57cb Add test and benchmark for PackerManager 2016-03-05 15:58:39 +01:00
Alexander Neumann
9e24238cdd Update chunker 2016-03-05 13:46:20 +01:00
Alexander Neumann
4dac6d45fd Merge pull request #459 from restic/debug-434
Debug issue #434
2016-02-27 14:02:59 +01:00
Alexander Neumann
8d1a5731f3 Remove integration test for 'optimize' 2016-02-27 13:38:05 +01:00
Alexander Neumann
04b3ce00e2 Remove command 'optimize'
It was discovered that the current code may delete still-referenced
blobs, so we'll remove the command for now.

This closes #434
2016-02-27 13:12:22 +01:00
Alexander Neumann
9386bfbafa checker: Do not use reference in checker errors 2016-02-27 13:10:35 +01:00
Alexander Neumann
a613e23e34 checker: Use backend.IDSet instead of custom struct 2016-02-27 13:10:35 +01:00
Alexander Neumann
5ce1375ddd Rename non-exported function 2016-02-27 13:10:35 +01:00
Alexander Neumann
4cefd456bb Refactor rebuild-index code
This code reads all pack headers from all packs and rebuilds the index
from scratch. Afterwards, all indexes are removed. This is needed
because in #434 the command `optimize` produced a broken index that
did not contain a blob any more. Running `rebuild-index` should fix
this.
2016-02-27 13:10:35 +01:00
Alexander Neumann
bc911f4609 cmd_dump: Only load pack header 2016-02-27 13:06:21 +01:00
Alexander Neumann
090920039f pack: Add test with backend.NewReadSeeker
This uses the new backend ReadSeeker with the unpacker.
2016-02-27 13:06:21 +01:00
Alexander Neumann
21a99397ff worker: fix tests
The test failed on Windows, probably because the machine used for CI was
too slow. The new test doesn't depend on timing any more.
2016-02-27 13:06:21 +01:00
Alexander Neumann
482fc9f51d Add backend.readSeeker
This struct implements an io.ReadSeeker on top of a backend. This is the
easiest way to make the packer compatible with the new backend without
loading a complete pack into a bytes.Buffer.
2016-02-27 13:06:21 +01:00
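
A hedged sketch of the idea (the loadAt function and size field are assumptions for illustration, not restic's actual backend API): a small struct tracks the current offset and translates Read/Seek calls into ranged loads.

    package sketch

    import (
        "errors"
        "io"
    )

    // readSeeker offers io.ReadSeeker semantics on top of a function that
    // can read a byte range of a stored item at an offset.
    type readSeeker struct {
        loadAt func(p []byte, off int64) (int, error)
        size   int64
        offset int64
    }

    // Read loads bytes at the current offset and advances it.
    func (rd *readSeeker) Read(p []byte) (int, error) {
        if rd.offset >= rd.size {
            return 0, io.EOF
        }
        n, err := rd.loadAt(p, rd.offset)
        rd.offset += int64(n)
        return n, err
    }

    // Seek only moves the in-memory offset; no data is loaded.
    func (rd *readSeeker) Seek(offset int64, whence int) (int64, error) {
        switch whence {
        case io.SeekStart:
            rd.offset = offset
        case io.SeekCurrent:
            rd.offset += offset
        case io.SeekEnd:
            rd.offset = rd.size + offset
        default:
            return 0, errors.New("invalid whence value")
        }
        return rd.offset, nil
    }
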
Alexander Neumann
b114ab7108 cmd_dump: Allow dumping all blobs in all packs
I had the suspicion that one of my repositories contained redundant
data blobs that are not recorded in the index. In order to check this, I
needed to dump information about the pack files without consulting the
index. The code here iterates over all packs, reads their headers, and
dumps information about them as JSON (so we can use other tools to parse
that information again).
2016-02-27 13:06:21 +01:00
Alexander Neumann
4cb4a3ac7f Add separate goroutine that closes the output chan
This allows iterating over the output channel without having to start
another Goroutine outside of the worker pool. This also removes the need
for calling Wait().
2016-02-27 13:06:21 +01:00
Alexander Neumann
ee422110c8 Make worker pools input/output chans symmetric
Input and output channels are now both of type `chan Job`, which makes it
possible to chain multiple worker pools together.
2016-02-27 13:06:21 +01:00
Alexander Neumann
e5ee4eba53 Add worker pool
A worker pool is needed whenever something should be done concurrently.
This small library makes it easy to create a worker pool by specifying
channels, concurrency and a function that should be executed for each
job and returns a result and an error.
2016-02-27 13:06:21 +01:00
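
A toy version of such a pool (illustrative names, not the restic worker package itself) wires an input channel, an output channel, a concurrency level and a per-job function together:

    package main

    import (
        "fmt"
        "sync"
    )

    // Job carries the input data and, after processing, the result and error.
    type Job struct {
        Data   interface{}
        Result interface{}
        Err    error
    }

    // run starts n workers that apply fn to each job from in and send the
    // finished job to out; out is closed once all workers are done.
    func run(n int, fn func(interface{}) (interface{}, error), in <-chan Job, out chan<- Job) {
        var wg sync.WaitGroup
        for i := 0; i < n; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for job := range in {
                    job.Result, job.Err = fn(job.Data)
                    out <- job
                }
            }()
        }
        go func() {
            wg.Wait()
            close(out)
        }()
    }

    func main() {
        in := make(chan Job)
        out := make(chan Job)
        run(4, func(d interface{}) (interface{}, error) {
            return d.(int) * 2, nil
        }, in, out)
        go func() {
            for i := 0; i < 10; i++ {
                in <- Job{Data: i}
            }
            close(in)
        }()
        for job := range out {
            fmt.Println(job.Result)
        }
    }

The follow-up commits above build on the same shape: symmetric chan Job channels, and a separate goroutine that closes the output channel once all workers are done.
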
Alexander Neumann
1e0b7dbdd2 Merge pull request #477 from restic/rest-backend
rest backend: Remove indirection on http.Client
2016-02-25 22:24:00 +01:00
Alexander Neumann
5e152b7753 Merge pull request #476 from restic/fix-475
Ignore invalid index files
2016-02-25 17:56:49 +01:00
Alexander Neumann
17f5b524a6 local: Replace matching code with proper Readdir() 2016-02-24 22:43:04 +01:00
Alexander Neumann
4ae16d7661 repository: Use backend.ID to load index
This commit uses ParallelWorkFuncParseID() to load all indexes and
ignores file names with an invalid format.

This fixes #475.
2016-02-24 22:41:32 +01:00
Alexander Neumann
77d85cee52 Merge pull request #472 from restic/update-chunker
Update chunker
2016-02-24 21:25:15 +01:00
Alexander Neumann
d84dec47bf Merge pull request #473 from restic/fix-coveralls
Fix coveralls.io service
2016-02-23 23:51:24 +01:00
Alexander Neumann
6c9170da51 Merge pull request #471 from fawick/move-cmd-folders
Refactor src/restic/cmd to src/cmds
2016-02-23 23:24:09 +01:00
Alexander Neumann
2ce49ea0ee Update code to use the new Chunker interface 2016-02-23 23:14:35 +01:00
Alexander Neumann
3db569c45a Update chunker 2016-02-23 23:14:35 +01:00
Alexander Neumann
98985019f9 Set GOPATH for goveralls 2016-02-23 23:03:38 +01:00
Alexander Neumann
699cb5ed8f Let goveralls fail if it needs to
Maybe this fixes coverage reporting...
2016-02-23 22:16:01 +01:00
Fabian Wickborn
6005bd9833 Fix relative paths in integrations tests 2016-02-23 09:21:50 +01:00
Fabian Wickborn
ee494ab939 Use relocated command folders in build.go and run_integration_tests.go
The resulting command structure is almost compatible with that of the gb
reference project (example-gsftp), as the subfolder for commands is
'cmds' instead of 'cmd'.
2016-02-23 09:18:09 +01:00
Fabian Wickborn
442780f214 Move commands to src/cmds 2016-02-23 07:21:28 +01:00
Alexander Neumann
9c47a8abfc Merge pull request #467 from fawick/master
Merging restic-server
2016-02-22 20:48:18 +01:00
Fabian Wickborn
dd5680dab6 restic-server: Create tmp folder 2016-02-22 20:14:11 +01:00
Fabian Wickborn
1cdbc8e1aa restic-server: Fix folder permissions 2016-02-22 20:14:11 +01:00
Fabian Wickborn
e4168fdde5 restic-server: Reduce memory footprint for saving blobs
Before, the restic-server read the whole blob (up to 8MB) into memory
prior to writing it to disk. Concurrent writes consumed a lot
of memory. This change writes the blob to a tmp file directly and
renames it afterwards if there were no errors.
2016-02-22 20:14:11 +01:00
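
The write-to-a-temp-file-then-rename pattern described above could look roughly like this (a generic sketch, not the restic-server code; directory layout and names are assumed):

    package sketch

    import (
        "io"
        "os"
        "path/filepath"
    )

    // saveBlob streams r into a temporary file inside dir and renames it
    // to its final name only if the whole write succeeded, so readers
    // never see partially written blobs and the blob is never held in
    // memory as a whole.
    func saveBlob(dir, name string, r io.Reader) error {
        tmp, err := os.CreateTemp(dir, "tmp-")
        if err != nil {
            return err
        }
        defer os.Remove(tmp.Name()) // no-op after a successful rename

        if _, err := io.Copy(tmp, r); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        return os.Rename(tmp.Name(), filepath.Join(dir, name))
    }
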
Fabian Wickborn
4749e610af restic-server: Fix content length for HEAD requests 2016-02-22 20:14:11 +01:00
Fabian Wickborn
51d86370a5 Fixes for the PR
- Removed external dependencies for the tests
- Prevent building restic-server with Go 1.3

Go versions 1.0, 1.1, and 1.2 are going to fail as well, but they
are "excluded" by README.md already.
2016-02-22 20:13:55 +01:00
Fabian Wickborn
d86c093480 Merged the restic-server by @bchapuis
Commit ID in fd0/restic-server at time of merge is
07fae00e7ddd8751b150e2ebf0bff8b2871c77ce
2016-02-22 20:12:50 +01:00
Alexander Neumann
bb7b9ef3fc Merge pull request #466 from ckemper67/sftp-path-clean
Cleaned up the sftp parsing logic.
2016-02-22 19:04:56 +01:00
Alexander Neumann
5dd65a5c19 Merge pull request #464 from restic/rest-backend
Add REST backend
2016-02-22 18:52:37 +01:00
Christian Kemper
6eb97ca6cc Cleaned up the sftp parsing logic.
Simplified and cleaned up the sftp parsing logic. Added support for
cleaning the directory path with path.Clean. Added more tests.
2016-02-21 10:50:36 -08:00
Alexander Neumann
9a822285eb Merge pull request #468 from ckemper67/s3-logging
Added missing handle to the s3.Stat log message output
2016-02-21 19:13:10 +01:00
Christian Kemper
c2716755f1 Added missing handle to the s3.Stat log message output 2016-02-21 09:59:27 -08:00
Alexander Neumann
7087efaa79 rest backend: Remove indirection on http.Client 2016-02-21 17:06:35 +01:00
Alexander Neumann
921c2f6069 rest backend: Improve documentation 2016-02-21 17:03:27 +01:00
Alexander Neumann
8ad98e8040 rest backend: Fixes 2016-02-21 16:35:25 +01:00
Alexander Neumann
f7a10a9b9c backend tests: Test accessing config
This commit adds real testing for accessing the config file with
different names.
2016-02-21 16:02:13 +01:00
Alexander Neumann
bd621197f8 Add rest backend to ui parser 2016-02-21 15:33:13 +01:00
Alexander Neumann
ec34da2d66 Add rest backend to location 2016-02-21 15:33:13 +01:00
Alexander Neumann
c2348ba768 Add REST backend
This is a port of the original work by @bchapuis in
https://github.com/restic/restic/pull/253
2016-02-21 15:33:13 +01:00
Alexander Neumann
75d69639e6 .gitignore: Add /vendor/pkg 2016-02-21 15:33:13 +01:00
Alexander Neumann
9485fd0c4d Merge pull request #462 from restic/update-documentation
doc: Reduce text in README, add to Manual
2016-02-21 14:01:57 +01:00
Alexander Neumann
9b93b3a72c doc: Reduce text in README, add to Manual 2016-02-21 13:37:55 +01:00
Alexander Neumann
eaa2f899d5 Merge pull request #455 from restic/mkdocs
Add mkdocs for readthedocs.org
2016-02-21 13:14:14 +01:00
Alexander Neumann
1edf9c1ee4 doc: Add paragraph about mkdocs 2016-02-21 13:04:45 +01:00
Alexander Neumann
45e9561b48 doc: Add paragraph about documentation version 2016-02-21 12:58:42 +01:00
Alexander Neumann
7de8bf6c27 Manual: Correct headings, add section about debug 2016-02-21 12:56:25 +01:00
Alexander Neumann
1c2992e2e5 Update manual 2016-02-21 12:52:31 +01:00
Alexander Neumann
dc994699d9 Remove obsolete structure image 2016-02-21 12:29:13 +01:00
Alexander Neumann
a13f9f14d0 Add something to the front page 2016-02-21 12:28:46 +01:00
Alexander Neumann
e71e2c74f8 README: Add readthedocs badge 2016-02-21 12:28:46 +01:00
Alexander Neumann
7c4bd662cb Manual: Fix shell blocks and ToC 2016-02-21 12:28:46 +01:00
Alexander Neumann
6559fa7382 Add mkdocs for readthedocs.org 2016-02-21 12:28:46 +01:00
Alexander Neumann
b9eea24728 Merge pull request #460 from restic/add-github-issue-template
Add issue template
2016-02-21 12:23:24 +01:00
Alexander Neumann
8f33afead4 Add issue template 2016-02-21 00:35:58 +01:00
Alexander Neumann
625c987d23 Move sftp test 2016-02-20 20:53:40 +01:00
Alexander Neumann
933479047f Merge pull request #458 from restic/use-gb-vendor
Properly vendor dependencies with gb-vendor
2016-02-20 19:19:34 +01:00
Alexander Neumann
eef73d466d Properly vendor dependencies with gb-vendor 2016-02-20 18:33:06 +01:00
Alexander Neumann
134d129986 Merge pull request #445 from restic/switch-to-gb
Switch to gb
2016-02-20 18:23:40 +01:00
Alexander Neumann
4dffd3de66 Update .travis.yml 2016-02-20 17:56:11 +01:00
Alexander Neumann
cc8a929d43 Vagrantfile: Update to new structure (and Go version)
Also add /.vagrant to .gitignore
2016-02-20 17:44:48 +01:00
Alexander Neumann
1ab8220022 Update CONTRIBUTING.md 2016-02-20 17:31:39 +01:00
Alexander Neumann
6e31f4bb19 Dockerfile: Update for gb 2016-02-20 17:31:39 +01:00
Alexander Neumann
9bda164ed3 Update Appveyor configuration 2016-02-20 17:31:22 +01:00
Alexander Neumann
c51889c157 Ignore vendor/ directory for gofmt tests 2016-02-20 17:31:22 +01:00
Alexander Neumann
b3c2febf79 Add output of build.go to gitignore 2016-02-20 17:31:22 +01:00
Alexander Neumann
96e66bb3e9 Update build.go and run_integration_tests.go 2016-02-20 17:31:22 +01:00
Alexander Neumann
841326d713 Move build.go and run_integration_tests.go to root 2016-02-20 17:31:21 +01:00
Alexander Neumann
7b6629802b Move "doc" to root dir 2016-02-20 17:31:21 +01:00
Alexander Neumann
c0bd660a9e Rename package
* github.com/restic/restic -> restic
2016-02-20 17:31:21 +01:00
Alexander Neumann
0a8ef79dad Move top-level files 2016-02-20 17:31:21 +01:00
Alexander Neumann
b63399d606 Move things around for gb
This moves all restic source files to src/, and all vendored
dependencies to vendor/src.
2016-02-20 17:31:20 +01:00
Alexander Neumann
273d028a82 Merge pull request #451 from fawick/master
Support custom SSH ports in URL of SFTP-repo
2016-02-16 21:13:42 +01:00
Fabian Wickborn
2d94e71117 Support custom SSH ports in URL of SFTP-repo 2016-02-15 19:17:41 +01:00
Alexander Neumann
d918d0f0c3 Merge pull request #443 from ckemper67/fix-387
Introduced a configurable object path prefix for s3 repositories to address #387
2016-02-14 22:43:40 +01:00
Alexander Neumann
517eff7e48 Merge pull request #447 from ckemper67/fix-442
Fix #442: set blocks and blocksize in fuse
2016-02-14 22:25:37 +01:00
Christian Kemper
1f81865847 set Blocks and BlockSize in Attr using a default blocksize of 512 to address #442. 2016-02-14 13:14:08 -08:00
Christian Kemper
f3f1404849 always use "restic" as the default prefix for parsing.
improved test error message to make it easier to find the problematic pattern
2016-02-14 11:26:46 -08:00
Christian Kemper
24b7514fe0 cleaner sharing of s3: and s3:// configuration 2016-02-14 09:45:58 -08:00
Christian Kemper
91dc14c9fc fix Hound warning 2016-02-14 09:27:00 -08:00
Christian Kemper
32c2cafa89 Simplify creation of the Config by moving it to a separate function. Simplify the parsing logic
by sharing the handling of s3: and s3://
2016-02-14 09:18:22 -08:00
Christian Kemper
74608531c7 strip off the trailing slash for the object prefix 2016-02-14 07:16:50 -08:00
Christian Kemper
48f85fbb09 replaced if-else chain with switch 2016-02-14 07:01:14 -08:00
Christian Kemper
535dfaf097 address first round of review comments 2016-02-14 06:40:15 -08:00
Christian Kemper
8f5ff379b7 Introduced a configurable object path prefix for s3 repositories.
Prepends the object path prefix to all s3 paths, which makes it possible
to have multiple independent restic backup repositories in a single s3 bucket.

Removed the hardcoded "restic" prefix from s3 paths.

Use "restic" as the default object path prefix for s3 if no other prefix gets specified.
This will retain backward compatibility with existing s3 repository configurations.

Simplified the parse flow to have a single point where we parse the bucket name and the prefix within the bucket.

Added tests for s3 object path prefix and the new default prefix to config_test and location_test.
2016-02-14 06:05:38 -08:00
Alexander Neumann
b7260cafac CONTRIBUTING: Add paragraph about PRs and tests 2016-02-14 11:49:10 +01:00
Alexander Neumann
3d27751b69 Merge pull request #441 from restic/add-note-gccgo
README: Add note about gccgo
2016-02-13 20:21:58 +01:00
Alexander Neumann
4053fe0a9b Merge pull request #440 from restic/fix-431
sftp: Use os.IsNotExist() for Test()
2016-02-13 19:45:14 +01:00
Alexander Neumann
e75856a7d2 README: Add note about gccgo
Closes #432
2016-02-13 19:20:25 +01:00
Alexander Neumann
50380f8c14 Merge pull request #439 from restic/fix-402
Add cleanup handler to restore terminal state
2016-02-13 19:14:59 +01:00
Alexander Neumann
4032af0b78 sftp: Use os.IsNotExist() for Test()
The sftp library introduced a change so that the error returned for
(among others) Lstat() can be used with os.IsNotExist() to test whether
the target file does not exist.
2016-02-13 19:11:41 +01:00
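
In other words, the existence test can now lean on the standard library check; a small sketch (the lstat parameter stands in for the sftp client's Lstat method, the rest is illustrative):

    package sketch

    import "os"

    // exists reports whether the item at name is present. A not-exist
    // error from lstat now simply means "no" instead of being treated as
    // a failure.
    func exists(lstat func(string) (os.FileInfo, error), name string) (bool, error) {
        _, err := lstat(name)
        if os.IsNotExist(err) {
            return false, nil
        }
        if err != nil {
            return false, err
        }
        return true, nil
    }
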
Alexander Neumann
2bb55f017d Update pkg/sftp library 2016-02-13 19:11:35 +01:00
Alexander Neumann
1287b307ac Add cleanup handler to restore terminal state
Closes #402
2016-02-13 18:29:26 +01:00
Alexander Neumann
24618305cc Merge pull request #429 from restic/fix-428
backup: Hide status output for narrow terminals
2016-02-10 18:02:18 +01:00
Alexander Neumann
4ec5050cbb backup: Hide status output for narrow terminals
closes #428
2016-02-10 17:28:48 +01:00
Alexander Neumann
eccbcb73a1 Merge pull request #425 from restic/fix-loadall
backend.LoadAll: return nil on expected error
2016-02-08 00:37:04 +01:00
Alexander Neumann
e781e1cf1d Merge pull request #424 from restic/fix-backup
Fix backup of root directory
2016-02-08 00:06:19 +01:00
Alexander Neumann
e9a21c1dc6 backend.LoadAll: return nil on expected error
The current code returns io.ErrUnexpectedEOF, but it is normal, expected
behaviour for LoadAll() to read until the item has been completely
loaded. Therefore, io.ErrUnexpectedEOF is no longer returned to
the caller.
2016-02-07 23:48:54 +01:00
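
The gist of the change, as a generic sketch (the function below is a stand-in, not restic's backend API): when reading into a buffer that may be larger than the stored item, io.ErrUnexpectedEOF simply means the item ended early and can be swallowed.

    package sketch

    import "io"

    // loadAll reads the complete item from r into a possibly oversized
    // buffer. A short read is expected here and not reported as an error.
    func loadAll(r io.Reader, buf []byte) ([]byte, error) {
        n, err := io.ReadFull(r, buf)
        if err == io.ErrUnexpectedEOF || err == io.EOF {
            err = nil
        }
        return buf[:n], err
    }
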
Alexander Neumann
a37ed45534 Add test for LoadAll with too large buffer
LoadAll() should not pass on io.ErrUnexpectedEOF, since the occurrence
of this error is normal.
2016-02-07 23:48:03 +01:00
Alexander Neumann
26484d0c7b pipe: Ignore excluded directories
Previously, top-level directories that were excluded were still added to
the top-level Dir{} object. This commit ignores them.
2016-02-07 23:23:16 +01:00
Alexander Neumann
68db75b4e3 pipe/archiver: Add more debug messages 2016-02-07 23:22:06 +01:00
Alexander Neumann
9048eb676b pipe: join replaced paths with original path
When saving `/`, it was replaced with the contents, but without the
proper path. So `/` was replaced by [`boot`, `bin`, `home`, ...], but
without prefixing the entry name with the proper path.
2016-02-07 22:18:37 +01:00
Alexander Neumann
6a5b022939 archiver: Add error reporting for directories
When an error occurred while walking a directory, this error wasn't
reported to the user before.
2016-02-07 22:18:00 +01:00
Alexander Neumann
57a24b2cdf Merge pull request #421 from restic/fix-fuse-with-special-names
fuse: Replace special node names with their content
2016-02-07 20:43:29 +01:00
Alexander Neumann
da47389483 Merge pull request #420 from restic/archiver-unique-paths
archiver: deduplicate list of paths to save
2016-02-07 20:31:18 +01:00
Alexander Neumann
4c329110c5 Merge pull request #419 from restic/fix-backup-root
Fix backup of "/"
2016-02-07 20:31:08 +01:00
Alexander Neumann
46fbae0d71 Merge pull request #418 from benmur/show-pack-name-on-rebuild-index-error
Handle pack loading errors in rebuild-index
2016-02-07 19:59:01 +01:00
Alexander Neumann
1835e988cf fuse: Replace special node names with their content
This is the companion fix for #419 and allows mounting repositories with
special directory names directly below the snapshot.

Closes #403
2016-02-07 19:51:29 +01:00
Alexander Neumann
537347d9b5 archiver: deduplicate list of paths to save 2016-02-07 19:35:35 +01:00
Alexander Neumann
811dbfa52d Make TestWalkerPath absolute before walking 2016-02-07 19:34:02 +01:00
Rached Ben Mustapha
ba35ca522a Handle pack loading errors in rebuild-index
Errors returned from backend.LoadAll() were not handled, leading to
these fatal errors from the unpacker trying to read the size from the end of
an empty buffer:

`seeking to read header length failed: bytes.Reader.Seek: negative position`

This change takes care of returning on error, as well as showing which pack
failed to load and validating pack integrity.
2016-02-07 18:30:47 +00:00
Alexander Neumann
c6a1f2e2f3 pipe: Handle special paths gracefully
This fixes handling special paths "." and "/". When such a path name is
found, it is replaced by the contents of the path before walking.
2016-02-07 19:30:00 +01:00
Alexander Neumann
604c27f001 Test pipe walker for invalid paths
It was discovered that when restic is instructed to save `/`, the tree
structures in the repository contain an invalid node.

When saving the dir `/home/user`, the following structure is created:

    snapshot
         -> tree
             nodes: ["user"]
             [...]

When the root directory `/` is saved, the structure is as follows:

    snapshot
         -> tree
             nodes: ["/"]
             [...]

This behavior is caused by the walker in pipe.go sending a node with the
name "." to the archiver, so this commit adds a test for invalid node
names.
2016-02-07 19:29:44 +01:00
Alexander Neumann
0535490618 Merge pull request #413 from restic/reduce-travis-matrix
Reduce jobs run on Travis
2016-02-06 14:10:54 +01:00
Alexander Neumann
9c0fc4930b Merge pull request #412 from restic/activate-hound
Enable HoundCI checking for Go
2016-02-06 13:37:13 +01:00
Alexander Neumann
a45f2cb205 Reduce jobs run on Travis
Skip building and testing restic with Go 1.3, 1.4 and 1.6 on osx.
2016-02-06 13:04:49 +01:00
Alexander Neumann
3fd5b5975a Merge pull request #411 from restic/fix-appveyor-sourceforge
Always use NetCologne SourceForge mirror
2016-02-06 13:01:55 +01:00
Alexander Neumann
9175f0b6af Enable HoundCI checking for Go 2016-02-05 21:15:46 +01:00
Alexander Neumann
1160d03279 Always use NetCologne SourceForge mirror
The one automatically selected by the SourceForge CDN fails currently:

    appveyor DownloadFile http://downloads.sourceforge.net/project/gnuwin32/tar/1.13-1/tar-1.13-1-bin.zip -FileName tar.zip
    Error downloading file: Unable to connect to the remote server Command exited with code 2
2016-02-05 21:13:14 +01:00
Alexander Neumann
789e0df49e Merge pull request #408 from benmur/validate-pack-checksums
checker: Validate pack checksums before unpacking
2016-02-05 19:45:21 +01:00
Rached Ben Mustapha
83bbf21f1a checker: Validate pack checksums before unpacking
This avoids reading a possibly invalid size at the end of a corrupted pack
2016-02-04 22:55:39 +01:00
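
Conceptually, the check hashes the downloaded pack and compares it with the ID it is stored under before trusting any length fields inside it. A minimal sketch (restic names pack files after the SHA-256 of their content; the function name is illustrative):

    package sketch

    import (
        "bytes"
        "crypto/sha256"
        "errors"
    )

    // checkPack verifies that the pack data actually hashes to the ID it
    // is stored under before any headers inside it are parsed.
    func checkPack(id [sha256.Size]byte, data []byte) error {
        sum := sha256.Sum256(data)
        if !bytes.Equal(sum[:], id[:]) {
            return errors.New("pack checksum does not match its ID")
        }
        return nil
    }
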
Alexander Neumann
7a8054d678 Add link to record of talk at C4 Cologne 2016-02-04 19:56:41 +01:00
Alexander Neumann
c0bbb7254d Merge pull request #406 from restic/fix-405
Test and fix for #405
2016-02-02 20:53:52 +01:00
Alexander Neumann
4f1f03cdb9 Move testing for known blobs to Archiver
This removes the list of in-flight blobs from the master index and
instead keeps a list of "known" blobs in the Archiver. "known" here
means: either already processed, or included in an index. This property
is tested atomically: when the blob is not in the list of "known" blobs,
it is added to the list and the caller is responsible for making this
happen (i.e. saving the blob).
2016-02-01 23:50:56 +01:00
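
The "known" list boils down to an atomic test-and-insert on a set of blob IDs, for example guarded by a mutex; a sketch under those assumptions (not the Archiver's actual code):

    package sketch

    import "sync"

    // knownBlobs records which blob IDs have been seen. Checking and
    // adding happen under one lock, so exactly one caller "wins" per blob
    // and is then responsible for saving it.
    type knownBlobs struct {
        mu  sync.Mutex
        ids map[string]struct{}
    }

    func newKnownBlobs() *knownBlobs {
        return &knownBlobs{ids: make(map[string]struct{})}
    }

    // checkAndMark returns true if id was already known; otherwise it
    // marks the id as known and returns false.
    func (k *knownBlobs) checkAndMark(id string) bool {
        k.mu.Lock()
        defer k.mu.Unlock()
        if _, ok := k.ids[id]; ok {
            return true
        }
        k.ids[id] = struct{}{}
        return false
    }
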
Alexander Neumann
382c766983 Move test for #405: Test Archiver instead of Repo 2016-02-01 23:50:41 +01:00
Alexander Neumann
f5f6e9cf37 Add test to reproduce #405 2016-02-01 23:35:01 +01:00
Alexander Neumann
cf88b33383 Merge pull request #398 from restic/update-minio-go
Update minio-library
2016-01-29 13:14:37 +01:00
Alexander Neumann
57615edd3a Merge pull request #400 from restic/update-go-appveyor
Update Go version for CI tests
2016-01-29 13:13:42 +01:00
Alexander Neumann
49cb88d158 Add note about lightning talk at FOSDEM 2016-01-28 22:40:52 +01:00
Alexander Neumann
1464d84cf5 Update minio-go
This should work with Go 1.3/1.4 again
2016-01-28 22:38:29 +01:00
Alexander Neumann
74ce027924 Merge pull request #397 from restic/remove-readcloser
Remove backend.ReadCloser, this is not used any more
2016-01-28 22:31:30 +01:00
Alexander Neumann
39f698886a Update Travis Go version 2016-01-28 22:28:17 +01:00
Alexander Neumann
9f8f2bc874 Update Go version for appveyor 2016-01-28 00:00:28 +01:00
Alexander Neumann
f8daadc5ef Update minio-library
This addresses #388
2016-01-27 23:23:47 +01:00
Alexander Neumann
f2371db2a9 repository: remove decryptReadCloser 2016-01-27 22:47:09 +01:00
Alexander Neumann
7d1775e000 Remove backend.ReadCloser 2016-01-27 22:35:18 +01:00
Alexander Neumann
ce4a7f16ca Merge pull request #395 from restic/rework-backend-interface
WIP: Rework backend interface
2016-01-27 22:11:20 +01:00
Alexander Neumann
322eca86bc mem backend: remove unused code 2016-01-27 21:33:48 +01:00
Alexander Neumann
3d06e6083a CI: download minio for the correct os and arch 2016-01-26 23:52:39 +01:00
Alexander Neumann
b64006221c CI: Download minio server, do not compile latest master 2016-01-26 23:50:27 +01:00
Alexander Neumann
1fde872016 CI: only build minio on Go 1.5.1 and above 2016-01-26 22:36:06 +01:00
Alexander Neumann
2701eabe39 Remove ContinuousReader 2016-01-26 22:35:51 +01:00
Alexander Neumann
c388101217 s3: Unexport structure 2016-01-26 22:19:44 +01:00
Alexander Neumann
1528d1ca83 sftp: Reduce duplicate code, add error check 2016-01-26 22:16:24 +01:00
Alexander Neumann
0bbad683c5 local: split out tempfile write function 2016-01-26 22:12:53 +01:00
Alexander Neumann
9ec435d863 local: remove duplicate code 2016-01-26 22:09:29 +01:00
Alexander Neumann
9b1c4b2dd6 local: Remove mutex and hash of open files 2016-01-26 22:08:20 +01:00
Alexander Neumann
7196971159 Remove unneeded HashingReader implementation 2016-01-26 22:00:11 +01:00
Alexander Neumann
eb1669a061 Add a lot of comments 2016-01-26 21:56:13 +01:00
Alexander Neumann
c34aa72538 Remove duplicate function str2id 2016-01-26 21:52:02 +01:00
Alexander Neumann
da883d6196 Cleanups, move Hash() to id.go 2016-01-26 21:49:33 +01:00
Alexander Neumann
b482df04ec Add more documentation 2016-01-26 21:49:22 +01:00
Alexander Neumann
5fcb5ae549 Reduce number of tests for Load() 2016-01-24 21:40:54 +01:00
Alexander Neumann
a0d484113a backends: Do not sort strings
Closes #305
2016-01-24 21:32:45 +01:00
Alexander Neumann
d9c87559b5 s3/local backend: Fix error for overwriting files 2016-01-24 21:13:24 +01:00
Alexander Neumann
1547d3b656 Remove Create() everywhere 2016-01-24 20:23:50 +01:00
Alexander Neumann
ea29ad6f96 Remove last occurrence of Create() 2016-01-24 19:30:14 +01:00
Alexander Neumann
1a95e48389 Remove unneeded special readers 2016-01-24 18:58:15 +01:00
Alexander Neumann
ac2fe4e04f Remove BlobWriter 2016-01-24 18:53:39 +01:00
Alexander Neumann
cfdd3a853d Remove usage of CreateEncryptedBlob() 2016-01-24 18:52:11 +01:00
Alexander Neumann
01e40e62bf repo: Use Save() instead of Create() 2016-01-24 18:50:41 +01:00
Alexander Neumann
35f9eae6c3 local backend: do not call Sync() on directory
This fails at least on Windows.
2016-01-24 18:01:00 +01:00
Alexander Neumann
fe565e17c3 Key: Use Save() instead of Create() 2016-01-24 17:52:44 +01:00
Alexander Neumann
4735a7f9b5 Improve random reader for tests 2016-01-24 17:47:45 +01:00
Alexander Neumann
54f8860612 backends: Add Save() 2016-01-24 16:59:38 +01:00
Alexander Neumann
ed172c06e0 backends: Add Save() function 2016-01-24 01:15:35 +01:00
Alexander Neumann
adbe9e2e1c backend: Remove GetReader 2016-01-24 01:00:27 +01:00
Alexander Neumann
2c3a6a6fa9 cmd_rebuild_index: Remove calls to GetReader() 2016-01-24 00:42:04 +01:00
Alexander Neumann
61551b0591 cmd_cat: Remove calls to GetReader() 2016-01-24 00:42:04 +01:00
Alexander Neumann
280d580ae2 checker: Use Load() instead of GetReader() 2016-01-24 00:42:04 +01:00
Alexander Neumann
782a1bf7b0 repository: remove GetDecryptReader() 2016-01-24 00:12:17 +01:00
Alexander Neumann
3191778d33 repository: Use Load() instead of GetReader() 2016-01-24 00:12:09 +01:00
Alexander Neumann
9bfa633187 repository/key: Use Load() instead of GetReader() 2016-01-23 23:48:19 +01:00
Alexander Neumann
9209dcfa26 Add LoadAll() 2016-01-23 23:41:55 +01:00
Alexander Neumann
919b40c6cf Add Stat() method to backend interface 2016-01-23 23:27:58 +01:00
Alexander Neumann
10b03eee27 Add comment 2016-01-23 23:27:40 +01:00
Alexander Neumann
0b50f9e02c Move MemoryBackend to backend/mem 2016-01-23 19:50:11 +01:00
Alexander Neumann
f05a32509e Add "Test" prefix to backend test functions 2016-01-23 19:12:02 +01:00
Alexander Neumann
e4f2e4a203 Remove old s3 tests 2016-01-23 19:11:47 +01:00
Alexander Neumann
6ba56befad Abort fuse integration test on error
Before, the fuse integration test was run but never finished, because
the testing code did not detect any errors when the fusermount binary
returned an error. This commit fixes that.
2016-01-23 19:10:43 +01:00
Alexander Neumann
15c8b85a4b Add tests for s3 backend 2016-01-23 18:46:04 +01:00
Alexander Neumann
c6db567e3f Add sftp tests 2016-01-23 18:30:02 +01:00
Alexander Neumann
4952f86682 Add test to prevent double create 2016-01-23 18:07:15 +01:00
Alexander Neumann
16b7cc7655 Remove redundant local tests 2016-01-23 17:45:33 +01:00
Alexander Neumann
99fab793c0 Remove timestamp from generated tests 2016-01-23 17:43:49 +01:00
Alexander Neumann
9423767827 Update test generate script, add tests to membackend 2016-01-23 17:42:26 +01:00
Alexander Neumann
e966df3fed Add Load() to MemBackend 2016-01-23 17:19:55 +01:00
Alexander Neumann
3aafa21887 Fix MockBackend.Load() 2016-01-23 17:19:47 +01:00
Alexander Neumann
9a490f9e01 Implement package-local tests 2016-01-23 17:08:03 +01:00
Alexander Neumann
0a24261afb Add Load() for all existing backends 2016-01-23 14:12:12 +01:00
Alexander Neumann
8b7bf8691d backend: Remove Get()
This is the first commit that removes the (redundant) Get() method of
the backend interface. Get(x, y) is equivalent to GetReader(x, y, 0, 0).
2016-01-23 13:13:05 +01:00
Alexander Neumann
d3a6e2a991 Drop requirement from List()
Closes #305
2016-01-23 12:47:16 +01:00
Alexander Neumann
171cd0dfe1 Add backend.Handle, add comments 2016-01-23 12:46:20 +01:00
Alexander Neumann
4d7e802c44 Merge pull request #392 from restic/fix-build
Allow saving duplicate blobs in the repacker
2016-01-17 22:02:09 +01:00
Alexander Neumann
109a120b39 Fix RandomReader 2016-01-17 21:27:51 +01:00
Alexander Neumann
f53008d916 Allow saving duplicate blobs in the repacker
This adds code to the master index to allow saving duplicate blobs
within the repacker. In this mode, only the list of currently in-flight
blobs is consulted, not the index. This is correct because, while
repacking, a unique list of blobs is saved to the index again.
2016-01-17 21:14:55 +01:00
Alexander Neumann
34c1056efc Merge pull request #358 from episource/iss358_pack_not_referened_add_test
Closes #365
Closes #358
2016-01-17 20:07:56 +01:00
Alexander Neumann
00e7a76ecc Merge branch 'iss358_pack_not_referenced_fix' of https://github.com/episource/restic into episource-358 2016-01-17 20:07:31 +01:00
Alexander Neumann
e689d499e7 Improve RandomReader 2016-01-17 19:46:48 +01:00
Alexander Neumann
5df9bdec9a Merge pull request #366 from restic/howeyc-s3-minio
rebase: Switch s3 library to allow for s3 compatible backends
2016-01-17 19:21:13 +01:00
Alexander Neumann
c722851f92 Update Dockerfile 2016-01-17 18:50:50 +01:00
Alexander Neumann
877f3f61a0 Add flag to disable cross-compilation 2016-01-17 18:49:43 +01:00
Alexander Neumann
1dd4c52a8b Add comments, configure flag library 2016-01-17 18:48:05 +01:00
Alexander Neumann
c6e1696f07 Fix debug message 2016-01-17 18:48:05 +01:00
Alexander Neumann
1483e15e4e Update s3 library (again) 2016-01-17 18:48:05 +01:00
Alexander Neumann
6a56d5b87b Repo: Add more debug 2016-01-17 18:48:05 +01:00
Alexander Neumann
289aee9448 Adapt s3 backend to new library 2016-01-17 18:48:05 +01:00
Alexander Neumann
0e9236475b Update s3 library (again) 2016-01-17 18:48:05 +01:00
Alexander Neumann
181480b68b Update s3 library 2016-01-17 18:48:05 +01:00
Alexander Neumann
61e66e936f Fix imports 2016-01-17 18:48:05 +01:00
Alexander Neumann
314182e7e0 Add debug, do not create bucket if it already exists 2016-01-17 18:48:05 +01:00
Alexander Neumann
69e6e9e5c7 Update s3 library (again) 2016-01-17 18:48:05 +01:00
Alexander Neumann
fc347ba60f Add new test with multiple writes for backends 2016-01-17 18:48:05 +01:00
Alexander Neumann
26eb859663 Dockerfile: Add sftp server binary 2016-01-17 18:48:05 +01:00
Alexander Neumann
338ad42273 location: fix tests 2016-01-17 18:48:05 +01:00
Alexander Neumann
5722ccfcda Fix s3 backend, add more tests 2016-01-17 18:48:05 +01:00
Alexander Neumann
0237b0d972 Update s3 library again 2016-01-17 18:48:05 +01:00
Alexander Neumann
a850041cf0 ContReader: Remove debug output 2016-01-17 18:48:05 +01:00
Alexander Neumann
5071f28d55 ReadCloser: Call close if reader implements it 2016-01-17 18:48:05 +01:00
Alexander Neumann
e0361b1f9f Add ContinuousReader 2016-01-17 18:48:05 +01:00
Alexander Neumann
f319354174 Update s3 library again 2016-01-17 18:48:05 +01:00
Alexander Neumann
a73c4bd5a7 update s3 library for bugfix 2016-01-17 18:48:05 +01:00
Alexander Neumann
d79c85af62 Fix s3 tests 2016-01-17 18:48:05 +01:00
Alexander Neumann
407819e5a9 s3: properly integrate minio-go lib 2016-01-17 18:48:05 +01:00
Alexander Neumann
2c15597e24 walker: print errors 2016-01-17 18:48:05 +01:00
Alexander Neumann
a17b6bbb64 Update minio-go library 2016-01-17 18:48:05 +01:00
Alexander Neumann
1922a4272c s3: fix usage
Ignore error response for existing bucket, add more debug code.
2016-01-17 18:48:05 +01:00
Alexander Neumann
2b10791df2 location: Fix test 2016-01-17 18:48:05 +01:00
Alexander Neumann
1ad5c3813c correct CI s3 test server url 2016-01-17 18:48:05 +01:00
Alexander Neumann
7d5f8214cf use new backend open with config 2016-01-17 18:48:05 +01:00
Alexander Neumann
2b0b44c5ce s3: implement open with config 2016-01-17 18:48:05 +01:00
Alexander Neumann
f7c9091970 sftp: implement open with config 2016-01-17 18:48:05 +01:00
Alexander Neumann
7b1e8fdd06 local: correct comment 2016-01-17 18:48:05 +01:00
Alexander Neumann
d257dedf42 rename LocationParse -> Parse 2016-01-17 18:48:05 +01:00
Alexander Neumann
3d2a714b5a Update minio-go library 2016-01-17 18:48:05 +01:00
Alexander Neumann
de933a1d48 Rename URI -> Config/Location 2016-01-17 18:48:05 +01:00
Alexander Neumann
566a15285a Add repository location parsing code 2016-01-17 18:48:05 +01:00
Alexander Neumann
43cf95e3c6 Correctly stop the minio server after the tests 2016-01-17 18:48:05 +01:00
Alexander Neumann
0b12ceabe9 Dockerfile: Install go in home dir
This allows cross-compilation with gox with Go < 1.5
2016-01-17 18:48:05 +01:00
Alexander Neumann
e96f28c536 Output stderr when minio server failed 2016-01-17 18:48:05 +01:00
Alexander Neumann
d5e36bd2f0 Only run minio server for Go >= 1.5.1 2016-01-17 18:48:05 +01:00
Alexander Neumann
34e8f63f77 Increase debug output for minio server 2016-01-17 18:47:24 +01:00
Alexander Neumann
3e422c8776 Add debug output, listen on localhost 2016-01-17 18:47:24 +01:00
Alexander Neumann
edfb31f4fe s3: Run integration test with minio server 2016-01-17 18:47:24 +01:00
Alexander Neumann
8562a1bb2f Dockerfile: Also install minio 2016-01-17 18:46:08 +01:00
Alexander Neumann
fa7192fdfb CI: save cross-compiled binaries in /tmp 2016-01-17 18:46:08 +01:00
Alexander Neumann
c22c0f2706 Add Dockerfile that resembles the Travis environment 2016-01-17 18:46:08 +01:00
Alexander Neumann
5736742c3e s3: Open() creates bucket if it does not exist 2016-01-17 18:46:08 +01:00
Alexander Neumann
248f991ad4 s3: don't remove the bucket on Delete() 2016-01-17 18:46:08 +01:00
Alexander Neumann
55f10eb1c1 Fix s3 test with local minio server instance 2016-01-17 18:46:08 +01:00
Alexander Neumann
d0ca118387 Fix usage of the done chan 2016-01-17 18:46:08 +01:00
Chris Howey
69a9adc4c3 Use local instance of minio server.
Need to figure out how to have the tests automatically start and kill
the server.
2016-01-17 18:46:08 +01:00
Chris Howey
e2445f4c97 GetPartialObject does not work. 2016-01-17 18:46:08 +01:00
Chris Howey
ed2a4ba1d5 Fix s3 backend test 2016-01-17 18:46:08 +01:00
Chris Howey
6d1552af51 Switch s3 library to allow for s3 compatible backends. Fixes #315 2016-01-17 18:46:08 +01:00
Alexander Neumann
c969de7fad Merge pull request #390 from restic/fix-travis
Fix travis
2016-01-16 14:39:51 +01:00
Alexander Neumann
b8c300e61e Remove run_tests.go from Makefile 2016-01-16 14:37:23 +01:00
Alexander Neumann
2499bbb09d Also specify new -X syntax for go1.6 2016-01-16 14:08:13 +01:00
Alexander Neumann
7c70d5c1bd Build toolchain for gox only on older Versions of Go 2016-01-16 13:40:16 +01:00
Alexander Neumann
f90381910b Remove Go tip, add 1.6beta2 2016-01-16 13:39:12 +01:00
Alexander Neumann
172c31ff45 Use gotestcover instead of homebrew run_tests.go 2016-01-16 13:32:23 +01:00
Alexander Neumann
bbfd1dd0c0 Fix ignore tip build failure 2016-01-16 13:23:45 +01:00
Alexander Neumann
8d71e5d698 Travis CI: Update Go version, add tip 2016-01-16 13:00:28 +01:00
Alexander Neumann
0f69169262 OpenChaos lecture 2016-01-13 20:16:47 +01:00
Alexander Neumann
72bcebbfb1 Remove (broken) sourcegraph and waffle badges 2016-01-07 21:09:32 +01:00
Philipp Serr
0fde09a866 Lock MasterIndex and InFlight store together
fixes: #358
2015-12-28 18:40:43 +01:00
Philipp Serr
e7bf936d2b Increase number of chunks and test repetitions 2015-12-28 18:33:28 +01:00
Philipp Serr
3d7f72311a Provoke unreferenced packs using fewer goroutines
TestParallelSaveWithDuplication has been reworked to provoke
unreferenced packs using fewer goroutines than before and create
only one bytes.Reader per blob. This reduces memory usage
significantly.

The following actions have been taken to keep the chance of provoking
unreferenced packs due to #358 high:
 * Interweaved processing of subsequent chunks
 * Delaying each goroutine by a few pseudo-randomly chosen nanoseconds
   (depending on the platform this will most probably only make the os
   yield execution to another thread): together with the interweaved
   processing of subsequent chunks, this ensures a minimalistic delay
   between processing of (some) duplicated chunks
 * Repeating the test 5 times with different seeds

On my test machine, the modified test provoked unreferenced packs 60
times in a row.
2015-12-28 18:33:26 +01:00
Philipp Serr
6a548336ec Add a test concurrently saving duplicated chunks
This commit adds an integration test, that calls Archiver.Save from
many goroutines processing several duplicated chunks concurrently.
The test asserts, that after all chunks have been saved, there are no
unreferenced packs in the repository.

The test has been checked to give the expected results:
 1) Running the test with maxParallel=1 (all chunks are processed
    sequentially) has been verified not to produce any unreferenced
    packs. Consequently the test passes.
 2) Running the test with unbounded parallelism (maxParallel=
    math.MaxInt32) has been verified to produce unreferenced packs
    all the time (at least 25 test runs). Consequently the test fails
    due to #358.

references: #358
2015-12-28 18:33:22 +01:00
Alexander Neumann
d3e7766f89 Merge pull request #380 from restic/PKGBUILD-update
Update PKGBUILD to reflect restic official version numbering
2015-12-27 22:07:28 +01:00
Florian Daniel
360193320f Update PKGBUILD to reflect restic official version numbering 2015-12-27 22:05:05 +01:00
Alexander Neumann
1f1b8e16a7 Add Code Quality Badge
Closes #379
2015-12-27 20:35:27 +01:00
Alexander Neumann
3abff7928c Merge pull request #375 from restic/fix-suid
Backup and restore setuid/setgid/sticky bits
2015-12-20 20:45:18 +01:00
Alexander Neumann
2976df2dc6 Call brew update before installing 2015-12-20 20:01:13 +01:00
Alexander Neumann
f49cb62812 Backup and restore setuid/setgid/sticky bits
A user discovered that restic does not restore setuid/setgid/sticky file
attributes. This commit fixes that. The mode is stored in the Go format
as a uint32: https://golang.org/pkg/os/#FileMode
2015-12-20 19:45:36 +01:00
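
For reference, in Go those bits live in os.FileMode next to the permission bits, so they survive a round trip through a uint32 only if the full FileMode value is stored; a short, self-contained illustration (independent of restic's node encoding):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // os.FileMode is a uint32; the setuid/setgid/sticky flags are
        // separate high bits, not the classic 04000/02000/01000 values.
        mode := os.FileMode(0755) | os.ModeSetuid | os.ModeSticky

        fmt.Println(mode&os.ModeSetuid != 0) // true
        fmt.Println(mode&os.ModeSetgid != 0) // false
        fmt.Println(mode&os.ModeSticky != 0) // true

        // Round-tripping through uint32 keeps these bits as long as the
        // full FileMode value is stored.
        restored := os.FileMode(uint32(mode))
        fmt.Println(restored == mode) // true
    }
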
Alexander Neumann
55d9c5f80c Merge pull request #364 from restic/fix-356
Allow reading all data
2015-12-06 21:55:21 +01:00
Alexander Neumann
d1ca986f55 Fix tests 2015-12-06 17:38:51 +01:00
Alexander Neumann
3ac1d0e4d1 add progress 2015-12-06 17:29:31 +01:00
Alexander Neumann
0e66a66bce read packs concurrently 2015-12-06 17:09:06 +01:00
Alexander Neumann
43a23f91a6 checker: add function to read and verify all data 2015-12-02 22:40:36 +01:00
Alexander Neumann
8d229bfd21 Make ReadCloser public 2015-12-02 22:37:58 +01:00
Alexander Neumann
04cd318f6c Merge pull request #362 from restic/improvements
Cleanups
2015-11-30 22:09:24 +01:00
Alexander Neumann
141d400b4a test: improvements 2015-11-29 14:54:20 +01:00
Alexander Neumann
b841eb4c54 crypto: check key for validity 2015-11-29 14:54:20 +01:00
Alexander Neumann
4f6bc754b8 MemBackend: Add Delete() and more debug 2015-11-29 14:53:02 +01:00
Alexander Neumann
26697a0223 Fix MemoryBackend GetReader() method 2015-11-29 14:52:19 +01:00
Alexander Neumann
480054bc3a MemoryBackend: handle config correctly, add tests for that 2015-11-29 14:52:19 +01:00
Alexander Neumann
538e5878a1 add debug logging to MemoryBackend 2015-11-29 14:52:19 +01:00
Alexander Neumann
9cb4e14327 add MemBackend and MockBackend 2015-11-29 14:52:19 +01:00
Alexander Neumann
da71da23d9 Add MockBackend 2015-11-29 14:52:19 +01:00
Alexander Neumann
0d5731383f Remove HashAppendWriter 2015-11-29 14:29:59 +01:00
Alexander Neumann
4fd7676e92 HashingWriter: Add documentation 2015-11-29 14:29:59 +01:00
Alexander Neumann
8209bb309b split out decryptReader and packerManager 2015-11-29 14:29:59 +01:00
Alexander Neumann
d4b873ca76 Repo: add documentation 2015-11-29 14:29:57 +01:00
Alexander Neumann
5b601f00b1 Add error checking 2015-11-29 14:25:57 +01:00
Alexander Neumann
2c95772a6a Update README, instruct users to always open an issue 2015-11-22 19:23:48 +01:00
Alexander Neumann
867fc5bd4b Merge pull request #354 from restic/fix-index
Save new packs in index atomically
2015-11-20 23:20:27 +01:00
Alexander Neumann
567de35df4 Save new packs in index atomically
This commit fixes a situation reported by a user where two indexes
contained information about the same pack without overlap, e.g.:

Index 3e6a32 contained:

    {
      "id": "c02e3b",
      "blobs": [
        {
          "id": "8114b1",
          "type": "data",
          "offset": 0,
          "length": 530107
        }
      ]
    }

And index 62da5f contained:

    {
      "id": "c02e3b",
      "blobs": [
        {
          "id": "e344f8",
          "type": "data",
          "offset": 1975848,
          "length": 3426468
        },
        {
          "id": "939ed9",
          "type": "data",
          "offset": 530107,
          "length": 1445741
        }
      ]
    }

This commit adds all blobs in a pack in one atomic operation so that
intermediate states such as these do not happen.
2015-11-20 22:56:56 +01:00
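
A stripped-down sketch of the fix's idea (illustrative types, not restic's index implementation): all blobs of a pack are added inside a single critical section, so no intermediate state is ever visible.

    package sketch

    import "sync"

    // blobEntry describes one blob inside a pack file.
    type blobEntry struct {
        ID     string
        Offset uint
        Length uint
    }

    // index maps pack IDs to the blobs they contain. StorePack adds all
    // blobs of a pack inside one critical section, so no reader can ever
    // observe a pack with only part of its blobs listed.
    type index struct {
        mu    sync.Mutex
        packs map[string][]blobEntry
    }

    func newIndex() *index {
        return &index{packs: make(map[string][]blobEntry)}
    }

    func (idx *index) StorePack(packID string, blobs []blobEntry) {
        idx.mu.Lock()
        defer idx.mu.Unlock()
        idx.packs[packID] = append(idx.packs[packID], blobs...)
    }
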
Alexander Neumann
34bf70faea Merge pull request #352 from restic/rework-readme
Rework README
2015-11-13 23:48:17 +01:00
Alexander Neumann
6b9c8ffd14 Merge pull request #350 from restic/allow-read-only-ops
Allow check/restore read-only, without locking
2015-11-13 23:48:05 +01:00
Alexander Neumann
96061d2a2f Fix debug log message 2015-11-13 23:47:53 +01:00
Alexander Neumann
88b167cc10 Add note about unreleased code in the master branch 2015-11-13 12:41:47 +01:00
Alexander Neumann
a79fba13e1 Shorten README, remove options 2015-11-13 12:41:36 +01:00
Alexander Neumann
a1440c819b Make NoLock a global option 2015-11-13 12:33:59 +01:00
Alexander Neumann
6edb7e02d0 Allow restoring and listing without locking 2015-11-10 22:13:53 +01:00
Alexander Neumann
c79dcbd7c4 Allow checking a read-only repo with --no-lock 2015-11-10 21:41:22 +01:00
Alexander Neumann
acba82c8f7 Merge pull request #252 from restic/repack-blobs
WIP: Repack blobs
2015-11-09 20:57:57 +01:00
Alexander Neumann
1f9aea9905 fix test on windows, reset read-only flag 2015-11-08 22:41:45 +01:00
Alexander Neumann
742d69bf4d Add another test for optimizing unused blobs 2015-11-08 22:38:17 +01:00
Alexander Neumann
5776b8f01c remove ConvertIndex 2015-11-08 22:27:13 +01:00
Alexander Neumann
6c54d3fa82 index: also mark old index as final on decode 2015-11-08 22:21:29 +01:00
Alexander Neumann
2e6eee991d Add test for optimize command with old indexes 2015-11-08 22:21:08 +01:00
Alexander Neumann
c59b12c939 Show a hint when the checker finds an old index 2015-11-08 21:50:48 +01:00
Alexander Neumann
0222b1701e Remove automatic index conversion 2015-11-08 21:35:48 +01:00
Alexander Neumann
43e2c9837e check: removing orphaned packs is handled in 'optimize' 2015-11-08 21:24:51 +01:00
Alexander Neumann
c4fc7b52ae Add 'optimize' command that repacks blobs 2015-11-08 21:10:03 +01:00
Alexander Neumann
cd948b56ac cmd_check: Don't display unused blobs by default 2015-11-08 20:46:52 +01:00
Alexander Neumann
19a713970f Merge pull request #344 from restic/fix-338
pipe: propagate errors properly
2015-11-08 17:58:55 +01:00
Alexander Neumann
4484a3ea0d archiver: ignore dir nodes with errors 2015-11-07 11:42:28 +01:00
Alexander Neumann
ea41a1045f Add integration test for error on readdirnames 2015-11-06 23:19:56 +01:00
Alexander Neumann
005c13ff05 pipe: make test platform-independent 2015-11-06 22:38:34 +01:00
Alexander Neumann
1569176e48 pipe: propagate errors properly 2015-11-06 19:41:57 +01:00
Alexander Neumann
7c3d227527 Merge pull request #342 from restic/add-bug-reporting-instructions
Add instructions on reporting bugs
2015-11-06 00:41:22 +01:00
Alexander Neumann
0f92f6319f Add note on determinism 2015-11-05 23:10:35 +01:00
Alexander Neumann
7b9f2fa9ef Add instructions on reporting bugs 2015-11-05 23:09:01 +01:00
Alexander Neumann
8de8ca05f1 Merge pull request #341 from restic/read-password-from-stdin
Read password from stdin if terminal is not a tty
2015-11-04 22:45:54 +01:00
Alexander Neumann
18d7f7f835 Read password from stdin if terminal is not a tty 2015-11-04 22:20:58 +01:00
Alexander Neumann
73e085ae23 Merge pull request #335 from JaCoB1123/windowspathnames
Always use forward slashes in SFTP (Fixes #334)
2015-11-03 21:29:00 +01:00
Jan Bader
af960b9b40 Simplify Implementation of Join 2015-11-03 18:48:51 +01:00
Jan Bader
d09e6d5b0f Fix missing Join calls 2015-11-03 18:47:01 +01:00
Alexander Neumann
db41102bfa Finalize repacker 2015-11-02 19:28:30 +01:00
Alexander Neumann
1fc0d78913 Refactor Index.Store() to take a PackedBlob 2015-11-02 19:07:03 +01:00
Alexander Neumann
f3f84b1544 Add ID handling for index 2015-11-02 18:52:13 +01:00
Alexander Neumann
60a34087c9 Move LoadIndexWithDecoder to index.go 2015-11-02 18:52:13 +01:00
Alexander Neumann
266bc05edc Add mostly ready repacker 2015-11-02 18:52:13 +01:00
Alexander Neumann
51aff3ca57 Add FindBlobsForPacks() 2015-11-02 18:52:13 +01:00
Alexander Neumann
30cf002574 Sort IDSet.List() 2015-11-02 18:52:13 +01:00
Alexander Neumann
89a77ab2f9 Add Index.ListPack() 2015-11-02 18:52:13 +01:00
Alexander Neumann
484331cd8d Add repacker 2015-11-02 18:52:13 +01:00
Alexander Neumann
181963ba08 Fix IDSet.String() 2015-11-02 17:36:05 +01:00
Alexander Neumann
50c2f2e87f cmd_cat: allow dumping raw tree blobs 2015-11-02 17:36:04 +01:00
Alexander Neumann
ba8e6035b0 Merge pull request #336 from restic/refactor-index
Refactor Index.Lookup() to return struct PackedBlob
2015-11-02 17:35:09 +01:00
Jan Bader
81ec7337e0 Always use forward slashes in SFTP (Fixes #334)
Add custom Join func that always uses forward slashes in SFTP
2015-11-02 14:53:42 +01:00
Alexander Neumann
ed9470b19d Remove tempfiles after test 2015-10-31 14:53:03 +01:00
Alexander Neumann
fccde030d5 Refactor Index.Lookup() to return struct PackedBlob 2015-10-31 14:47:42 +01:00
Alexander Neumann
fc0f5d8f72 Merge pull request #331 from tyll/gpgfingerprint
Show full GPG fingerprint in README
2015-10-29 21:55:28 +01:00
Till Maas
34af39667b Show full GPG fingerprint in README 2015-10-29 21:51:21 +01:00
Alexander Neumann
6ffd7da4d7 Merge pull request #328 from restic/improve-backup-speed
Load trees from repository in parallel
2015-10-29 20:03:29 +01:00
Alexander Neumann
5958dc920b Fix walk tree test for windows 2015-10-28 22:02:37 +01:00
Alexander Neumann
5a45d95b80 Fix compatibility with Go < 1.5 2015-10-27 23:06:56 +01:00
Alexander Neumann
23aeca85ff load trees in parallel 2015-10-27 22:44:10 +01:00
Alexander Neumann
b5976474dd backup: add debug output for excluded files/dirs 2015-10-27 22:34:30 +01:00
Alexander Neumann
18478e2d3d walk_test: test correct number of items 2015-10-27 21:41:25 +01:00
Alexander Neumann
ca5c0bf78e Test WalkTree() for correct order 2015-10-27 20:51:55 +01:00
Alexander Neumann
4cc9d946de Add benchmark for WalkTree with high-latency repo 2015-10-27 20:51:55 +01:00
Alexander Neumann
50fd8f6f44 make repo for tree walker mockable 2015-10-27 20:51:55 +01:00
Alexander Neumann
7711fcda69 use new index format for repository tests 2015-10-27 20:51:55 +01:00
Alexander Neumann
7717ea5cca Add benchmark for LoadJSONPack 2015-10-27 20:51:55 +01:00
Alexander Neumann
ae46674cd3 debug: log timing 2015-10-27 20:51:55 +01:00
Alexander Neumann
00e05ae3c9 bugfix: close pack files after reading the header 2015-10-27 20:39:52 +01:00
Alexander Neumann
4bc81c2bd2 Merge pull request #325 from restic/fix-rebuild-index
rebuild-index: Remember already stored blobs
2015-10-25 22:57:12 +01:00
Alexander Neumann
74cd134b54 rebuild index: remember already stored blobs 2015-10-25 22:34:22 +01:00
Alexander Neumann
734ae7fcb8 Add test for corner case
It was observed that a restic repository still contained overlapping
indexes after `rebuild-index` had been called. This is caused by
instantly forgetting which blobs have already been saved once a full
index has been written during index rebuilding.

This commit adds a (failing) test that shows the behaviour.
2015-10-25 21:51:57 +01:00
Alexander Neumann
7b8e42a763 Silence rebuild-index tests 2015-10-25 21:51:46 +01:00
Alexander Neumann
566fb22bcf Merge pull request #324 from restic/fix-index-grow
Fix really large index and memory exhaustion
2015-10-25 18:58:31 +01:00
Alexander Neumann
b88ccb4f1b Fix Unpacker test 2015-10-25 18:21:48 +01:00
Alexander Neumann
efbce9f0fa rebuild-index: handle not yet indexed packs 2015-10-25 18:07:51 +01:00
Alexander Neumann
88849c06a6 rebuild-index: Refactor a bit 2015-10-25 17:53:02 +01:00
Alexander Neumann
5d617edbbf local/sftp backend: Do not seek if offset is 0 2015-10-25 17:51:26 +01:00
Alexander Neumann
6aed9f268b Add command rebuild-index 2015-10-25 17:24:52 +01:00
Alexander Neumann
9074c923ea index: add AddToSupersedes() 2015-10-25 17:06:56 +01:00
Alexander Neumann
1365495599 debug: remove extra space between filename and line 2015-10-25 17:06:20 +01:00
Alexander Neumann
461d54e43c Refactor repository.SaveIndex() 2015-10-25 17:05:54 +01:00
Alexander Neumann
96ecc26507 Let the checker return a list of hints along with errors 2015-10-25 16:26:50 +01:00
Alexander Neumann
91e1929b52 checker: test for packs in multiple indexes 2015-10-25 16:00:06 +01:00
Alexander Neumann
04614c7527 Add test for packs in duplicate indexes 2015-10-25 15:35:33 +01:00
Alexander Neumann
f7ff5b766c Mark written indexes as finalized 2015-10-25 15:35:18 +01:00
Alexander Neumann
d9f9b77d68 Add Index.Packs() and IDSet.Equals() 2015-10-25 15:28:01 +01:00
Alexander Neumann
4b1a2caea7 Allow overwriting the IndexFull function for tests 2015-10-25 15:05:22 +01:00
Alexander Neumann
af0d6f58b9 Remove unneeded pointer to pack id 2015-10-25 14:35:08 +01:00
Alexander Neumann
2710d6399a Cleanup index code
The selectFn wasn't used any more, so remove it from generatePackList().
2015-10-25 14:26:04 +01:00
Alexander Neumann
650eab6a0e Fix typo in dump usage 2015-10-25 13:19:35 +01:00
Alexander Neumann
5de36dfdf0 Merge pull request #310 from restic/resume-backups
resume interrupted backups
2015-10-14 21:53:25 +02:00
Alexander Neumann
1dd731fdb8 Handle concurrent access to the inFlight list 2015-10-14 20:50:54 +02:00
Alexander Neumann
6fa4be5af2 Regularly save intermediate indexes 2015-10-12 23:59:17 +02:00
Alexander Neumann
941b7025b6 Delete Index.Remove() 2015-10-12 22:51:11 +02:00
Alexander Neumann
4b2a4b03ec Remove Index.StoreInProgress() 2015-10-12 22:49:31 +02:00
Alexander Neumann
7ab9915859 Fix 'dump' command 2015-10-12 22:42:31 +02:00
Alexander Neumann
86fcd170f6 Add and use MasterIndex 2015-10-12 22:34:12 +02:00
Alexander Neumann
64fa89d406 Add error checks 2015-10-12 22:07:56 +02:00
Alexander Neumann
eb73182fcf Rework index decode and handling old format 2015-10-12 20:53:07 +02:00
Alexander Neumann
356bb62243 Add CreateEncryptedBlob and GetDecryptReader 2015-10-12 20:53:07 +02:00
Alexander Neumann
de2c20be84 Dump individual indexes 2015-10-12 20:53:07 +02:00
Alexander Neumann
96f2165067 Allow loading index with old format 2015-10-12 20:53:07 +02:00
Alexander Neumann
7944e8e323 Update index format 2015-10-12 20:53:07 +02:00
Alexander Neumann
5c39abfe53 Merge pull request #319 from restic/fix-311
Handle null subtree IDs
2015-10-11 21:48:52 +02:00
Alexander Neumann
1020e9c3af Check for data blobs with null ID, improve errors 2015-10-11 20:58:57 +02:00
Alexander Neumann
cc7acba02b Return the original backend ID on duplicate entries 2015-10-11 20:45:50 +02:00
Alexander Neumann
f188cf81dc Add more panic() calls for invalid conditions 2015-10-11 20:45:42 +02:00
Alexander Neumann
7db2369081 Shorten error message for tree errors 2015-10-11 20:22:52 +02:00
Alexander Neumann
db85ab8aa0 Use the correct channel for sending errors 2015-10-11 19:13:45 +02:00
Alexander Neumann
86c8328f62 Handle null subtree IDs 2015-10-11 19:13:35 +02:00
Alexander Neumann
72fcd00859 Check subtrees with null ID 2015-10-11 18:46:26 +02:00
Alexander Neumann
8a7873ee3a Handle invalid subtree IDs 2015-10-11 18:45:16 +02:00
Alexander Neumann
e738d35c4e Merge pull request #309 from restic/update-ci-go-version
Update Go 1.4 to version 1.4.3
2015-09-27 20:20:35 +02:00
Alexander Neumann
6ddda5fc5e appveyor: remove old Go installation 2015-09-27 18:34:11 +02:00
Alexander Neumann
7291342723 Install current version of Go
This is inspired by the appveyor.yaml from the go-plus project:
https://github.com/joefitzgerald/go-plus/blob/master/appveyor.yml
2015-09-27 18:02:44 +02:00
Alexander Neumann
70a6233b94 Install the 'cover' tool 2015-09-27 17:58:13 +02:00
Alexander Neumann
749ca28534 Update Go 1.4 to version 1.4.3 2015-09-27 17:22:12 +02:00
Alexander Neumann
321c2e6a47 Merge pull request #308 from episource/fix/restic_cr292_unreferenced_pack
fix:restic#292 Prevent concurrent processing of the same blob
2015-09-27 17:18:28 +02:00
Philipp Serr
7b11660f4f Prevent concurrent processing of same blob
... by first adding a preliminary index entry and making this fail if
an index entry for the same blob already exists.

A preliminary index entry is characterized by not yet being associated
with a pack. Until now, these entries were added to the index just
like final index entries using index.Store, which silently overwrites
existing index entries.

This commit adds a new method index.StoreInProgress which refuses to
overwrite existing index entries and allows for creating preliminary
index entries only. The existing method index.Store has not been
changed and continues to silently overwrite existing index entries.
This distinction is important, as otherwise, it would be impossible to
update a preliminary index entry after the blob has been written to a
pack.

Resolves: restic#292
2015-09-27 16:56:49 +02:00
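As a rough illustration of the mechanism described in this commit message (made-up types, not restic's actual index code): a preliminary entry carries no pack reference yet, StoreInProgress rejects duplicates so two goroutines cannot process the same blob, and Store may later overwrite the entry once the blob has been written to a pack.

package main

import (
	"errors"
	"fmt"
)

type entry struct {
	packID string // empty while the blob is still in flight
}

type index struct {
	entries map[string]entry
}

// Store overwrites unconditionally (used to finalize an entry).
func (idx *index) Store(blob, packID string) {
	idx.entries[blob] = entry{packID: packID}
}

// StoreInProgress adds a preliminary entry and refuses duplicates.
func (idx *index) StoreInProgress(blob string) error {
	if _, ok := idx.entries[blob]; ok {
		return errors.New("blob is already being processed or stored")
	}
	idx.entries[blob] = entry{}
	return nil
}

func main() {
	idx := &index{entries: map[string]entry{}}
	fmt.Println(idx.StoreInProgress("blob1")) // <nil>
	fmt.Println(idx.StoreInProgress("blob1")) // duplicate rejected
	idx.Store("blob1", "pack42")              // update after the pack is written
	fmt.Println(idx.entries["blob1"])
}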
Alexander Neumann
4fb46faae7 Use Go 1.5.1 for travis tests 2015-09-12 22:15:09 +02:00
Alexander Neumann
316f6ed313 Update chunker version 2015-09-12 22:14:58 +02:00
Alexander Neumann
108d28316a Merge pull request #294 from restic/rework-id
Refactor IDs and IDSet
2015-09-08 21:26:07 +02:00
Alexander Neumann
5c46dc41de Add methods to IDSet 2015-09-05 18:49:28 +02:00
Alexander Neumann
d42ff509ba Small refactorings
* use uint instead of uint32 in packs/indexes
 * use ID.Str() for debug messages
 * add ParallelIDWorkFunc
2015-09-05 18:41:58 +02:00
Alexander Neumann
2cb0fbf589 backend: Add String() to IDs 2015-09-05 18:41:58 +02:00
Alexander Neumann
a0bad1695c Remove comment 2015-09-05 18:41:58 +02:00
Alexander Neumann
681d7851aa index: use backend.ID instead of string for maps 2015-09-05 18:41:58 +02:00
Alexander Neumann
3063ad1d05 Split id.go into several files 2015-09-05 18:41:56 +02:00
Alexander Neumann
76b1f017c0 Merge pull request #290 from bchapuis/fix-289
Load the index and search subtree
2015-09-01 21:08:21 +02:00
Chapuis Bertil
c765688779 find command integration tests 2015-08-28 19:31:05 +02:00
Chapuis Bertil
d4686ebcc5 Load the index and search subtree 2015-08-27 23:21:44 +02:00
Alexander Neumann
f653aca0ed Merge pull request #287 from restic/fix-279
Remove tests for directories
2015-08-27 22:07:07 +02:00
Alexander Neumann
0a457eafed Correctly test for config file 2015-08-26 22:06:52 +02:00
Alexander Neumann
b211f834fa Remove tests for directories
For testing whether a repository already exists it is sufficient to
test if the config file (and therefore the master key) exists.

Closes #279
2015-08-26 21:51:40 +02:00
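A tiny sketch of that check, assuming a plain local repository layout (the helper and path handling are illustrative, not restic's backend code): the repository counts as initialized exactly when its config file exists.

package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

// repoExists reports whether a local repository at dir is initialized,
// using only the presence of its config file as the criterion.
func repoExists(dir string) (bool, error) {
	_, err := os.Stat(filepath.Join(dir, "config"))
	if errors.Is(err, os.ErrNotExist) {
		return false, nil
	}
	return err == nil, err
}

func main() {
	ok, err := repoExists("/tmp/restic-repo")
	fmt.Println(ok, err)
}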
Alexander Neumann
9aefc2b7a6 Merge pull request #281 from restic/version-with-git
build.go: use new combined version string
2015-08-26 20:53:24 +02:00
Alexander Neumann
10f0d7ccac Merge pull request #280 from restic/ldflags-go1.5
build.go: Make `-ldflags` compatible to Go 1.5
2015-08-26 20:33:43 +02:00
Alexander Neumann
cb460b7dec Merge pull request #285 from howeyc/fix-aws-v4
Use new version of s3 library, Fixes #276
2015-08-26 20:20:32 +02:00
Alexander Neumann
39a82d951b Refactor getVersion(), address code review comments 2015-08-26 20:17:51 +02:00
Alexander Neumann
a54f9715b1 Add "build: " prefix to verbose messages 2015-08-26 20:03:26 +02:00
Alexander Neumann
4c47c2b2c9 Address code review comments 2015-08-26 20:03:16 +02:00
Chris Howey
ccb2f00b8a typo 2015-08-26 07:54:39 -05:00
Chris Howey
3bf447b422 Update tests for new s3 lib 2015-08-26 07:44:00 -05:00
Chris Howey
10cd672a92 Use new version of s3 library, Fixes #276 2015-08-26 06:25:05 -05:00
Alexander Neumann
f3c64d0740 build.go: use new combined version string
Previously, when a VERSION file existed it took precedence over the
git version. This is unfortunate because all restic binaries compiled
from a git checkout would just identify as the latest release (e.g.
'0.1.0'), regardless of any commits on top of it.

This commit adds a combined version string by using the contents of
the VERSION file and appending the current git version returned by `git
describe`, if available, e.g.:

    0.1.0 (v0.1.0-6-gb188217-dirty).
2015-08-25 22:20:53 +02:00
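A hedged sketch of how such a combined version string can be assembled (the file name and the `git describe` flags are assumptions for illustration, not necessarily what build.go does):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func combinedVersion() string {
	data, err := os.ReadFile("VERSION")
	if err != nil {
		return "unknown"
	}
	version := strings.TrimSpace(string(data))

	// append git information when available, e.g. "v0.1.0-6-gb188217-dirty"
	out, err := exec.Command("git", "describe", "--dirty").Output()
	if err == nil {
		version += fmt.Sprintf(" (%s)", strings.TrimSpace(string(out)))
	}
	return version // e.g. "0.1.0 (v0.1.0-6-gb188217-dirty)"
}

func main() {
	fmt.Println(combinedVersion())
}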
Alexander Neumann
dca200c2e9 build.go: Make -ldflags compatible to Go 1.5
This change uses the old syntax (-ldflags "-X foo bar") for Go <= 1.4
and the new syntax (-ldflags "-X foo=bar") for Go 1.5 (without a
warning).
2015-08-25 22:07:52 +02:00
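A minimal sketch of that version switch (not the actual build.go code; detecting the compiler via runtime.Version() is an assumption made for this illustration):

package main

import (
	"fmt"
	"runtime"
	"strings"
)

// ldflagsX returns a -X flag in the syntax the active Go version expects.
// Go 1.5 introduced the "-X name=value" form; older releases use
// "-X name value". runtime.Version() returns e.g. "go1.4.2".
func ldflagsX(symbol, value string) string {
	v := runtime.Version()
	if strings.HasPrefix(v, "go1.3") || strings.HasPrefix(v, "go1.4") {
		return fmt.Sprintf("-X %s %s", symbol, value)
	}
	return fmt.Sprintf("-X %s=%s", symbol, value)
}

func main() {
	fmt.Println("-ldflags", ldflagsX("main.version", "0.1.0"))
}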
Alexander Neumann
b188217e83 Merge pull request #274 from restic/fix-documentation
Documentation fixes
2015-08-22 23:07:31 +02:00
Alexander Neumann
3a50c2bbfb Fix docs 2015-08-22 23:03:25 +02:00
Alexander Neumann
e0e9cd8680 More documentation fixes 2015-08-22 15:09:53 +02:00
Alexander Neumann
b6872fb454 Clarify documentation about MAC key 2015-08-22 15:09:21 +02:00
Florian Daniel
3f3cca8f2a Merge pull request #273 from restic/fix-124
fix typo in Readme
2015-08-22 00:01:35 +02:00
Florian Daniel
647ee5b74a fix typo in Readme 2015-08-21 23:53:59 +02:00
1011 changed files with 187598 additions and 31893 deletions

1
.envrc Normal file
View File

@@ -0,0 +1 @@
GOPATH=$PWD:$PWD/vendor

12
.github/ISSUE_TEMPLATE.md vendored Normal file
View File

@@ -0,0 +1,12 @@
## Output of `restic version`
## Expected behavior
## Actual behavior
## Steps to reproduce the behavior

9
.gitignore vendored
View File

@@ -1,8 +1,5 @@
/.gopath
/pkg
/bin
/restic
/restic.debug
/dirdiff
cmd/dirdiff/dirdiff
cmd/gentestdata/gentestdata
cmd/restic/restic
/.vagrant
/vendor/pkg

2
.hound.yml Normal file
View File

@@ -0,0 +1,2 @@
go:
enabled: true

View File

@@ -2,14 +2,39 @@ language: go
sudo: false
go:
- 1.3.3
- 1.4.2
- 1.5
- 1.6.4
- 1.7.4
- tip
os:
- linux
- osx
env:
matrix:
RESTIC_TEST_FUSE=0
matrix:
exclude:
- os: osx
go: 1.6.4
- os: osx
go: tip
- os: linux
go: 1.7.4
include:
- os: linux
go: 1.7.4
sudo: true
env:
RESTIC_TEST_FUSE=1
allow_failures:
- go: tip
branches:
only:
- master
notifications:
irc:
channels:
@@ -22,11 +47,11 @@ install:
- go version
- export GOBIN="$GOPATH/bin"
- export PATH="$PATH:$GOBIN"
- export GOPATH="$GOPATH:${TRAVIS_BUILD_DIR}/Godeps/_workspace"
- go env
- ulimit -n 2048
script:
- go run run_integration_tests.go
after_success:
- goveralls -coverprofile=all.cov -service=travis-ci -repotoken "$COVERALLS_TOKEN" || true
- bash <(curl -s https://codecov.io/bash) -f all.cov

View File

@@ -25,42 +25,64 @@ those tagged
[minor complexity](https://github.com/restic/restic/labels/minor%20complexity).
Reporting Bugs
==============
You've found a bug? Thanks for letting us know so we can fix it! It is a good
idea to describe in detail how to reproduce the bug (when you know how), what
environment was used and so on. Please tell us at least the following things:
* What's the version of restic you used? Please include the output of
`restic version` in your bug report.
* What commands did you execute to get to where the bug occurred?
* What did you expect?
* What happened instead?
* Are you aware of a way to reproduce the bug?
Remember, the easier it is for us to reproduce the bug, the earlier it will be
corrected!
In addition, you can compile restic with debug support by running
`go run build.go -tags debug` and instructing it to create a debug log by
setting the environment variable `DEBUG_LOG` to a file, e.g. like this:
$ export DEBUG_LOG=/tmp/restic-debug.log
$ restic backup ~/work
Please be aware that the debug log file will contain potentially sensitive
things like file and directory names, so please either redact it before
uploading it somewhere or post only the parts that are really relevant.
Development Environment
=======================
For development, it is recommended to check out the restic repository within a
`GOPATH`; an introductory text is
["How to Write Go Code"](https://golang.org/doc/code.html). It is recommended
to have a working directory; we're using `~/work/restic` in the following. This
directory mainly contains the directory `src`, where the source code is stored.
For development you need the build tool [`gb`](https://getgb.io), it can be
installed by running the following command:
First, create the necessary directory structure and clone the restic repository
to the correct location:
$ go get github.com/constabulary/gb/...
The repository contains two directories with code: `src/` contains the code
written for restic, whereas `vendor/` contains copies of libraries restic
depends on. The libraries are managed with the `gb vendor` command.
Just clone the repository, `cd` to it and run `gb build` to build the binary:
$ mkdir --parents ~/work/restic/src/github.com/restic
$ cd ~/work/restic/src/github.com/restic
$ git clone https://github.com/restic/restic
$ cd restic
Now we're in the main directory of the restic repository. The last step is to
set the environment variable `$GOPATH` to the correct value:
$ export GOPATH=~/work/restic:~/work/restic/src/github.com/restic/restic/Godeps/_workspace
$ gb build
[...]
$ bin/restic version
restic compiled manually
compiled at unknown time with go1.6
The following commands can be used to run all the tests:
$ go test ./...
$ gb test
ok github.com/restic/restic 8.174s
[...]
The restic binary can be built from the directory `cmd/restic` this way:
$ cd cmd/restic
$ go build
$ ./restic version
restic compiled manually on go1.4.2
if you want to run your tests on Linux, OpenBSD or FreeBSD, you can use
If you want to run your tests on Linux, OpenBSD or FreeBSD, you can use
[vagrant](https://www.vagrantup.com/) with the provided `Vagrantfile` to
quickly set up VMs and run the tests, e.g.:
@@ -70,6 +92,16 @@ quickly set up VMs and run the tests, e.g.:
$ vagrant ssh freebsd -c 'cd restic/restic; go test -v ./...'
[...]
The default `go` tool can also be used by setting the environment variable
`GOPATH` to the following value while being in the top level directory in the
git repository:
$ export GOPATH=$PWD:$PWD/vendor
The file `.envrc` allows automatic `GOPATH` configuration with
[direnv](https://direnv.net/), inspect the file and then allow automatic
configuration by running `direnv allow`.
Providing Patches
=================
@@ -78,23 +110,32 @@ get it into the project! The workflow we're using is also described on the
[GitHub Flow](https://guides.github.com/introduction/flow/) website; it boils
down to the following steps:
0. If you want to work on something, please add a comment to the issue on
GitHub. For a new feature, please add an issue before starting to work on
it, so that duplicate work is prevented.
1. First we would kindly ask you to fork our project on GitHub if you haven't
done so already.
2. Clone the repository locally and create a new branch. If you are working on
the code itself, please set up the development environment as described in
the previous section and instead of cloning add your fork on GitHub as a
remote to the clone of the restic repository.
the previous section.
3. Then commit your changes in as fine-grained a way as possible; smaller patches
that handle one and only one issue are easier to discuss and merge.
4. Push the new branch with your changes to your fork of the repository.
5. Create a pull request by visiting the GitHub website; it will guide you
through the process.
6. You will receive comments on your code and the feature or bug that they
address. Maybe you need to rework some minor things; in that case, push new
commits to the branch you created for the pull request, and they will be
automatically added to the pull request.
7. Once your code looks good, we'll merge it. Thanks a lot for your
contribution!
7. Once your code looks good and passes all the tests, we'll merge it. Thanks
a lot for your contribution!
Please provide the patches for each bug or feature in a separate branch and
open up a pull request for each.
@@ -109,6 +150,14 @@ in the project root directory before committing. Installing the script
pre-commit hook checks formatting before committing automatically, just copy
this script to `.git/hooks/pre-commit`.
For each pull request, several different systems run the integration tests on
Linux, OS X and Windows. We won't merge any code that does not pass all tests
for all systems, so when a test fails, try to find out what's wrong and fix
it. If you need help with this, please leave a comment in the pull request, and
we'll be glad to assist. Having a PR with failing integration tests is nothing
to be ashamed of; on the contrary, that happens regularly for all of us. That's
what the tests are there for.
Git Commits
-----------

57
Dockerfile Normal file
View File

@@ -0,0 +1,57 @@
# This Dockerfile configures a container that is similar to the Travis CI
# environment and can be used to run tests locally.
#
# build the image:
# docker build -t restic/test .
#
# run all tests and cross-compile restic:
# docker run --rm -v $PWD:/home/travis/restic restic/test go run run_integration_tests.go -minio minio
#
# run interactively:
# docker run --interactive --tty --rm -v $PWD:/home/travis/restic restic/test /bin/bash
#
# run a subset of tests:
# docker run --rm -v $PWD:/home/travis/restic restic/test gb test -v ./backend
#
# build the image for an older version of Go:
# docker build --build-arg GOVERSION=1.3.3 -t restic/test:go1.3.3 .
FROM ubuntu:14.04
ARG GOVERSION=1.7
ARG GOARCH=amd64
# install dependencies
RUN apt-get update
RUN apt-get install -y --no-install-recommends ca-certificates wget git build-essential openssh-server
# add and configure user
ENV HOME /home/travis
RUN useradd -m -d $HOME -s /bin/bash travis
# run everything below as user travis
USER travis
WORKDIR $HOME
# download and install Go
RUN wget -q -O /tmp/go.tar.gz https://storage.googleapis.com/golang/go${GOVERSION}.linux-${GOARCH}.tar.gz
RUN tar xf /tmp/go.tar.gz && rm -f /tmp/go.tar.gz
ENV GOROOT $HOME/go
ENV GOPATH $HOME/gopath
ENV PATH $PATH:$GOROOT/bin:$GOPATH/bin:$HOME/bin
RUN mkdir -p $HOME/restic
# pre-install tools, this speeds up the actual test runs
RUN go get github.com/constabulary/gb/...
RUN go get golang.org/x/tools/cmd/cover
RUN go get github.com/mitchellh/gox
RUN go get github.com/pierrre/gotestcover
RUN mkdir $HOME/bin \
&& wget -q -O $HOME/bin/minio https://dl.minio.io/server/minio/release/linux-${GOARCH}/minio \
&& chmod +x $HOME/bin/minio
# set TRAVIS_BUILD_DIR for integration script
ENV TRAVIS_BUILD_DIR $HOME/restic
WORKDIR $HOME/restic

62
Godeps/Godeps.json generated
View File

@@ -1,62 +0,0 @@
{
"ImportPath": "github.com/restic/restic",
"GoVersion": "go1.4.2",
"Packages": [
"./..."
],
"Deps": [
{
"ImportPath": "bazil.org/fuse",
"Rev": "18419ee53958df28fcfc9490fe6123bd59e237bb"
},
{
"ImportPath": "github.com/jessevdk/go-flags",
"Comment": "v1-297-g1b89bf7",
"Rev": "1b89bf73cd2c3a911d7b2a279ab085c4a18cf539"
},
{
"ImportPath": "github.com/juju/errors",
"Rev": "4567a5e69fd3130ca0d89f69478e7ac025b67452"
},
{
"ImportPath": "github.com/kr/fs",
"Rev": "2788f0dbd16903de03cb8186e5c7d97b69ad387b"
},
{
"ImportPath": "github.com/mitchellh/goamz/aws",
"Rev": "caaaea8b30ee15616494ee68abd5d8ebbbef05cf"
},
{
"ImportPath": "github.com/mitchellh/goamz/s3",
"Rev": "caaaea8b30ee15616494ee68abd5d8ebbbef05cf"
},
{
"ImportPath": "github.com/pkg/sftp",
"Rev": "518aed2757a65cfa64d4b1b2baf08410f8b7a6bc"
},
{
"ImportPath": "github.com/restic/chunker",
"Rev": "e795b80f4c927ebcf2687ce18bcf1a39fee740b1"
},
{
"ImportPath": "github.com/vaughan0/go-ini",
"Rev": "a98ad7ee00ec53921f08832bc06ecf7fd600e6a1"
},
{
"ImportPath": "golang.org/x/crypto/pbkdf2",
"Rev": "cc04154d65fb9296747569b107cfd05380b1ea3e"
},
{
"ImportPath": "golang.org/x/crypto/poly1305",
"Rev": "cc04154d65fb9296747569b107cfd05380b1ea3e"
},
{
"ImportPath": "golang.org/x/crypto/scrypt",
"Rev": "cc04154d65fb9296747569b107cfd05380b1ea3e"
},
{
"ImportPath": "golang.org/x/crypto/ssh",
"Rev": "cc04154d65fb9296747569b107cfd05380b1ea3e"
}
]
}

5
Godeps/Readme generated
View File

@@ -1,5 +0,0 @@
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.

2
Godeps/_workspace/.gitignore generated vendored
View File

@@ -1,2 +0,0 @@
/pkg
/bin

View File

@@ -1,2 +0,0 @@
*.go filter=gofmt
*.cgo filter=gofmt

View File

@@ -1,11 +0,0 @@
*~
.#*
## the next line needs to start with a backslash to avoid looking like
## a comment
\#*#
.*.swp
*.test
/clockfs
/hellofs

View File

@@ -1,4 +0,0 @@
/*.seq.svg
# not ignoring *.seq.png; we want those committed to the repo
# for embedding on Github

View File

@@ -1 +0,0 @@
package fstestutil

View File

@@ -1,31 +0,0 @@
package fuse_test
import (
"os"
"testing"
"bazil.org/fuse"
)
func TestOpenFlagsAccmodeMask(t *testing.T) {
var f = fuse.OpenFlags(os.O_RDWR | os.O_SYNC)
if g, e := f&fuse.OpenAccessModeMask, fuse.OpenReadWrite; g != e {
t.Fatalf("OpenAccessModeMask behaves wrong: %v: %o != %o", f, g, e)
}
if f.IsReadOnly() {
t.Fatalf("IsReadOnly is wrong: %v", f)
}
if f.IsWriteOnly() {
t.Fatalf("IsWriteOnly is wrong: %v", f)
}
if !f.IsReadWrite() {
t.Fatalf("IsReadWrite is wrong: %v", f)
}
}
func TestOpenFlagsString(t *testing.T) {
var f = fuse.OpenFlags(os.O_RDWR | os.O_SYNC | os.O_APPEND)
if g, e := f.String(), "OpenReadWrite+OpenAppend+OpenSync"; g != e {
t.Fatalf("OpenFlags.String: %q != %q", g, e)
}
}

View File

@@ -1,126 +0,0 @@
package fuse
import (
"bytes"
"errors"
"fmt"
"os"
"os/exec"
"strconv"
"strings"
"syscall"
)
var errNoAvail = errors.New("no available fuse devices")
var errNotLoaded = errors.New("osxfusefs is not loaded")
func loadOSXFUSE() error {
cmd := exec.Command("/Library/Filesystems/osxfusefs.fs/Support/load_osxfusefs")
cmd.Dir = "/"
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
err := cmd.Run()
return err
}
func openOSXFUSEDev() (*os.File, error) {
var f *os.File
var err error
for i := uint64(0); ; i++ {
path := "/dev/osxfuse" + strconv.FormatUint(i, 10)
f, err = os.OpenFile(path, os.O_RDWR, 0000)
if os.IsNotExist(err) {
if i == 0 {
// not even the first device was found -> fuse is not loaded
return nil, errNotLoaded
}
// we've run out of kernel-provided devices
return nil, errNoAvail
}
if err2, ok := err.(*os.PathError); ok && err2.Err == syscall.EBUSY {
// try the next one
continue
}
if err != nil {
return nil, err
}
return f, nil
}
}
func callMount(dir string, conf *mountConfig, f *os.File, ready chan<- struct{}, errp *error) error {
bin := "/Library/Filesystems/osxfusefs.fs/Support/mount_osxfusefs"
for k, v := range conf.options {
if strings.Contains(k, ",") || strings.Contains(v, ",") {
// Silly limitation but the mount helper does not
// understand any escaping. See TestMountOptionCommaError.
return fmt.Errorf("mount options cannot contain commas on darwin: %q=%q", k, v)
}
}
cmd := exec.Command(
bin,
"-o", conf.getOptions(),
// Tell osxfuse-kext how large our buffer is. It must split
// writes larger than this into multiple writes.
//
// OSXFUSE seems to ignore InitResponse.MaxWrite, and uses
// this instead.
"-o", "iosize="+strconv.FormatUint(maxWrite, 10),
// refers to fd passed in cmd.ExtraFiles
"3",
dir,
)
cmd.ExtraFiles = []*os.File{f}
cmd.Env = os.Environ()
cmd.Env = append(cmd.Env, "MOUNT_FUSEFS_CALL_BY_LIB=")
// TODO this is used for fs typenames etc, let app influence it
cmd.Env = append(cmd.Env, "MOUNT_FUSEFS_DAEMON_PATH="+bin)
var buf bytes.Buffer
cmd.Stdout = &buf
cmd.Stderr = &buf
err := cmd.Start()
if err != nil {
return err
}
go func() {
err := cmd.Wait()
if err != nil {
if buf.Len() > 0 {
output := buf.Bytes()
output = bytes.TrimRight(output, "\n")
msg := err.Error() + ": " + string(output)
err = errors.New(msg)
}
}
*errp = err
close(ready)
}()
return err
}
func mount(dir string, conf *mountConfig, ready chan<- struct{}, errp *error) (*os.File, error) {
f, err := openOSXFUSEDev()
if err == errNotLoaded {
err = loadOSXFUSE()
if err != nil {
return nil, err
}
// try again
f, err = openOSXFUSEDev()
}
if err != nil {
return nil, err
}
err = callMount(dir, conf, f, ready, errp)
if err != nil {
f.Close()
return nil, err
}
return f, nil
}

View File

@@ -1,41 +0,0 @@
package fuse
import (
"fmt"
"os"
"os/exec"
"strings"
)
func mount(dir string, conf *mountConfig, ready chan<- struct{}, errp *error) (*os.File, error) {
for k, v := range conf.options {
if strings.Contains(k, ",") || strings.Contains(v, ",") {
// Silly limitation but the mount helper does not
// understand any escaping. See TestMountOptionCommaError.
return nil, fmt.Errorf("mount options cannot contain commas on FreeBSD: %q=%q", k, v)
}
}
f, err := os.OpenFile("/dev/fuse", os.O_RDWR, 0000)
if err != nil {
*errp = err
return nil, err
}
cmd := exec.Command(
"/sbin/mount_fusefs",
"--safe",
"-o", conf.getOptions(),
"3",
dir,
)
cmd.ExtraFiles = []*os.File{f}
out, err := cmd.CombinedOutput()
if err != nil {
return nil, fmt.Errorf("mount_fusefs: %q, %v", out, err)
}
close(ready)
return f, nil
}

View File

@@ -1,170 +0,0 @@
package fuse
import (
"errors"
"strings"
)
func dummyOption(conf *mountConfig) error {
return nil
}
// mountConfig holds the configuration for a mount operation.
// Use it by passing MountOption values to Mount.
type mountConfig struct {
options map[string]string
maxReadahead uint32
initFlags InitFlags
}
func escapeComma(s string) string {
s = strings.Replace(s, `\`, `\\`, -1)
s = strings.Replace(s, `,`, `\,`, -1)
return s
}
// getOptions makes a string of options suitable for passing to FUSE
// mount flag `-o`. Returns an empty string if no options were set.
// Any platform specific adjustments should happen before the call.
func (m *mountConfig) getOptions() string {
var opts []string
for k, v := range m.options {
k = escapeComma(k)
if v != "" {
k += "=" + escapeComma(v)
}
opts = append(opts, k)
}
return strings.Join(opts, ",")
}
type mountOption func(*mountConfig) error
// MountOption is passed to Mount to change the behavior of the mount.
type MountOption mountOption
// FSName sets the file system name (also called source) that is
// visible in the list of mounted file systems.
//
// FreeBSD ignores this option.
func FSName(name string) MountOption {
return func(conf *mountConfig) error {
conf.options["fsname"] = name
return nil
}
}
// Subtype sets the subtype of the mount. The main type is always
// `fuse`. The type in a list of mounted file systems will look like
// `fuse.foo`.
//
// OS X ignores this option.
// FreeBSD ignores this option.
func Subtype(fstype string) MountOption {
return func(conf *mountConfig) error {
conf.options["subtype"] = fstype
return nil
}
}
// LocalVolume sets the volume to be local (instead of network),
// changing the behavior of Finder, Spotlight, and such.
//
// OS X only. Others ignore this option.
func LocalVolume() MountOption {
return localVolume
}
// VolumeName sets the volume name shown in Finder.
//
// OS X only. Others ignore this option.
func VolumeName(name string) MountOption {
return volumeName(name)
}
var ErrCannotCombineAllowOtherAndAllowRoot = errors.New("cannot combine AllowOther and AllowRoot")
// AllowOther allows other users to access the file system.
//
// Only one of AllowOther or AllowRoot can be used.
func AllowOther() MountOption {
return func(conf *mountConfig) error {
if _, ok := conf.options["allow_root"]; ok {
return ErrCannotCombineAllowOtherAndAllowRoot
}
conf.options["allow_other"] = ""
return nil
}
}
// AllowRoot allows other users to access the file system.
//
// Only one of AllowOther or AllowRoot can be used.
//
// FreeBSD ignores this option.
func AllowRoot() MountOption {
return func(conf *mountConfig) error {
if _, ok := conf.options["allow_other"]; ok {
return ErrCannotCombineAllowOtherAndAllowRoot
}
conf.options["allow_root"] = ""
return nil
}
}
// DefaultPermissions makes the kernel enforce access control based on
// the file mode (as in chmod).
//
// Without this option, the Node itself decides what is and is not
// allowed. This is normally ok because FUSE file systems cannot be
// accessed by other users without AllowOther/AllowRoot.
//
// FreeBSD ignores this option.
func DefaultPermissions() MountOption {
return func(conf *mountConfig) error {
conf.options["default_permissions"] = ""
return nil
}
}
// ReadOnly makes the mount read-only.
func ReadOnly() MountOption {
return func(conf *mountConfig) error {
conf.options["ro"] = ""
return nil
}
}
// MaxReadahead sets the number of bytes that can be prefetched for
// sequential reads. The kernel can enforce a maximum value lower than
// this.
//
// This setting makes the kernel perform speculative reads that do not
// originate from any client process. This usually tremendously
// improves read performance.
func MaxReadahead(n uint32) MountOption {
return func(conf *mountConfig) error {
conf.maxReadahead = n
return nil
}
}
// AsyncRead enables multiple outstanding read requests for the same
// handle. Without this, there is at most one request in flight at a
// time.
func AsyncRead() MountOption {
return func(conf *mountConfig) error {
conf.initFlags |= InitAsyncRead
return nil
}
}
// WritebackCache enables the kernel to buffer writes before sending
// them to the FUSE server. Without this, writethrough caching is
// used.
func WritebackCache() MountOption {
return func(conf *mountConfig) error {
conf.initFlags |= InitWritebackCache
return nil
}
}

View File

@@ -1,13 +0,0 @@
package fuse
func localVolume(conf *mountConfig) error {
conf.options["local"] = ""
return nil
}
func volumeName(name string) MountOption {
return func(conf *mountConfig) error {
conf.options["volname"] = name
return nil
}
}

View File

@@ -1,9 +0,0 @@
package fuse
func localVolume(conf *mountConfig) error {
return nil
}
func volumeName(name string) MountOption {
return dummyOption
}

View File

@@ -1,9 +0,0 @@
package fuse
func localVolume(conf *mountConfig) error {
return nil
}
func volumeName(name string) MountOption {
return dummyOption
}

View File

@@ -1,35 +0,0 @@
language: go
install:
# go-flags
- go get -d -v ./...
- go build -v ./...
# linting
- go get golang.org/x/tools/cmd/vet
- go get github.com/golang/lint
- go install github.com/golang/lint/golint
# code coverage
- go get golang.org/x/tools/cmd/cover
- go get github.com/onsi/ginkgo/ginkgo
- go get github.com/modocache/gover
- if [ "$TRAVIS_SECURE_ENV_VARS" = "true" ]; then go get github.com/mattn/goveralls; fi
script:
# go-flags
- $(exit $(gofmt -l . | wc -l))
- go test -v ./...
# linting
- go tool vet -all=true -v=true . || true
- $(go env GOPATH | awk 'BEGIN{FS=":"} {print $1}')/bin/golint ./...
# code coverage
- $(go env GOPATH | awk 'BEGIN{FS=":"} {print $1}')/bin/ginkgo -r -cover
- $(go env GOPATH | awk 'BEGIN{FS=":"} {print $1}')/bin/gover
- if [ "$TRAVIS_SECURE_ENV_VARS" = "true" ]; then $(go env GOPATH | awk 'BEGIN{FS=":"} {print $1}')/bin/goveralls -coverprofile=gover.coverprofile -service=travis-ci -repotoken $COVERALLS_TOKEN; fi
env:
# coveralls.io
secure: "RCYbiB4P0RjQRIoUx/vG/AjP3mmYCbzOmr86DCww1Z88yNcy3hYr3Cq8rpPtYU5v0g7wTpu4adaKIcqRE9xknYGbqj3YWZiCoBP1/n4Z+9sHW3Dsd9D/GRGeHUus0laJUGARjWoCTvoEtOgTdGQDoX7mH+pUUY0FBltNYUdOiiU="

View File

@@ -1,131 +0,0 @@
go-flags: a go library for parsing command line arguments
=========================================================
[![GoDoc](https://godoc.org/github.com/jessevdk/go-flags?status.png)](https://godoc.org/github.com/jessevdk/go-flags) [![Build Status](https://travis-ci.org/jessevdk/go-flags.svg?branch=master)](https://travis-ci.org/jessevdk/go-flags) [![Coverage Status](https://img.shields.io/coveralls/jessevdk/go-flags.svg)](https://coveralls.io/r/jessevdk/go-flags?branch=master)
This library provides similar functionality to the builtin flag library of
go, but provides much more functionality and nicer formatting. From the
documentation:
Package flags provides an extensive command line option parser.
The flags package is similar in functionality to the go builtin flag package
but provides more options and uses reflection to provide a convenient and
succinct way of specifying command line options.
Supported features:
* Options with short names (-v)
* Options with long names (--verbose)
* Options with and without arguments (bool v.s. other type)
* Options with optional arguments and default values
* Multiple option groups each containing a set of options
* Generate and print well-formatted help message
* Passing remaining command line arguments after -- (optional)
* Ignoring unknown command line options (optional)
* Supports -I/usr/include -I=/usr/include -I /usr/include option argument specification
* Supports multiple short options -aux
* Supports all primitive go types (string, int{8..64}, uint{8..64}, float)
* Supports same option multiple times (can store in slice or last option counts)
* Supports maps
* Supports function callbacks
* Supports namespaces for (nested) option groups
The flags package uses structs, reflection and struct field tags
to allow users to specify command line options. This results in very simple
and concise specification of your application options. For example:
type Options struct {
Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`
}
This specifies one option with a short name -v and a long name --verbose.
When either -v or --verbose is found on the command line, a 'true' value
will be appended to the Verbose field. e.g. when specifying -vvv, the
resulting value of Verbose will be {[true, true, true]}.
Example:
--------
var opts struct {
// Slice of bool will append 'true' each time the option
// is encountered (can be set multiple times, like -vvv)
Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`
// Example of automatic marshalling to desired type (uint)
Offset uint `long:"offset" description:"Offset"`
// Example of a callback, called each time the option is found.
Call func(string) `short:"c" description:"Call phone number"`
// Example of a required flag
Name string `short:"n" long:"name" description:"A name" required:"true"`
// Example of a value name
File string `short:"f" long:"file" description:"A file" value-name:"FILE"`
// Example of a pointer
Ptr *int `short:"p" description:"A pointer to an integer"`
// Example of a slice of strings
StringSlice []string `short:"s" description:"A slice of strings"`
// Example of a slice of pointers
PtrSlice []*string `long:"ptrslice" description:"A slice of pointers to string"`
// Example of a map
IntMap map[string]int `long:"intmap" description:"A map from string to int"`
}
// Callback which will invoke callto:<argument> to call a number.
// Note that this works just on OS X (and probably only with
// Skype) but it shows the idea.
opts.Call = func(num string) {
cmd := exec.Command("open", "callto:"+num)
cmd.Start()
cmd.Process.Release()
}
// Make some fake arguments to parse.
args := []string{
"-vv",
"--offset=5",
"-n", "Me",
"-p", "3",
"-s", "hello",
"-s", "world",
"--ptrslice", "hello",
"--ptrslice", "world",
"--intmap", "a:1",
"--intmap", "b:5",
"arg1",
"arg2",
"arg3",
}
// Parse flags from `args'. Note that here we use flags.ParseArgs for
// the sake of making a working example. Normally, you would simply use
// flags.Parse(&opts) which uses os.Args
args, err := flags.ParseArgs(&opts, args)
if err != nil {
panic(err)
os.Exit(1)
}
fmt.Printf("Verbosity: %v\n", opts.Verbose)
fmt.Printf("Offset: %d\n", opts.Offset)
fmt.Printf("Name: %s\n", opts.Name)
fmt.Printf("Ptr: %d\n", *opts.Ptr)
fmt.Printf("StringSlice: %v\n", opts.StringSlice)
fmt.Printf("PtrSlice: [%v %v]\n", *opts.PtrSlice[0], *opts.PtrSlice[1])
fmt.Printf("IntMap: [a:%v b:%v]\n", opts.IntMap["a"], opts.IntMap["b"])
fmt.Printf("Remaining args: %s\n", strings.Join(args, " "))
// Output: Verbosity: [true true]
// Offset: 5
// Name: Me
// Ptr: 3
// StringSlice: [hello world]
// PtrSlice: [hello world]
// IntMap: [a:1 b:5]
// Remaining args: arg1 arg2 arg3
More information can be found in the godocs: <http://godoc.org/github.com/jessevdk/go-flags>

View File

@@ -1,21 +0,0 @@
package flags
import (
"reflect"
)
// Arg represents a positional argument on the command line.
type Arg struct {
// The name of the positional argument (used in the help)
Name string
// A description of the positional argument (used in the help)
Description string
value reflect.Value
tag multiTag
}
func (a *Arg) isRemaining() bool {
return a.value.Type().Kind() == reflect.Slice
}

View File

@@ -1,53 +0,0 @@
package flags
import (
"testing"
)
func TestPositional(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Positional struct {
Command int
Filename string
Rest []string
} `positional-args:"yes" required:"yes"`
}{}
p := NewParser(&opts, Default)
ret, err := p.ParseArgs([]string{"10", "arg_test.go", "a", "b"})
if err != nil {
t.Fatalf("Unexpected error: %v", err)
return
}
if opts.Positional.Command != 10 {
t.Fatalf("Expected opts.Positional.Command to be 10, but got %v", opts.Positional.Command)
}
if opts.Positional.Filename != "arg_test.go" {
t.Fatalf("Expected opts.Positional.Filename to be \"arg_test.go\", but got %v", opts.Positional.Filename)
}
assertStringArray(t, opts.Positional.Rest, []string{"a", "b"})
assertStringArray(t, ret, []string{})
}
func TestPositionalRequired(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Positional struct {
Command int
Filename string
Rest []string
} `positional-args:"yes" required:"yes"`
}{}
p := NewParser(&opts, None)
_, err := p.ParseArgs([]string{"10"})
assertError(t, err, ErrRequired, "the required argument `Filename` was not provided")
}

View File

@@ -1,177 +0,0 @@
package flags
import (
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"path"
"runtime"
"testing"
)
func assertCallerInfo() (string, int) {
ptr := make([]uintptr, 15)
n := runtime.Callers(1, ptr)
if n == 0 {
return "", 0
}
mef := runtime.FuncForPC(ptr[0])
mefile, meline := mef.FileLine(ptr[0])
for i := 2; i < n; i++ {
f := runtime.FuncForPC(ptr[i])
file, line := f.FileLine(ptr[i])
if file != mefile {
return file, line
}
}
return mefile, meline
}
func assertErrorf(t *testing.T, format string, args ...interface{}) {
msg := fmt.Sprintf(format, args...)
file, line := assertCallerInfo()
t.Errorf("%s:%d: %s", path.Base(file), line, msg)
}
func assertFatalf(t *testing.T, format string, args ...interface{}) {
msg := fmt.Sprintf(format, args...)
file, line := assertCallerInfo()
t.Fatalf("%s:%d: %s", path.Base(file), line, msg)
}
func assertString(t *testing.T, a string, b string) {
if a != b {
assertErrorf(t, "Expected %#v, but got %#v", b, a)
}
}
func assertStringArray(t *testing.T, a []string, b []string) {
if len(a) != len(b) {
assertErrorf(t, "Expected %#v, but got %#v", b, a)
return
}
for i, v := range a {
if b[i] != v {
assertErrorf(t, "Expected %#v, but got %#v", b, a)
return
}
}
}
func assertBoolArray(t *testing.T, a []bool, b []bool) {
if len(a) != len(b) {
assertErrorf(t, "Expected %#v, but got %#v", b, a)
return
}
for i, v := range a {
if b[i] != v {
assertErrorf(t, "Expected %#v, but got %#v", b, a)
return
}
}
}
func assertParserSuccess(t *testing.T, data interface{}, args ...string) (*Parser, []string) {
parser := NewParser(data, Default&^PrintErrors)
ret, err := parser.ParseArgs(args)
if err != nil {
t.Fatalf("Unexpected parse error: %s", err)
return nil, nil
}
return parser, ret
}
func assertParseSuccess(t *testing.T, data interface{}, args ...string) []string {
_, ret := assertParserSuccess(t, data, args...)
return ret
}
func assertError(t *testing.T, err error, typ ErrorType, msg string) {
if err == nil {
assertFatalf(t, "Expected error: %s", msg)
return
}
if e, ok := err.(*Error); !ok {
assertFatalf(t, "Expected Error type, but got %#v", err)
} else {
if e.Type != typ {
assertErrorf(t, "Expected error type {%s}, but got {%s}", typ, e.Type)
}
if e.Message != msg {
assertErrorf(t, "Expected error message %#v, but got %#v", msg, e.Message)
}
}
}
func assertParseFail(t *testing.T, typ ErrorType, msg string, data interface{}, args ...string) []string {
parser := NewParser(data, Default&^PrintErrors)
ret, err := parser.ParseArgs(args)
assertError(t, err, typ, msg)
return ret
}
func diff(a, b string) (string, error) {
atmp, err := ioutil.TempFile("", "help-diff")
if err != nil {
return "", err
}
btmp, err := ioutil.TempFile("", "help-diff")
if err != nil {
return "", err
}
if _, err := io.WriteString(atmp, a); err != nil {
return "", err
}
if _, err := io.WriteString(btmp, b); err != nil {
return "", err
}
ret, err := exec.Command("diff", "-u", "-d", "--label", "got", atmp.Name(), "--label", "expected", btmp.Name()).Output()
os.Remove(atmp.Name())
os.Remove(btmp.Name())
if err.Error() == "exit status 1" {
return string(ret), nil
}
return string(ret), err
}
func assertDiff(t *testing.T, actual, expected, msg string) {
if actual == expected {
return
}
ret, err := diff(actual, expected)
if err != nil {
assertErrorf(t, "Unexpected diff error: %s", err)
assertErrorf(t, "Unexpected %s, expected:\n\n%s\n\nbut got\n\n%s", msg, expected, actual)
} else {
assertErrorf(t, "Unexpected %s:\n\n%s", msg, ret)
}
}

View File

@@ -1,16 +0,0 @@
#!/bin/bash
set -e
echo '# linux arm7'
GOARM=7 GOARCH=arm GOOS=linux go build
echo '# linux arm5'
GOARM=5 GOARCH=arm GOOS=linux go build
echo '# windows 386'
GOARCH=386 GOOS=windows go build
echo '# windows amd64'
GOARCH=amd64 GOOS=windows go build
echo '# darwin'
GOARCH=amd64 GOOS=darwin go build
echo '# freebsd'
GOARCH=amd64 GOOS=freebsd go build

View File

@@ -1,59 +0,0 @@
package flags
func levenshtein(s string, t string) int {
if len(s) == 0 {
return len(t)
}
if len(t) == 0 {
return len(s)
}
dists := make([][]int, len(s)+1)
for i := range dists {
dists[i] = make([]int, len(t)+1)
dists[i][0] = i
}
for j := range t {
dists[0][j] = j
}
for i, sc := range s {
for j, tc := range t {
if sc == tc {
dists[i+1][j+1] = dists[i][j]
} else {
dists[i+1][j+1] = dists[i][j] + 1
if dists[i+1][j] < dists[i+1][j+1] {
dists[i+1][j+1] = dists[i+1][j] + 1
}
if dists[i][j+1] < dists[i+1][j+1] {
dists[i+1][j+1] = dists[i][j+1] + 1
}
}
}
}
return dists[len(s)][len(t)]
}
func closestChoice(cmd string, choices []string) (string, int) {
if len(choices) == 0 {
return "", 0
}
mincmd := -1
mindist := -1
for i, c := range choices {
l := levenshtein(cmd, c)
if mincmd < 0 || l < mindist {
mindist = l
mincmd = i
}
}
return choices[mincmd], mindist
}

View File

@@ -1,106 +0,0 @@
package flags
// Command represents an application command. Commands can be added to the
// parser (which itself is a command) and are selected/executed when its name
// is specified on the command line. The Command type embeds a Group and
// therefore also carries a set of command specific options.
type Command struct {
// Embedded, see Group for more information
*Group
// The name by which the command can be invoked
Name string
// The active sub command (set by parsing) or nil
Active *Command
// Whether subcommands are optional
SubcommandsOptional bool
// Aliases for the command
Aliases []string
// Whether positional arguments are required
ArgsRequired bool
commands []*Command
hasBuiltinHelpGroup bool
args []*Arg
}
// Commander is an interface which can be implemented by any command added in
// the options. When implemented, the Execute method will be called for the last
// specified (sub)command providing the remaining command line arguments.
type Commander interface {
// Execute will be called for the last active (sub)command. The
// args argument contains the remaining command line arguments. The
// error that Execute returns will be eventually passed out of the
// Parse method of the Parser.
Execute(args []string) error
}
// Usage is an interface which can be implemented to show a custom usage string
// in the help message shown for a command.
type Usage interface {
// Usage is called for commands to allow customized printing of command
// usage in the generated help message.
Usage() string
}
// AddCommand adds a new command to the parser with the given name and data. The
// data needs to be a pointer to a struct from which the fields indicate which
// options are in the command. The provided data can implement the Command and
// Usage interfaces.
func (c *Command) AddCommand(command string, shortDescription string, longDescription string, data interface{}) (*Command, error) {
cmd := newCommand(command, shortDescription, longDescription, data)
cmd.parent = c
if err := cmd.scan(); err != nil {
return nil, err
}
c.commands = append(c.commands, cmd)
return cmd, nil
}
// AddGroup adds a new group to the command with the given name and data. The
// data needs to be a pointer to a struct from which the fields indicate which
// options are in the group.
func (c *Command) AddGroup(shortDescription string, longDescription string, data interface{}) (*Group, error) {
group := newGroup(shortDescription, longDescription, data)
group.parent = c
if err := group.scanType(c.scanSubcommandHandler(group)); err != nil {
return nil, err
}
c.groups = append(c.groups, group)
return group, nil
}
// Commands returns a list of subcommands of this command.
func (c *Command) Commands() []*Command {
return c.commands
}
// Find locates the subcommand with the given name and returns it. If no such
// command can be found Find will return nil.
func (c *Command) Find(name string) *Command {
for _, cc := range c.commands {
if cc.match(name) {
return cc
}
}
return nil
}
// Args returns a list of positional arguments associated with this command.
func (c *Command) Args() []*Arg {
ret := make([]*Arg, len(c.args))
copy(ret, c.args)
return ret
}

View File

@@ -1,271 +0,0 @@
package flags
import (
"reflect"
"sort"
"strings"
"unsafe"
)
type lookup struct {
shortNames map[string]*Option
longNames map[string]*Option
commands map[string]*Command
}
func newCommand(name string, shortDescription string, longDescription string, data interface{}) *Command {
return &Command{
Group: newGroup(shortDescription, longDescription, data),
Name: name,
}
}
func (c *Command) scanSubcommandHandler(parentg *Group) scanHandler {
f := func(realval reflect.Value, sfield *reflect.StructField) (bool, error) {
mtag := newMultiTag(string(sfield.Tag))
if err := mtag.Parse(); err != nil {
return true, err
}
positional := mtag.Get("positional-args")
if len(positional) != 0 {
stype := realval.Type()
for i := 0; i < stype.NumField(); i++ {
field := stype.Field(i)
m := newMultiTag((string(field.Tag)))
if err := m.Parse(); err != nil {
return true, err
}
name := m.Get("positional-arg-name")
if len(name) == 0 {
name = field.Name
}
arg := &Arg{
Name: name,
Description: m.Get("description"),
value: realval.Field(i),
tag: m,
}
c.args = append(c.args, arg)
if len(mtag.Get("required")) != 0 {
c.ArgsRequired = true
}
}
return true, nil
}
subcommand := mtag.Get("command")
if len(subcommand) != 0 {
ptrval := reflect.NewAt(realval.Type(), unsafe.Pointer(realval.UnsafeAddr()))
shortDescription := mtag.Get("description")
longDescription := mtag.Get("long-description")
subcommandsOptional := mtag.Get("subcommands-optional")
aliases := mtag.GetMany("alias")
subc, err := c.AddCommand(subcommand, shortDescription, longDescription, ptrval.Interface())
if err != nil {
return true, err
}
if len(subcommandsOptional) > 0 {
subc.SubcommandsOptional = true
}
if len(aliases) > 0 {
subc.Aliases = aliases
}
return true, nil
}
return parentg.scanSubGroupHandler(realval, sfield)
}
return f
}
func (c *Command) scan() error {
return c.scanType(c.scanSubcommandHandler(c.Group))
}
func (c *Command) eachCommand(f func(*Command), recurse bool) {
f(c)
for _, cc := range c.commands {
if recurse {
cc.eachCommand(f, true)
} else {
f(cc)
}
}
}
func (c *Command) eachActiveGroup(f func(cc *Command, g *Group)) {
c.eachGroup(func(g *Group) {
f(c, g)
})
if c.Active != nil {
c.Active.eachActiveGroup(f)
}
}
func (c *Command) addHelpGroups(showHelp func() error) {
if !c.hasBuiltinHelpGroup {
c.addHelpGroup(showHelp)
c.hasBuiltinHelpGroup = true
}
for _, cc := range c.commands {
cc.addHelpGroups(showHelp)
}
}
func (c *Command) makeLookup() lookup {
ret := lookup{
shortNames: make(map[string]*Option),
longNames: make(map[string]*Option),
commands: make(map[string]*Command),
}
parent := c.parent
for parent != nil {
if cmd, ok := parent.(*Command); ok {
cmd.fillLookup(&ret, true)
}
if grp, ok := parent.(*Group); ok {
parent = grp
} else {
parent = nil
}
}
c.fillLookup(&ret, false)
return ret
}
func (c *Command) fillLookup(ret *lookup, onlyOptions bool) {
c.eachGroup(func(g *Group) {
for _, option := range g.options {
if option.ShortName != 0 {
ret.shortNames[string(option.ShortName)] = option
}
if len(option.LongName) > 0 {
ret.longNames[option.LongNameWithNamespace()] = option
}
}
})
if onlyOptions {
return
}
for _, subcommand := range c.commands {
ret.commands[subcommand.Name] = subcommand
for _, a := range subcommand.Aliases {
ret.commands[a] = subcommand
}
}
}
func (c *Command) groupByName(name string) *Group {
if grp := c.Group.groupByName(name); grp != nil {
return grp
}
for _, subc := range c.commands {
prefix := subc.Name + "."
if strings.HasPrefix(name, prefix) {
if grp := subc.groupByName(name[len(prefix):]); grp != nil {
return grp
}
} else if name == subc.Name {
return subc.Group
}
}
return nil
}
type commandList []*Command
func (c commandList) Less(i, j int) bool {
return c[i].Name < c[j].Name
}
func (c commandList) Len() int {
return len(c)
}
func (c commandList) Swap(i, j int) {
c[i], c[j] = c[j], c[i]
}
func (c *Command) sortedCommands() []*Command {
ret := make(commandList, len(c.commands))
copy(ret, c.commands)
sort.Sort(ret)
return []*Command(ret)
}
func (c *Command) match(name string) bool {
if c.Name == name {
return true
}
for _, v := range c.Aliases {
if v == name {
return true
}
}
return false
}
func (c *Command) hasCliOptions() bool {
ret := false
c.eachGroup(func(g *Group) {
if g.isBuiltinHelp {
return
}
for _, opt := range g.options {
if opt.canCli() {
ret = true
}
}
})
return ret
}
func (c *Command) fillParseState(s *parseState) {
s.positional = make([]*Arg, len(c.args))
copy(s.positional, c.args)
s.lookup = c.makeLookup()
s.command = c
}

View File

@@ -1,402 +0,0 @@
package flags
import (
"fmt"
"testing"
)
func TestCommandInline(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Command struct {
G bool `short:"g"`
} `command:"cmd"`
}{}
p, ret := assertParserSuccess(t, &opts, "-v", "cmd", "-g")
assertStringArray(t, ret, []string{})
if p.Active == nil {
t.Errorf("Expected active command")
}
if !opts.Value {
t.Errorf("Expected Value to be true")
}
if !opts.Command.G {
t.Errorf("Expected Command.G to be true")
}
if p.Command.Find("cmd") != p.Active {
t.Errorf("Expected to find command `cmd' to be active")
}
}
func TestCommandInlineMulti(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
C1 struct {
} `command:"c1"`
C2 struct {
G bool `short:"g"`
} `command:"c2"`
}{}
p, ret := assertParserSuccess(t, &opts, "-v", "c2", "-g")
assertStringArray(t, ret, []string{})
if p.Active == nil {
t.Errorf("Expected active command")
}
if !opts.Value {
t.Errorf("Expected Value to be true")
}
if !opts.C2.G {
t.Errorf("Expected C2.G to be true")
}
if p.Command.Find("c1") == nil {
t.Errorf("Expected to find command `c1'")
}
if c2 := p.Command.Find("c2"); c2 == nil {
t.Errorf("Expected to find command `c2'")
} else if c2 != p.Active {
t.Errorf("Expected to find command `c2' to be active")
}
}
func TestCommandFlagOrder1(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Command struct {
G bool `short:"g"`
} `command:"cmd"`
}{}
assertParseFail(t, ErrUnknownFlag, "unknown flag `g'", &opts, "-v", "-g", "cmd")
}
func TestCommandFlagOrder2(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Command struct {
G bool `short:"g"`
} `command:"cmd"`
}{}
assertParseSuccess(t, &opts, "cmd", "-v", "-g")
if !opts.Value {
t.Errorf("Expected Value to be true")
}
if !opts.Command.G {
t.Errorf("Expected Command.G to be true")
}
}
func TestCommandFlagOverride1(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Command struct {
Value bool `short:"v"`
} `command:"cmd"`
}{}
assertParseSuccess(t, &opts, "-v", "cmd")
if !opts.Value {
t.Errorf("Expected Value to be true")
}
if opts.Command.Value {
t.Errorf("Expected Command.Value to be false")
}
}
func TestCommandFlagOverride2(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Command struct {
Value bool `short:"v"`
} `command:"cmd"`
}{}
assertParseSuccess(t, &opts, "cmd", "-v")
if opts.Value {
t.Errorf("Expected Value to be false")
}
if !opts.Command.Value {
t.Errorf("Expected Command.Value to be true")
}
}
func TestCommandEstimate(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Cmd1 struct {
} `command:"remove"`
Cmd2 struct {
} `command:"add"`
}{}
p := NewParser(&opts, None)
_, err := p.ParseArgs([]string{})
assertError(t, err, ErrCommandRequired, "Please specify one command of: add or remove")
}
func TestCommandEstimate2(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Cmd1 struct {
} `command:"remove"`
Cmd2 struct {
} `command:"add"`
}{}
p := NewParser(&opts, None)
_, err := p.ParseArgs([]string{"rmive"})
assertError(t, err, ErrUnknownCommand, "Unknown command `rmive', did you mean `remove'?")
}
type testCommand struct {
G bool `short:"g"`
Executed bool
EArgs []string
}
func (c *testCommand) Execute(args []string) error {
c.Executed = true
c.EArgs = args
return nil
}
func TestCommandExecute(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Command testCommand `command:"cmd"`
}{}
assertParseSuccess(t, &opts, "-v", "cmd", "-g", "a", "b")
if !opts.Value {
t.Errorf("Expected Value to be true")
}
if !opts.Command.Executed {
t.Errorf("Did not execute command")
}
if !opts.Command.G {
t.Errorf("Expected Command.C to be true")
}
assertStringArray(t, opts.Command.EArgs, []string{"a", "b"})
}
func TestCommandClosest(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Cmd1 struct {
} `command:"remove"`
Cmd2 struct {
} `command:"add"`
}{}
args := assertParseFail(t, ErrUnknownCommand, "Unknown command `addd', did you mean `add'?", &opts, "-v", "addd")
assertStringArray(t, args, []string{"addd"})
}
func TestCommandAdd(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
}{}
var cmd = struct {
G bool `short:"g"`
}{}
p := NewParser(&opts, Default)
c, err := p.AddCommand("cmd", "", "", &cmd)
if err != nil {
t.Fatalf("Unexpected error: %v", err)
return
}
ret, err := p.ParseArgs([]string{"-v", "cmd", "-g", "rest"})
if err != nil {
t.Fatalf("Unexpected error: %v", err)
return
}
assertStringArray(t, ret, []string{"rest"})
if !opts.Value {
t.Errorf("Expected Value to be true")
}
if !cmd.G {
t.Errorf("Expected Command.G to be true")
}
if p.Command.Find("cmd") != c {
t.Errorf("Expected to find command `cmd'")
}
if p.Commands()[0] != c {
t.Errorf("Expected command %#v, but got %#v", c, p.Commands()[0])
}
if c.Options()[0].ShortName != 'g' {
t.Errorf("Expected short name `g' but got %v", c.Options()[0].ShortName)
}
}
func TestCommandNestedInline(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Command struct {
G bool `short:"g"`
Nested struct {
N string `long:"n"`
} `command:"nested"`
} `command:"cmd"`
}{}
p, ret := assertParserSuccess(t, &opts, "-v", "cmd", "-g", "nested", "--n", "n", "rest")
assertStringArray(t, ret, []string{"rest"})
if !opts.Value {
t.Errorf("Expected Value to be true")
}
if !opts.Command.G {
t.Errorf("Expected Command.G to be true")
}
assertString(t, opts.Command.Nested.N, "n")
if c := p.Command.Find("cmd"); c == nil {
t.Errorf("Expected to find command `cmd'")
} else {
if c != p.Active {
t.Errorf("Expected `cmd' to be the active parser command")
}
if nested := c.Find("nested"); nested == nil {
t.Errorf("Expected to find command `nested'")
} else if nested != c.Active {
t.Errorf("Expected to find command `nested' to be the active `cmd' command")
}
}
}
func TestRequiredOnCommand(t *testing.T) {
var opts = struct {
Value bool `short:"v" required:"true"`
Command struct {
G bool `short:"g"`
} `command:"cmd"`
}{}
assertParseFail(t, ErrRequired, fmt.Sprintf("the required flag `%cv' was not specified", defaultShortOptDelimiter), &opts, "cmd")
}
func TestRequiredAllOnCommand(t *testing.T) {
var opts = struct {
Value bool `short:"v" required:"true"`
Missing bool `long:"missing" required:"true"`
Command struct {
G bool `short:"g"`
} `command:"cmd"`
}{}
assertParseFail(t, ErrRequired, fmt.Sprintf("the required flags `%smissing' and `%cv' were not specified", defaultLongOptDelimiter, defaultShortOptDelimiter), &opts, "cmd")
}
func TestDefaultOnCommand(t *testing.T) {
var opts = struct {
Command struct {
G bool `short:"g" default:"true"`
} `command:"cmd"`
}{}
assertParseSuccess(t, &opts, "cmd")
if !opts.Command.G {
t.Errorf("Expected G to be true")
}
}
func TestSubcommandsOptional(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Cmd1 struct {
} `command:"remove"`
Cmd2 struct {
} `command:"add"`
}{}
p := NewParser(&opts, None)
p.SubcommandsOptional = true
_, err := p.ParseArgs([]string{"-v"})
if err != nil {
t.Fatalf("Unexpected error: %v", err)
return
}
if !opts.Value {
t.Errorf("Expected Value to be true")
}
}
func TestCommandAlias(t *testing.T) {
var opts = struct {
Command struct {
G bool `short:"g" default:"true"`
} `command:"cmd" alias:"cm"`
}{}
assertParseSuccess(t, &opts, "cm")
if !opts.Command.G {
t.Errorf("Expected G to be true")
}
}

View File

@@ -1,304 +0,0 @@
package flags
import (
"fmt"
"path/filepath"
"reflect"
"sort"
"strings"
"unicode/utf8"
)
// Completion is a type containing information of a completion.
type Completion struct {
// The completed item
Item string
// A description of the completed item (optional)
Description string
}
type completions []Completion
func (c completions) Len() int {
return len(c)
}
func (c completions) Less(i, j int) bool {
return c[i].Item < c[j].Item
}
func (c completions) Swap(i, j int) {
c[i], c[j] = c[j], c[i]
}
// Completer is an interface which can be implemented by types
// to provide custom command line argument completion.
type Completer interface {
// Complete receives a prefix representing a (partial) value
// for its type and should provide a list of possible valid
// completions.
Complete(match string) []Completion
}
type completion struct {
parser *Parser
ShowDescriptions bool
}
// Filename is a string alias which provides filename completion.
type Filename string
func completionsWithoutDescriptions(items []string) []Completion {
ret := make([]Completion, len(items))
for i, v := range items {
ret[i].Item = v
}
return ret
}
// Complete returns a list of existing files with the given
// prefix.
func (f *Filename) Complete(match string) []Completion {
ret, _ := filepath.Glob(match + "*")
return completionsWithoutDescriptions(ret)
}
func (c *completion) skipPositional(s *parseState, n int) {
if n >= len(s.positional) {
s.positional = nil
} else {
s.positional = s.positional[n:]
}
}
func (c *completion) completeOptionNames(names map[string]*Option, prefix string, match string) []Completion {
n := make([]Completion, 0, len(names))
for k, opt := range names {
if strings.HasPrefix(k, match) {
n = append(n, Completion{
Item: prefix + k,
Description: opt.Description,
})
}
}
return n
}
func (c *completion) completeLongNames(s *parseState, prefix string, match string) []Completion {
return c.completeOptionNames(s.lookup.longNames, prefix, match)
}
func (c *completion) completeShortNames(s *parseState, prefix string, match string) []Completion {
if len(match) != 0 {
return []Completion{
Completion{
Item: prefix + match,
},
}
}
return c.completeOptionNames(s.lookup.shortNames, prefix, match)
}
func (c *completion) completeCommands(s *parseState, match string) []Completion {
n := make([]Completion, 0, len(s.command.commands))
for _, cmd := range s.command.commands {
if cmd.data != c && strings.HasPrefix(cmd.Name, match) {
n = append(n, Completion{
Item: cmd.Name,
Description: cmd.ShortDescription,
})
}
}
return n
}
func (c *completion) completeValue(value reflect.Value, prefix string, match string) []Completion {
i := value.Interface()
var ret []Completion
if cmp, ok := i.(Completer); ok {
ret = cmp.Complete(match)
} else if value.CanAddr() {
if cmp, ok = value.Addr().Interface().(Completer); ok {
ret = cmp.Complete(match)
}
}
for i, v := range ret {
ret[i].Item = prefix + v.Item
}
return ret
}
func (c *completion) completeArg(arg *Arg, prefix string, match string) []Completion {
if arg.isRemaining() {
// For remaining positional args (that are parsed into a slice), complete
// based on the element type.
return c.completeValue(reflect.New(arg.value.Type().Elem()), prefix, match)
}
return c.completeValue(arg.value, prefix, match)
}
func (c *completion) complete(args []string) []Completion {
if len(args) == 0 {
args = []string{""}
}
s := &parseState{
args: args,
}
c.parser.fillParseState(s)
var opt *Option
for len(s.args) > 1 {
arg := s.pop()
if (c.parser.Options&PassDoubleDash) != None && arg == "--" {
opt = nil
c.skipPositional(s, len(s.args)-1)
break
}
if argumentIsOption(arg) {
prefix, optname, islong := stripOptionPrefix(arg)
optname, _, argument := splitOption(prefix, optname, islong)
if argument == nil {
var o *Option
canarg := true
if islong {
o = s.lookup.longNames[optname]
} else {
for i, r := range optname {
sname := string(r)
o = s.lookup.shortNames[sname]
if o == nil {
break
}
if i == 0 && o.canArgument() && len(optname) != len(sname) {
canarg = false
break
}
}
}
if o == nil && (c.parser.Options&PassAfterNonOption) != None {
opt = nil
c.skipPositional(s, len(s.args)-1)
break
} else if o != nil && o.canArgument() && !o.OptionalArgument && canarg {
if len(s.args) > 1 {
s.pop()
} else {
opt = o
}
}
}
} else {
if len(s.positional) > 0 {
if !s.positional[0].isRemaining() {
// Don't advance beyond a remaining positional arg (because
// it consumes all subsequent args).
s.positional = s.positional[1:]
}
} else if cmd, ok := s.lookup.commands[arg]; ok {
cmd.fillParseState(s)
}
opt = nil
}
}
lastarg := s.args[len(s.args)-1]
var ret []Completion
if opt != nil {
// Completion for the argument of 'opt'
ret = c.completeValue(opt.value, "", lastarg)
} else if argumentStartsOption(lastarg) {
// Complete the option
prefix, optname, islong := stripOptionPrefix(lastarg)
optname, split, argument := splitOption(prefix, optname, islong)
if argument == nil && !islong {
rname, n := utf8.DecodeRuneInString(optname)
sname := string(rname)
if opt := s.lookup.shortNames[sname]; opt != nil && opt.canArgument() {
ret = c.completeValue(opt.value, prefix+sname, optname[n:])
} else {
ret = c.completeShortNames(s, prefix, optname)
}
} else if argument != nil {
if islong {
opt = s.lookup.longNames[optname]
} else {
opt = s.lookup.shortNames[optname]
}
if opt != nil {
ret = c.completeValue(opt.value, prefix+optname+split, *argument)
}
} else if islong {
ret = c.completeLongNames(s, prefix, optname)
} else {
ret = c.completeShortNames(s, prefix, optname)
}
} else if len(s.positional) > 0 {
// Complete for positional argument
ret = c.completeArg(s.positional[0], "", lastarg)
} else if len(s.command.commands) > 0 {
// Complete for command
ret = c.completeCommands(s, lastarg)
}
sort.Sort(completions(ret))
return ret
}
func (c *completion) execute(args []string) {
ret := c.complete(args)
if c.ShowDescriptions && len(ret) > 1 {
maxl := 0
for _, v := range ret {
if len(v.Item) > maxl {
maxl = len(v.Item)
}
}
for _, v := range ret {
fmt.Printf("%s", v.Item)
if len(v.Description) > 0 {
fmt.Printf("%s # %s", strings.Repeat(" ", maxl-len(v.Item)), v.Description)
}
fmt.Printf("\n")
}
} else {
for _, v := range ret {
fmt.Println(v.Item)
}
}
}

View File

@@ -1,289 +0,0 @@
package flags
import (
"bytes"
"io"
"os"
"path"
"path/filepath"
"reflect"
"runtime"
"strings"
"testing"
)
type TestComplete struct {
}
func (t *TestComplete) Complete(match string) []Completion {
options := []string{
"hello world",
"hello universe",
"hello multiverse",
}
ret := make([]Completion, 0, len(options))
for _, o := range options {
if strings.HasPrefix(o, match) {
ret = append(ret, Completion{
Item: o,
})
}
}
return ret
}
var completionTestOptions struct {
Verbose bool `short:"v" long:"verbose" description:"Verbose messages"`
Debug bool `short:"d" long:"debug" description:"Enable debug"`
Version bool `long:"version" description:"Show version"`
Required bool `long:"required" required:"true" description:"This is required"`
AddCommand struct {
Positional struct {
Filename Filename
} `positional-args:"yes"`
} `command:"add" description:"add an item"`
AddMultiCommand struct {
Positional struct {
Filename []Filename
} `positional-args:"yes"`
} `command:"add-multi" description:"add multiple items"`
RemoveCommand struct {
Other bool `short:"o"`
File Filename `short:"f" long:"filename"`
} `command:"rm" description:"remove an item"`
RenameCommand struct {
Completed TestComplete `short:"c" long:"completed"`
} `command:"rename" description:"rename an item"`
}
type completionTest struct {
Args []string
Completed []string
ShowDescriptions bool
}
var completionTests []completionTest
func init() {
_, sourcefile, _, _ := runtime.Caller(0)
completionTestSourcedir := filepath.Join(filepath.SplitList(path.Dir(sourcefile))...)
completionTestFilename := []string{filepath.Join(completionTestSourcedir, "completion.go"), filepath.Join(completionTestSourcedir, "completion_test.go")}
completionTests = []completionTest{
{
// Short names
[]string{"-"},
[]string{"-d", "-v"},
false,
},
{
// Short names concatenated
[]string{"-dv"},
[]string{"-dv"},
false,
},
{
// Long names
[]string{"--"},
[]string{"--debug", "--required", "--verbose", "--version"},
false,
},
{
// Long names with descriptions
[]string{"--"},
[]string{
"--debug # Enable debug",
"--required # This is required",
"--verbose # Verbose messages",
"--version # Show version",
},
true,
},
{
// Long names partial
[]string{"--ver"},
[]string{"--verbose", "--version"},
false,
},
{
// Commands
[]string{""},
[]string{"add", "add-multi", "rename", "rm"},
false,
},
{
// Commands with descriptions
[]string{""},
[]string{
"add # add an item",
"add-multi # add multiple items",
"rename # rename an item",
"rm # remove an item",
},
true,
},
{
// Commands partial
[]string{"r"},
[]string{"rename", "rm"},
false,
},
{
// Positional filename
[]string{"add", filepath.Join(completionTestSourcedir, "completion")},
completionTestFilename,
false,
},
{
// Multiple positional filename (1 arg)
[]string{"add-multi", filepath.Join(completionTestSourcedir, "completion")},
completionTestFilename,
false,
},
{
// Multiple positional filename (2 args)
[]string{"add-multi", filepath.Join(completionTestSourcedir, "completion.go"), filepath.Join(completionTestSourcedir, "completion")},
completionTestFilename,
false,
},
{
// Multiple positional filename (3 args)
[]string{"add-multi", filepath.Join(completionTestSourcedir, "completion.go"), filepath.Join(completionTestSourcedir, "completion.go"), filepath.Join(completionTestSourcedir, "completion")},
completionTestFilename,
false,
},
{
// Flag filename
[]string{"rm", "-f", path.Join(completionTestSourcedir, "completion")},
completionTestFilename,
false,
},
{
// Flag short concat last filename
[]string{"rm", "-of", path.Join(completionTestSourcedir, "completion")},
completionTestFilename,
false,
},
{
// Flag concat filename
[]string{"rm", "-f" + path.Join(completionTestSourcedir, "completion")},
[]string{"-f" + completionTestFilename[0], "-f" + completionTestFilename[1]},
false,
},
{
// Flag equal concat filename
[]string{"rm", "-f=" + path.Join(completionTestSourcedir, "completion")},
[]string{"-f=" + completionTestFilename[0], "-f=" + completionTestFilename[1]},
false,
},
{
// Flag concat long filename
[]string{"rm", "--filename=" + path.Join(completionTestSourcedir, "completion")},
[]string{"--filename=" + completionTestFilename[0], "--filename=" + completionTestFilename[1]},
false,
},
{
// Flag long filename
[]string{"rm", "--filename", path.Join(completionTestSourcedir, "completion")},
completionTestFilename,
false,
},
{
// Custom completed
[]string{"rename", "-c", "hello un"},
[]string{"hello universe"},
false,
},
}
}
func TestCompletion(t *testing.T) {
p := NewParser(&completionTestOptions, Default)
c := &completion{parser: p}
for _, test := range completionTests {
if test.ShowDescriptions {
continue
}
ret := c.complete(test.Args)
items := make([]string, len(ret))
for i, v := range ret {
items[i] = v.Item
}
if !reflect.DeepEqual(items, test.Completed) {
t.Errorf("Args: %#v, %#v\n Expected: %#v\n Got: %#v", test.Args, test.ShowDescriptions, test.Completed, items)
}
}
}
func TestParserCompletion(t *testing.T) {
for _, test := range completionTests {
if test.ShowDescriptions {
os.Setenv("GO_FLAGS_COMPLETION", "verbose")
} else {
os.Setenv("GO_FLAGS_COMPLETION", "1")
}
tmp := os.Stdout
r, w, _ := os.Pipe()
os.Stdout = w
out := make(chan string)
go func() {
var buf bytes.Buffer
io.Copy(&buf, r)
out <- buf.String()
}()
p := NewParser(&completionTestOptions, None)
_, err := p.ParseArgs(test.Args)
w.Close()
os.Stdout = tmp
if err != nil {
t.Fatalf("Unexpected error: %s", err)
}
got := strings.Split(strings.Trim(<-out, "\n"), "\n")
if !reflect.DeepEqual(got, test.Completed) {
t.Errorf("Expected: %#v\nGot: %#v", test.Completed, got)
}
}
os.Setenv("GO_FLAGS_COMPLETION", "")
}

View File

@@ -1,377 +0,0 @@
// Copyright 2012 Jesse van den Kieboom. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package flags
import (
"fmt"
"reflect"
"strconv"
"strings"
"time"
)
// Marshaler is the interface implemented by types that can marshal themselves
// to a string representation of the flag.
type Marshaler interface {
// MarshalFlag marshals a flag value to its string representation.
MarshalFlag() (string, error)
}
// Unmarshaler is the interface implemented by types that can unmarshal a flag
// argument to themselves. The provided value is directly passed from the
// command line.
type Unmarshaler interface {
// UnmarshalFlag unmarshals a string value representation to the flag
// value (which therefore needs to be a pointer receiver).
UnmarshalFlag(value string) error
}
func getBase(options multiTag, base int) (int, error) {
sbase := options.Get("base")
var err error
var ivbase int64
if sbase != "" {
ivbase, err = strconv.ParseInt(sbase, 10, 32)
base = int(ivbase)
}
return base, err
}
func convertMarshal(val reflect.Value) (bool, string, error) {
// Check first for the Marshaler interface
if val.Type().NumMethod() > 0 && val.CanInterface() {
if marshaler, ok := val.Interface().(Marshaler); ok {
ret, err := marshaler.MarshalFlag()
return true, ret, err
}
}
return false, "", nil
}
func convertToString(val reflect.Value, options multiTag) (string, error) {
if ok, ret, err := convertMarshal(val); ok {
return ret, err
}
tp := val.Type()
// Support for time.Duration
if tp == reflect.TypeOf((*time.Duration)(nil)).Elem() {
stringer := val.Interface().(fmt.Stringer)
return stringer.String(), nil
}
switch tp.Kind() {
case reflect.String:
return val.String(), nil
case reflect.Bool:
if val.Bool() {
return "true", nil
}
return "false", nil
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
base, err := getBase(options, 10)
if err != nil {
return "", err
}
return strconv.FormatInt(val.Int(), base), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
base, err := getBase(options, 10)
if err != nil {
return "", err
}
return strconv.FormatUint(val.Uint(), base), nil
case reflect.Float32, reflect.Float64:
return strconv.FormatFloat(val.Float(), 'g', -1, tp.Bits()), nil
case reflect.Slice:
if val.Len() == 0 {
return "", nil
}
ret := "["
for i := 0; i < val.Len(); i++ {
if i != 0 {
ret += ", "
}
item, err := convertToString(val.Index(i), options)
if err != nil {
return "", err
}
ret += item
}
return ret + "]", nil
case reflect.Map:
ret := "{"
for i, key := range val.MapKeys() {
if i != 0 {
ret += ", "
}
keyitem, err := convertToString(key, options)
if err != nil {
return "", err
}
item, err := convertToString(val.MapIndex(key), options)
if err != nil {
return "", err
}
ret += keyitem + ":" + item
}
return ret + "}", nil
case reflect.Ptr:
return convertToString(reflect.Indirect(val), options)
case reflect.Interface:
if !val.IsNil() {
return convertToString(val.Elem(), options)
}
}
return "", nil
}
func convertUnmarshal(val string, retval reflect.Value) (bool, error) {
if retval.Type().NumMethod() > 0 && retval.CanInterface() {
if unmarshaler, ok := retval.Interface().(Unmarshaler); ok {
return true, unmarshaler.UnmarshalFlag(val)
}
}
if retval.Type().Kind() != reflect.Ptr && retval.CanAddr() {
return convertUnmarshal(val, retval.Addr())
}
if retval.Type().Kind() == reflect.Interface && !retval.IsNil() {
return convertUnmarshal(val, retval.Elem())
}
return false, nil
}
func convert(val string, retval reflect.Value, options multiTag) error {
if ok, err := convertUnmarshal(val, retval); ok {
return err
}
tp := retval.Type()
// Support for time.Duration
if tp == reflect.TypeOf((*time.Duration)(nil)).Elem() {
parsed, err := time.ParseDuration(val)
if err != nil {
return err
}
retval.SetInt(int64(parsed))
return nil
}
switch tp.Kind() {
case reflect.String:
retval.SetString(val)
case reflect.Bool:
if val == "" {
retval.SetBool(true)
} else {
b, err := strconv.ParseBool(val)
if err != nil {
return err
}
retval.SetBool(b)
}
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
base, err := getBase(options, 10)
if err != nil {
return err
}
parsed, err := strconv.ParseInt(val, base, tp.Bits())
if err != nil {
return err
}
retval.SetInt(parsed)
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
base, err := getBase(options, 10)
if err != nil {
return err
}
parsed, err := strconv.ParseUint(val, base, tp.Bits())
if err != nil {
return err
}
retval.SetUint(parsed)
case reflect.Float32, reflect.Float64:
parsed, err := strconv.ParseFloat(val, tp.Bits())
if err != nil {
return err
}
retval.SetFloat(parsed)
case reflect.Slice:
elemtp := tp.Elem()
elemvalptr := reflect.New(elemtp)
elemval := reflect.Indirect(elemvalptr)
if err := convert(val, elemval, options); err != nil {
return err
}
retval.Set(reflect.Append(retval, elemval))
case reflect.Map:
parts := strings.SplitN(val, ":", 2)
key := parts[0]
var value string
if len(parts) == 2 {
value = parts[1]
}
keytp := tp.Key()
keyval := reflect.New(keytp)
if err := convert(key, keyval, options); err != nil {
return err
}
valuetp := tp.Elem()
valueval := reflect.New(valuetp)
if err := convert(value, valueval, options); err != nil {
return err
}
if retval.IsNil() {
retval.Set(reflect.MakeMap(tp))
}
retval.SetMapIndex(reflect.Indirect(keyval), reflect.Indirect(valueval))
case reflect.Ptr:
if retval.IsNil() {
retval.Set(reflect.New(retval.Type().Elem()))
}
return convert(val, reflect.Indirect(retval), options)
case reflect.Interface:
if !retval.IsNil() {
return convert(val, retval.Elem(), options)
}
}
return nil
}
func isPrint(s string) bool {
for _, c := range s {
if !strconv.IsPrint(c) {
return false
}
}
return true
}
func quoteIfNeeded(s string) string {
if !isPrint(s) {
return strconv.Quote(s)
}
return s
}
func quoteIfNeededV(s []string) []string {
ret := make([]string, len(s))
for i, v := range s {
ret[i] = quoteIfNeeded(v)
}
return ret
}
func quoteV(s []string) []string {
ret := make([]string, len(s))
for i, v := range s {
ret[i] = strconv.Quote(v)
}
return ret
}
func unquoteIfPossible(s string) (string, error) {
if len(s) == 0 || s[0] != '"' {
return s, nil
}
return strconv.Unquote(s)
}
func wrapText(s string, l int, prefix string) string {
// Basic text wrapping of s at spaces to fit in l
var ret string
s = strings.TrimSpace(s)
for len(s) > l {
// Try to split on space
suffix := ""
pos := strings.LastIndex(s[:l], " ")
if pos < 0 {
pos = l - 1
suffix = "-\n"
}
if len(ret) != 0 {
ret += "\n" + prefix
}
ret += strings.TrimSpace(s[:pos]) + suffix
s = strings.TrimSpace(s[pos:])
}
if len(s) > 0 {
if len(ret) != 0 {
ret += "\n" + prefix
}
return ret + s
}
return ret
}

View File

@@ -1,175 +0,0 @@
package flags
import (
"testing"
"time"
)
func expectConvert(t *testing.T, o *Option, expected string) {
s, err := convertToString(o.value, o.tag)
if err != nil {
t.Errorf("Unexpected error: %v", err)
return
}
assertString(t, s, expected)
}
func TestConvertToString(t *testing.T) {
d, _ := time.ParseDuration("1h2m4s")
var opts = struct {
String string `long:"string"`
Int int `long:"int"`
Int8 int8 `long:"int8"`
Int16 int16 `long:"int16"`
Int32 int32 `long:"int32"`
Int64 int64 `long:"int64"`
Uint uint `long:"uint"`
Uint8 uint8 `long:"uint8"`
Uint16 uint16 `long:"uint16"`
Uint32 uint32 `long:"uint32"`
Uint64 uint64 `long:"uint64"`
Float32 float32 `long:"float32"`
Float64 float64 `long:"float64"`
Duration time.Duration `long:"duration"`
Bool bool `long:"bool"`
IntSlice []int `long:"int-slice"`
IntFloatMap map[int]float64 `long:"int-float-map"`
PtrBool *bool `long:"ptr-bool"`
Interface interface{} `long:"interface"`
Int32Base int32 `long:"int32-base" base:"16"`
Uint32Base uint32 `long:"uint32-base" base:"16"`
}{
"string",
-2,
-1,
0,
1,
2,
1,
2,
3,
4,
5,
1.2,
-3.4,
d,
true,
[]int{-3, 4, -2},
map[int]float64{-2: 4.5},
new(bool),
float32(5.2),
-5823,
4232,
}
p := NewNamedParser("test", Default)
grp, _ := p.AddGroup("test group", "", &opts)
expects := []string{
"string",
"-2",
"-1",
"0",
"1",
"2",
"1",
"2",
"3",
"4",
"5",
"1.2",
"-3.4",
"1h2m4s",
"true",
"[-3, 4, -2]",
"{-2:4.5}",
"false",
"5.2",
"-16bf",
"1088",
}
for i, v := range grp.Options() {
expectConvert(t, v, expects[i])
}
}
func TestConvertToStringInvalidIntBase(t *testing.T) {
var opts = struct {
Int int `long:"int" base:"no"`
}{
2,
}
p := NewNamedParser("test", Default)
grp, _ := p.AddGroup("test group", "", &opts)
o := grp.Options()[0]
_, err := convertToString(o.value, o.tag)
if err != nil {
err = newErrorf(ErrMarshal, "%v", err)
}
assertError(t, err, ErrMarshal, "strconv.ParseInt: parsing \"no\": invalid syntax")
}
func TestConvertToStringInvalidUintBase(t *testing.T) {
var opts = struct {
Uint uint `long:"uint" base:"no"`
}{
2,
}
p := NewNamedParser("test", Default)
grp, _ := p.AddGroup("test group", "", &opts)
o := grp.Options()[0]
_, err := convertToString(o.value, o.tag)
if err != nil {
err = newErrorf(ErrMarshal, "%v", err)
}
assertError(t, err, ErrMarshal, "strconv.ParseInt: parsing \"no\": invalid syntax")
}
func TestWrapText(t *testing.T) {
s := "Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum."
got := wrapText(s, 60, " ")
expected := `Lorem ipsum dolor sit amet, consectetur adipisicing elit,
sed do eiusmod tempor incididunt ut labore et dolore magna
aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit
esse cillum dolore eu fugiat nulla pariatur. Excepteur sint
occaecat cupidatat non proident, sunt in culpa qui officia
deserunt mollit anim id est laborum.`
assertDiff(t, got, expected, "wrapped text")
}

View File

@@ -1,123 +0,0 @@
package flags
import (
"fmt"
)
// ErrorType represents the type of error.
type ErrorType uint
const (
// ErrUnknown indicates a generic error.
ErrUnknown ErrorType = iota
// ErrExpectedArgument indicates that an argument was expected.
ErrExpectedArgument
// ErrUnknownFlag indicates an unknown flag.
ErrUnknownFlag
// ErrUnknownGroup indicates an unknown group.
ErrUnknownGroup
// ErrMarshal indicates a marshalling error while converting values.
ErrMarshal
// ErrHelp indicates that the built-in help was shown (the error
// contains the help message).
ErrHelp
// ErrNoArgumentForBool indicates that an argument was given for a
// boolean flag (which does not take an argument).
ErrNoArgumentForBool
// ErrRequired indicates that a required flag was not provided.
ErrRequired
// ErrShortNameTooLong indicates that a short flag name was specified,
// longer than one character.
ErrShortNameTooLong
// ErrDuplicatedFlag indicates that a short or long flag has been
// defined more than once
ErrDuplicatedFlag
// ErrTag indicates an error while parsing flag tags.
ErrTag
// ErrCommandRequired indicates that a command was required but not
// specified
ErrCommandRequired
// ErrUnknownCommand indicates that an unknown command was specified.
ErrUnknownCommand
)
func (e ErrorType) String() string {
switch e {
case ErrUnknown:
return "unknown"
case ErrExpectedArgument:
return "expected argument"
case ErrUnknownFlag:
return "unknown flag"
case ErrUnknownGroup:
return "unknown group"
case ErrMarshal:
return "marshal"
case ErrHelp:
return "help"
case ErrNoArgumentForBool:
return "no argument for bool"
case ErrRequired:
return "required"
case ErrShortNameTooLong:
return "short name too long"
case ErrDuplicatedFlag:
return "duplicated flag"
case ErrTag:
return "tag"
case ErrCommandRequired:
return "command required"
case ErrUnknownCommand:
return "unknown command"
}
return "unrecognized error type"
}
// Error represents a parser error. The error returned from Parse is of this
// type. The error contains both a Type and Message.
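//
// A typical caller inspects the type after parsing, for example (a sketch
// only; it assumes the package is imported as flags, that opts and args
// exist, and that os is imported):
//
//     if _, err := flags.ParseArgs(&opts, args); err != nil {
//         if flagsErr, ok := err.(*flags.Error); ok && flagsErr.Type == flags.ErrHelp {
//             // The built-in help was printed; not a real failure.
//             os.Exit(0)
//         }
//         os.Exit(1)
//     }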
type Error struct {
// The type of error
Type ErrorType
// The error message
Message string
}
// Error returns the error's message
func (e *Error) Error() string {
return e.Message
}
func newError(tp ErrorType, message string) *Error {
return &Error{
Type: tp,
Message: message,
}
}
func newErrorf(tp ErrorType, format string, args ...interface{}) *Error {
return newError(tp, fmt.Sprintf(format, args...))
}
func wrapError(err error) *Error {
ret, ok := err.(*Error)
if !ok {
return newError(ErrUnknown, err.Error())
}
return ret
}

View File

@@ -1,110 +0,0 @@
// Example of use of the flags package.
package flags
import (
"fmt"
"os/exec"
)
func Example() {
var opts struct {
// Slice of bool will append 'true' each time the option
// is encountered (can be set multiple times, like -vvv)
Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`
// Example of automatic marshalling to desired type (uint)
Offset uint `long:"offset" description:"Offset"`
// Example of a callback, called each time the option is found.
Call func(string) `short:"c" description:"Call phone number"`
// Example of a required flag
Name string `short:"n" long:"name" description:"A name" required:"true"`
// Example of a value name
File string `short:"f" long:"file" description:"A file" value-name:"FILE"`
// Example of a pointer
Ptr *int `short:"p" description:"A pointer to an integer"`
// Example of a slice of strings
StringSlice []string `short:"s" description:"A slice of strings"`
// Example of a slice of pointers
PtrSlice []*string `long:"ptrslice" description:"A slice of pointers to string"`
// Example of a map
IntMap map[string]int `long:"intmap" description:"A map from string to int"`
// Example of a filename (useful for completion)
Filename Filename `long:"filename" description:"A filename"`
// Example of positional arguments
Args struct {
Id string
Num int
Rest []string
} `positional-args:"yes" required:"yes"`
}
// Callback which will invoke callto:<argument> to call a number.
// Note that this only works on OS X (and probably only with
// Skype), but it shows the idea.
opts.Call = func(num string) {
cmd := exec.Command("open", "callto:"+num)
cmd.Start()
cmd.Process.Release()
}
// Make some fake arguments to parse.
args := []string{
"-vv",
"--offset=5",
"-n", "Me",
"-p", "3",
"-s", "hello",
"-s", "world",
"--ptrslice", "hello",
"--ptrslice", "world",
"--intmap", "a:1",
"--intmap", "b:5",
"--filename", "hello.go",
"id",
"10",
"remaining1",
"remaining2",
}
// Parse flags from `args'. Note that here we use flags.ParseArgs for
// the sake of making a working example. Normally, you would simply use
// flags.Parse(&opts) which uses os.Args
_, err := ParseArgs(&opts, args)
if err != nil {
panic(err)
}
fmt.Printf("Verbosity: %v\n", opts.Verbose)
fmt.Printf("Offset: %d\n", opts.Offset)
fmt.Printf("Name: %s\n", opts.Name)
fmt.Printf("Ptr: %d\n", *opts.Ptr)
fmt.Printf("StringSlice: %v\n", opts.StringSlice)
fmt.Printf("PtrSlice: [%v %v]\n", *opts.PtrSlice[0], *opts.PtrSlice[1])
fmt.Printf("IntMap: [a:%v b:%v]\n", opts.IntMap["a"], opts.IntMap["b"])
fmt.Printf("Filename: %v\n", opts.Filename)
fmt.Printf("Args.Id: %s\n", opts.Args.Id)
fmt.Printf("Args.Num: %d\n", opts.Args.Num)
fmt.Printf("Args.Rest: %v\n", opts.Args.Rest)
// Output: Verbosity: [true true]
// Offset: 5
// Name: Me
// Ptr: 3
// StringSlice: [hello world]
// PtrSlice: [hello world]
// IntMap: [a:1 b:5]
// Filename: hello.go
// Args.Id: id
// Args.Num: 10
// Args.Rest: [remaining1 remaining2]
}

View File

@@ -1,23 +0,0 @@
package main
import (
"fmt"
)
type AddCommand struct {
All bool `short:"a" long:"all" description:"Add all files"`
}
var addCommand AddCommand
func (x *AddCommand) Execute(args []string) error {
fmt.Printf("Adding (all=%v): %#v\n", x.All, args)
return nil
}
func init() {
parser.AddCommand("add",
"Add a file",
"The add command adds a file to the repository. Use -a to add all files.",
&addCommand)
}

View File

@@ -1,9 +0,0 @@
_examples() {
args=("${COMP_WORDS[@]:1:$COMP_CWORD}")
local IFS=$'\n'
COMPREPLY=($(GO_FLAGS_COMPLETION=1 ${COMP_WORDS[0]} "${args[@]}"))
return 1
}
complete -F _examples examples

View File

@@ -1,75 +0,0 @@
package main
import (
"errors"
"fmt"
"github.com/jessevdk/go-flags"
"os"
"strconv"
"strings"
)
type EditorOptions struct {
Input flags.Filename `short:"i" long:"input" description:"Input file" default:"-"`
Output flags.Filename `short:"o" long:"output" description:"Output file" default:"-"`
}
type Point struct {
X, Y int
}
func (p *Point) UnmarshalFlag(value string) error {
parts := strings.Split(value, ",")
if len(parts) != 2 {
return errors.New("expected two numbers separated by a ,")
}
x, err := strconv.ParseInt(parts[0], 10, 32)
if err != nil {
return err
}
y, err := strconv.ParseInt(parts[1], 10, 32)
if err != nil {
return err
}
p.X = int(x)
p.Y = int(y)
return nil
}
func (p Point) MarshalFlag() (string, error) {
return fmt.Sprintf("%d,%d", p.X, p.Y), nil
}
type Options struct {
// Example of verbosity with level
Verbose []bool `short:"v" long:"verbose" description:"Verbose output"`
// Example of optional value
User string `short:"u" long:"user" description:"User name" optional:"yes" optional-value:"pancake"`
// Example of map with multiple default values
Users map[string]string `long:"users" description:"User e-mail map" default:"system:system@example.org" default:"admin:admin@example.org"`
// Example of option group
Editor EditorOptions `group:"Editor Options"`
// Example of custom type Marshal/Unmarshal
Point Point `long:"point" description:"A x,y point" default:"1,2"`
}
var options Options
var parser = flags.NewParser(&options, flags.Default)
func main() {
if _, err := parser.Parse(); err != nil {
os.Exit(1)
}
}

View File

@@ -1,23 +0,0 @@
package main
import (
"fmt"
)
type RmCommand struct {
Force bool `short:"f" long:"force" description:"Force removal of files"`
}
var rmCommand RmCommand
func (x *RmCommand) Execute(args []string) error {
fmt.Printf("Removing (force=%v): %#v\n", x.Force, args)
return nil
}
func init() {
parser.AddCommand("rm",
"Remove a file",
"The rm command removes a file to the repository. Use -f to force removal of files.",
&rmCommand)
}

View File

@@ -1,246 +0,0 @@
// Copyright 2012 Jesse van den Kieboom. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
/*
Package flags provides an extensive command line option parser.
The flags package is similar in functionality to the go built-in flag package
but provides more options and uses reflection to provide a convenient and
succinct way of specifying command line options.
Supported features
The following features are supported in go-flags:
Options with short names (-v)
Options with long names (--verbose)
Options with and without arguments (bool vs. other types)
Options with optional arguments and default values
Option default values from ENVIRONMENT_VARIABLES, including slice and map values
Multiple option groups each containing a set of options
Generate and print well-formatted help message
Passing remaining command line arguments after -- (optional)
Ignoring unknown command line options (optional)
Supports -I/usr/include -I=/usr/include -I /usr/include option argument specification
Supports multiple short options -aux
Supports all primitive go types (string, int{8..64}, uint{8..64}, float)
Supports the same option multiple times (can be stored in a slice, or the last occurrence counts)
Supports maps
Supports function callbacks
Supports namespaces for (nested) option groups
Additional features specific to Windows:
Options with short names (/v)
Options with long names (/verbose)
Windows-style options with arguments use a colon as the delimiter
Modify generated help message with Windows-style / options
Basic usage
The flags package uses structs, reflection and struct field tags
to allow users to specify command line options. This results in very simple
and concise specification of your application options. For example:
type Options struct {
Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`
}
This specifies one option with a short name -v and a long name --verbose.
When either -v or --verbose is found on the command line, a 'true' value
will be appended to the Verbose field. e.g. when specifying -vvv, the
resulting value of Verbose will be {[true, true, true]}.
Slice options work exactly the same as primitive type options, except that
whenever the option is encountered, a value is appended to the slice.
Map options from string to primitive type are also supported. On the command
line, you specify the value for such an option as key:value. For example
type Options struct {
AuthorInfo map[string]string `short:"a"`
}
Then, the AuthorInfo map can be filled with something like
-a name:Jesse -a "surname:van den Kieboom".
Finally, for full control over the conversion between command line argument
values and options, user defined types can choose to implement the Marshaler
and Unmarshaler interfaces.
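For illustration only (the Level type and its values below are hypothetical,
and fmt is assumed to be imported), a custom type could implement both
interfaces in the same way as the Point type in the bundled examples:
    type Level int

    // UnmarshalFlag parses the raw command line value into a Level.
    func (l *Level) UnmarshalFlag(value string) error {
        switch value {
        case "low":
            *l = Level(1)
        case "high":
            *l = Level(2)
        default:
            return fmt.Errorf("unknown level %q", value)
        }
        return nil
    }

    // MarshalFlag converts a Level back to its string representation.
    func (l Level) MarshalFlag() (string, error) {
        if l >= 2 {
            return "high", nil
        }
        return "low", nil
    }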
Available field tags
The following is a list of tags for struct fields supported by go-flags:
short: the short name of the option (single character)
long: the long name of the option
required: whether an option is required to appear on the command
line. If a required option is not present, the parser will
return ErrRequired (optional)
description: the description of the option (optional)
long-description: the long description of the option. Currently only
displayed in generated man pages (optional)
no-flag: if non-empty this field is ignored as an option (optional)
optional: whether an argument of the option is optional (optional)
optional-value: the value of an optional option when the option occurs
without an argument. This tag can be specified multiple
times in the case of maps or slices (optional)
default: the default value of an option. This tag can be specified
multiple times in the case of slices or maps (optional)
default-mask: when specified, this value will be displayed in the help
instead of the actual default value. This is useful
mostly for hiding otherwise sensitive information from
showing up in the help. If default-mask takes the special
value "-", then no default value will be shown at all
(optional)
env: the default value of the option is overridden from the
specified environment variable, if one has been defined.
(optional)
env-delim: the 'env' default value from environment is split into
multiple values with the given delimiter string, use with
slices and maps (optional)
value-name: the name of the argument value (to be shown in the help)
(optional)
base: a base (radix) used to convert strings to integer values, the
default base is 10 (i.e. decimal) (optional)
ini-name: the explicit ini option name (optional)
no-ini: if non-empty this field is ignored as an ini option
(optional)
group: when specified on a struct field, makes the struct
field a separate group with the given name (optional)
namespace: when specified on a group struct field, the namespace
gets prepended to every option's long name and
subgroup's namespace of this group, separated by
the parser's namespace delimiter (optional)
command: when specified on a struct field, makes the struct
field a (sub)command with the given name (optional)
subcommands-optional: when specified on a command struct field, makes
any subcommands of that command optional (optional)
alias: when specified on a command struct field, adds the
specified name as an alias for the command. Can be
specified multiple times to add more than one
alias (optional)
positional-args: when specified on a field with a struct type,
uses the fields of that struct to parse remaining
positional command line arguments into (in order
of the fields). If a field has a slice type,
then all remaining arguments will be added to it.
Positional arguments are optional by default,
unless the "required" tag is specified together
with the "positional-args" tag (optional)
positional-arg-name: used on a field in a positional argument struct; name
of the positional argument placeholder to be shown in
the help (optional)
Either the `short:` tag or the `long:` tag must be specified to make the
field eligible as an option; the sketch below combines several of these tags.
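As an illustration only (the ServerOptions struct and its fields below are
hypothetical, not part of the package), several of these tags combined on
one options struct:
    type ServerOptions struct {
        Host    string `short:"H" long:"host" description:"Host to bind to" default:"localhost"`
        Port    int    `short:"p" long:"port" description:"Port to listen on" default:"8080" env:"SERVER_PORT"`
        Verbose []bool `short:"v" long:"verbose" description:"Verbose output"`
        Secret  string `long:"secret" description:"API secret" default-mask:"-"`

        Args struct {
            Filename string `positional-arg-name:"filename"`
        } `positional-args:"yes"`
    }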
Option groups
Option groups are a simple way to semantically separate your options. All
options in a particular group are shown together in the help under the name
of the group. Namespaces can be used to specify option long names more
precisely and to emphasize the options' affiliation with their group.
There are currently three ways to specify option groups; a short sketch of the last two follows the list.
1. Use NewNamedParser specifying the various option groups.
2. Use AddGroup to add a group to an existing parser.
3. Add a struct field to the top-level options annotated with the
group:"group-name" tag.
Commands
The flags package also has basic support for commands. Commands are often
used in monolithic applications that support various commands or actions.
Take git for example, all of the add, commit, checkout, etc. are called
commands. Using commands you can easily separate multiple functions of your
application.
There are currently two ways to specify a command.
1. Use AddCommand on an existing parser.
2. Add a struct field to your options struct annotated with the
command:"command-name" tag.
The most common, idiomatic way to implement commands is to define a global
parser instance and implement each command in a separate file. These
command files should define a go init function which calls AddCommand on
the global parser.
When parsing ends and there is an active command and that command implements
the Commander interface, then its Execute method will be run with the
remaining command line arguments.
Command structs can have options which become valid to parse after the
command has been specified on the command line, in addition to the options
of all the parent commands. I.e. considering a -v flag on the parser and an
add command, the following are equivalent:
./app -v add
./app add -v
However, if the -v flag is defined on the add command, then the first of
the two examples above would fail since the -v flag is not defined before
the add command.
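A minimal sketch of such a command file (the clean command is purely
illustrative; it assumes a package-level parser variable and an fmt import,
as in the bundled examples):
    type CleanCommand struct {
        Force bool `short:"f" long:"force" description:"Force cleaning"`
    }

    // Execute is called with the remaining arguments once the command is active.
    func (c *CleanCommand) Execute(args []string) error {
        fmt.Printf("Cleaning (force=%v): %#v\n", c.Force, args)
        return nil
    }

    func init() {
        parser.AddCommand("clean",
            "Clean the repository",
            "The clean command removes unreferenced files from the repository.",
            &CleanCommand{})
    }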
Completion
go-flags has builtin support to provide bash completion of flags, commands
and argument values. To use completion, the binary which uses go-flags
can be invoked in a special environment to list completion of the current
command line argument. It should be noted that this `executes` your application,
and it is up to the user to make sure there are no negative side effects (for
example from init functions).
Setting the environment variable `GO_FLAGS_COMPLETION=1` enables completion
by replacing the argument parsing routine with the completion routine which
outputs completions for the passed arguments. The basic invocation to
complete a set of arguments is therefore:
GO_FLAGS_COMPLETION=1 ./completion-example arg1 arg2 arg3
where `completion-example` is the binary, `arg1` and `arg2` are
the current arguments, and `arg3` (the last argument) is the argument
to be completed. If GO_FLAGS_COMPLETION is set to "verbose", then
descriptions of the possible completion items are also shown when there
is more than one completion item.
To use this with bash completion, a simple file can be written which
calls the binary which supports go-flags completion:
_completion_example() {
# All arguments except the first one
args=("${COMP_WORDS[@]:1:$COMP_CWORD}")
# Only split on newlines
local IFS=$'\n'
# Call completion (note that the first element of COMP_WORDS is
# the executable itself)
COMPREPLY=($(GO_FLAGS_COMPLETION=1 ${COMP_WORDS[0]} "${args[@]}"))
return 0
}
complete -F _completion_example completion-example
Completion requires the parser option PassDoubleDash, which is therefore enforced whenever the environment variable GO_FLAGS_COMPLETION is set.
Customized completion for argument values is supported by implementing
the flags.Completer interface for the argument value type. An example
of a type which does so is the flags.Filename type, an alias of string
allowing simple filename completion. A slice or array argument value
whose element type implements flags.Completer will also be completed.
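As a sketch (the Fruit type and its candidate values are illustrative only,
and the strings package is assumed to be imported), a custom argument type
could provide completions like this:
    type Fruit string

    // Complete returns the candidates that share the typed prefix.
    func (f *Fruit) Complete(match string) []flags.Completion {
        candidates := []string{"apple", "apricot", "banana"}
        ret := make([]flags.Completion, 0, len(candidates))
        for _, c := range candidates {
            if strings.HasPrefix(c, match) {
                ret = append(ret, flags.Completion{Item: c})
            }
        }
        return ret
    }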
*/
package flags

View File

@@ -1,91 +0,0 @@
// Copyright 2012 Jesse van den Kieboom. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package flags
import (
"errors"
"strings"
)
// ErrNotPointerToStruct indicates that a provided data container is not
// a pointer to a struct. Only pointers to structs are valid data containers
// for options.
var ErrNotPointerToStruct = errors.New("provided data is not a pointer to struct")
// Group represents an option group. Option groups can be used to logically
// group options together under a description. Groups are only used to provide
// more structure to options both for the user (as displayed in the help message)
// and for you, since groups can be nested.
type Group struct {
// A short description of the group. The
// short description is primarily used in the built-in generated help
// message
ShortDescription string
// A long description of the group. The long
// description is primarily used to present information on commands
// (Command embeds Group) in the built-in generated help and man pages.
LongDescription string
// The namespace of the group
Namespace string
// The parent of the group or nil if it has no parent
parent interface{}
// All the options in the group
options []*Option
// All the subgroups
groups []*Group
// Whether the group represents the built-in help group
isBuiltinHelp bool
data interface{}
}
// AddGroup adds a new subgroup with the given short and long descriptions and
// data. The data must be a pointer to a struct whose fields indicate which
// options are in the group.
func (g *Group) AddGroup(shortDescription string, longDescription string, data interface{}) (*Group, error) {
group := newGroup(shortDescription, longDescription, data)
group.parent = g
if err := group.scan(); err != nil {
return nil, err
}
g.groups = append(g.groups, group)
return group, nil
}
// Groups returns the list of groups embedded in this group.
func (g *Group) Groups() []*Group {
return g.groups
}
// Options returns the list of options in this group.
func (g *Group) Options() []*Option {
return g.options
}
// Find locates the subgroup with the given short description and returns it.
// If no such group can be found, Find will return nil. Note that the description
// is matched case insensitively.
func (g *Group) Find(shortDescription string) *Group {
lshortDescription := strings.ToLower(shortDescription)
var ret *Group
g.eachGroup(func(gg *Group) {
if gg != g && strings.ToLower(gg.ShortDescription) == lshortDescription {
ret = gg
}
})
return ret
}

View File

@@ -1,254 +0,0 @@
package flags
import (
"reflect"
"unicode/utf8"
"unsafe"
)
type scanHandler func(reflect.Value, *reflect.StructField) (bool, error)
func newGroup(shortDescription string, longDescription string, data interface{}) *Group {
return &Group{
ShortDescription: shortDescription,
LongDescription: longDescription,
data: data,
}
}
func (g *Group) optionByName(name string, namematch func(*Option, string) bool) *Option {
prio := 0
var retopt *Option
for _, opt := range g.options {
if namematch != nil && namematch(opt, name) && prio < 4 {
retopt = opt
prio = 4
}
if name == opt.field.Name && prio < 3 {
retopt = opt
prio = 3
}
if name == opt.LongNameWithNamespace() && prio < 2 {
retopt = opt
prio = 2
}
if opt.ShortName != 0 && name == string(opt.ShortName) && prio < 1 {
retopt = opt
prio = 1
}
}
return retopt
}
func (g *Group) eachGroup(f func(*Group)) {
f(g)
for _, gg := range g.groups {
gg.eachGroup(f)
}
}
func (g *Group) scanStruct(realval reflect.Value, sfield *reflect.StructField, handler scanHandler) error {
stype := realval.Type()
if sfield != nil {
if ok, err := handler(realval, sfield); err != nil {
return err
} else if ok {
return nil
}
}
for i := 0; i < stype.NumField(); i++ {
field := stype.Field(i)
// PkgPath is set only for non-exported fields, which we ignore
if field.PkgPath != "" {
continue
}
mtag := newMultiTag(string(field.Tag))
if err := mtag.Parse(); err != nil {
return err
}
// Skip fields with the no-flag tag
if mtag.Get("no-flag") != "" {
continue
}
// Dive deep into structs or pointers to structs
kind := field.Type.Kind()
fld := realval.Field(i)
if kind == reflect.Struct {
if err := g.scanStruct(fld, &field, handler); err != nil {
return err
}
} else if kind == reflect.Ptr && field.Type.Elem().Kind() == reflect.Struct {
if fld.IsNil() {
fld.Set(reflect.New(fld.Type().Elem()))
}
if err := g.scanStruct(reflect.Indirect(fld), &field, handler); err != nil {
return err
}
}
longname := mtag.Get("long")
shortname := mtag.Get("short")
// Need at least either a short or long name
if longname == "" && shortname == "" && mtag.Get("ini-name") == "" {
continue
}
short := rune(0)
rc := utf8.RuneCountInString(shortname)
if rc > 1 {
return newErrorf(ErrShortNameTooLong,
"short names can only be 1 character long, not `%s'",
shortname)
} else if rc == 1 {
short, _ = utf8.DecodeRuneInString(shortname)
}
description := mtag.Get("description")
def := mtag.GetMany("default")
optionalValue := mtag.GetMany("optional-value")
valueName := mtag.Get("value-name")
defaultMask := mtag.Get("default-mask")
optional := (mtag.Get("optional") != "")
required := (mtag.Get("required") != "")
option := &Option{
Description: description,
ShortName: short,
LongName: longname,
Default: def,
EnvDefaultKey: mtag.Get("env"),
EnvDefaultDelim: mtag.Get("env-delim"),
OptionalArgument: optional,
OptionalValue: optionalValue,
Required: required,
ValueName: valueName,
DefaultMask: defaultMask,
group: g,
field: field,
value: realval.Field(i),
tag: mtag,
}
g.options = append(g.options, option)
}
return nil
}
func (g *Group) checkForDuplicateFlags() *Error {
shortNames := make(map[rune]*Option)
longNames := make(map[string]*Option)
var duplicateError *Error
g.eachGroup(func(g *Group) {
for _, option := range g.options {
if option.LongName != "" {
longName := option.LongNameWithNamespace()
if otherOption, ok := longNames[longName]; ok {
duplicateError = newErrorf(ErrDuplicatedFlag, "option `%s' uses the same long name as option `%s'", option, otherOption)
return
}
longNames[longName] = option
}
if option.ShortName != 0 {
if otherOption, ok := shortNames[option.ShortName]; ok {
duplicateError = newErrorf(ErrDuplicatedFlag, "option `%s' uses the same short name as option `%s'", option, otherOption)
return
}
shortNames[option.ShortName] = option
}
}
})
return duplicateError
}
func (g *Group) scanSubGroupHandler(realval reflect.Value, sfield *reflect.StructField) (bool, error) {
mtag := newMultiTag(string(sfield.Tag))
if err := mtag.Parse(); err != nil {
return true, err
}
subgroup := mtag.Get("group")
if len(subgroup) != 0 {
ptrval := reflect.NewAt(realval.Type(), unsafe.Pointer(realval.UnsafeAddr()))
description := mtag.Get("description")
group, err := g.AddGroup(subgroup, description, ptrval.Interface())
if err != nil {
return true, err
}
group.Namespace = mtag.Get("namespace")
return true, nil
}
return false, nil
}
func (g *Group) scanType(handler scanHandler) error {
// Get all the public fields in the data struct
ptrval := reflect.ValueOf(g.data)
if ptrval.Type().Kind() != reflect.Ptr {
panic(ErrNotPointerToStruct)
}
stype := ptrval.Type().Elem()
if stype.Kind() != reflect.Struct {
panic(ErrNotPointerToStruct)
}
realval := reflect.Indirect(ptrval)
if err := g.scanStruct(realval, nil, handler); err != nil {
return err
}
if err := g.checkForDuplicateFlags(); err != nil {
return err
}
return nil
}
func (g *Group) scan() error {
return g.scanType(g.scanSubGroupHandler)
}
func (g *Group) groupByName(name string) *Group {
if len(name) == 0 {
return g
}
return g.Find(name)
}

View File

@@ -1,187 +0,0 @@
package flags
import (
"testing"
)
func TestGroupInline(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Group struct {
G bool `short:"g"`
} `group:"Grouped Options"`
}{}
p, ret := assertParserSuccess(t, &opts, "-v", "-g")
assertStringArray(t, ret, []string{})
if !opts.Value {
t.Errorf("Expected Value to be true")
}
if !opts.Group.G {
t.Errorf("Expected Group.G to be true")
}
if p.Command.Group.Find("Grouped Options") == nil {
t.Errorf("Expected to find group `Grouped Options'")
}
}
func TestGroupAdd(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
}{}
var grp = struct {
G bool `short:"g"`
}{}
p := NewParser(&opts, Default)
g, err := p.AddGroup("Grouped Options", "", &grp)
if err != nil {
t.Fatalf("Unexpected error: %v", err)
return
}
ret, err := p.ParseArgs([]string{"-v", "-g", "rest"})
if err != nil {
t.Fatalf("Unexpected error: %v", err)
return
}
assertStringArray(t, ret, []string{"rest"})
if !opts.Value {
t.Errorf("Expected Value to be true")
}
if !grp.G {
t.Errorf("Expected Group.G to be true")
}
if p.Command.Group.Find("Grouped Options") != g {
t.Errorf("Expected to find group `Grouped Options'")
}
if p.Groups()[1] != g {
t.Errorf("Expected group %#v, but got %#v", g, p.Groups()[0])
}
if g.Options()[0].ShortName != 'g' {
t.Errorf("Expected short name `g' but got %v", g.Options()[0].ShortName)
}
}
func TestGroupNestedInline(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
Group struct {
G bool `short:"g"`
Nested struct {
N string `long:"n"`
} `group:"Nested Options"`
} `group:"Grouped Options"`
}{}
p, ret := assertParserSuccess(t, &opts, "-v", "-g", "--n", "n", "rest")
assertStringArray(t, ret, []string{"rest"})
if !opts.Value {
t.Errorf("Expected Value to be true")
}
if !opts.Group.G {
t.Errorf("Expected Group.G to be true")
}
assertString(t, opts.Group.Nested.N, "n")
if p.Command.Group.Find("Grouped Options") == nil {
t.Errorf("Expected to find group `Grouped Options'")
}
if p.Command.Group.Find("Nested Options") == nil {
t.Errorf("Expected to find group `Nested Options'")
}
}
func TestGroupNestedInlineNamespace(t *testing.T) {
var opts = struct {
Opt string `long:"opt"`
Group struct {
Opt string `long:"opt"`
Group struct {
Opt string `long:"opt"`
} `group:"Subsubgroup" namespace:"sap"`
} `group:"Subgroup" namespace:"sip"`
}{}
p, ret := assertParserSuccess(t, &opts, "--opt", "a", "--sip.opt", "b", "--sip.sap.opt", "c", "rest")
assertStringArray(t, ret, []string{"rest"})
assertString(t, opts.Opt, "a")
assertString(t, opts.Group.Opt, "b")
assertString(t, opts.Group.Group.Opt, "c")
for _, name := range []string{"Subgroup", "Subsubgroup"} {
if p.Command.Group.Find(name) == nil {
t.Errorf("Expected to find group '%s'", name)
}
}
}
func TestDuplicateShortFlags(t *testing.T) {
var opts struct {
Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`
Variables []string `short:"v" long:"variable" description:"Set a variable value."`
}
args := []string{
"--verbose",
"-v", "123",
"-v", "456",
}
_, err := ParseArgs(&opts, args)
if err == nil {
t.Errorf("Expected an error with type ErrDuplicatedFlag")
} else {
err2 := err.(*Error)
if err2.Type != ErrDuplicatedFlag {
t.Errorf("Expected an error with type ErrDuplicatedFlag")
}
}
}
func TestDuplicateLongFlags(t *testing.T) {
var opts struct {
Test1 []bool `short:"a" long:"testing" description:"Test 1"`
Test2 []string `short:"b" long:"testing" description:"Test 2."`
}
args := []string{
"--testing",
}
_, err := ParseArgs(&opts, args)
if err == nil {
t.Errorf("Expected an error with type ErrDuplicatedFlag")
} else {
err2 := err.(*Error)
if err2.Type != ErrDuplicatedFlag {
t.Errorf("Expected an error with type ErrDuplicatedFlag")
}
}
}

View File

@@ -1,426 +0,0 @@
// Copyright 2012 Jesse van den Kieboom. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package flags
import (
"bufio"
"bytes"
"fmt"
"io"
"reflect"
"runtime"
"strings"
"unicode/utf8"
)
type alignmentInfo struct {
maxLongLen int
hasShort bool
hasValueName bool
terminalColumns int
indent bool
}
const (
paddingBeforeOption = 2
distanceBetweenOptionAndDescription = 2
)
func (a *alignmentInfo) descriptionStart() int {
ret := a.maxLongLen + distanceBetweenOptionAndDescription
if a.hasShort {
ret += 2
}
if a.maxLongLen > 0 {
ret += 4
}
if a.hasValueName {
ret += 3
}
return ret
}
func (a *alignmentInfo) updateLen(name string, indent bool) {
l := utf8.RuneCountInString(name)
if indent {
l = l + 4
}
if l > a.maxLongLen {
a.maxLongLen = l
}
}
func (p *Parser) getAlignmentInfo() alignmentInfo {
ret := alignmentInfo{
maxLongLen: 0,
hasShort: false,
hasValueName: false,
terminalColumns: getTerminalColumns(),
}
if ret.terminalColumns <= 0 {
ret.terminalColumns = 80
}
var prevcmd *Command
p.eachActiveGroup(func(c *Command, grp *Group) {
if c != prevcmd {
for _, arg := range c.args {
ret.updateLen(arg.Name, c != p.Command)
}
}
for _, info := range grp.options {
if !info.canCli() {
continue
}
if info.ShortName != 0 {
ret.hasShort = true
}
if len(info.ValueName) > 0 {
ret.hasValueName = true
}
ret.updateLen(info.LongNameWithNamespace()+info.ValueName, c != p.Command)
}
})
return ret
}
func (p *Parser) writeHelpOption(writer *bufio.Writer, option *Option, info alignmentInfo) {
line := &bytes.Buffer{}
prefix := paddingBeforeOption
if info.indent {
prefix += 4
}
line.WriteString(strings.Repeat(" ", prefix))
if option.ShortName != 0 {
line.WriteRune(defaultShortOptDelimiter)
line.WriteRune(option.ShortName)
} else if info.hasShort {
line.WriteString(" ")
}
descstart := info.descriptionStart() + paddingBeforeOption
if len(option.LongName) > 0 {
if option.ShortName != 0 {
line.WriteString(", ")
} else if info.hasShort {
line.WriteString(" ")
}
line.WriteString(defaultLongOptDelimiter)
line.WriteString(option.LongNameWithNamespace())
}
if option.canArgument() {
line.WriteRune(defaultNameArgDelimiter)
if len(option.ValueName) > 0 {
line.WriteString(option.ValueName)
}
}
written := line.Len()
line.WriteTo(writer)
if option.Description != "" {
dw := descstart - written
writer.WriteString(strings.Repeat(" ", dw))
def := ""
defs := option.Default
if len(option.DefaultMask) != 0 {
if option.DefaultMask != "-" {
def = option.DefaultMask
}
} else if len(defs) == 0 && option.canArgument() {
var showdef bool
switch option.field.Type.Kind() {
case reflect.Func, reflect.Ptr:
showdef = !option.value.IsNil()
case reflect.Slice, reflect.String, reflect.Array:
showdef = option.value.Len() > 0
case reflect.Map:
showdef = !option.value.IsNil() && option.value.Len() > 0
default:
zeroval := reflect.Zero(option.field.Type)
showdef = !reflect.DeepEqual(zeroval.Interface(), option.value.Interface())
}
if showdef {
def, _ = convertToString(option.value, option.tag)
}
} else if len(defs) != 0 {
l := len(defs) - 1
for i := 0; i < l; i++ {
def += quoteIfNeeded(defs[i]) + ", "
}
def += quoteIfNeeded(defs[l])
}
var envDef string
if option.EnvDefaultKey != "" {
var envPrintable string
if runtime.GOOS == "windows" {
envPrintable = "%" + option.EnvDefaultKey + "%"
} else {
envPrintable = "$" + option.EnvDefaultKey
}
envDef = fmt.Sprintf(" [%s]", envPrintable)
}
var desc string
if def != "" {
desc = fmt.Sprintf("%s (%v)%s", option.Description, def, envDef)
} else {
desc = option.Description + envDef
}
writer.WriteString(wrapText(desc,
info.terminalColumns-descstart,
strings.Repeat(" ", descstart)))
}
writer.WriteString("\n")
}
func maxCommandLength(s []*Command) int {
if len(s) == 0 {
return 0
}
ret := len(s[0].Name)
for _, v := range s[1:] {
l := len(v.Name)
if l > ret {
ret = l
}
}
return ret
}
// WriteHelp writes a help message containing all the possible options and
// their descriptions to the provided writer. Note that the HelpFlag parser
// option provides a convenient way to add a -h/--help option group to the
// command line parser which will automatically show the help messages using
// this method.
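//
// A minimal sketch of calling it directly (assumes an existing parser p and
// an os import):
//
//     p.WriteHelp(os.Stderr)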
func (p *Parser) WriteHelp(writer io.Writer) {
if writer == nil {
return
}
wr := bufio.NewWriter(writer)
aligninfo := p.getAlignmentInfo()
cmd := p.Command
for cmd.Active != nil {
cmd = cmd.Active
}
if p.Name != "" {
wr.WriteString("Usage:\n")
wr.WriteString(" ")
allcmd := p.Command
for allcmd != nil {
var usage string
if allcmd == p.Command {
if len(p.Usage) != 0 {
usage = p.Usage
} else if p.Options&HelpFlag != 0 {
usage = "[OPTIONS]"
}
} else if us, ok := allcmd.data.(Usage); ok {
usage = us.Usage()
} else if allcmd.hasCliOptions() {
usage = fmt.Sprintf("[%s-OPTIONS]", allcmd.Name)
}
if len(usage) != 0 {
fmt.Fprintf(wr, " %s %s", allcmd.Name, usage)
} else {
fmt.Fprintf(wr, " %s", allcmd.Name)
}
if len(allcmd.args) > 0 {
fmt.Fprintf(wr, " ")
}
for i, arg := range allcmd.args {
if i != 0 {
fmt.Fprintf(wr, " ")
}
name := arg.Name
if arg.isRemaining() {
name = name + "..."
}
if !allcmd.ArgsRequired {
fmt.Fprintf(wr, "[%s]", name)
} else {
fmt.Fprintf(wr, "%s", name)
}
}
if allcmd.Active == nil && len(allcmd.commands) > 0 {
var co, cc string
if allcmd.SubcommandsOptional {
co, cc = "[", "]"
} else {
co, cc = "<", ">"
}
if len(allcmd.commands) > 3 {
fmt.Fprintf(wr, " %scommand%s", co, cc)
} else {
subcommands := allcmd.sortedCommands()
names := make([]string, len(subcommands))
for i, subc := range subcommands {
names[i] = subc.Name
}
fmt.Fprintf(wr, " %s%s%s", co, strings.Join(names, " | "), cc)
}
}
allcmd = allcmd.Active
}
fmt.Fprintln(wr)
if len(cmd.LongDescription) != 0 {
fmt.Fprintln(wr)
t := wrapText(cmd.LongDescription,
aligninfo.terminalColumns,
"")
fmt.Fprintln(wr, t)
}
}
c := p.Command
for c != nil {
printcmd := c != p.Command
c.eachGroup(func(grp *Group) {
first := true
// Skip built-in help group for all commands except the top-level
// parser
if grp.isBuiltinHelp && c != p.Command {
return
}
for _, info := range grp.options {
if !info.canCli() {
continue
}
if printcmd {
fmt.Fprintf(wr, "\n[%s command options]\n", c.Name)
aligninfo.indent = true
printcmd = false
}
if first && cmd.Group != grp {
fmt.Fprintln(wr)
if aligninfo.indent {
wr.WriteString(" ")
}
fmt.Fprintf(wr, "%s:\n", grp.ShortDescription)
first = false
}
p.writeHelpOption(wr, info, aligninfo)
}
})
if len(c.args) > 0 {
if c == p.Command {
fmt.Fprintf(wr, "\nArguments:\n")
} else {
fmt.Fprintf(wr, "\n[%s command arguments]\n", c.Name)
}
maxlen := aligninfo.descriptionStart()
for _, arg := range c.args {
prefix := strings.Repeat(" ", paddingBeforeOption)
fmt.Fprintf(wr, "%s%s", prefix, arg.Name)
if len(arg.Description) > 0 {
align := strings.Repeat(" ", maxlen-len(arg.Name)-1)
fmt.Fprintf(wr, ":%s%s", align, arg.Description)
}
fmt.Fprintln(wr)
}
}
c = c.Active
}
scommands := cmd.sortedCommands()
if len(scommands) > 0 {
maxnamelen := maxCommandLength(scommands)
fmt.Fprintln(wr)
fmt.Fprintln(wr, "Available commands:")
for _, c := range scommands {
fmt.Fprintf(wr, " %s", c.Name)
if len(c.ShortDescription) > 0 {
pad := strings.Repeat(" ", maxnamelen-len(c.Name))
fmt.Fprintf(wr, "%s %s", pad, c.ShortDescription)
if len(c.Aliases) > 0 {
fmt.Fprintf(wr, " (aliases: %s)", strings.Join(c.Aliases, ", "))
}
}
fmt.Fprintln(wr)
}
}
wr.Flush()
}
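
For orientation, here is a minimal sketch of how WriteHelp is typically driven from client code. The import path and the option struct are assumptions for illustration, not taken from this repository:

package main

import (
    "os"

    flags "github.com/jessevdk/go-flags" // import path assumed (upstream go-flags)
)

func main() {
    var opts struct {
        Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`
    }

    // HelpFlag adds -h/--help; without PrintErrors the parser stays silent,
    // so we print the help text ourselves on any parse failure.
    p := flags.NewParser(&opts, flags.HelpFlag|flags.PassDoubleDash)
    if _, err := p.Parse(); err != nil {
        p.WriteHelp(os.Stderr)
        os.Exit(1)
    }
}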


@@ -1,300 +0,0 @@
package flags
import (
"bytes"
"fmt"
"os"
"runtime"
"testing"
"time"
)
type helpOptions struct {
Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information" ini-name:"verbose"`
Call func(string) `short:"c" description:"Call phone number" ini-name:"call"`
PtrSlice []*string `long:"ptrslice" description:"A slice of pointers to string"`
EmptyDescription bool `long:"empty-description"`
Default string `long:"default" default:"Some\nvalue" description:"Test default value"`
DefaultArray []string `long:"default-array" default:"Some value" default:"Other\tvalue" description:"Test default array value"`
DefaultMap map[string]string `long:"default-map" default:"some:value" default:"another:value" description:"Testdefault map value"`
EnvDefault1 string `long:"env-default1" default:"Some value" env:"ENV_DEFAULT" description:"Test env-default1 value"`
EnvDefault2 string `long:"env-default2" env:"ENV_DEFAULT" description:"Test env-default2 value"`
OptionWithArgName string `long:"opt-with-arg-name" value-name:"something" description:"Option with named argument"`
OnlyIni string `ini-name:"only-ini" description:"Option only available in ini"`
Other struct {
StringSlice []string `short:"s" default:"some" default:"value" description:"A slice of strings"`
IntMap map[string]int `long:"intmap" default:"a:1" description:"A map from string to int" ini-name:"int-map"`
} `group:"Other Options"`
Group struct {
Opt string `long:"opt" description:"This is a subgroup option"`
Group struct {
Opt string `long:"opt" description:"This is a subsubgroup option"`
} `group:"Subsubgroup" namespace:"sap"`
} `group:"Subgroup" namespace:"sip"`
Command struct {
ExtraVerbose []bool `long:"extra-verbose" description:"Use for extra verbosity"`
} `command:"command" alias:"cm" alias:"cmd" description:"A command"`
Args struct {
Filename string `positional-arg-name:"filename" description:"A filename"`
Number int `positional-arg-name:"num" description:"A number"`
} `positional-args:"yes"`
}
func TestHelp(t *testing.T) {
oldEnv := EnvSnapshot()
defer oldEnv.Restore()
os.Setenv("ENV_DEFAULT", "env-def")
var opts helpOptions
p := NewNamedParser("TestHelp", HelpFlag)
p.AddGroup("Application Options", "The application options", &opts)
_, err := p.ParseArgs([]string{"--help"})
if err == nil {
t.Fatalf("Expected help error")
}
if e, ok := err.(*Error); !ok {
t.Fatalf("Expected flags.Error, but got %T", err)
} else {
if e.Type != ErrHelp {
t.Errorf("Expected flags.ErrHelp type, but got %s", e.Type)
}
var expected string
if runtime.GOOS == "windows" {
expected = `Usage:
TestHelp [OPTIONS] [filename] [num] <command>
Application Options:
/v, /verbose Show verbose debug information
/c: Call phone number
/ptrslice: A slice of pointers to string
/empty-description
/default: Test default value ("Some\nvalue")
/default-array: Test default array value (Some value, "Other\tvalue")
/default-map: Testdefault map value (some:value, another:value)
/env-default1: Test env-default1 value (Some value) [%ENV_DEFAULT%]
/env-default2: Test env-default2 value [%ENV_DEFAULT%]
/opt-with-arg-name:something Option with named argument
Other Options:
/s: A slice of strings (some, value)
/intmap: A map from string to int (a:1)
Subgroup:
/sip.opt: This is a subgroup option
Subsubgroup:
/sip.sap.opt: This is a subsubgroup option
Help Options:
/? Show this help message
/h, /help Show this help message
Arguments:
filename: A filename
num: A number
Available commands:
command A command (aliases: cm, cmd)
`
} else {
expected = `Usage:
TestHelp [OPTIONS] [filename] [num] <command>
Application Options:
-v, --verbose Show verbose debug information
-c= Call phone number
--ptrslice= A slice of pointers to string
--empty-description
--default= Test default value ("Some\nvalue")
--default-array= Test default array value (Some value,
"Other\tvalue")
--default-map= Testdefault map value (some:value,
another:value)
--env-default1= Test env-default1 value (Some value)
[$ENV_DEFAULT]
--env-default2= Test env-default2 value [$ENV_DEFAULT]
--opt-with-arg-name=something Option with named argument
Other Options:
-s= A slice of strings (some, value)
--intmap= A map from string to int (a:1)
Subgroup:
--sip.opt= This is a subgroup option
Subsubgroup:
--sip.sap.opt= This is a subsubgroup option
Help Options:
-h, --help Show this help message
Arguments:
filename: A filename
num: A number
Available commands:
command A command (aliases: cm, cmd)
`
}
assertDiff(t, e.Message, expected, "help message")
}
}
func TestMan(t *testing.T) {
oldEnv := EnvSnapshot()
defer oldEnv.Restore()
os.Setenv("ENV_DEFAULT", "env-def")
var opts helpOptions
p := NewNamedParser("TestMan", HelpFlag)
p.ShortDescription = "Test manpage generation"
p.LongDescription = "This is a somewhat `longer' description of what this does"
p.AddGroup("Application Options", "The application options", &opts)
p.Commands()[0].LongDescription = "Longer `command' description"
var buf bytes.Buffer
p.WriteManPage(&buf)
got := buf.String()
tt := time.Now()
var envDefaultName string
if runtime.GOOS == "windows" {
envDefaultName = "%ENV_DEFAULT%"
} else {
envDefaultName = "$ENV_DEFAULT"
}
expected := fmt.Sprintf(`.TH TestMan 1 "%s"
.SH NAME
TestMan \- Test manpage generation
.SH SYNOPSIS
\fBTestMan\fP [OPTIONS]
.SH DESCRIPTION
This is a somewhat \fBlonger\fP description of what this does
.SH OPTIONS
.TP
\fB\fB\-v\fR, \fB\-\-verbose\fR\fP
Show verbose debug information
.TP
\fB\fB\-c\fR\fP
Call phone number
.TP
\fB\fB\-\-ptrslice\fR\fP
A slice of pointers to string
.TP
\fB\fB\-\-empty-description\fR\fP
.TP
\fB\fB\-\-default\fR <default: \fI"Some\\nvalue"\fR>\fP
Test default value
.TP
\fB\fB\-\-default-array\fR <default: \fI"Some value", "Other\\tvalue"\fR>\fP
Test default array value
.TP
\fB\fB\-\-default-map\fR <default: \fI"some:value", "another:value"\fR>\fP
Testdefault map value
.TP
\fB\fB\-\-env-default1\fR <default: \fI"Some value"\fR>\fP
Test env-default1 value
.TP
\fB\fB\-\-env-default2\fR <default: \fI%s\fR>\fP
Test env-default2 value
.TP
\fB\fB\-\-opt-with-arg-name\fR \fIsomething\fR\fP
Option with named argument
.TP
\fB\fB\-s\fR <default: \fI"some", "value"\fR>\fP
A slice of strings
.TP
\fB\fB\-\-intmap\fR <default: \fI"a:1"\fR>\fP
A map from string to int
.TP
\fB\fB\-\-sip.opt\fR\fP
This is a subgroup option
.TP
\fB\fB\-\-sip.sap.opt\fR\fP
This is a subsubgroup option
.SH COMMANDS
.SS command
A command
Longer \fBcommand\fP description
\fBUsage\fP: TestMan [OPTIONS] command [command-OPTIONS]
\fBAliases\fP: cm, cmd
.TP
\fB\fB\-\-extra-verbose\fR\fP
Use for extra verbosity
`, tt.Format("2 January 2006"), envDefaultName)
assertDiff(t, got, expected, "man page")
}
type helpCommandNoOptions struct {
Command struct {
} `command:"command" description:"A command"`
}
func TestHelpCommand(t *testing.T) {
oldEnv := EnvSnapshot()
defer oldEnv.Restore()
os.Setenv("ENV_DEFAULT", "env-def")
var opts helpCommandNoOptions
p := NewNamedParser("TestHelpCommand", HelpFlag)
p.AddGroup("Application Options", "The application options", &opts)
_, err := p.ParseArgs([]string{"command", "--help"})
if err == nil {
t.Fatalf("Expected help error")
}
if e, ok := err.(*Error); !ok {
t.Fatalf("Expected flags.Error, but got %T", err)
} else {
if e.Type != ErrHelp {
t.Errorf("Expected flags.ErrHelp type, but got %s", e.Type)
}
var expected string
if runtime.GOOS == "windows" {
expected = `Usage:
TestHelpCommand [OPTIONS] command
Help Options:
/? Show this help message
/h, /help Show this help message
`
} else {
expected = `Usage:
TestHelpCommand [OPTIONS] command
Help Options:
-h, --help Show this help message
`
}
assertDiff(t, e.Message, expected, "help message")
}
}


@@ -1,140 +0,0 @@
package flags
import (
"fmt"
"io"
)
// IniError contains location information on where an error occurred.
type IniError struct {
// The error message.
Message string
// The filename of the file in which the error occurred.
File string
// The line number at which the error occurred.
LineNumber uint
}
// Error provides a "file:line: message" formatted message of the ini error.
func (x *IniError) Error() string {
return fmt.Sprintf(
"%s:%d: %s",
x.File,
x.LineNumber,
x.Message,
)
}
// IniOptions for writing
type IniOptions uint
const (
// IniNone indicates no options.
IniNone IniOptions = 0
// IniIncludeDefaults indicates that default values should be written.
IniIncludeDefaults = 1 << iota
// IniCommentDefaults indicates that if IniIncludeDefaults is used
// options with default values are written but commented out.
IniCommentDefaults
// IniIncludeComments indicates that comments containing the description
// of an option should be written.
IniIncludeComments
// IniDefault provides a default set of options.
IniDefault = IniIncludeComments
)
// IniParser is a utility to read and write flags options from and to ini
// formatted strings.
type IniParser struct {
parser *Parser
}
// NewIniParser creates a new ini parser for a given Parser.
func NewIniParser(p *Parser) *IniParser {
return &IniParser{
parser: p,
}
}
// IniParse is a convenience function to parse command line options with default
// settings from an ini formatted file. The provided data is a pointer to a struct
// representing the default option group (named "Application Options"). For
// more control, use flags.NewParser.
func IniParse(filename string, data interface{}) error {
p := NewParser(data, Default)
return NewIniParser(p).ParseFile(filename)
}
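
A short usage sketch for IniParse; the file name, struct, and import path are illustrative assumptions:

package main

import (
    "log"

    flags "github.com/jessevdk/go-flags" // import path assumed (upstream go-flags)
)

func main() {
    var cfg struct {
        Value int `long:"value"`
    }

    // config.ini is a hypothetical file containing, for example:
    //   [Application Options]
    //   value = 123
    if err := flags.IniParse("config.ini", &cfg); err != nil {
        log.Fatalf("reading config: %v", err)
    }
    log.Printf("value = %d", cfg.Value)
}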
// ParseFile parses flags from an ini formatted file. See Parse for more
// information on the ini file format. The returned errors can be of the type
// flags.Error or flags.IniError.
func (i *IniParser) ParseFile(filename string) error {
i.parser.clearIsSet()
ini, err := readIniFromFile(filename)
if err != nil {
return err
}
return i.parse(ini)
}
// Parse parses flags from an ini format. You can use ParseFile as a
// convenience function to parse from a filename instead of a general
// io.Reader.
//
// The format of the ini file is as follows:
//
// [Option group name]
// option = value
//
// Each section in the ini file represents an option group or command in the
// flags parser. The default flags parser option group (i.e. when using
// flags.Parse) is named 'Application Options'. The ini option name is matched
// in the following order:
//
// 1. Compared to the ini-name tag on the option struct field (if present)
// 2. Compared to the struct field name
// 3. Compared to the option long name (if present)
// 4. Compared to the option short name (if present)
//
// Sections for nested groups and commands can be addressed using a dot `.'
// namespacing notation (e.g. [subcommand.Options]). Group section names are
// matched case-insensitively.
//
// The returned errors can be of the type flags.Error or flags.IniError.
func (i *IniParser) Parse(reader io.Reader) error {
i.parser.clearIsSet()
ini, err := readIni(reader, "")
if err != nil {
return err
}
return i.parse(ini)
}
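
To illustrate the section naming described in the Parse documentation above, a sketch (modelled on the ini tests later in this diff) that parses a nested command group from an in-memory reader; the names are illustrative:

package main

import (
    "log"
    "strings"

    flags "github.com/jessevdk/go-flags" // import path assumed (upstream go-flags)
)

func main() {
    var opts struct {
        Value string `long:"value"`
        Add   struct {
            Other struct {
                O string `long:"other"`
            } `group:"Other Options"`
        } `command:"add"`
    }

    p := flags.NewParser(&opts, flags.Default)
    ini := flags.NewIniParser(p)

    // "add.Other Options" addresses the "Other Options" group nested under
    // the "add" command, using the dot notation described above.
    src := "[Application Options]\nvalue = some value\n\n[add.Other Options]\nother = subgroup\n"
    if err := ini.Parse(strings.NewReader(src)); err != nil {
        log.Fatal(err)
    }
    log.Println(opts.Value, opts.Add.Other.O) // "some value subgroup"
}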
// WriteFile writes the flags as ini format into a file. See WriteIni
// for more information. The returned error occurs when the specified file
// could not be opened for writing.
func (i *IniParser) WriteFile(filename string, options IniOptions) error {
return writeIniToFile(i, filename, options)
}
// Write writes the current values of all the flags to an ini format.
// See Parse for more information on the ini file format. You typically
// call this only after settings have been parsed since the default values of each
// option are stored just before parsing the flags (this is only relevant when
// IniIncludeDefaults is _not_ set in options).
func (i *IniParser) Write(writer io.Writer, options IniOptions) {
writeIni(i, writer, options)
}
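
And the reverse direction, dumping the current option values back out as ini; the writer and option flags here are chosen purely for illustration:

package main

import (
    "os"

    flags "github.com/jessevdk/go-flags" // import path assumed (upstream go-flags)
)

func main() {
    var opts struct {
        Value string `long:"value" default:"hello" description:"A value"`
    }

    p := flags.NewParser(&opts, flags.Default)
    if _, err := p.Parse(); err != nil {
        os.Exit(1)
    }

    // Include defaults and emit option descriptions as ';' comments.
    ini := flags.NewIniParser(p)
    ini.Write(os.Stdout, flags.IniIncludeDefaults|flags.IniIncludeComments)
}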


@@ -1,452 +0,0 @@
package flags
import (
"bufio"
"fmt"
"io"
"os"
"reflect"
"sort"
"strconv"
"strings"
)
type iniValue struct {
Name string
Value string
Quoted bool
LineNumber uint
}
type iniSection []iniValue
type ini struct {
File string
Sections map[string]iniSection
}
func readFullLine(reader *bufio.Reader) (string, error) {
var line []byte
for {
l, more, err := reader.ReadLine()
if err != nil {
return "", err
}
if line == nil && !more {
return string(l), nil
}
line = append(line, l...)
if !more {
break
}
}
return string(line), nil
}
func optionIniName(option *Option) string {
name := option.tag.Get("_read-ini-name")
if len(name) != 0 {
return name
}
name = option.tag.Get("ini-name")
if len(name) != 0 {
return name
}
return option.field.Name
}
func writeGroupIni(cmd *Command, group *Group, namespace string, writer io.Writer, options IniOptions) {
var sname string
if len(namespace) != 0 {
sname = namespace
}
if cmd.Group != group && len(group.ShortDescription) != 0 {
if len(sname) != 0 {
sname += "."
}
sname += group.ShortDescription
}
sectionwritten := false
comments := (options & IniIncludeComments) != IniNone
for _, option := range group.options {
if option.isFunc() {
continue
}
if len(option.tag.Get("no-ini")) != 0 {
continue
}
val := option.value
if (options&IniIncludeDefaults) == IniNone && option.valueIsDefault() {
continue
}
if !sectionwritten {
fmt.Fprintf(writer, "[%s]\n", sname)
sectionwritten = true
}
if comments && len(option.Description) != 0 {
fmt.Fprintf(writer, "; %s\n", option.Description)
}
oname := optionIniName(option)
commentOption := (options&(IniIncludeDefaults|IniCommentDefaults)) == IniIncludeDefaults|IniCommentDefaults && option.valueIsDefault()
kind := val.Type().Kind()
switch kind {
case reflect.Slice:
kind = val.Type().Elem().Kind()
if val.Len() == 0 {
writeOption(writer, oname, kind, "", "", true, option.iniQuote)
} else {
for idx := 0; idx < val.Len(); idx++ {
v, _ := convertToString(val.Index(idx), option.tag)
writeOption(writer, oname, kind, "", v, commentOption, option.iniQuote)
}
}
case reflect.Map:
kind = val.Type().Elem().Kind()
if val.Len() == 0 {
writeOption(writer, oname, kind, "", "", true, option.iniQuote)
} else {
mkeys := val.MapKeys()
keys := make([]string, len(val.MapKeys()))
kkmap := make(map[string]reflect.Value)
for i, k := range mkeys {
keys[i], _ = convertToString(k, option.tag)
kkmap[keys[i]] = k
}
sort.Strings(keys)
for _, k := range keys {
v, _ := convertToString(val.MapIndex(kkmap[k]), option.tag)
writeOption(writer, oname, kind, k, v, commentOption, option.iniQuote)
}
}
default:
v, _ := convertToString(val, option.tag)
writeOption(writer, oname, kind, "", v, commentOption, option.iniQuote)
}
if comments {
fmt.Fprintln(writer)
}
}
if sectionwritten && !comments {
fmt.Fprintln(writer)
}
}
func writeOption(writer io.Writer, optionName string, optionType reflect.Kind, optionKey string, optionValue string, commentOption bool, forceQuote bool) {
if forceQuote || (optionType == reflect.String && !isPrint(optionValue)) {
optionValue = strconv.Quote(optionValue)
}
comment := ""
if commentOption {
comment = "; "
}
fmt.Fprintf(writer, "%s%s =", comment, optionName)
if optionKey != "" {
fmt.Fprintf(writer, " %s:%s", optionKey, optionValue)
} else if optionValue != "" {
fmt.Fprintf(writer, " %s", optionValue)
}
fmt.Fprintln(writer)
}
func writeCommandIni(command *Command, namespace string, writer io.Writer, options IniOptions) {
command.eachGroup(func(group *Group) {
writeGroupIni(command, group, namespace, writer, options)
})
for _, c := range command.commands {
var nns string
if len(namespace) != 0 {
nns = c.Name + "." + nns
} else {
nns = c.Name
}
writeCommandIni(c, nns, writer, options)
}
}
func writeIni(parser *IniParser, writer io.Writer, options IniOptions) {
writeCommandIni(parser.parser.Command, "", writer, options)
}
func writeIniToFile(parser *IniParser, filename string, options IniOptions) error {
file, err := os.Create(filename)
if err != nil {
return err
}
defer file.Close()
writeIni(parser, file, options)
return nil
}
func readIniFromFile(filename string) (*ini, error) {
file, err := os.Open(filename)
if err != nil {
return nil, err
}
defer file.Close()
return readIni(file, filename)
}
func readIni(contents io.Reader, filename string) (*ini, error) {
ret := &ini{
File: filename,
Sections: make(map[string]iniSection),
}
reader := bufio.NewReader(contents)
// Empty global section
section := make(iniSection, 0, 10)
sectionname := ""
ret.Sections[sectionname] = section
var lineno uint
for {
line, err := readFullLine(reader)
if err == io.EOF {
break
} else if err != nil {
return nil, err
}
lineno++
line = strings.TrimSpace(line)
// Skip empty lines and comment lines starting with ';' or '#'
if len(line) == 0 || line[0] == ';' || line[0] == '#' {
continue
}
if line[0] == '[' {
if line[0] != '[' || line[len(line)-1] != ']' {
return nil, &IniError{
Message: "malformed section header",
File: filename,
LineNumber: lineno,
}
}
name := strings.TrimSpace(line[1 : len(line)-1])
if len(name) == 0 {
return nil, &IniError{
Message: "empty section name",
File: filename,
LineNumber: lineno,
}
}
sectionname = name
section = ret.Sections[name]
if section == nil {
section = make(iniSection, 0, 10)
ret.Sections[name] = section
}
continue
}
// Parse option here
keyval := strings.SplitN(line, "=", 2)
if len(keyval) != 2 {
return nil, &IniError{
Message: fmt.Sprintf("malformed key=value (%s)", line),
File: filename,
LineNumber: lineno,
}
}
name := strings.TrimSpace(keyval[0])
value := strings.TrimSpace(keyval[1])
quoted := false
if len(value) != 0 && value[0] == '"' {
if v, err := strconv.Unquote(value); err == nil {
value = v
quoted = true
} else {
return nil, &IniError{
Message: err.Error(),
File: filename,
LineNumber: lineno,
}
}
}
section = append(section, iniValue{
Name: name,
Value: value,
Quoted: quoted,
LineNumber: lineno,
})
ret.Sections[sectionname] = section
}
return ret, nil
}
func (i *IniParser) matchingGroups(name string) []*Group {
if len(name) == 0 {
var ret []*Group
i.parser.eachGroup(func(g *Group) {
ret = append(ret, g)
})
return ret
}
g := i.parser.groupByName(name)
if g != nil {
return []*Group{g}
}
return nil
}
func (i *IniParser) parse(ini *ini) error {
p := i.parser
var quotesLookup = make(map[*Option]bool)
for name, section := range ini.Sections {
groups := i.matchingGroups(name)
if len(groups) == 0 {
return newErrorf(ErrUnknownGroup, "could not find option group `%s'", name)
}
for _, inival := range section {
var opt *Option
for _, group := range groups {
opt = group.optionByName(inival.Name, func(o *Option, n string) bool {
return strings.ToLower(o.tag.Get("ini-name")) == strings.ToLower(n)
})
if opt != nil && len(opt.tag.Get("no-ini")) != 0 {
opt = nil
}
if opt != nil {
break
}
}
if opt == nil {
if (p.Options & IgnoreUnknown) == None {
return &IniError{
Message: fmt.Sprintf("unknown option: %s", inival.Name),
File: ini.File,
LineNumber: inival.LineNumber,
}
}
continue
}
pval := &inival.Value
if !opt.canArgument() && len(inival.Value) == 0 {
pval = nil
} else {
if opt.value.Type().Kind() == reflect.Map {
parts := strings.SplitN(inival.Value, ":", 2)
// only handle unquoting
if len(parts) == 2 && parts[1][0] == '"' {
if v, err := strconv.Unquote(parts[1]); err == nil {
parts[1] = v
inival.Quoted = true
} else {
return &IniError{
Message: err.Error(),
File: ini.File,
LineNumber: inival.LineNumber,
}
}
s := parts[0] + ":" + parts[1]
pval = &s
}
}
}
if err := opt.set(pval); err != nil {
return &IniError{
Message: err.Error(),
File: ini.File,
LineNumber: inival.LineNumber,
}
}
// either all INI values are quoted or only the values that need quoting
if _, ok := quotesLookup[opt]; !inival.Quoted || !ok {
quotesLookup[opt] = inival.Quoted
}
opt.tag.Set("_read-ini-name", inival.Name)
}
}
for opt, quoted := range quotesLookup {
opt.iniQuote = quoted
}
return nil
}


@@ -1,767 +0,0 @@
package flags
import (
"bytes"
"fmt"
"io/ioutil"
"os"
"reflect"
"strings"
"testing"
)
func TestWriteIni(t *testing.T) {
oldEnv := EnvSnapshot()
defer oldEnv.Restore()
os.Setenv("ENV_DEFAULT", "env-def")
var tests = []struct {
args []string
options IniOptions
expected string
}{
{
[]string{"-vv", "--intmap=a:2", "--intmap", "b:3", "filename", "0", "command"},
IniDefault,
`[Application Options]
; Show verbose debug information
verbose = true
verbose = true
; Test env-default1 value
EnvDefault1 = env-def
; Test env-default2 value
EnvDefault2 = env-def
[Other Options]
; A map from string to int
int-map = a:2
int-map = b:3
`,
},
{
[]string{"-vv", "--intmap=a:2", "--intmap", "b:3", "filename", "0", "command"},
IniDefault | IniIncludeDefaults,
`[Application Options]
; Show verbose debug information
verbose = true
verbose = true
; A slice of pointers to string
; PtrSlice =
EmptyDescription = false
; Test default value
Default = "Some\nvalue"
; Test default array value
DefaultArray = Some value
DefaultArray = "Other\tvalue"
; Testdefault map value
DefaultMap = another:value
DefaultMap = some:value
; Test env-default1 value
EnvDefault1 = env-def
; Test env-default2 value
EnvDefault2 = env-def
; Option with named argument
OptionWithArgName =
; Option only available in ini
only-ini =
[Other Options]
; A slice of strings
StringSlice = some
StringSlice = value
; A map from string to int
int-map = a:2
int-map = b:3
[Subgroup]
; This is a subgroup option
Opt =
[Subsubgroup]
; This is a subsubgroup option
Opt =
[command]
; Use for extra verbosity
; ExtraVerbose =
`,
},
{
[]string{"filename", "0", "command"},
IniDefault | IniIncludeDefaults | IniCommentDefaults,
`[Application Options]
; Show verbose debug information
; verbose =
; A slice of pointers to string
; PtrSlice =
; EmptyDescription = false
; Test default value
; Default = "Some\nvalue"
; Test default array value
; DefaultArray = Some value
; DefaultArray = "Other\tvalue"
; Testdefault map value
; DefaultMap = another:value
; DefaultMap = some:value
; Test env-default1 value
EnvDefault1 = env-def
; Test env-default2 value
EnvDefault2 = env-def
; Option with named argument
; OptionWithArgName =
; Option only available in ini
; only-ini =
[Other Options]
; A slice of strings
; StringSlice = some
; StringSlice = value
; A map from string to int
; int-map = a:1
[Subgroup]
; This is a subgroup option
; Opt =
[Subsubgroup]
; This is a subsubgroup option
; Opt =
[command]
; Use for extra verbosity
; ExtraVerbose =
`,
},
{
[]string{"--default=New value", "--default-array=New value", "--default-map=new:value", "filename", "0", "command"},
IniDefault | IniIncludeDefaults | IniCommentDefaults,
`[Application Options]
; Show verbose debug information
; verbose =
; A slice of pointers to string
; PtrSlice =
; EmptyDescription = false
; Test default value
Default = New value
; Test default array value
DefaultArray = New value
; Testdefault map value
DefaultMap = new:value
; Test env-default1 value
EnvDefault1 = env-def
; Test env-default2 value
EnvDefault2 = env-def
; Option with named argument
; OptionWithArgName =
; Option only available in ini
; only-ini =
[Other Options]
; A slice of strings
; StringSlice = some
; StringSlice = value
; A map from string to int
; int-map = a:1
[Subgroup]
; This is a subgroup option
; Opt =
[Subsubgroup]
; This is a subsubgroup option
; Opt =
[command]
; Use for extra verbosity
; ExtraVerbose =
`,
},
}
for _, test := range tests {
var opts helpOptions
p := NewNamedParser("TestIni", Default)
p.AddGroup("Application Options", "The application options", &opts)
_, err := p.ParseArgs(test.args)
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
inip := NewIniParser(p)
var b bytes.Buffer
inip.Write(&b, test.options)
got := b.String()
expected := test.expected
msg := fmt.Sprintf("with arguments %+v and ini options %b", test.args, test.options)
assertDiff(t, got, expected, msg)
}
}
func TestReadIni(t *testing.T) {
var opts helpOptions
p := NewNamedParser("TestIni", Default)
p.AddGroup("Application Options", "The application options", &opts)
inip := NewIniParser(p)
inic := `
; Show verbose debug information
verbose = true
verbose = true
DefaultMap = another:"value\n1"
DefaultMap = some:value 2
[Application Options]
; A slice of pointers to string
; PtrSlice =
; Test default value
Default = "New\nvalue"
; Test env-default1 value
EnvDefault1 = New value
[Other Options]
# A slice of strings
StringSlice = "some\nvalue"
StringSlice = another value
; A map from string to int
int-map = a:2
int-map = b:3
`
b := strings.NewReader(inic)
err := inip.Parse(b)
if err != nil {
t.Fatalf("Unexpected error: %s", err)
}
assertBoolArray(t, opts.Verbose, []bool{true, true})
if v := map[string]string{"another": "value\n1", "some": "value 2"}; !reflect.DeepEqual(opts.DefaultMap, v) {
t.Fatalf("Expected %#v for DefaultMap but got %#v", v, opts.DefaultMap)
}
assertString(t, opts.Default, "New\nvalue")
assertString(t, opts.EnvDefault1, "New value")
assertStringArray(t, opts.Other.StringSlice, []string{"some\nvalue", "another value"})
if v, ok := opts.Other.IntMap["a"]; !ok {
t.Errorf("Expected \"a\" in Other.IntMap")
} else if v != 2 {
t.Errorf("Expected Other.IntMap[\"a\"] = 2, but got %v", v)
}
if v, ok := opts.Other.IntMap["b"]; !ok {
t.Errorf("Expected \"b\" in Other.IntMap")
} else if v != 3 {
t.Errorf("Expected Other.IntMap[\"b\"] = 3, but got %v", v)
}
}
func TestReadAndWriteIni(t *testing.T) {
var tests = []struct {
options IniOptions
read string
write string
}{
{
IniIncludeComments,
`[Application Options]
; Show verbose debug information
verbose = true
verbose = true
; Test default value
Default = "quote me"
; Test default array value
DefaultArray = 1
DefaultArray = "2"
DefaultArray = 3
; Testdefault map value
; DefaultMap =
; Test env-default1 value
EnvDefault1 = env-def
; Test env-default2 value
EnvDefault2 = env-def
[Other Options]
; A slice of strings
; StringSlice =
; A map from string to int
int-map = a:2
int-map = b:"3"
`,
`[Application Options]
; Show verbose debug information
verbose = true
verbose = true
; Test default value
Default = "quote me"
; Test default array value
DefaultArray = 1
DefaultArray = 2
DefaultArray = 3
; Testdefault map value
; DefaultMap =
; Test env-default1 value
EnvDefault1 = env-def
; Test env-default2 value
EnvDefault2 = env-def
[Other Options]
; A slice of strings
; StringSlice =
; A map from string to int
int-map = a:2
int-map = b:3
`,
},
{
IniIncludeComments,
`[Application Options]
; Show verbose debug information
verbose = true
verbose = true
; Test default value
Default = "quote me"
; Test default array value
DefaultArray = "1"
DefaultArray = "2"
DefaultArray = "3"
; Testdefault map value
; DefaultMap =
; Test env-default1 value
EnvDefault1 = env-def
; Test env-default2 value
EnvDefault2 = env-def
[Other Options]
; A slice of strings
; StringSlice =
; A map from string to int
int-map = a:"2"
int-map = b:"3"
`,
`[Application Options]
; Show verbose debug information
verbose = true
verbose = true
; Test default value
Default = "quote me"
; Test default array value
DefaultArray = "1"
DefaultArray = "2"
DefaultArray = "3"
; Testdefault map value
; DefaultMap =
; Test env-default1 value
EnvDefault1 = env-def
; Test env-default2 value
EnvDefault2 = env-def
[Other Options]
; A slice of strings
; StringSlice =
; A map from string to int
int-map = a:"2"
int-map = b:"3"
`,
},
}
for _, test := range tests {
var opts helpOptions
p := NewNamedParser("TestIni", Default)
p.AddGroup("Application Options", "The application options", &opts)
inip := NewIniParser(p)
read := strings.NewReader(test.read)
err := inip.Parse(read)
if err != nil {
t.Fatalf("Unexpected error: %s", err)
}
var write bytes.Buffer
inip.Write(&write, test.options)
got := write.String()
msg := fmt.Sprintf("with ini options %b", test.options)
assertDiff(t, got, test.write, msg)
}
}
func TestReadIniWrongQuoting(t *testing.T) {
var tests = []struct {
iniFile string
lineNumber uint
}{
{
iniFile: `Default = "New\nvalue`,
lineNumber: 1,
},
{
iniFile: `StringSlice = "New\nvalue`,
lineNumber: 1,
},
{
iniFile: `StringSlice = "New\nvalue"
StringSlice = "Second\nvalue`,
lineNumber: 2,
},
{
iniFile: `DefaultMap = some:"value`,
lineNumber: 1,
},
{
iniFile: `DefaultMap = some:value
DefaultMap = another:"value`,
lineNumber: 2,
},
}
for _, test := range tests {
var opts helpOptions
p := NewNamedParser("TestIni", Default)
p.AddGroup("Application Options", "The application options", &opts)
inip := NewIniParser(p)
inic := test.iniFile
b := strings.NewReader(inic)
err := inip.Parse(b)
if err == nil {
t.Fatalf("Expect error")
}
iniError := err.(*IniError)
if iniError.LineNumber != test.lineNumber {
t.Fatalf("Expect error on line %d", test.lineNumber)
}
}
}
func TestIniCommands(t *testing.T) {
var opts struct {
Value string `short:"v" long:"value"`
Add struct {
Name int `short:"n" long:"name" ini-name:"AliasName"`
Other struct {
O string `short:"o" long:"other"`
} `group:"Other Options"`
} `command:"add"`
}
p := NewNamedParser("TestIni", Default)
p.AddGroup("Application Options", "The application options", &opts)
inip := NewIniParser(p)
inic := `[Application Options]
value = some value
[add]
AliasName = 5
[add.Other Options]
other = subgroup
`
b := strings.NewReader(inic)
err := inip.Parse(b)
if err != nil {
t.Fatalf("Unexpected error: %s", err)
}
assertString(t, opts.Value, "some value")
if opts.Add.Name != 5 {
t.Errorf("Expected opts.Add.Name to be 5, but got %v", opts.Add.Name)
}
assertString(t, opts.Add.Other.O, "subgroup")
// Test writing it back
buf := &bytes.Buffer{}
inip.Write(buf, IniDefault)
assertDiff(t, buf.String(), inic, "ini contents")
}
func TestIniNoIni(t *testing.T) {
var opts struct {
NoValue string `short:"n" long:"novalue" no-ini:"yes"`
Value string `short:"v" long:"value"`
}
p := NewNamedParser("TestIni", Default)
p.AddGroup("Application Options", "The application options", &opts)
inip := NewIniParser(p)
// read INI
inic := `[Application Options]
novalue = some value
value = some other value
`
b := strings.NewReader(inic)
err := inip.Parse(b)
if err == nil {
t.Fatalf("Expected error")
}
iniError := err.(*IniError)
if v := uint(2); iniError.LineNumber != v {
t.Errorf("Expected opts.Add.Name to be %d, but got %d", v, iniError.LineNumber)
}
if v := "unknown option: novalue"; iniError.Message != v {
t.Errorf("Expected opts.Add.Name to be %s, but got %s", v, iniError.Message)
}
// write INI
opts.NoValue = "some value"
opts.Value = "some other value"
file, err := ioutil.TempFile("", "")
if err != nil {
t.Fatalf("Cannot create temporary file: %s", err)
}
defer os.Remove(file.Name())
err = inip.WriteFile(file.Name(), IniIncludeDefaults)
if err != nil {
t.Fatalf("Could not write ini file: %s", err)
}
found, err := ioutil.ReadFile(file.Name())
if err != nil {
t.Fatalf("Could not read written ini file: %s", err)
}
expected := "[Application Options]\nValue = some other value\n\n"
assertDiff(t, string(found), expected, "ini content")
}
func TestIniParse(t *testing.T) {
file, err := ioutil.TempFile("", "")
if err != nil {
t.Fatalf("Cannot create temporary file: %s", err)
}
defer os.Remove(file.Name())
_, err = file.WriteString("value = 123")
if err != nil {
t.Fatalf("Cannot write to temporary file: %s", err)
}
file.Close()
var opts struct {
Value int `long:"value"`
}
err = IniParse(file.Name(), &opts)
if err != nil {
t.Fatalf("Could not parse ini: %s", err)
}
if opts.Value != 123 {
t.Fatalf("Expected Value to be \"123\" but was \"%d\"", opts.Value)
}
}
func TestWriteFile(t *testing.T) {
file, err := ioutil.TempFile("", "")
if err != nil {
t.Fatalf("Cannot create temporary file: %s", err)
}
defer os.Remove(file.Name())
var opts struct {
Value int `long:"value"`
}
opts.Value = 123
p := NewParser(&opts, Default)
ini := NewIniParser(p)
err = ini.WriteFile(file.Name(), IniIncludeDefaults)
if err != nil {
t.Fatalf("Could not write ini file: %s", err)
}
found, err := ioutil.ReadFile(file.Name())
if err != nil {
t.Fatalf("Could not read written ini file: %s", err)
}
expected := "[Application Options]\nValue = 123\n\n"
assertDiff(t, string(found), expected, "ini content")
}
func TestOverwriteRequiredOptions(t *testing.T) {
var tests = []struct {
args []string
expected []string
}{
{
args: []string{"--value", "from CLI"},
expected: []string{
"from CLI",
"from default",
},
},
{
args: []string{"--value", "from CLI", "--default", "from CLI"},
expected: []string{
"from CLI",
"from CLI",
},
},
{
args: []string{"--config", "no file name"},
expected: []string{
"from INI",
"from INI",
},
},
{
args: []string{"--value", "from CLI before", "--default", "from CLI before", "--config", "no file name"},
expected: []string{
"from INI",
"from INI",
},
},
{
args: []string{"--value", "from CLI before", "--default", "from CLI before", "--config", "no file name", "--value", "from CLI after", "--default", "from CLI after"},
expected: []string{
"from CLI after",
"from CLI after",
},
},
}
for _, test := range tests {
var opts struct {
Config func(s string) error `long:"config" no-ini:"true"`
Value string `long:"value" required:"true"`
Default string `long:"default" required:"true" default:"from default"`
}
p := NewParser(&opts, Default)
opts.Config = func(s string) error {
ini := NewIniParser(p)
return ini.Parse(bytes.NewBufferString("value = from INI\ndefault = from INI"))
}
_, err := p.ParseArgs(test.args)
if err != nil {
t.Fatalf("Unexpected error %s with args %+v", err, test.args)
}
if opts.Value != test.expected[0] {
t.Fatalf("Expected Value to be \"%s\" but was \"%s\" with args %+v", test.expected[0], opts.Value, test.args)
}
if opts.Default != test.expected[1] {
t.Fatalf("Expected Default to be \"%s\" but was \"%s\" with args %+v", test.expected[1], opts.Default, test.args)
}
}
}


@@ -1,85 +0,0 @@
package flags
import (
"testing"
)
func TestLong(t *testing.T) {
var opts = struct {
Value bool `long:"value"`
}{}
ret := assertParseSuccess(t, &opts, "--value")
assertStringArray(t, ret, []string{})
if !opts.Value {
t.Errorf("Expected Value to be true")
}
}
func TestLongArg(t *testing.T) {
var opts = struct {
Value string `long:"value"`
}{}
ret := assertParseSuccess(t, &opts, "--value", "value")
assertStringArray(t, ret, []string{})
assertString(t, opts.Value, "value")
}
func TestLongArgEqual(t *testing.T) {
var opts = struct {
Value string `long:"value"`
}{}
ret := assertParseSuccess(t, &opts, "--value=value")
assertStringArray(t, ret, []string{})
assertString(t, opts.Value, "value")
}
func TestLongDefault(t *testing.T) {
var opts = struct {
Value string `long:"value" default:"value"`
}{}
ret := assertParseSuccess(t, &opts)
assertStringArray(t, ret, []string{})
assertString(t, opts.Value, "value")
}
func TestLongOptional(t *testing.T) {
var opts = struct {
Value string `long:"value" optional:"yes" optional-value:"value"`
}{}
ret := assertParseSuccess(t, &opts, "--value")
assertStringArray(t, ret, []string{})
assertString(t, opts.Value, "value")
}
func TestLongOptionalArg(t *testing.T) {
var opts = struct {
Value string `long:"value" optional:"yes" optional-value:"value"`
}{}
ret := assertParseSuccess(t, &opts, "--value", "no")
assertStringArray(t, ret, []string{"no"})
assertString(t, opts.Value, "value")
}
func TestLongOptionalArgEqual(t *testing.T) {
var opts = struct {
Value string `long:"value" optional:"yes" optional-value:"value"`
}{}
ret := assertParseSuccess(t, &opts, "--value=value", "no")
assertStringArray(t, ret, []string{"no"})
assertString(t, opts.Value, "value")
}


@@ -1,186 +0,0 @@
package flags
import (
"fmt"
"io"
"runtime"
"strings"
"time"
)
func manQuote(s string) string {
return strings.Replace(s, "\\", "\\\\", -1)
}
func formatForMan(wr io.Writer, s string) {
for {
idx := strings.IndexRune(s, '`')
if idx < 0 {
fmt.Fprintf(wr, "%s", manQuote(s))
break
}
fmt.Fprintf(wr, "%s", manQuote(s[:idx]))
s = s[idx+1:]
idx = strings.IndexRune(s, '\'')
if idx < 0 {
fmt.Fprintf(wr, "%s", manQuote(s))
break
}
fmt.Fprintf(wr, "\\fB%s\\fP", manQuote(s[:idx]))
s = s[idx+1:]
}
}
func writeManPageOptions(wr io.Writer, grp *Group) {
grp.eachGroup(func(group *Group) {
for _, opt := range group.options {
if !opt.canCli() {
continue
}
fmt.Fprintln(wr, ".TP")
fmt.Fprintf(wr, "\\fB")
if opt.ShortName != 0 {
fmt.Fprintf(wr, "\\fB\\-%c\\fR", opt.ShortName)
}
if len(opt.LongName) != 0 {
if opt.ShortName != 0 {
fmt.Fprintf(wr, ", ")
}
fmt.Fprintf(wr, "\\fB\\-\\-%s\\fR", manQuote(opt.LongNameWithNamespace()))
}
if len(opt.ValueName) != 0 || opt.OptionalArgument {
if opt.OptionalArgument {
fmt.Fprintf(wr, " [\\fI%s=%s\\fR]", manQuote(opt.ValueName), manQuote(strings.Join(quoteV(opt.OptionalValue), ", ")))
} else {
fmt.Fprintf(wr, " \\fI%s\\fR", manQuote(opt.ValueName))
}
}
if len(opt.Default) != 0 {
fmt.Fprintf(wr, " <default: \\fI%s\\fR>", manQuote(strings.Join(quoteV(opt.Default), ", ")))
} else if len(opt.EnvDefaultKey) != 0 {
if runtime.GOOS == "windows" {
fmt.Fprintf(wr, " <default: \\fI%%%s%%\\fR>", manQuote(opt.EnvDefaultKey))
} else {
fmt.Fprintf(wr, " <default: \\fI$%s\\fR>", manQuote(opt.EnvDefaultKey))
}
}
if opt.Required {
fmt.Fprintf(wr, " (\\fIrequired\\fR)")
}
fmt.Fprintln(wr, "\\fP")
if len(opt.Description) != 0 {
formatForMan(wr, opt.Description)
fmt.Fprintln(wr, "")
}
}
})
}
func writeManPageSubcommands(wr io.Writer, name string, root *Command) {
commands := root.sortedCommands()
for _, c := range commands {
var nn string
if len(name) != 0 {
nn = name + " " + c.Name
} else {
nn = c.Name
}
writeManPageCommand(wr, nn, root, c)
}
}
func writeManPageCommand(wr io.Writer, name string, root *Command, command *Command) {
fmt.Fprintf(wr, ".SS %s\n", name)
fmt.Fprintln(wr, command.ShortDescription)
if len(command.LongDescription) > 0 {
fmt.Fprintln(wr, "")
cmdstart := fmt.Sprintf("The %s command", manQuote(command.Name))
if strings.HasPrefix(command.LongDescription, cmdstart) {
fmt.Fprintf(wr, "The \\fI%s\\fP command", manQuote(command.Name))
formatForMan(wr, command.LongDescription[len(cmdstart):])
fmt.Fprintln(wr, "")
} else {
formatForMan(wr, command.LongDescription)
fmt.Fprintln(wr, "")
}
}
var usage string
if us, ok := command.data.(Usage); ok {
usage = us.Usage()
} else if command.hasCliOptions() {
usage = fmt.Sprintf("[%s-OPTIONS]", command.Name)
}
var pre string
if root.hasCliOptions() {
pre = fmt.Sprintf("%s [OPTIONS] %s", root.Name, command.Name)
} else {
pre = fmt.Sprintf("%s %s", root.Name, command.Name)
}
if len(usage) > 0 {
fmt.Fprintf(wr, "\n\\fBUsage\\fP: %s %s\n\n", manQuote(pre), manQuote(usage))
}
if len(command.Aliases) > 0 {
fmt.Fprintf(wr, "\n\\fBAliases\\fP: %s\n\n", manQuote(strings.Join(command.Aliases, ", ")))
}
writeManPageOptions(wr, command.Group)
writeManPageSubcommands(wr, name, command)
}
// WriteManPage writes a basic man page in groff format to the specified
// writer.
func (p *Parser) WriteManPage(wr io.Writer) {
t := time.Now()
fmt.Fprintf(wr, ".TH %s 1 \"%s\"\n", manQuote(p.Name), t.Format("2 January 2006"))
fmt.Fprintln(wr, ".SH NAME")
fmt.Fprintf(wr, "%s \\- %s\n", manQuote(p.Name), manQuote(p.ShortDescription))
fmt.Fprintln(wr, ".SH SYNOPSIS")
usage := p.Usage
if len(usage) == 0 {
usage = "[OPTIONS]"
}
fmt.Fprintf(wr, "\\fB%s\\fP %s\n", manQuote(p.Name), manQuote(usage))
fmt.Fprintln(wr, ".SH DESCRIPTION")
formatForMan(wr, p.LongDescription)
fmt.Fprintln(wr, "")
fmt.Fprintln(wr, ".SH OPTIONS")
writeManPageOptions(wr, p.Command.Group)
if len(p.commands) > 0 {
fmt.Fprintln(wr, ".SH COMMANDS")
writeManPageSubcommands(wr, "", p.Command)
}
}
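
A sketch of generating a man page with this function; the parser setup mirrors the TestMan test earlier in this diff, and the application name and descriptions are illustrative:

package main

import (
    "os"

    flags "github.com/jessevdk/go-flags" // import path assumed (upstream go-flags)
)

func main() {
    var opts struct {
        Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`
    }

    p := flags.NewNamedParser("myapp", flags.Default)
    p.ShortDescription = "an example application"
    p.LongDescription = "A longer description that ends up in the DESCRIPTION section."
    p.AddGroup("Application Options", "The application options", &opts)

    // Emits groff output; save it as myapp.1 and view it with `man -l myapp.1`.
    p.WriteManPage(os.Stdout)
}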


@@ -1,97 +0,0 @@
package flags
import (
"fmt"
"testing"
)
type marshalled bool
func (m *marshalled) UnmarshalFlag(value string) error {
if value == "yes" {
*m = true
} else if value == "no" {
*m = false
} else {
return fmt.Errorf("`%s' is not a valid value, please specify `yes' or `no'", value)
}
return nil
}
func (m marshalled) MarshalFlag() (string, error) {
if m {
return "yes", nil
}
return "no", nil
}
type marshalledError bool
func (m marshalledError) MarshalFlag() (string, error) {
return "", newErrorf(ErrMarshal, "Failed to marshal")
}
func TestUnmarshal(t *testing.T) {
var opts = struct {
Value marshalled `short:"v"`
}{}
ret := assertParseSuccess(t, &opts, "-v=yes")
assertStringArray(t, ret, []string{})
if !opts.Value {
t.Errorf("Expected Value to be true")
}
}
func TestUnmarshalDefault(t *testing.T) {
var opts = struct {
Value marshalled `short:"v" default:"yes"`
}{}
ret := assertParseSuccess(t, &opts)
assertStringArray(t, ret, []string{})
if !opts.Value {
t.Errorf("Expected Value to be true")
}
}
func TestUnmarshalOptional(t *testing.T) {
var opts = struct {
Value marshalled `short:"v" optional:"yes" optional-value:"yes"`
}{}
ret := assertParseSuccess(t, &opts, "-v")
assertStringArray(t, ret, []string{})
if !opts.Value {
t.Errorf("Expected Value to be true")
}
}
func TestUnmarshalError(t *testing.T) {
var opts = struct {
Value marshalled `short:"v"`
}{}
assertParseFail(t, ErrMarshal, fmt.Sprintf("invalid argument for flag `%cv' (expected flags.marshalled): `invalid' is not a valid value, please specify `yes' or `no'", defaultShortOptDelimiter), &opts, "-vinvalid")
}
func TestMarshalError(t *testing.T) {
var opts = struct {
Value marshalledError `short:"v"`
}{}
p := NewParser(&opts, Default)
o := p.Command.Groups()[0].Options()[0]
_, err := convertToString(o.value, o.tag)
assertError(t, err, ErrMarshal, "Failed to marshal")
}


@@ -1,140 +0,0 @@
package flags
import (
"strconv"
)
type multiTag struct {
value string
cache map[string][]string
}
func newMultiTag(v string) multiTag {
return multiTag{
value: v,
}
}
func (x *multiTag) scan() (map[string][]string, error) {
v := x.value
ret := make(map[string][]string)
// This is mostly copied from reflect.StructTag.Get
for v != "" {
i := 0
// Skip whitespace
for i < len(v) && v[i] == ' ' {
i++
}
v = v[i:]
if v == "" {
break
}
// Scan to colon to find key
i = 0
for i < len(v) && v[i] != ' ' && v[i] != ':' && v[i] != '"' {
i++
}
if i >= len(v) {
return nil, newErrorf(ErrTag, "expected `:' after key name, but got end of tag (in `%v`)", x.value)
}
if v[i] != ':' {
return nil, newErrorf(ErrTag, "expected `:' after key name, but got `%v' (in `%v`)", v[i], x.value)
}
if i+1 >= len(v) {
return nil, newErrorf(ErrTag, "expected `\"' to start tag value at end of tag (in `%v`)", x.value)
}
if v[i+1] != '"' {
return nil, newErrorf(ErrTag, "expected `\"' to start tag value, but got `%v' (in `%v`)", v[i+1], x.value)
}
name := v[:i]
v = v[i+1:]
// Scan quoted string to find value
i = 1
for i < len(v) && v[i] != '"' {
if v[i] == '\n' {
return nil, newErrorf(ErrTag, "unexpected newline in tag value `%v' (in `%v`)", name, x.value)
}
if v[i] == '\\' {
i++
}
i++
}
if i >= len(v) {
return nil, newErrorf(ErrTag, "expected end of tag value `\"' at end of tag (in `%v`)", x.value)
}
val, err := strconv.Unquote(v[:i+1])
if err != nil {
return nil, newErrorf(ErrTag, "Malformed value of tag `%v:%v` => %v (in `%v`)", name, v[:i+1], err, x.value)
}
v = v[i+1:]
ret[name] = append(ret[name], val)
}
return ret, nil
}
func (x *multiTag) Parse() error {
vals, err := x.scan()
x.cache = vals
return err
}
func (x *multiTag) cached() map[string][]string {
if x.cache == nil {
cache, _ := x.scan()
if cache == nil {
cache = make(map[string][]string)
}
x.cache = cache
}
return x.cache
}
func (x *multiTag) Get(key string) string {
c := x.cached()
if v, ok := c[key]; ok {
return v[len(v)-1]
}
return ""
}
func (x *multiTag) GetMany(key string) []string {
c := x.cached()
return c[key]
}
func (x *multiTag) Set(key string, value string) {
c := x.cached()
c[key] = []string{value}
}
func (x *multiTag) SetMany(key string, value []string) {
c := x.cached()
c[key] = value
}


@@ -1,162 +0,0 @@
package flags
import (
"fmt"
"reflect"
"unicode/utf8"
)
// Option flag information. Contains a description of the option, short and
// long name as well as a default value and whether an argument for this
// flag is optional.
type Option struct {
// The description of the option flag. This description is shown
// automatically in the built-in help.
Description string
// The short name of the option (a single character). If not 0, the
// option flag can be 'activated' using -<ShortName>. Either ShortName
// or LongName needs to be non-empty.
ShortName rune
// The long name of the option. If not "", the option flag can be
// activated using --<LongName>. Either ShortName or LongName needs
// to be non-empty.
LongName string
// The default value of the option.
Default []string
// The optional environment default value key name.
EnvDefaultKey string
// The optional delimiter string for EnvDefaultKey values.
EnvDefaultDelim string
// If true, specifies that the argument to an option flag is optional.
// When no argument to the flag is specified on the command line, the
// value of OptionalValue will be set in the field this option represents.
// This is only valid for non-boolean options.
OptionalArgument bool
// The optional value of the option. The optional value is used when
// the option flag is marked as having an OptionalArgument. This means
// that when the flag is specified, but no option argument is given,
// the value of the field this option represents will be set to
// OptionalValue. This is only valid for non-boolean options.
OptionalValue []string
// If true, the option _must_ be specified on the command line. If the
// option is not specified, the parser will generate an ErrRequired type
// error.
Required bool
// A name for the value of an option shown in the Help as --flag [ValueName]
ValueName string
// A mask value to show in the help instead of the default value. This
// is useful for hiding sensitive information in the help, such as
// passwords.
DefaultMask string
// The group which the option belongs to
group *Group
// The struct field which the option represents.
field reflect.StructField
// The struct field value which the option represents.
value reflect.Value
// Determines if the option will be always quoted in the INI output
iniQuote bool
tag multiTag
isSet bool
}
// LongNameWithNamespace returns the option's long name with the group namespaces
// prepended by walking up the option's group tree. Namespaces and the long name
// itself are separated by the parser's namespace delimiter. If the long name is
// empty an empty string is returned.
func (option *Option) LongNameWithNamespace() string {
if len(option.LongName) == 0 {
return ""
}
// fetch the namespace delimiter from the parser which is always at the
// end of the group hierarchy
namespaceDelimiter := ""
g := option.group
for {
if p, ok := g.parent.(*Parser); ok {
namespaceDelimiter = p.NamespaceDelimiter
break
}
switch i := g.parent.(type) {
case *Command:
g = i.Group
case *Group:
g = i
}
}
// concatenate long name with namespace
longName := option.LongName
g = option.group
for g != nil {
if g.Namespace != "" {
longName = g.Namespace + namespaceDelimiter + longName
}
switch i := g.parent.(type) {
case *Command:
g = i.Group
case *Group:
g = i
case *Parser:
g = nil
}
}
return longName
}
// String converts an option to a human-readable string describing the
// option.
func (option *Option) String() string {
var s string
var short string
if option.ShortName != 0 {
data := make([]byte, utf8.RuneLen(option.ShortName))
utf8.EncodeRune(data, option.ShortName)
short = string(data)
if len(option.LongName) != 0 {
s = fmt.Sprintf("%s%s, %s%s",
string(defaultShortOptDelimiter), short,
defaultLongOptDelimiter, option.LongNameWithNamespace())
} else {
s = fmt.Sprintf("%s%s", string(defaultShortOptDelimiter), short)
}
} else if len(option.LongName) != 0 {
s = fmt.Sprintf("%s%s", defaultLongOptDelimiter, option.LongNameWithNamespace())
}
return s
}
// Value returns the option value as an interface{}.
func (option *Option) Value() interface{} {
return option.value.Interface()
}
// IsSet returns true if the option has been set
func (option *Option) IsSet() bool {
return option.isSet
}
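
For reference, a sketch of inspecting these Option values through a parser after parsing. The group/option traversal follows the pattern used in the tests in this diff; the option struct is an assumption:

package main

import (
    "fmt"
    "os"

    flags "github.com/jessevdk/go-flags" // import path assumed (upstream go-flags)
)

func main() {
    var opts struct {
        Value   string `short:"v" long:"value" description:"A value"`
        Verbose bool   `long:"verbose" description:"Be verbose"`
    }

    p := flags.NewParser(&opts, flags.Default)
    if _, err := p.ParseArgs(os.Args[1:]); err != nil {
        os.Exit(1)
    }

    // Groups()[0] is the "Application Options" group created by NewParser.
    for _, opt := range p.Command.Groups()[0].Options() {
        fmt.Printf("%-20s set=%v value=%v\n", opt.String(), opt.IsSet(), opt.Value())
    }
}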


@@ -1,182 +0,0 @@
package flags
import (
"reflect"
"strings"
"syscall"
)
// Set the value of an option to the specified value. An error will be returned
// if the specified value could not be converted to the corresponding option
// value type.
func (option *Option) set(value *string) error {
option.isSet = true
if option.isFunc() {
return option.call(value)
} else if value != nil {
return convert(*value, option.value, option.tag)
}
return convert("", option.value, option.tag)
}
func (option *Option) canCli() bool {
return option.ShortName != 0 || len(option.LongName) != 0
}
func (option *Option) canArgument() bool {
if u := option.isUnmarshaler(); u != nil {
return true
}
return !option.isBool()
}
func (option *Option) emptyValue() reflect.Value {
tp := option.value.Type()
if tp.Kind() == reflect.Map {
return reflect.MakeMap(tp)
}
return reflect.Zero(tp)
}
func (option *Option) empty() {
if !option.isFunc() {
option.value.Set(option.emptyValue())
}
}
func (option *Option) clearDefault() {
usedDefault := option.Default
if envKey := option.EnvDefaultKey; envKey != "" {
// os.Getenv() makes no distinction between undefined and
// empty values, so we use syscall.Getenv()
if value, ok := syscall.Getenv(envKey); ok {
if option.EnvDefaultDelim != "" {
usedDefault = strings.Split(value,
option.EnvDefaultDelim)
} else {
usedDefault = []string{value}
}
}
}
if len(usedDefault) > 0 {
option.empty()
for _, d := range usedDefault {
option.set(&d)
}
} else {
tp := option.value.Type()
switch tp.Kind() {
case reflect.Map:
if option.value.IsNil() {
option.empty()
}
case reflect.Slice:
if option.value.IsNil() {
option.empty()
}
}
}
}
func (option *Option) valueIsDefault() bool {
// Check if the value of the option corresponds to its
// default value
emptyval := option.emptyValue()
checkvalptr := reflect.New(emptyval.Type())
checkval := reflect.Indirect(checkvalptr)
checkval.Set(emptyval)
if len(option.Default) != 0 {
for _, v := range option.Default {
convert(v, checkval, option.tag)
}
}
return reflect.DeepEqual(option.value.Interface(), checkval.Interface())
}
func (option *Option) isUnmarshaler() Unmarshaler {
v := option.value
for {
if !v.CanInterface() {
break
}
i := v.Interface()
if u, ok := i.(Unmarshaler); ok {
return u
}
if !v.CanAddr() {
break
}
v = v.Addr()
}
return nil
}
func (option *Option) isBool() bool {
tp := option.value.Type()
for {
switch tp.Kind() {
case reflect.Bool:
return true
case reflect.Slice:
return (tp.Elem().Kind() == reflect.Bool)
case reflect.Func:
return tp.NumIn() == 0
case reflect.Ptr:
tp = tp.Elem()
default:
return false
}
}
}
func (option *Option) isFunc() bool {
return option.value.Type().Kind() == reflect.Func
}
func (option *Option) call(value *string) error {
var retval []reflect.Value
if value == nil {
retval = option.value.Call(nil)
} else {
tp := option.value.Type().In(0)
val := reflect.New(tp)
val = reflect.Indirect(val)
if err := convert(*value, val, option.tag); err != nil {
return err
}
retval = option.value.Call([]reflect.Value{val})
}
if len(retval) == 1 && retval[0].Type() == reflect.TypeOf((*error)(nil)).Elem() {
if retval[0].Interface() == nil {
return nil
}
return retval[0].Interface().(error)
}
return nil
}


@@ -1,45 +0,0 @@
package flags
import (
"testing"
)
func TestPassDoubleDash(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
}{}
p := NewParser(&opts, PassDoubleDash)
ret, err := p.ParseArgs([]string{"-v", "--", "-v", "-g"})
if err != nil {
t.Fatalf("Unexpected error: %v", err)
return
}
if !opts.Value {
t.Errorf("Expected Value to be true")
}
assertStringArray(t, ret, []string{"-v", "-g"})
}
func TestPassAfterNonOption(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
}{}
p := NewParser(&opts, PassAfterNonOption)
ret, err := p.ParseArgs([]string{"-v", "arg", "-v", "-g"})
if err != nil {
t.Fatalf("Unexpected error: %v", err)
return
}
if !opts.Value {
t.Errorf("Expected Value to be true")
}
assertStringArray(t, ret, []string{"arg", "-v", "-g"})
}


@@ -1,67 +0,0 @@
// +build !windows
package flags
import (
"strings"
)
const (
defaultShortOptDelimiter = '-'
defaultLongOptDelimiter = "--"
defaultNameArgDelimiter = '='
)
func argumentStartsOption(arg string) bool {
return len(arg) > 0 && arg[0] == '-'
}
func argumentIsOption(arg string) bool {
if len(arg) > 1 && arg[0] == '-' && arg[1] != '-' {
return true
}
if len(arg) > 2 && arg[0] == '-' && arg[1] == '-' && arg[2] != '-' {
return true
}
return false
}
// stripOptionPrefix returns the option name without its prefix and whether
// it is a long option.
func stripOptionPrefix(optname string) (prefix string, name string, islong bool) {
if strings.HasPrefix(optname, "--") {
return "--", optname[2:], true
} else if strings.HasPrefix(optname, "-") {
return "-", optname[1:], false
}
return "", optname, false
}
// splitOption attempts to split the passed option into a name and an argument.
// When there is no argument specified, nil will be returned for it.
func splitOption(prefix string, option string, islong bool) (string, string, *string) {
pos := strings.Index(option, "=")
if (islong && pos >= 0) || (!islong && pos == 1) {
rest := option[pos+1:]
return option[:pos], "=", &rest
}
return option, "", nil
}
// addHelpGroup adds a new group that contains default help parameters.
func (c *Command) addHelpGroup(showHelp func() error) *Group {
var help struct {
ShowHelp func() error `short:"h" long:"help" description:"Show this help message"`
}
help.ShowHelp = showHelp
ret, _ := c.AddGroup("Help Options", "", &help)
ret.isBuiltinHelp = true
return ret
}


@@ -1,106 +0,0 @@
package flags
import (
"strings"
)
// Windows uses a front slash for both short and long options. It also uses
// a colon as the name/argument delimiter.
const (
defaultShortOptDelimiter = '/'
defaultLongOptDelimiter = "/"
defaultNameArgDelimiter = ':'
)
func argumentStartsOption(arg string) bool {
return len(arg) > 0 && (arg[0] == '-' || arg[0] == '/')
}
func argumentIsOption(arg string) bool {
// Windows-style options allow a front slash as the option
// delimiter.
if len(arg) > 1 && arg[0] == '/' {
return true
}
if len(arg) > 1 && arg[0] == '-' && arg[1] != '-' {
return true
}
if len(arg) > 2 && arg[0] == '-' && arg[1] == '-' && arg[2] != '-' {
return true
}
return false
}
// stripOptionPrefix returns the option name without its prefix and whether
// it is a long option.
func stripOptionPrefix(optname string) (prefix string, name string, islong bool) {
// Determine if the argument is a long option or not. Windows
// typically supports both long and short options with a single
// front slash as the option delimiter, so handle this situation
// nicely.
possplit := 0
if strings.HasPrefix(optname, "--") {
possplit = 2
islong = true
} else if strings.HasPrefix(optname, "-") {
possplit = 1
islong = false
} else if strings.HasPrefix(optname, "/") {
possplit = 1
islong = len(optname) > 2
}
return optname[:possplit], optname[possplit:], islong
}
// splitOption attempts to split the passed option into a name and an argument.
// When there is no argument specified, nil will be returned for it.
func splitOption(prefix string, option string, islong bool) (string, string, *string) {
if len(option) == 0 {
return option, "", nil
}
// Windows typically uses a colon for the option name and argument
// delimiter while POSIX typically uses an equals. Support both styles,
// but don't allow the two to be mixed. That is to say /foo:bar and
// --foo=bar are acceptable, but /foo=bar and --foo:bar are not.
var pos int
var sp string
if prefix == "/" {
sp = ":"
pos = strings.Index(option, sp)
} else if len(prefix) > 0 {
sp = "="
pos = strings.Index(option, sp)
}
if (islong && pos >= 0) || (!islong && pos == 1) {
rest := option[pos+1:]
return option[:pos], sp, &rest
}
return option, "", nil
}
// addHelpGroup adds a new group that contains default help parameters.
func (c *Command) addHelpGroup(showHelp func() error) *Group {
// Windows CLI applications typically use /? for help, so make that
// available as well as the POSIX-style -h and --help.
var help struct {
ShowHelpWindows func() error `short:"?" description:"Show this help message"`
ShowHelpPosix func() error `short:"h" long:"help" description:"Show this help message"`
}
help.ShowHelpWindows = showHelp
help.ShowHelpPosix = showHelp
ret, _ := c.AddGroup("Help Options", "", &help)
ret.isBuiltinHelp = true
return ret
}


@@ -1,286 +0,0 @@
// Copyright 2012 Jesse van den Kieboom. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package flags
import (
"os"
"path"
)
// A Parser provides command line option parsing. It can contain several
// option groups each with their own set of options.
type Parser struct {
// Embedded, see Command for more information
*Command
// A usage string to be displayed in the help message.
Usage string
// Option flags changing the behavior of the parser.
Options Options
// NamespaceDelimiter separates group namespaces and option long names
NamespaceDelimiter string
// UnknownOptionHandler is a function which gets called when the parser
// encounters an unknown option. The function receives the unknown option
// name, a SplitArgument which specifies its value if set with an argument
// separator, and the remaining command line arguments.
// It should return a new list of remaining arguments to continue parsing,
// or an error to indicate a parse failure. An illustrative handler is
// sketched after this struct definition.
UnknownOptionHandler func(option string, arg SplitArgument, args []string) ([]string, error)
internalError error
}
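// Editor's sketch (not part of the original source): a hedged example of an
// UnknownOptionHandler that treats every unknown option as a no-op switch.
// The parser value is hypothetical.
//
// parser.UnknownOptionHandler = func(option string, arg SplitArgument, args []string) ([]string, error) {
//     fmt.Printf("ignoring unknown option %q\n", option)
//     return args, nil // keep parsing with the remaining arguments unchanged
// }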
// SplitArgument represents the argument value of an option that was passed using
// an argument separator.
type SplitArgument interface {
// Value returns the option's value as a string, and a boolean indicating
// whether the option was present.
Value() (string, bool)
}
type strArgument struct {
value *string
}
func (s strArgument) Value() (string, bool) {
if s.value == nil {
return "", false
}
return *s.value, true
}
// Options provides parser options that change the behavior of the option
// parser.
type Options uint
const (
// None indicates no options.
None Options = 0
// HelpFlag adds a default Help Options group to the parser containing
// -h and --help options. When either -h or --help is specified on the
// command line, the parser will return the special error of type
// ErrHelp. When PrintErrors is also specified, then the help message
// will also be automatically printed to os.Stderr.
HelpFlag = 1 << iota
// PassDoubleDash passes all arguments after a double dash, --, as
// remaining command line arguments (i.e. they will not be parsed for
// flags).
PassDoubleDash
// IgnoreUnknown ignores any unknown options and passes them as
// remaining command line arguments instead of generating an error.
IgnoreUnknown
// PrintErrors prints any errors which occurred during parsing to
// os.Stderr.
PrintErrors
// PassAfterNonOption passes all arguments after the first non option
// as remaining command line arguments. This is equivalent to strict
// POSIX processing.
PassAfterNonOption
// Default is a convenient default set of options which should cover
// most of the uses of the flags package.
Default = HelpFlag | PrintErrors | PassDoubleDash
)
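// Editor's sketch (not part of the original source): a hedged example of how
// the Options flags above are typically combined; the option struct and the
// argument list are hypothetical.
//
// var opts struct {
//     Verbose []bool `short:"v" long:"verbose"`
// }
// p := NewParser(&opts, Default|IgnoreUnknown)
// rest, err := p.ParseArgs([]string{"-vv", "--unknown", "file.txt"})
// // With IgnoreUnknown set, "--unknown" and "file.txt" come back in rest
// // instead of producing an ErrUnknownFlag error; err is nil.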
// Parse is a convenience function to parse command line options with default
// settings. The provided data is a pointer to a struct representing the
// default option group (named "Application Options"). For more control, use
// flags.NewParser.
func Parse(data interface{}) ([]string, error) {
return NewParser(data, Default).Parse()
}
// ParseArgs is a convenience function to parse command line options with default
// settings. The provided data is a pointer to a struct representing the
// default option group (named "Application Options"). The args argument is
// the list of command line arguments to parse. If you just want to parse the
// default program command line arguments (i.e. os.Args), then use flags.Parse
// instead. For more control, use flags.NewParser.
func ParseArgs(data interface{}, args []string) ([]string, error) {
return NewParser(data, Default).ParseArgs(args)
}
// NewParser creates a new parser. It uses os.Args[0] as the application
// name and then calls NewNamedParser (see NewNamedParser for
// more details). The provided data is a pointer to a struct representing the
// default option group (named "Application Options"), or nil if the default
// group should not be added. The options parameter specifies a set of options
// for the parser.
func NewParser(data interface{}, options Options) *Parser {
p := NewNamedParser(path.Base(os.Args[0]), options)
if data != nil {
g, err := p.AddGroup("Application Options", "", data)
if err == nil {
g.parent = p
}
p.internalError = err
}
return p
}
// NewNamedParser creates a new parser. The appname is used to display the
// executable name in the built-in help message. Option groups and commands can
// be added to this parser by using AddGroup and AddCommand.
func NewNamedParser(appname string, options Options) *Parser {
p := &Parser{
Command: newCommand(appname, "", "", nil),
Options: options,
NamespaceDelimiter: ".",
}
p.Command.parent = p
return p
}
// Parse parses the command line arguments from os.Args using Parser.ParseArgs.
// For more detailed information see ParseArgs.
func (p *Parser) Parse() ([]string, error) {
return p.ParseArgs(os.Args[1:])
}
// ParseArgs parses the command line arguments according to the option groups that
// were added to the parser. On successful parsing of the arguments, the
// remaining, non-option, arguments (if any) are returned. The returned error
// indicates a parsing error and can be used with PrintError to display
// contextual information on where the error occurred exactly.
//
// When the built-in help group has been added (HelpFlag) and either -h or
// --help was specified in the command line arguments, a help message will be
// printed automatically. Furthermore, the special error type ErrHelp is returned.
// It is up to the caller to exit the program if so desired.
func (p *Parser) ParseArgs(args []string) ([]string, error) {
if p.internalError != nil {
return nil, p.internalError
}
p.clearIsSet()
// Add built-in help group to all commands if necessary
if (p.Options & HelpFlag) != None {
p.addHelpGroups(p.showBuiltinHelp)
}
compval := os.Getenv("GO_FLAGS_COMPLETION")
if len(compval) != 0 {
comp := &completion{parser: p}
if compval == "verbose" {
comp.ShowDescriptions = true
}
comp.execute(args)
return nil, nil
}
s := &parseState{
args: args,
retargs: make([]string, 0, len(args)),
}
p.fillParseState(s)
for !s.eof() {
arg := s.pop()
// When PassDoubleDash is set and we encounter a --, then
// simply append all the rest as arguments and break out
if (p.Options&PassDoubleDash) != None && arg == "--" {
s.addArgs(s.args...)
break
}
if !argumentIsOption(arg) {
// Note: this also sets s.err, so we can just check for
// nil here and use s.err later
if p.parseNonOption(s) != nil {
break
}
continue
}
var err error
prefix, optname, islong := stripOptionPrefix(arg)
optname, _, argument := splitOption(prefix, optname, islong)
if islong {
err = p.parseLong(s, optname, argument)
} else {
err = p.parseShort(s, optname, argument)
}
if err != nil {
ignoreUnknown := (p.Options & IgnoreUnknown) != None
parseErr := wrapError(err)
if parseErr.Type != ErrUnknownFlag || (!ignoreUnknown && p.UnknownOptionHandler == nil) {
s.err = parseErr
break
}
if ignoreUnknown {
s.addArgs(arg)
} else if p.UnknownOptionHandler != nil {
modifiedArgs, err := p.UnknownOptionHandler(optname, strArgument{argument}, s.args)
if err != nil {
s.err = err
break
}
s.args = modifiedArgs
}
}
}
if s.err == nil {
p.eachCommand(func(c *Command) {
c.eachGroup(func(g *Group) {
for _, option := range g.options {
if option.isSet {
continue
}
option.clearDefault()
}
})
}, true)
s.checkRequired(p)
}
var reterr error
if s.err != nil {
reterr = s.err
} else if len(s.command.commands) != 0 && !s.command.SubcommandsOptional {
reterr = s.estimateCommand()
} else if cmd, ok := s.command.data.(Commander); ok {
reterr = cmd.Execute(s.retargs)
}
if reterr != nil {
return append([]string{s.arg}, s.args...), p.printError(reterr)
}
return s.retargs, nil
}
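// Editor's sketch (not part of the original source): how a caller might react
// to the special ErrHelp error described in the ParseArgs documentation. The
// parser value is assumed to have been built with the Default options.
//
// rest, err := parser.ParseArgs(os.Args[1:])
// if flagsErr, ok := err.(*Error); ok && flagsErr.Type == ErrHelp {
//     // The help text has already been printed because PrintErrors is part
//     // of Default, so a plain exit is enough here.
//     os.Exit(0)
// }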


@@ -1,340 +0,0 @@
package flags
import (
"bytes"
"fmt"
"os"
"sort"
"strings"
"unicode/utf8"
)
type parseState struct {
arg string
args []string
retargs []string
positional []*Arg
err error
command *Command
lookup lookup
}
func (p *parseState) eof() bool {
return len(p.args) == 0
}
func (p *parseState) pop() string {
if p.eof() {
return ""
}
p.arg = p.args[0]
p.args = p.args[1:]
return p.arg
}
func (p *parseState) peek() string {
if p.eof() {
return ""
}
return p.args[0]
}
func (p *parseState) checkRequired(parser *Parser) error {
c := parser.Command
var required []*Option
for c != nil {
c.eachGroup(func(g *Group) {
for _, option := range g.options {
if !option.isSet && option.Required {
required = append(required, option)
}
}
})
c = c.Active
}
if len(required) == 0 {
if len(p.positional) > 0 && p.command.ArgsRequired {
var reqnames []string
for _, arg := range p.positional {
if arg.isRemaining() {
break
}
reqnames = append(reqnames, "`"+arg.Name+"`")
}
if len(reqnames) == 0 {
return nil
}
var msg string
if len(reqnames) == 1 {
msg = fmt.Sprintf("the required argument %s was not provided", reqnames[0])
} else {
msg = fmt.Sprintf("the required arguments %s and %s were not provided",
strings.Join(reqnames[:len(reqnames)-1], ", "), reqnames[len(reqnames)-1])
}
p.err = newError(ErrRequired, msg)
return p.err
}
return nil
}
names := make([]string, 0, len(required))
for _, k := range required {
names = append(names, "`"+k.String()+"'")
}
sort.Strings(names)
var msg string
if len(names) == 1 {
msg = fmt.Sprintf("the required flag %s was not specified", names[0])
} else {
msg = fmt.Sprintf("the required flags %s and %s were not specified",
strings.Join(names[:len(names)-1], ", "), names[len(names)-1])
}
p.err = newError(ErrRequired, msg)
return p.err
}
func (p *parseState) estimateCommand() error {
commands := p.command.sortedCommands()
cmdnames := make([]string, len(commands))
for i, v := range commands {
cmdnames[i] = v.Name
}
var msg string
var errtype ErrorType
if len(p.retargs) != 0 {
c, l := closestChoice(p.retargs[0], cmdnames)
msg = fmt.Sprintf("Unknown command `%s'", p.retargs[0])
errtype = ErrUnknownCommand
if float32(l)/float32(len(c)) < 0.5 {
msg = fmt.Sprintf("%s, did you mean `%s'?", msg, c)
} else if len(cmdnames) == 1 {
msg = fmt.Sprintf("%s. You should use the %s command",
msg,
cmdnames[0])
} else {
msg = fmt.Sprintf("%s. Please specify one command of: %s or %s",
msg,
strings.Join(cmdnames[:len(cmdnames)-1], ", "),
cmdnames[len(cmdnames)-1])
}
} else {
errtype = ErrCommandRequired
if len(cmdnames) == 1 {
msg = fmt.Sprintf("Please specify the %s command", cmdnames[0])
} else {
msg = fmt.Sprintf("Please specify one command of: %s or %s",
strings.Join(cmdnames[:len(cmdnames)-1], ", "),
cmdnames[len(cmdnames)-1])
}
}
return newError(errtype, msg)
}
func (p *Parser) parseOption(s *parseState, name string, option *Option, canarg bool, argument *string) (err error) {
if !option.canArgument() {
if argument != nil {
return newErrorf(ErrNoArgumentForBool, "bool flag `%s' cannot have an argument", option)
}
err = option.set(nil)
} else if argument != nil || (canarg && !s.eof()) {
var arg string
if argument != nil {
arg = *argument
} else {
arg = s.pop()
if argumentIsOption(arg) {
return newErrorf(ErrExpectedArgument, "expected argument for flag `%s', but got option `%s'", option, arg)
} else if p.Options&PassDoubleDash != 0 && arg == "--" {
return newErrorf(ErrExpectedArgument, "expected argument for flag `%s', but got double dash `--'", option)
}
}
if option.tag.Get("unquote") != "false" {
arg, err = unquoteIfPossible(arg)
}
if err == nil {
err = option.set(&arg)
}
} else if option.OptionalArgument {
option.empty()
for _, v := range option.OptionalValue {
err = option.set(&v)
if err != nil {
break
}
}
} else {
err = newErrorf(ErrExpectedArgument, "expected argument for flag `%s'", option)
}
if err != nil {
if _, ok := err.(*Error); !ok {
err = newErrorf(ErrMarshal, "invalid argument for flag `%s' (expected %s): %s",
option,
option.value.Type(),
err.Error())
}
}
return err
}
func (p *Parser) parseLong(s *parseState, name string, argument *string) error {
if option := s.lookup.longNames[name]; option != nil {
// Only long options that do not take an optional argument can
// consume an argument from the argument list
canarg := !option.OptionalArgument
return p.parseOption(s, name, option, canarg, argument)
}
return newErrorf(ErrUnknownFlag, "unknown flag `%s'", name)
}
func (p *Parser) splitShortConcatArg(s *parseState, optname string) (string, *string) {
c, n := utf8.DecodeRuneInString(optname)
if n == len(optname) {
return optname, nil
}
first := string(c)
if option := s.lookup.shortNames[first]; option != nil && option.canArgument() {
arg := optname[n:]
return first, &arg
}
return optname, nil
}
func (p *Parser) parseShort(s *parseState, optname string, argument *string) error {
if argument == nil {
optname, argument = p.splitShortConcatArg(s, optname)
}
for i, c := range optname {
shortname := string(c)
if option := s.lookup.shortNames[shortname]; option != nil {
// Only the last short option can consume an argument from the
// argument list, and only if its argument is not optional
canarg := (i+utf8.RuneLen(c) == len(optname)) && !option.OptionalArgument
if err := p.parseOption(s, shortname, option, canarg, argument); err != nil {
return err
}
} else {
return newErrorf(ErrUnknownFlag, "unknown flag `%s'", shortname)
}
// Only the first option can have a concatenated argument, so just
// clear the argument here
argument = nil
}
return nil
}
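// Editor's sketch (not part of the original source): the concatenation rules
// implemented above, for hypothetical flags -v (bool) and -o (takes a value).
//
// "-vo out.txt" -> v is set; o, being last, consumes the next argument "out.txt"
// "-oout.txt"   -> o receives "out.txt": a concatenated value is only split off the first flag
// "-voout.txt"  -> ErrExpectedArgument: o is not first, so no value is split off for it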
func (p *parseState) addArgs(args ...string) error {
for len(p.positional) > 0 && len(args) > 0 {
arg := p.positional[0]
if err := convert(args[0], arg.value, arg.tag); err != nil {
return err
}
if !arg.isRemaining() {
p.positional = p.positional[1:]
}
args = args[1:]
}
p.retargs = append(p.retargs, args...)
return nil
}
func (p *Parser) parseNonOption(s *parseState) error {
if len(s.positional) > 0 {
return s.addArgs(s.arg)
}
if cmd := s.lookup.commands[s.arg]; cmd != nil {
s.command.Active = cmd
cmd.fillParseState(s)
} else if (p.Options & PassAfterNonOption) != None {
// If PassAfterNonOption is set then all remaining arguments
// are considered positional
if err := s.addArgs(s.arg); err != nil {
return err
}
if err := s.addArgs(s.args...); err != nil {
return err
}
s.args = []string{}
} else {
return s.addArgs(s.arg)
}
return nil
}
func (p *Parser) showBuiltinHelp() error {
var b bytes.Buffer
p.WriteHelp(&b)
return newError(ErrHelp, b.String())
}
func (p *Parser) printError(err error) error {
if err != nil && (p.Options&PrintErrors) != None {
fmt.Fprintln(os.Stderr, err)
}
return err
}
func (p *Parser) clearIsSet() {
p.eachCommand(func(c *Command) {
c.eachGroup(func(g *Group) {
for _, option := range g.options {
option.isSet = false
}
})
}, true)
}


@@ -1,431 +0,0 @@
package flags
import (
"fmt"
"os"
"reflect"
"strconv"
"strings"
"testing"
"time"
)
type defaultOptions struct {
Int int `long:"i"`
IntDefault int `long:"id" default:"1"`
String string `long:"str"`
StringDefault string `long:"strd" default:"abc"`
StringNotUnquoted string `long:"strnot" unquote:"false"`
Time time.Duration `long:"t"`
TimeDefault time.Duration `long:"td" default:"1m"`
Map map[string]int `long:"m"`
MapDefault map[string]int `long:"md" default:"a:1"`
Slice []int `long:"s"`
SliceDefault []int `long:"sd" default:"1" default:"2"`
}
func TestDefaults(t *testing.T) {
var tests = []struct {
msg string
args []string
expected defaultOptions
}{
{
msg: "no arguments, expecting default values",
args: []string{},
expected: defaultOptions{
Int: 0,
IntDefault: 1,
String: "",
StringDefault: "abc",
Time: 0,
TimeDefault: time.Minute,
Map: map[string]int{},
MapDefault: map[string]int{"a": 1},
Slice: []int{},
SliceDefault: []int{1, 2},
},
},
{
msg: "non-zero value arguments, expecting overwritten arguments",
args: []string{"--i=3", "--id=3", "--str=def", "--strd=def", "--t=3ms", "--td=3ms", "--m=c:3", "--md=c:3", "--s=3", "--sd=3"},
expected: defaultOptions{
Int: 3,
IntDefault: 3,
String: "def",
StringDefault: "def",
Time: 3 * time.Millisecond,
TimeDefault: 3 * time.Millisecond,
Map: map[string]int{"c": 3},
MapDefault: map[string]int{"c": 3},
Slice: []int{3},
SliceDefault: []int{3},
},
},
{
msg: "zero value arguments, expecting overwritten arguments",
args: []string{"--i=0", "--id=0", "--str", "", "--strd=\"\"", "--t=0ms", "--td=0s", "--m=:0", "--md=:0", "--s=0", "--sd=0"},
expected: defaultOptions{
Int: 0,
IntDefault: 0,
String: "",
StringDefault: "",
Time: 0,
TimeDefault: 0,
Map: map[string]int{"": 0},
MapDefault: map[string]int{"": 0},
Slice: []int{0},
SliceDefault: []int{0},
},
},
}
for _, test := range tests {
var opts defaultOptions
_, err := ParseArgs(&opts, test.args)
if err != nil {
t.Fatalf("%s:\nUnexpected error: %v", test.msg, err)
}
if opts.Slice == nil {
opts.Slice = []int{}
}
if !reflect.DeepEqual(opts, test.expected) {
t.Errorf("%s:\nUnexpected options with arguments %+v\nexpected\n%+v\nbut got\n%+v\n", test.msg, test.args, test.expected, opts)
}
}
}
func TestUnquoting(t *testing.T) {
var tests = []struct {
arg string
err error
value string
}{
{
arg: "\"abc",
err: strconv.ErrSyntax,
value: "",
},
{
arg: "\"\"abc\"",
err: strconv.ErrSyntax,
value: "",
},
{
arg: "\"abc\"",
err: nil,
value: "abc",
},
{
arg: "\"\\\"abc\\\"\"",
err: nil,
value: "\"abc\"",
},
{
arg: "\"\\\"abc\"",
err: nil,
value: "\"abc",
},
}
for _, test := range tests {
var opts defaultOptions
for _, delimiter := range []bool{false, true} {
p := NewParser(&opts, None)
var err error
if delimiter {
_, err = p.ParseArgs([]string{"--str=" + test.arg, "--strnot=" + test.arg})
} else {
_, err = p.ParseArgs([]string{"--str", test.arg, "--strnot", test.arg})
}
if test.err == nil {
if err != nil {
t.Fatalf("Expected no error but got: %v", err)
}
if test.value != opts.String {
t.Fatalf("Expected String to be %q but got %q", test.value, opts.String)
}
if q := strconv.Quote(test.value); q != opts.StringNotUnquoted {
t.Fatalf("Expected StringDefault to be %q but got %q", q, opts.StringNotUnquoted)
}
} else {
if err == nil {
t.Fatalf("Expected error")
} else if e, ok := err.(*Error); ok {
if !strings.HasPrefix(e.Message, test.err.Error()) {
t.Fatalf("Expected error message to start with %q but got %v", test.err.Error(), e.Message)
}
}
}
}
}
}
// envRestorer keeps a copy of a set of env variables and can restore the env from them
type envRestorer struct {
env map[string]string
}
func (r *envRestorer) Restore() {
os.Clearenv()
for k, v := range r.env {
os.Setenv(k, v)
}
}
// EnvSnapshot returns a snapshot of the currently set env variables
func EnvSnapshot() *envRestorer {
r := envRestorer{make(map[string]string)}
for _, kv := range os.Environ() {
parts := strings.SplitN(kv, "=", 2)
if len(parts) != 2 {
panic("got a weird env variable: " + kv)
}
r.env[parts[0]] = parts[1]
}
return &r
}
type envDefaultOptions struct {
Int int `long:"i" default:"1" env:"TEST_I"`
Time time.Duration `long:"t" default:"1m" env:"TEST_T"`
Map map[string]int `long:"m" default:"a:1" env:"TEST_M" env-delim:";"`
Slice []int `long:"s" default:"1" default:"2" env:"TEST_S" env-delim:","`
}
func TestEnvDefaults(t *testing.T) {
var tests = []struct {
msg string
args []string
expected envDefaultOptions
env map[string]string
}{
{
msg: "no arguments, no env, expecting default values",
args: []string{},
expected: envDefaultOptions{
Int: 1,
Time: time.Minute,
Map: map[string]int{"a": 1},
Slice: []int{1, 2},
},
},
{
msg: "no arguments, env defaults, expecting env default values",
args: []string{},
expected: envDefaultOptions{
Int: 2,
Time: 2 * time.Minute,
Map: map[string]int{"a": 2, "b": 3},
Slice: []int{4, 5, 6},
},
env: map[string]string{
"TEST_I": "2",
"TEST_T": "2m",
"TEST_M": "a:2;b:3",
"TEST_S": "4,5,6",
},
},
{
msg: "non-zero value arguments, expecting overwritten arguments",
args: []string{"--i=3", "--t=3ms", "--m=c:3", "--s=3"},
expected: envDefaultOptions{
Int: 3,
Time: 3 * time.Millisecond,
Map: map[string]int{"c": 3},
Slice: []int{3},
},
env: map[string]string{
"TEST_I": "2",
"TEST_T": "2m",
"TEST_M": "a:2;b:3",
"TEST_S": "4,5,6",
},
},
{
msg: "zero value arguments, expecting overwritten arguments",
args: []string{"--i=0", "--t=0ms", "--m=:0", "--s=0"},
expected: envDefaultOptions{
Int: 0,
Time: 0,
Map: map[string]int{"": 0},
Slice: []int{0},
},
env: map[string]string{
"TEST_I": "2",
"TEST_T": "2m",
"TEST_M": "a:2;b:3",
"TEST_S": "4,5,6",
},
},
}
oldEnv := EnvSnapshot()
defer oldEnv.Restore()
for _, test := range tests {
var opts envDefaultOptions
oldEnv.Restore()
for envKey, envValue := range test.env {
os.Setenv(envKey, envValue)
}
_, err := ParseArgs(&opts, test.args)
if err != nil {
t.Fatalf("%s:\nUnexpected error: %v", test.msg, err)
}
if opts.Slice == nil {
opts.Slice = []int{}
}
if !reflect.DeepEqual(opts, test.expected) {
t.Errorf("%s:\nUnexpected options with arguments %+v\nexpected\n%+v\nbut got\n%+v\n", test.msg, test.args, test.expected, opts)
}
}
}
func TestOptionAsArgument(t *testing.T) {
var tests = []struct {
args []string
expectError bool
errType ErrorType
errMsg string
rest []string
}{
{
// short option must not be accepted as argument
args: []string{"--string-slice", "foobar", "--string-slice", "-o"},
expectError: true,
errType: ErrExpectedArgument,
errMsg: "expected argument for flag `--string-slice', but got option `-o'",
},
{
// long option must not be accepted as argument
args: []string{"--string-slice", "foobar", "--string-slice", "--other-option"},
expectError: true,
errType: ErrExpectedArgument,
errMsg: "expected argument for flag `--string-slice', but got option `--other-option'",
},
{
// long option must not be accepted as argument
args: []string{"--string-slice", "--"},
expectError: true,
errType: ErrExpectedArgument,
errMsg: "expected argument for flag `--string-slice', but got double dash `--'",
},
{
// quoted and appended option should be accepted as argument (even if it looks like an option)
args: []string{"--string-slice", "foobar", "--string-slice=\"--other-option\""},
},
{
// Accept any single character arguments including '-'
args: []string{"--string-slice", "-"},
},
{
args: []string{"-o", "-", "-"},
rest: []string{"-", "-"},
},
}
var opts struct {
StringSlice []string `long:"string-slice"`
OtherOption bool `long:"other-option" short:"o"`
}
for _, test := range tests {
if test.expectError {
assertParseFail(t, test.errType, test.errMsg, &opts, test.args...)
} else {
args := assertParseSuccess(t, &opts, test.args...)
assertStringArray(t, args, test.rest)
}
}
}
func TestUnknownFlagHandler(t *testing.T) {
var opts struct {
Flag1 string `long:"flag1"`
Flag2 string `long:"flag2"`
}
p := NewParser(&opts, None)
var unknownFlag1 string
var unknownFlag2 bool
var unknownFlag3 string
// Set up a callback to intercept unknown options during parsing
p.UnknownOptionHandler = func(option string, arg SplitArgument, args []string) ([]string, error) {
if option == "unknownFlag1" {
if argValue, ok := arg.Value(); ok {
unknownFlag1 = argValue
return args, nil
}
// consume a value from remaining args list
unknownFlag1 = args[0]
return args[1:], nil
} else if option == "unknownFlag2" {
// treat this one as a bool switch, don't consume any args
unknownFlag2 = true
return args, nil
} else if option == "unknownFlag3" {
if argValue, ok := arg.Value(); ok {
unknownFlag3 = argValue
return args, nil
}
// consume a value from remaining args list
unknownFlag3 = args[0]
return args[1:], nil
}
return args, fmt.Errorf("Unknown flag: %v", option)
}
// Parse args containing some unknown flags, verify that
// our callback can handle all of them
_, err := p.ParseArgs([]string{"--flag1=stuff", "--unknownFlag1", "blah", "--unknownFlag2", "--unknownFlag3=baz", "--flag2=foo"})
if err != nil {
assertErrorf(t, "Parser returned unexpected error %v", err)
}
assertString(t, opts.Flag1, "stuff")
assertString(t, opts.Flag2, "foo")
assertString(t, unknownFlag1, "blah")
assertString(t, unknownFlag3, "baz")
if !unknownFlag2 {
assertErrorf(t, "Flag should have been set by unknown handler, but had value: %v", unknownFlag2)
}
// Parse args with unknown flags that callback doesn't handle, verify it returns error
_, err = p.ParseArgs([]string{"--flag1=stuff", "--unknownFlagX", "blah", "--flag2=foo"})
if err == nil {
assertErrorf(t, "Parser should have returned error, but returned nil")
}
}


@@ -1,81 +0,0 @@
package flags
import (
"testing"
)
func TestPointerBool(t *testing.T) {
var opts = struct {
Value *bool `short:"v"`
}{}
ret := assertParseSuccess(t, &opts, "-v")
assertStringArray(t, ret, []string{})
if !*opts.Value {
t.Errorf("Expected Value to be true")
}
}
func TestPointerString(t *testing.T) {
var opts = struct {
Value *string `short:"v"`
}{}
ret := assertParseSuccess(t, &opts, "-v", "value")
assertStringArray(t, ret, []string{})
assertString(t, *opts.Value, "value")
}
func TestPointerSlice(t *testing.T) {
var opts = struct {
Value *[]string `short:"v"`
}{}
ret := assertParseSuccess(t, &opts, "-v", "value1", "-v", "value2")
assertStringArray(t, ret, []string{})
assertStringArray(t, *opts.Value, []string{"value1", "value2"})
}
func TestPointerMap(t *testing.T) {
var opts = struct {
Value *map[string]int `short:"v"`
}{}
ret := assertParseSuccess(t, &opts, "-v", "k1:2", "-v", "k2:-5")
assertStringArray(t, ret, []string{})
if v, ok := (*opts.Value)["k1"]; !ok {
t.Errorf("Expected key \"k1\" to exist")
} else if v != 2 {
t.Errorf("Expected \"k1\" to be 2, but got %#v", v)
}
if v, ok := (*opts.Value)["k2"]; !ok {
t.Errorf("Expected key \"k2\" to exist")
} else if v != -5 {
t.Errorf("Expected \"k2\" to be -5, but got %#v", v)
}
}
type PointerGroup struct {
Value bool `short:"v"`
}
func TestPointerGroup(t *testing.T) {
var opts = struct {
Group *PointerGroup `group:"Group Options"`
}{}
ret := assertParseSuccess(t, &opts, "-v")
assertStringArray(t, ret, []string{})
if !opts.Group.Value {
t.Errorf("Expected Group.Value to be true")
}
}


@@ -1,194 +0,0 @@
package flags
import (
"fmt"
"testing"
)
func TestShort(t *testing.T) {
var opts = struct {
Value bool `short:"v"`
}{}
ret := assertParseSuccess(t, &opts, "-v")
assertStringArray(t, ret, []string{})
if !opts.Value {
t.Errorf("Expected Value to be true")
}
}
func TestShortTooLong(t *testing.T) {
var opts = struct {
Value bool `short:"vv"`
}{}
assertParseFail(t, ErrShortNameTooLong, "short names can only be 1 character long, not `vv'", &opts)
}
func TestShortRequired(t *testing.T) {
var opts = struct {
Value bool `short:"v" required:"true"`
}{}
assertParseFail(t, ErrRequired, fmt.Sprintf("the required flag `%cv' was not specified", defaultShortOptDelimiter), &opts)
}
func TestShortMultiConcat(t *testing.T) {
var opts = struct {
V bool `short:"v"`
O bool `short:"o"`
F bool `short:"f"`
}{}
ret := assertParseSuccess(t, &opts, "-vo", "-f")
assertStringArray(t, ret, []string{})
if !opts.V {
t.Errorf("Expected V to be true")
}
if !opts.O {
t.Errorf("Expected O to be true")
}
if !opts.F {
t.Errorf("Expected F to be true")
}
}
func TestShortMultiRequiredConcat(t *testing.T) {
var opts = struct {
V bool `short:"v" required:"true"`
O bool `short:"o" required:"true"`
F bool `short:"f" required:"true"`
}{}
ret := assertParseSuccess(t, &opts, "-vo", "-f")
assertStringArray(t, ret, []string{})
if !opts.V {
t.Errorf("Expected V to be true")
}
if !opts.O {
t.Errorf("Expected O to be true")
}
if !opts.F {
t.Errorf("Expected F to be true")
}
}
func TestShortMultiSlice(t *testing.T) {
var opts = struct {
Values []bool `short:"v"`
}{}
ret := assertParseSuccess(t, &opts, "-v", "-v")
assertStringArray(t, ret, []string{})
assertBoolArray(t, opts.Values, []bool{true, true})
}
func TestShortMultiSliceConcat(t *testing.T) {
var opts = struct {
Values []bool `short:"v"`
}{}
ret := assertParseSuccess(t, &opts, "-vvv")
assertStringArray(t, ret, []string{})
assertBoolArray(t, opts.Values, []bool{true, true, true})
}
func TestShortWithEqualArg(t *testing.T) {
var opts = struct {
Value string `short:"v"`
}{}
ret := assertParseSuccess(t, &opts, "-v=value")
assertStringArray(t, ret, []string{})
assertString(t, opts.Value, "value")
}
func TestShortWithArg(t *testing.T) {
var opts = struct {
Value string `short:"v"`
}{}
ret := assertParseSuccess(t, &opts, "-vvalue")
assertStringArray(t, ret, []string{})
assertString(t, opts.Value, "value")
}
func TestShortArg(t *testing.T) {
var opts = struct {
Value string `short:"v"`
}{}
ret := assertParseSuccess(t, &opts, "-v", "value")
assertStringArray(t, ret, []string{})
assertString(t, opts.Value, "value")
}
func TestShortMultiWithEqualArg(t *testing.T) {
var opts = struct {
F []bool `short:"f"`
Value string `short:"v"`
}{}
assertParseFail(t, ErrExpectedArgument, fmt.Sprintf("expected argument for flag `%cv'", defaultShortOptDelimiter), &opts, "-ffv=value")
}
func TestShortMultiArg(t *testing.T) {
var opts = struct {
F []bool `short:"f"`
Value string `short:"v"`
}{}
ret := assertParseSuccess(t, &opts, "-ffv", "value")
assertStringArray(t, ret, []string{})
assertBoolArray(t, opts.F, []bool{true, true})
assertString(t, opts.Value, "value")
}
func TestShortMultiArgConcatFail(t *testing.T) {
var opts = struct {
F []bool `short:"f"`
Value string `short:"v"`
}{}
assertParseFail(t, ErrExpectedArgument, fmt.Sprintf("expected argument for flag `%cv'", defaultShortOptDelimiter), &opts, "-ffvvalue")
}
func TestShortMultiArgConcat(t *testing.T) {
var opts = struct {
F []bool `short:"f"`
Value string `short:"v"`
}{}
ret := assertParseSuccess(t, &opts, "-vff")
assertStringArray(t, ret, []string{})
assertString(t, opts.Value, "ff")
}
func TestShortOptional(t *testing.T) {
var opts = struct {
F []bool `short:"f"`
Value string `short:"v" optional:"yes" optional-value:"value"`
}{}
ret := assertParseSuccess(t, &opts, "-fv", "f")
assertStringArray(t, ret, []string{"f"})
assertString(t, opts.Value, "value")
}


@@ -1,38 +0,0 @@
package flags
import (
"testing"
)
func TestTagMissingColon(t *testing.T) {
var opts = struct {
Value bool `short`
}{}
assertParseFail(t, ErrTag, "expected `:' after key name, but got end of tag (in `short`)", &opts, "")
}
func TestTagMissingValue(t *testing.T) {
var opts = struct {
Value bool `short:`
}{}
assertParseFail(t, ErrTag, "expected `\"' to start tag value at end of tag (in `short:`)", &opts, "")
}
func TestTagMissingQuote(t *testing.T) {
var opts = struct {
Value bool `short:"v`
}{}
assertParseFail(t, ErrTag, "expected end of tag value `\"' at end of tag (in `short:\"v`)", &opts, "")
}
func TestTagNewline(t *testing.T) {
var opts = struct {
Value bool `long:"verbose" description:"verbose
something"`
}{}
assertParseFail(t, ErrTag, "unexpected newline in tag value `description' (in `long:\"verbose\" description:\"verbose\nsomething\"`)", &opts, "")
}


@@ -1,28 +0,0 @@
// +build !windows,!plan9,!solaris
package flags
import (
"syscall"
"unsafe"
)
type winsize struct {
row, col uint16
xpixel, ypixel uint16
}
func getTerminalColumns() int {
ws := winsize{}
if tIOCGWINSZ != 0 {
syscall.Syscall(syscall.SYS_IOCTL,
uintptr(0),
uintptr(tIOCGWINSZ),
uintptr(unsafe.Pointer(&ws)))
return int(ws.col)
}
return 80
}


@@ -1,7 +0,0 @@
// +build linux
package flags
const (
tIOCGWINSZ = 0x5413
)


@@ -1,7 +0,0 @@
// +build windows plan9 solaris
package flags
func getTerminalColumns() int {
return 80
}


@@ -1,7 +0,0 @@
// +build !darwin,!freebsd,!netbsd,!openbsd,!linux
package flags
const (
tIOCGWINSZ = 0
)


@@ -1,7 +0,0 @@
// +build darwin freebsd netbsd openbsd
package flags
const (
tIOCGWINSZ = 0x40087468
)


@@ -1,66 +0,0 @@
package flags
import (
"testing"
)
func TestUnknownFlags(t *testing.T) {
var opts = struct {
Verbose []bool `short:"v" long:"verbose" description:"Verbose output"`
}{}
args := []string{
"-f",
}
p := NewParser(&opts, 0)
args, err := p.ParseArgs(args)
if err == nil {
t.Fatal("Expected error for unknown argument")
}
}
func TestIgnoreUnknownFlags(t *testing.T) {
var opts = struct {
Verbose []bool `short:"v" long:"verbose" description:"Verbose output"`
}{}
args := []string{
"hello",
"world",
"-v",
"--foo=bar",
"--verbose",
"-f",
}
p := NewParser(&opts, IgnoreUnknown)
args, err := p.ParseArgs(args)
if err != nil {
t.Fatal(err)
}
exargs := []string{
"hello",
"world",
"--foo=bar",
"-f",
}
issame := (len(args) == len(exargs))
if issame {
for i := 0; i < len(args); i++ {
if args[i] != exargs[i] {
issame = false
break
}
}
}
if !issame {
t.Fatalf("Expected %v but got %v", exargs, args)
}
}


@@ -1,23 +0,0 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test


@@ -1,191 +0,0 @@
All files in this repository are licensed as follows. If you contribute
to this repository, it is assumed that you license your contribution
under the same license unless you state otherwise.
All files Copyright (C) 2015 Canonical Ltd. unless otherwise specified in the file.
This software is licensed under the LGPLv3, included below.
As a special exception to the GNU Lesser General Public License version 3
("LGPL3"), the copyright holders of this Library give you permission to
convey to a third party a Combined Work that links statically or dynamically
to this Library without providing any Minimal Corresponding Source or
Minimal Application Code as set out in 4d or providing the installation
information set out in section 4e, provided that you comply with the other
provisions of LGPL3 and provided that you meet, for the Application the
terms and conditions of the license(s) which apply to the Application.
Except as stated in this special exception, the provisions of LGPL3 will
continue to comply in full to this Library. If you modify this Library, you
may apply this exception to your version of this Library, but you are not
obliged to do so. If you do not wish to do so, delete this exception
statement from your version. This exception does not (and cannot) modify any
license terms which apply to the Application, with which you must still
comply.
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.


@@ -1,11 +0,0 @@
default: check
check:
go test && go test -compiler gccgo
docs:
godoc2md github.com/juju/errors > README.md
sed -i 's|\[godoc-link-here\]|[![GoDoc](https://godoc.org/github.com/juju/errors?status.svg)](https://godoc.org/github.com/juju/errors)|' README.md
.PHONY: default check docs


@@ -1,536 +0,0 @@
# errors
import "github.com/juju/errors"
[![GoDoc](https://godoc.org/github.com/juju/errors?status.svg)](https://godoc.org/github.com/juju/errors)
The juju/errors package provides an easy way to annotate errors without losing
the original error context.
The exported `New` and `Errorf` functions are designed to replace the
`errors.New` and `fmt.Errorf` functions respectively. The same underlying
error is there, but the package also records the location at which the error
was created.
A primary use case for this library is to add extra context any time an
error is returned from a function.
if err := SomeFunc(); err != nil {
return err
}
This instead becomes:
if err := SomeFunc(); err != nil {
return errors.Trace(err)
}
which just records the file and line number of the Trace call, or
if err := SomeFunc(); err != nil {
return errors.Annotate(err, "more context")
}
which also adds an annotation to the error.
When you want to check to see if an error is of a particular type, a helper
function is normally exported by the package that returned the error, like the
`os` package does. The underlying cause of the error is available using the
`Cause` function.
os.IsNotExist(errors.Cause(err))
The result of the `Error()` call on an annotated error is the annotations joined
with colons, then the result of the `Error()` method for the underlying error
that was the cause.
err := errors.Errorf("original")
err = errors.Annotatef(err, "context")
err = errors.Annotatef(err, "more context")
err.Error() -> "more context: context: original"
Obviously recording the file, line and function is not very useful if you
cannot get them back out again.
errors.ErrorStack(err)
will return something like:
first error
github.com/juju/errors/annotation_test.go:193:
github.com/juju/errors/annotation_test.go:194: annotation
github.com/juju/errors/annotation_test.go:195:
github.com/juju/errors/annotation_test.go:196: more context
github.com/juju/errors/annotation_test.go:197:
The first error was generated by an external system, so there was no location
associated. The second, fourth, and last lines were generated with Trace calls,
and the other two through Annotate.
Sometimes when responding to an error you want to return a more specific error
for the situation.
if err := FindField(field); err != nil {
return errors.Wrap(err, errors.NotFoundf(field))
}
This returns an error where the complete error stack is still available, and
`errors.Cause()` will return the `NotFound` error.
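For instance, a caller can combine `Cause` with one of the `Is*` helpers documented
below to react to a specific error class while keeping the annotated stack intact.
This is a minimal sketch; `LoadConfig` and `defaultConfig` are hypothetical:

``` go
if err := LoadConfig(path); err != nil {
    if errors.IsNotFound(errors.Cause(err)) {
        // The configuration file is missing: fall back to defaults.
        return defaultConfig(), nil
    }
    return nil, errors.Annotate(err, "loading configuration")
}
```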
## func AlreadyExistsf
``` go
func AlreadyExistsf(format string, args ...interface{}) error
```
AlreadyExistsf returns an error which satisfies IsAlreadyExists().
## func Annotate
``` go
func Annotate(other error, message string) error
```
Annotate is used to add extra context to an existing error. The location of
the Annotate call is recorded with the annotations. The file, line and
function are also recorded.
For example:
if err := SomeFunc(); err != nil {
return errors.Annotate(err, "failed to frombulate")
}
## func Annotatef
``` go
func Annotatef(other error, format string, args ...interface{}) error
```
Annotatef is used to add extra context to an existing error. The location of
the Annotate call is recorded with the annotations. The file, line and
function are also recorded.
For example:
if err := SomeFunc(); err != nil {
return errors.Annotatef(err, "failed to frombulate the %s", arg)
}
## func Cause
``` go
func Cause(err error) error
```
Cause returns the cause of the given error. This will be either the
original error, or the result of a Wrap or Mask call.
Cause is the usual way to diagnose errors that may have been wrapped by
the other errors functions.
## func DeferredAnnotatef
``` go
func DeferredAnnotatef(err *error, format string, args ...interface{})
```
DeferredAnnotatef annotates the given error (when it is not nil) with the given
format string and arguments (like fmt.Sprintf). If *err is nil, DeferredAnnotatef
does nothing. This method is used in a defer statement in order to annotate any
resulting error with the same message.
For example:
defer DeferredAnnotatef(&err, "failed to frombulate the %s", arg)
## func Details
``` go
func Details(err error) string
```
Details returns information about the stack of errors wrapped by err, in
the format:
[{filename:99: error one} {otherfile:55: cause of error one}]
This is a terse alternative to ErrorStack as it returns a single line.
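A terse, single-line log entry might therefore look like the following sketch
(the `doWork` helper and the logging call are illustrative only):

``` go
if err := doWork(); err != nil {
    log.Printf("work failed: %s", errors.Details(err))
}
```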
## func ErrorStack
``` go
func ErrorStack(err error) string
```
ErrorStack returns a string representation of the annotated error. If the
error passed as the parameter is not an annotated error, the result is
simply the result of the Error() method on that error.
If the error is an annotated error, a multi-line string is returned where
each line represents one entry in the annotation stack. The full filename
from the call stack is used in the output.
first error
github.com/juju/errors/annotation_test.go:193:
github.com/juju/errors/annotation_test.go:194: annotation
github.com/juju/errors/annotation_test.go:195:
github.com/juju/errors/annotation_test.go:196: more context
github.com/juju/errors/annotation_test.go:197:
## func Errorf
``` go
func Errorf(format string, args ...interface{}) error
```
Errorf creates a new annotated error and records the location at which the
error is created. It is intended as a drop-in replacement for fmt.Errorf.
For example:
return errors.Errorf("validation failed: %s", message)
## func IsAlreadyExists
``` go
func IsAlreadyExists(err error) bool
```
IsAlreadyExists reports whether the error was created with
AlreadyExistsf() or NewAlreadyExists().
## func IsNotFound
``` go
func IsNotFound(err error) bool
```
IsNotFound reports whether err was created with NotFoundf() or
NewNotFound().
## func IsNotImplemented
``` go
func IsNotImplemented(err error) bool
```
IsNotImplemented reports whether err was created with
NotImplementedf() or NewNotImplemented().
## func IsNotSupported
``` go
func IsNotSupported(err error) bool
```
IsNotSupported reports whether the error was created with
NotSupportedf() or NewNotSupported().
## func IsNotValid
``` go
func IsNotValid(err error) bool
```
IsNotValid reports whether the error was created with NotValidf() or
NewNotValid().
## func IsUnauthorized
``` go
func IsUnauthorized(err error) bool
```
IsUnauthorized reports whether err was created with Unauthorizedf() or
NewUnauthorized().
## func Mask
``` go
func Mask(other error) error
```
Mask hides the underlying error type, and records the location of the masking.
## func Maskf
``` go
func Maskf(other error, format string, args ...interface{}) error
```
Maskf masks the given error with the given format string and arguments (like
fmt.Sprintf), returning a new error that maintains the error stack, but
hides the underlying error type. The error string still contains the full
annotations. If you want to hide the annotations, call Wrap.
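A sketch of the trade-off, assuming a hypothetical `readSecret` helper that can
return an `os`-level error:

``` go
if err := readSecret(name); err != nil {
    // The annotation is kept, but callers can no longer match the underlying
    // error type (for example with os.IsNotExist on errors.Cause), because
    // Maskf replaces the cause.
    return errors.Maskf(err, "cannot read secret %q", name)
}
```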
## func New
``` go
func New(message string) error
```
New is a drop-in replacement for the standard library errors package that also
records the location at which the error is created.
For example:
return errors.New("validation failed")
## func NewAlreadyExists
``` go
func NewAlreadyExists(err error, msg string) error
```
NewAlreadyExists returns an error which wraps err and satisfies
IsAlreadyExists().
## func NewNotFound
``` go
func NewNotFound(err error, msg string) error
```
NewNotFound returns an error which wraps err that satisfies
IsNotFound().
## func NewNotImplemented
``` go
func NewNotImplemented(err error, msg string) error
```
NewNotImplemented returns an error which wraps err and satisfies
IsNotImplemented().
## func NewNotSupported
``` go
func NewNotSupported(err error, msg string) error
```
NewNotSupported returns an error which wraps err and satisfies
IsNotSupported().
## func NewNotValid
``` go
func NewNotValid(err error, msg string) error
```
NewNotValid returns an error which wraps err and satisfies IsNotValid().
## func NewUnauthorized
``` go
func NewUnauthorized(err error, msg string) error
```
NewUnauthorized returns an error which wraps err and satisfies
IsUnauthorized().
## func NotFoundf
``` go
func NotFoundf(format string, args ...interface{}) error
```
NotFoundf returns an error which satisfies IsNotFound().
## func NotImplementedf
``` go
func NotImplementedf(format string, args ...interface{}) error
```
NotImplementedf returns an error which satisfies IsNotImplemented().
## func NotSupportedf
``` go
func NotSupportedf(format string, args ...interface{}) error
```
NotSupportedf returns an error which satisfies IsNotSupported().
## func NotValidf
``` go
func NotValidf(format string, args ...interface{}) error
```
NotValidf returns an error which satisfies IsNotValid().
## func Trace
``` go
func Trace(other error) error
```
Trace adds the location of the Trace call to the stack. The Cause of the
resulting error is the same as the error parameter. If the other error is
nil, the result will be nil.
For example:
if err := SomeFunc(); err != nil {
return errors.Trace(err)
}
## func Unauthorizedf
``` go
func Unauthorizedf(format string, args ...interface{}) error
```
Unauthorizedf returns an error which satisfies IsUnauthorized().
## func Wrap
``` go
func Wrap(other, newDescriptive error) error
```
Wrap changes the Cause of the error. The location of the Wrap call is also
stored in the error stack.
For example:
if err := SomeFunc(); err != nil {
newErr := &packageError{"more context", private_value}
return errors.Wrap(err, newErr)
}
## func Wrapf
``` go
func Wrapf(other, newDescriptive error, format string, args ...interface{}) error
```
Wrapf changes the Cause of the error, and adds an annotation. The location
of the Wrap call is also stored in the error stack.
For example:
if err := SomeFunc(); err != nil {
return errors.Wrapf(err, simpleErrorType, "invalid value %q", value)
}
## type Err
``` go
type Err struct {
// contains filtered or unexported fields
}
```
Err holds a description of an error along with information about
where the error was created.
It may be embedded in custom error types to add extra information that
this errors package can understand.
### func NewErr
``` go
func NewErr(format string, args ...interface{}) Err
```
NewErr is used to return an Err for the purpose of embedding in other
structures. The location is not specified, and needs to be set with a call
to SetLocation.
For example:
type FooError struct {
errors.Err
code int
}
func NewFooError(code int) error {
err := &FooError{errors.NewErr("foo"), code}
err.SetLocation(1)
return err
}
### func (\*Err) Cause
``` go
func (e *Err) Cause() error
```
The Cause of an error is the most recent error in the error stack that
meets one of these criteria: the original error that was raised; the new
error that was passed into the Wrap function; the most recently masked
error; or nil if the error itself is considered the Cause. Normally this
method is not invoked directly, but instead through the Cause stand alone
function.
### func (\*Err) Error
``` go
func (e *Err) Error() string
```
Error implements error.Error.
### func (\*Err) Location
``` go
func (e *Err) Location() (filename string, line int)
```
Location is the file and line of where the error was most recently
created or annotated.
### func (\*Err) Message
``` go
func (e *Err) Message() string
```
Message returns the message stored with the most recent location. This is
the empty string if the most recent call was Trace, or the message stored
with Annotate or Mask.
### func (\*Err) SetLocation
``` go
func (e *Err) SetLocation(callDepth int)
```
SetLocation records the source location of the error at callDepth stack
frames above the call.
### func (\*Err) StackTrace
``` go
func (e *Err) StackTrace() []string
```
StackTrace returns one string for each location recorded in the stack of
errors. The first value is the originating error, with a line for each
other annotation or tracing of the error.
### func (\*Err) Underlying
``` go
func (e *Err) Underlying() error
```
Underlying returns the previous error in the error stack, if any. Clients
should not normally call this method; it is used to build the error stack
and should not be introspected by client code. More specifically, clients
should not depend on anything but the `Cause` of an error.
- - -
Generated by [godoc2md](http://godoc.org/github.com/davecheney/godoc2md)


@@ -1,81 +0,0 @@
// Copyright 2013, 2014 Canonical Ltd.
// Licensed under the LGPLv3, see LICENCE file for details.
/*
[godoc-link-here]
The juju/errors package provides an easy way to annotate errors without losing
the original error context.
The exported `New` and `Errorf` functions are designed to replace the
`errors.New` and `fmt.Errorf` functions respectively. The same underlying
error is there, but the package also records the location at which the error
was created.
A primary use case for this library is to add extra context any time an
error is returned from a function.
if err := SomeFunc(); err != nil {
return err
}
This instead becomes:
if err := SomeFunc(); err != nil {
return errors.Trace(err)
}
which just records the file and line number of the Trace call, or
if err := SomeFunc(); err != nil {
return errors.Annotate(err, "more context")
}
which also adds an annotation to the error.
When you want to check to see if an error is of a particular type, a helper
function is normally exported by the package that returned the error, like the
`os` package does. The underlying cause of the error is available using the
`Cause` function.
os.IsNotExist(errors.Cause(err))
The result of the `Error()` call on an annotated error is the annotations joined
with colons, then the result of the `Error()` method for the underlying error
that was the cause.
err := errors.Errorf("original")
err = errors.Annotatef(err, "context")
err = errors.Annotatef(err, "more context")
err.Error() -> "more context: context: original"
Obviously recording the file, line and function is not very useful if you
cannot get them back out again.
errors.ErrorStack(err)
will return something like:
first error
github.com/juju/errors/annotation_test.go:193:
github.com/juju/errors/annotation_test.go:194: annotation
github.com/juju/errors/annotation_test.go:195:
github.com/juju/errors/annotation_test.go:196: more context
github.com/juju/errors/annotation_test.go:197:
The first error was generated by an external system, so there was no location
associated. The second, fourth, and last lines were generated with Trace calls,
and the other two through Annotate.
Sometimes when responding to an error you want to return a more specific error
for the situation.
if err := FindField(field); err != nil {
return errors.Wrap(err, errors.NotFoundf(field))
}
This returns an error where the complete error stack is still available, and
`errors.Cause()` will return the `NotFound` error.
*/
package errors


@@ -1,122 +0,0 @@
// Copyright 2014 Canonical Ltd.
// Licensed under the LGPLv3, see LICENCE file for details.
package errors
import (
"fmt"
"reflect"
"runtime"
)
// Err holds a description of an error along with information about
// where the error was created.
//
// It may be embedded in custom error types to add extra information that
// this errors package can understand.
type Err struct {
// message holds an annotation of the error.
message string
// cause holds the cause of the error as returned
// by the Cause method.
cause error
// previous holds the previous error in the error stack, if any.
previous error
// file and line hold the source code location where the error was
// created.
file string
line int
}
// NewErr is used to return an Err for the purpose of embedding in other
// structures. The location is not specified, and needs to be set with a call
// to SetLocation.
//
// For example:
// type FooError struct {
// errors.Err
// code int
// }
//
// func NewFooError(code int) error {
// err := &FooError{errors.NewErr("foo"), code}
// err.SetLocation(1)
// return err
// }
func NewErr(format string, args ...interface{}) Err {
return Err{
message: fmt.Sprintf(format, args...),
}
}
// Location is the file and line of where the error was most recently
// created or annotated.
func (e *Err) Location() (filename string, line int) {
return e.file, e.line
}
// Underlying returns the previous error in the error stack, if any. A client
// should not ever really call this method. It is used to build the error
// stack and should not be introspected by client calls. Or more
// specifically, clients should not depend on anything but the `Cause` of an
// error.
func (e *Err) Underlying() error {
return e.previous
}
// The Cause of an error is the most recent error in the error stack that
// meets one of these criteria: the original error that was raised; the new
// error that was passed into the Wrap function; the most recently masked
// error; or nil if the error itself is considered the Cause. Normally this
// method is not invoked directly, but instead through the Cause stand alone
// function.
func (e *Err) Cause() error {
return e.cause
}
// Message returns the message stored with the most recent location. This is
// the empty string if the most recent call was Trace, or the message stored
// with Annotate or Mask.
func (e *Err) Message() string {
return e.message
}
// Error implements error.Error.
func (e *Err) Error() string {
// We want to walk up the stack of errors showing the annotations
// as long as the cause is the same.
err := e.previous
if !sameError(Cause(err), e.cause) && e.cause != nil {
err = e.cause
}
switch {
case err == nil:
return e.message
case e.message == "":
return err.Error()
}
return fmt.Sprintf("%s: %v", e.message, err)
}
// SetLocation records the source location of the error at callDepth stack
// frames above the call.
func (e *Err) SetLocation(callDepth int) {
_, file, line, _ := runtime.Caller(callDepth + 1)
e.file = trimGoPath(file)
e.line = line
}
// StackTrace returns one string for each location recorded in the stack of
// errors. The first value is the originating error, with a line for each
// other annotation or tracing of the error.
func (e *Err) StackTrace() []string {
return errorStack(e)
}
// Ideally we'd have a way to check identity, but deep equals will do.
func sameError(e1, e2 error) bool {
return reflect.DeepEqual(e1, e2)
}
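A runnable sketch of the embedding pattern from the NewErr comment above; FooError and NewFooError are the illustrative names used in that comment, not exported identifiers:

package main

import (
    "fmt"

    "github.com/juju/errors"
)

// FooError embeds errors.Err so the errors package understands it.
type FooError struct {
    errors.Err
    code int
}

func NewFooError(code int) error {
    err := &FooError{errors.NewErr("foo %d", code), code}
    err.SetLocation(1) // record the caller of NewFooError
    return err
}

func main() {
    err := NewFooError(42)
    fmt.Println(err)                 // foo 42
    fmt.Println(errors.Details(err)) // [{<file>:<line>: foo 42}]
}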


@@ -1,161 +0,0 @@
// Copyright 2014 Canonical Ltd.
// Licensed under the LGPLv3, see LICENCE file for details.
package errors_test
import (
"fmt"
"runtime"
jc "github.com/juju/testing/checkers"
gc "gopkg.in/check.v1"
"github.com/juju/errors"
)
type errorsSuite struct{}
var _ = gc.Suite(&errorsSuite{})
var someErr = errors.New("some error") //err varSomeErr
func (*errorsSuite) TestErrorString(c *gc.C) {
for i, test := range []struct {
message string
generator func() error
expected string
}{
{
message: "uncomparable errors",
generator: func() error {
err := errors.Annotatef(newNonComparableError("uncomparable"), "annotation")
return errors.Annotatef(err, "another")
},
expected: "another: annotation: uncomparable",
}, {
message: "Errorf",
generator: func() error {
return errors.Errorf("first error")
},
expected: "first error",
}, {
message: "annotated error",
generator: func() error {
err := errors.Errorf("first error")
return errors.Annotatef(err, "annotation")
},
expected: "annotation: first error",
}, {
message: "test annotation format",
generator: func() error {
err := errors.Errorf("first %s", "error")
return errors.Annotatef(err, "%s", "annotation")
},
expected: "annotation: first error",
}, {
message: "wrapped error",
generator: func() error {
err := newError("first error")
return errors.Wrap(err, newError("detailed error"))
},
expected: "detailed error",
}, {
message: "wrapped annotated error",
generator: func() error {
err := errors.Errorf("first error")
err = errors.Annotatef(err, "annotated")
return errors.Wrap(err, fmt.Errorf("detailed error"))
},
expected: "detailed error",
}, {
message: "annotated wrapped error",
generator: func() error {
err := errors.Errorf("first error")
err = errors.Wrap(err, fmt.Errorf("detailed error"))
return errors.Annotatef(err, "annotated")
},
expected: "annotated: detailed error",
}, {
message: "traced, and annotated",
generator: func() error {
err := errors.New("first error")
err = errors.Trace(err)
err = errors.Annotate(err, "some context")
err = errors.Trace(err)
err = errors.Annotate(err, "more context")
return errors.Trace(err)
},
expected: "more context: some context: first error",
}, {
message: "traced, and annotated, masked and annotated",
generator: func() error {
err := errors.New("first error")
err = errors.Trace(err)
err = errors.Annotate(err, "some context")
err = errors.Maskf(err, "masked")
err = errors.Annotate(err, "more context")
return errors.Trace(err)
},
expected: "more context: masked: some context: first error",
},
} {
c.Logf("%v: %s", i, test.message)
err := test.generator()
ok := c.Check(err.Error(), gc.Equals, test.expected)
if !ok {
c.Logf("%#v", test.generator())
}
}
}
type embed struct {
errors.Err
}
func newEmbed(format string, args ...interface{}) *embed {
err := &embed{errors.NewErr(format, args...)}
err.SetLocation(1)
return err
}
func (*errorsSuite) TestNewErr(c *gc.C) {
if runtime.Compiler == "gccgo" {
c.Skip("gccgo can't determine the location")
}
err := newEmbed("testing %d", 42) //err embedErr
c.Assert(err.Error(), gc.Equals, "testing 42")
c.Assert(errors.Cause(err), gc.Equals, err)
c.Assert(errors.Details(err), jc.Contains, tagToLocation["embedErr"].String())
}
var _ error = (*embed)(nil)
// error_ is an uncomparable error type: a struct value (rather than a pointer)
// that satisfies the error interface and whose slice field makes == comparisons panic.
type error_ struct {
info string
slice []string
}
// Create a non-comparable error
func newNonComparableError(message string) error {
return error_{info: message}
}
func (e error_) Error() string {
return e.info
}
func newError(message string) error {
return testError{message}
}
// The testError is a value type error for ease of seeing results
// when the test fails.
type testError struct {
message string
}
func (e testError) Error() string {
return e.message
}
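A short sketch of why the package's sameError helper (in error.go above) falls back to reflect.DeepEqual, which the non-comparable test type above exercises: comparing error values whose dynamic type contains a slice panics under ==. The sliceError type here is hypothetical:

package main

import (
    "fmt"
    "reflect"
)

// sliceError is uncomparable because of its slice field.
type sliceError struct {
    msgs []string
}

func (e sliceError) Error() string { return e.msgs[0] }

func main() {
    var a, b error = sliceError{[]string{"boom"}}, sliceError{[]string{"boom"}}

    fmt.Println(reflect.DeepEqual(a, b)) // true, safe for uncomparable types

    defer func() {
        fmt.Println("recovered:", recover()) // comparing uncomparable type main.sliceError
    }()
    fmt.Println(a == b) // panics at run time
}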


@@ -1,235 +0,0 @@
// Copyright 2014 Canonical Ltd.
// Licensed under the LGPLv3, see LICENCE file for details.
package errors
import (
"fmt"
)
// wrap is a helper to construct an Err that wraps a previous error with a formatted message.
func wrap(err error, format, suffix string, args ...interface{}) Err {
newErr := Err{
message: fmt.Sprintf(format+suffix, args...),
previous: err,
}
newErr.SetLocation(2)
return newErr
}
// notFound represents an error when something has not been found.
type notFound struct {
Err
}
// NotFoundf returns an error which satisfies IsNotFound().
func NotFoundf(format string, args ...interface{}) error {
return &notFound{wrap(nil, format, " not found", args...)}
}
// NewNotFound returns an error which wraps err that satisfies
// IsNotFound().
func NewNotFound(err error, msg string) error {
return &notFound{wrap(err, msg, "")}
}
// IsNotFound reports whether err was created with NotFoundf() or
// NewNotFound().
func IsNotFound(err error) bool {
err = Cause(err)
_, ok := err.(*notFound)
return ok
}
// userNotFound represents an error when a nonexistent user is looked up.
type userNotFound struct {
Err
}
// UserNotFoundf returns an error which satisfies IsUserNotFound().
func UserNotFoundf(format string, args ...interface{}) error {
return &userNotFound{wrap(nil, format, " user not found", args...)}
}
// NewUserNotFound returns an error which wraps err and satisfies
// IsUserNotFound().
func NewUserNotFound(err error, msg string) error {
return &userNotFound{wrap(err, msg, "")}
}
// IsUserNotFound reports whether err was created with UserNotFoundf() or
// NewUserNotFound().
func IsUserNotFound(err error) bool {
err = Cause(err)
_, ok := err.(*userNotFound)
return ok
}
// unauthorized represents an error when an operation is unauthorized.
type unauthorized struct {
Err
}
// Unauthorizedf returns an error which satisfies IsUnauthorized().
func Unauthorizedf(format string, args ...interface{}) error {
return &unauthorized{wrap(nil, format, "", args...)}
}
// NewUnauthorized returns an error which wraps err and satisfies
// IsUnauthorized().
func NewUnauthorized(err error, msg string) error {
return &unauthorized{wrap(err, msg, "")}
}
// IsUnauthorized reports whether err was created with Unauthorizedf() or
// NewUnauthorized().
func IsUnauthorized(err error) bool {
err = Cause(err)
_, ok := err.(*unauthorized)
return ok
}
// notImplemented represents an error when something is not
// implemented.
type notImplemented struct {
Err
}
// NotImplementedf returns an error which satisfies IsNotImplemented().
func NotImplementedf(format string, args ...interface{}) error {
return &notImplemented{wrap(nil, format, " not implemented", args...)}
}
// NewNotImplemented returns an error which wraps err and satisfies
// IsNotImplemented().
func NewNotImplemented(err error, msg string) error {
return &notImplemented{wrap(err, msg, "")}
}
// IsNotImplemented reports whether err was created with
// NotImplementedf() or NewNotImplemented().
func IsNotImplemented(err error) bool {
err = Cause(err)
_, ok := err.(*notImplemented)
return ok
}
// alreadyExists represents an error when something already exists.
type alreadyExists struct {
Err
}
// AlreadyExistsf returns an error which satisfies IsAlreadyExists().
func AlreadyExistsf(format string, args ...interface{}) error {
return &alreadyExists{wrap(nil, format, " already exists", args...)}
}
// NewAlreadyExists returns an error which wraps err and satisfies
// IsAlreadyExists().
func NewAlreadyExists(err error, msg string) error {
return &alreadyExists{wrap(err, msg, "")}
}
// IsAlreadyExists reports whether the error was created with
// AlreadyExistsf() or NewAlreadyExists().
func IsAlreadyExists(err error) bool {
err = Cause(err)
_, ok := err.(*alreadyExists)
return ok
}
// notSupported represents an error when something is not supported.
type notSupported struct {
Err
}
// NotSupportedf returns an error which satisfies IsNotSupported().
func NotSupportedf(format string, args ...interface{}) error {
return &notSupported{wrap(nil, format, " not supported", args...)}
}
// NewNotSupported returns an error which wraps err and satisfies
// IsNotSupported().
func NewNotSupported(err error, msg string) error {
return &notSupported{wrap(err, msg, "")}
}
// IsNotSupported reports whether the error was created with
// NotSupportedf() or NewNotSupported().
func IsNotSupported(err error) bool {
err = Cause(err)
_, ok := err.(*notSupported)
return ok
}
// notValid represents an error when something is not valid.
type notValid struct {
Err
}
// NotValidf returns an error which satisfies IsNotValid().
func NotValidf(format string, args ...interface{}) error {
return &notValid{wrap(nil, format, " not valid", args...)}
}
// NewNotValid returns an error which wraps err and satisfies IsNotValid().
func NewNotValid(err error, msg string) error {
return &notValid{wrap(err, msg, "")}
}
// IsNotValid reports whether the error was created with NotValidf() or
// NewNotValid().
func IsNotValid(err error) bool {
err = Cause(err)
_, ok := err.(*notValid)
return ok
}
// notProvisioned represents an error when something is not yet provisioned.
type notProvisioned struct {
Err
}
// NotProvisionedf returns an error which satisfies IsNotProvisioned().
func NotProvisionedf(format string, args ...interface{}) error {
return &notProvisioned{wrap(nil, format, " not provisioned", args...)}
}
// NewNotProvisioned returns an error which wraps err that satisfies
// IsNotProvisioned().
func NewNotProvisioned(err error, msg string) error {
return &notProvisioned{wrap(err, msg, "")}
}
// IsNotProvisioned reports whether err was created with NotProvisionedf() or
// NewNotProvisioned().
func IsNotProvisioned(err error) bool {
err = Cause(err)
_, ok := err.(*notProvisioned)
return ok
}
// notAssigned represents an error when something is not yet assigned to
// something else.
type notAssigned struct {
Err
}
// NotAssignedf returns an error which satisfies IsNotAssigned().
func NotAssignedf(format string, args ...interface{}) error {
return &notAssigned{wrap(nil, format, " not assigned", args...)}
}
// NewNotAssigned returns an error which wraps err that satisfies
// IsNotAssigned().
func NewNotAssigned(err error, msg string) error {
return &notAssigned{wrap(err, msg, "")}
}
// IsNotAssigned reports whether err was created with NotAssignedf() or
// NewNotAssigned().
func IsNotAssigned(err error) bool {
err = Cause(err)
_, ok := err.(*notAssigned)
return ok
}
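A hedged usage sketch for the typed constructors and satisfiers above; lookupUser is a placeholder:

package main

import (
    "fmt"

    "github.com/juju/errors"
)

// lookupUser is a placeholder that fails with a typed error.
func lookupUser(name string) error {
    return errors.UserNotFoundf("user %q", name)
}

func main() {
    err := lookupUser("mallory")
    err = errors.Annotate(err, "authenticating request")

    // The satisfiers unwrap via Cause, so annotations do not hide the type.
    fmt.Println(errors.IsUserNotFound(err)) // true
    fmt.Println(errors.IsNotFound(err))     // false: a distinct type
    fmt.Println(err)                        // authenticating request: user "mallory" user not found
}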


@@ -1,171 +0,0 @@
// Copyright 2013, 2014 Canonical Ltd.
// Licensed under the LGPLv3, see LICENCE file for details.
package errors_test
import (
stderrors "errors"
"fmt"
"reflect"
"runtime"
"github.com/juju/errors"
jc "github.com/juju/testing/checkers"
gc "gopkg.in/check.v1"
)
// errorInfo holds information about a single error type: a satisfier
// function, wrapping and variable arguments constructors and message
// suffix.
type errorInfo struct {
satisfier func(error) bool
argsConstructor func(string, ...interface{}) error
wrapConstructor func(error, string) error
suffix string
}
// allErrors holds information for all defined errors. When adding new
// errors, add them here as well to include them in tests.
var allErrors = []*errorInfo{
&errorInfo{errors.IsNotFound, errors.NotFoundf, errors.NewNotFound, " not found"},
&errorInfo{errors.IsUserNotFound, errors.UserNotFoundf, errors.NewUserNotFound, " user not found"},
&errorInfo{errors.IsUnauthorized, errors.Unauthorizedf, errors.NewUnauthorized, ""},
&errorInfo{errors.IsNotImplemented, errors.NotImplementedf, errors.NewNotImplemented, " not implemented"},
&errorInfo{errors.IsAlreadyExists, errors.AlreadyExistsf, errors.NewAlreadyExists, " already exists"},
&errorInfo{errors.IsNotSupported, errors.NotSupportedf, errors.NewNotSupported, " not supported"},
&errorInfo{errors.IsNotValid, errors.NotValidf, errors.NewNotValid, " not valid"},
&errorInfo{errors.IsNotProvisioned, errors.NotProvisionedf, errors.NewNotProvisioned, " not provisioned"},
&errorInfo{errors.IsNotAssigned, errors.NotAssignedf, errors.NewNotAssigned, " not assigned"},
}
type errorTypeSuite struct{}
var _ = gc.Suite(&errorTypeSuite{})
func (t *errorInfo) satisfierName() string {
value := reflect.ValueOf(t.satisfier)
f := runtime.FuncForPC(value.Pointer())
return f.Name()
}
func (t *errorInfo) equal(t0 *errorInfo) bool {
if t0 == nil {
return false
}
return t.satisfierName() == t0.satisfierName()
}
type errorTest struct {
err error
message string
errInfo *errorInfo
}
func deferredAnnotatef(err error, format string, args ...interface{}) error {
errors.DeferredAnnotatef(&err, format, args...)
return err
}
func mustSatisfy(c *gc.C, err error, errInfo *errorInfo) {
if errInfo != nil {
msg := fmt.Sprintf("%#v must satisfy %v", err, errInfo.satisfierName())
c.Check(err, jc.Satisfies, errInfo.satisfier, gc.Commentf(msg))
}
}
func mustNotSatisfy(c *gc.C, err error, errInfo *errorInfo) {
if errInfo != nil {
msg := fmt.Sprintf("%#v must not satisfy %v", err, errInfo.satisfierName())
c.Check(err, gc.Not(jc.Satisfies), errInfo.satisfier, gc.Commentf(msg))
}
}
func checkErrorMatches(c *gc.C, err error, message string, errInfo *errorInfo) {
if message == "<nil>" {
c.Check(err, gc.IsNil)
c.Check(errInfo, gc.IsNil)
} else {
c.Check(err, gc.ErrorMatches, message)
}
}
func runErrorTests(c *gc.C, errorTests []errorTest, checkMustSatisfy bool) {
for i, t := range errorTests {
c.Logf("test %d: %T: %v", i, t.err, t.err)
checkErrorMatches(c, t.err, t.message, t.errInfo)
if checkMustSatisfy {
mustSatisfy(c, t.err, t.errInfo)
}
// Check all other satisfiers to make sure none match.
for _, otherErrInfo := range allErrors {
if checkMustSatisfy && otherErrInfo.equal(t.errInfo) {
continue
}
mustNotSatisfy(c, t.err, otherErrInfo)
}
}
}
func (*errorTypeSuite) TestDeferredAnnotatef(c *gc.C) {
// Ensure DeferredAnnotatef annotates the errors.
errorTests := []errorTest{}
for _, errInfo := range allErrors {
errorTests = append(errorTests, []errorTest{{
deferredAnnotatef(nil, "comment"),
"<nil>",
nil,
}, {
deferredAnnotatef(stderrors.New("blast"), "comment"),
"comment: blast",
nil,
}, {
deferredAnnotatef(errInfo.argsConstructor("foo %d", 42), "comment %d", 69),
"comment 69: foo 42" + errInfo.suffix,
errInfo,
}, {
deferredAnnotatef(errInfo.argsConstructor(""), "comment"),
"comment: " + errInfo.suffix,
errInfo,
}, {
deferredAnnotatef(errInfo.wrapConstructor(stderrors.New("pow!"), "woo"), "comment"),
"comment: woo: pow!",
errInfo,
}}...)
}
runErrorTests(c, errorTests, true)
}
func (*errorTypeSuite) TestAllErrors(c *gc.C) {
errorTests := []errorTest{}
for _, errInfo := range allErrors {
errorTests = append(errorTests, []errorTest{{
nil,
"<nil>",
nil,
}, {
errInfo.argsConstructor("foo %d", 42),
"foo 42" + errInfo.suffix,
errInfo,
}, {
errInfo.argsConstructor(""),
errInfo.suffix,
errInfo,
}, {
errInfo.wrapConstructor(stderrors.New("pow!"), "prefix"),
"prefix: pow!",
errInfo,
}, {
errInfo.wrapConstructor(stderrors.New("pow!"), ""),
"pow!",
errInfo,
}, {
errInfo.wrapConstructor(nil, "prefix"),
"prefix",
errInfo,
}}...)
}
runErrorTests(c, errorTests, true)
}


@@ -1,23 +0,0 @@
// Copyright 2013, 2014 Canonical Ltd.
// Licensed under the LGPLv3, see LICENCE file for details.
package errors_test
import (
"fmt"
"github.com/juju/errors"
)
func ExampleTrace() {
var err1 error = fmt.Errorf("something wicked this way comes")
var err2 error = nil
// Tracing a non nil error will return an error
fmt.Println(errors.Trace(err1))
// Tracing nil will return nil
fmt.Println(errors.Trace(err2))
// Output: something wicked this way comes
// <nil>
}


@@ -1,12 +0,0 @@
// Copyright 2013, 2014 Canonical Ltd.
// Licensed under the LGPLv3, see LICENCE file for details.
package errors
// Since variables are declared before the init block, in order to get the goPath
// we need to return it rather than just reference it.
func GoPath() string {
return goPath
}
var TrimGoPath = trimGoPath


@@ -1,330 +0,0 @@
// Copyright 2014 Canonical Ltd.
// Licensed under the LGPLv3, see LICENCE file for details.
package errors
import (
"fmt"
"strings"
)
// New is a drop-in replacement for the standard library errors.New that also
// records the location at which the error is created.
//
// For example:
// return errors.New("validation failed")
//
func New(message string) error {
err := &Err{message: message}
err.SetLocation(1)
return err
}
// Errorf creates a new annotated error and records the location at which the
// error is created. It is intended as a drop-in replacement for fmt.Errorf.
//
// For example:
// return errors.Errorf("validation failed: %s", message)
//
func Errorf(format string, args ...interface{}) error {
err := &Err{message: fmt.Sprintf(format, args...)}
err.SetLocation(1)
return err
}
// Trace adds the location of the Trace call to the stack. The Cause of the
// resulting error is the same as the error parameter. If the other error is
// nil, the result will be nil.
//
// For example:
// if err := SomeFunc(); err != nil {
// return errors.Trace(err)
// }
//
func Trace(other error) error {
if other == nil {
return nil
}
err := &Err{previous: other, cause: Cause(other)}
err.SetLocation(1)
return err
}
// Annotate is used to add extra context to an existing error. The location of
// the Annotate call is recorded with the annotations. The file, line and
// function are also recorded.
//
// For example:
// if err := SomeFunc(); err != nil {
// return errors.Annotate(err, "failed to frombulate")
// }
//
func Annotate(other error, message string) error {
if other == nil {
return nil
}
err := &Err{
previous: other,
cause: Cause(other),
message: message,
}
err.SetLocation(1)
return err
}
// Annotatef is used to add extra context to an existing error. The location of
// the Annotate call is recorded with the annotations. The file, line and
// function are also recorded.
//
// For example:
// if err := SomeFunc(); err != nil {
// return errors.Annotatef(err, "failed to frombulate the %s", arg)
// }
//
func Annotatef(other error, format string, args ...interface{}) error {
if other == nil {
return nil
}
err := &Err{
previous: other,
cause: Cause(other),
message: fmt.Sprintf(format, args...),
}
err.SetLocation(1)
return err
}
// DeferredAnnotatef annotates the given error (when it is not nil) with the given
// format string and arguments (like fmt.Sprintf). If *err is nil, DeferredAnnotatef
// does nothing. This method is used in a defer statement in order to annotate any
// resulting error with the same message.
//
// For example:
//
// defer DeferredAnnotatef(&err, "failed to frombulate the %s", arg)
//
func DeferredAnnotatef(err *error, format string, args ...interface{}) {
if *err == nil {
return
}
newErr := &Err{
message: fmt.Sprintf(format, args...),
cause: Cause(*err),
previous: *err,
}
newErr.SetLocation(1)
*err = newErr
}
// Wrap changes the Cause of the error. The location of the Wrap call is also
// stored in the error stack.
//
// For example:
// if err := SomeFunc(); err != nil {
// newErr := &packageError{"more context", private_value}
// return errors.Wrap(err, newErr)
// }
//
func Wrap(other, newDescriptive error) error {
err := &Err{
previous: other,
cause: newDescriptive,
}
err.SetLocation(1)
return err
}
// Wrapf changes the Cause of the error, and adds an annotation. The location
// of the Wrap call is also stored in the error stack.
//
// For example:
// if err := SomeFunc(); err != nil {
// return errors.Wrapf(err, simpleErrorType, "invalid value %q", value)
// }
//
func Wrapf(other, newDescriptive error, format string, args ...interface{}) error {
err := &Err{
message: fmt.Sprintf(format, args...),
previous: other,
cause: newDescriptive,
}
err.SetLocation(1)
return err
}
// Maskf masks the given error with the given format string and arguments (like
// fmt.Sprintf), returning a new error that maintains the error stack, but
// hides the underlying error type. The error string still contains the full
// annotations. If you want to hide the annotations, call Wrap.
func Maskf(other error, format string, args ...interface{}) error {
if other == nil {
return nil
}
err := &Err{
message: fmt.Sprintf(format, args...),
previous: other,
}
err.SetLocation(1)
return err
}
// Mask hides the underlying error type, and records the location of the masking.
func Mask(other error) error {
if other == nil {
return nil
}
err := &Err{
previous: other,
}
err.SetLocation(1)
return err
}
// Cause returns the cause of the given error. This will be either the
// original error, or the result of a Wrap or Mask call.
//
// Cause is the usual way to diagnose errors that may have been wrapped by
// the other errors functions.
func Cause(err error) error {
var diag error
if err, ok := err.(causer); ok {
diag = err.Cause()
}
if diag != nil {
return diag
}
return err
}
type causer interface {
Cause() error
}
type wrapper interface {
// Message returns the top level error message,
// not including the message from the Previous
// error.
Message() string
// Underlying returns the Previous error, or nil
// if there is none.
Underlying() error
}
type locationer interface {
Location() (string, int)
}
var (
_ wrapper = (*Err)(nil)
_ locationer = (*Err)(nil)
_ causer = (*Err)(nil)
)
// Details returns information about the stack of errors wrapped by err, in
// the format:
//
// [{filename:99: error one} {otherfile:55: cause of error one}]
//
// This is a terse alternative to ErrorStack as it returns a single line.
func Details(err error) string {
if err == nil {
return "[]"
}
var s []byte
s = append(s, '[')
for {
s = append(s, '{')
if err, ok := err.(locationer); ok {
file, line := err.Location()
if file != "" {
s = append(s, fmt.Sprintf("%s:%d", file, line)...)
s = append(s, ": "...)
}
}
if cerr, ok := err.(wrapper); ok {
s = append(s, cerr.Message()...)
err = cerr.Underlying()
} else {
s = append(s, err.Error()...)
err = nil
}
s = append(s, '}')
if err == nil {
break
}
s = append(s, ' ')
}
s = append(s, ']')
return string(s)
}
// ErrorStack returns a string representation of the annotated error. If the
// error passed as the parameter is not an annotated error, the result is
// simply the result of the Error() method on that error.
//
// If the error is an annotated error, a multi-line string is returned where
// each line represents one entry in the annotation stack. The full filename
// from the call stack is used in the output.
//
// first error
// github.com/juju/errors/annotation_test.go:193:
// github.com/juju/errors/annotation_test.go:194: annotation
// github.com/juju/errors/annotation_test.go:195:
// github.com/juju/errors/annotation_test.go:196: more context
// github.com/juju/errors/annotation_test.go:197:
func ErrorStack(err error) string {
return strings.Join(errorStack(err), "\n")
}
func errorStack(err error) []string {
if err == nil {
return nil
}
// We want the first error first
var lines []string
for {
var buff []byte
if err, ok := err.(locationer); ok {
file, line := err.Location()
// Strip off the leading GOPATH/src path elements.
file = trimGoPath(file)
if file != "" {
buff = append(buff, fmt.Sprintf("%s:%d", file, line)...)
buff = append(buff, ": "...)
}
}
if cerr, ok := err.(wrapper); ok {
message := cerr.Message()
buff = append(buff, message...)
// If there is a cause for this error, and it is different to the cause
// of the underlying error, then output the error string in the stack trace.
var cause error
if err1, ok := err.(causer); ok {
cause = err1.Cause()
}
err = cerr.Underlying()
if cause != nil && !sameError(Cause(err), cause) {
if message != "" {
buff = append(buff, ": "...)
}
buff = append(buff, cause.Error()...)
}
} else {
buff = append(buff, err.Error()...)
err = nil
}
lines = append(lines, string(buff))
if err == nil {
break
}
}
// reverse the lines to get the original error, which was at the end of
// the list, back to the start.
var result []string
for i := len(lines); i > 0; i-- {
result = append(result, lines[i-1])
}
return result
}
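A minimal sketch of the DeferredAnnotatef pattern; copyFile and the paths are illustrative only:

package main

import (
    "fmt"
    "os"

    "github.com/juju/errors"
)

// copyFile annotates whatever error it ends up returning, in one place.
func copyFile(src, dst string) (err error) {
    defer errors.DeferredAnnotatef(&err, "copying %q to %q", src, dst)

    in, err := os.Open(src)
    if err != nil {
        return err
    }
    defer in.Close()
    // ... copying to dst elided ...
    return nil
}

func main() {
    err := copyFile("/no/such/file", "/tmp/out")
    fmt.Println(err) // copying "/no/such/file" to "/tmp/out": open /no/such/file: ...
    // The original cause is still reachable:
    fmt.Println(os.IsNotExist(errors.Cause(err))) // true
}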


@@ -1,305 +0,0 @@
// Copyright 2014 Canonical Ltd.
// Licensed under the LGPLv3, see LICENCE file for details.
package errors_test
import (
"fmt"
"os"
"path/filepath"
"runtime"
"strings"
jc "github.com/juju/testing/checkers"
gc "gopkg.in/check.v1"
"github.com/juju/errors"
)
type functionSuite struct {
}
var _ = gc.Suite(&functionSuite{})
func (*functionSuite) TestNew(c *gc.C) {
err := errors.New("testing") //err newTest
c.Assert(err.Error(), gc.Equals, "testing")
c.Assert(errors.Cause(err), gc.Equals, err)
c.Assert(errors.Details(err), jc.Contains, tagToLocation["newTest"].String())
}
func (*functionSuite) TestErrorf(c *gc.C) {
err := errors.Errorf("testing %d", 42) //err errorfTest
c.Assert(err.Error(), gc.Equals, "testing 42")
c.Assert(errors.Cause(err), gc.Equals, err)
c.Assert(errors.Details(err), jc.Contains, tagToLocation["errorfTest"].String())
}
func (*functionSuite) TestTrace(c *gc.C) {
first := errors.New("first")
err := errors.Trace(first) //err traceTest
c.Assert(err.Error(), gc.Equals, "first")
c.Assert(errors.Cause(err), gc.Equals, first)
c.Assert(errors.Details(err), jc.Contains, tagToLocation["traceTest"].String())
c.Assert(errors.Trace(nil), gc.IsNil)
}
func (*functionSuite) TestAnnotate(c *gc.C) {
first := errors.New("first")
err := errors.Annotate(first, "annotation") //err annotateTest
c.Assert(err.Error(), gc.Equals, "annotation: first")
c.Assert(errors.Cause(err), gc.Equals, first)
c.Assert(errors.Details(err), jc.Contains, tagToLocation["annotateTest"].String())
c.Assert(errors.Annotate(nil, "annotate"), gc.IsNil)
}
func (*functionSuite) TestAnnotatef(c *gc.C) {
first := errors.New("first")
err := errors.Annotatef(first, "annotation %d", 2) //err annotatefTest
c.Assert(err.Error(), gc.Equals, "annotation 2: first")
c.Assert(errors.Cause(err), gc.Equals, first)
c.Assert(errors.Details(err), jc.Contains, tagToLocation["annotatefTest"].String())
c.Assert(errors.Annotatef(nil, "annotate"), gc.IsNil)
}
func (*functionSuite) TestDeferredAnnotatef(c *gc.C) {
// NOTE: this test fails with gccgo
if runtime.Compiler == "gccgo" {
c.Skip("gccgo can't determine the location")
}
first := errors.New("first")
test := func() (err error) {
defer errors.DeferredAnnotatef(&err, "deferred %s", "annotate")
return first
} //err deferredAnnotate
err := test()
c.Assert(err.Error(), gc.Equals, "deferred annotate: first")
c.Assert(errors.Cause(err), gc.Equals, first)
c.Assert(errors.Details(err), jc.Contains, tagToLocation["deferredAnnotate"].String())
err = nil
errors.DeferredAnnotatef(&err, "deferred %s", "annotate")
c.Assert(err, gc.IsNil)
}
func (*functionSuite) TestWrap(c *gc.C) {
first := errors.New("first") //err wrapFirst
detailed := errors.New("detailed")
err := errors.Wrap(first, detailed) //err wrapTest
c.Assert(err.Error(), gc.Equals, "detailed")
c.Assert(errors.Cause(err), gc.Equals, detailed)
c.Assert(errors.Details(err), jc.Contains, tagToLocation["wrapFirst"].String())
c.Assert(errors.Details(err), jc.Contains, tagToLocation["wrapTest"].String())
}
func (*functionSuite) TestWrapOfNil(c *gc.C) {
detailed := errors.New("detailed")
err := errors.Wrap(nil, detailed) //err nilWrapTest
c.Assert(err.Error(), gc.Equals, "detailed")
c.Assert(errors.Cause(err), gc.Equals, detailed)
c.Assert(errors.Details(err), jc.Contains, tagToLocation["nilWrapTest"].String())
}
func (*functionSuite) TestWrapf(c *gc.C) {
first := errors.New("first") //err wrapfFirst
detailed := errors.New("detailed")
err := errors.Wrapf(first, detailed, "value %d", 42) //err wrapfTest
c.Assert(err.Error(), gc.Equals, "value 42: detailed")
c.Assert(errors.Cause(err), gc.Equals, detailed)
c.Assert(errors.Details(err), jc.Contains, tagToLocation["wrapfFirst"].String())
c.Assert(errors.Details(err), jc.Contains, tagToLocation["wrapfTest"].String())
}
func (*functionSuite) TestWrapfOfNil(c *gc.C) {
detailed := errors.New("detailed")
err := errors.Wrapf(nil, detailed, "value %d", 42) //err nilWrapfTest
c.Assert(err.Error(), gc.Equals, "value 42: detailed")
c.Assert(errors.Cause(err), gc.Equals, detailed)
c.Assert(errors.Details(err), jc.Contains, tagToLocation["nilWrapfTest"].String())
}
func (*functionSuite) TestMask(c *gc.C) {
first := errors.New("first")
err := errors.Mask(first) //err maskTest
c.Assert(err.Error(), gc.Equals, "first")
c.Assert(errors.Cause(err), gc.Equals, err)
c.Assert(errors.Details(err), jc.Contains, tagToLocation["maskTest"].String())
c.Assert(errors.Mask(nil), gc.IsNil)
}
func (*functionSuite) TestMaskf(c *gc.C) {
first := errors.New("first")
err := errors.Maskf(first, "masked %d", 42) //err maskfTest
c.Assert(err.Error(), gc.Equals, "masked 42: first")
c.Assert(errors.Cause(err), gc.Equals, err)
c.Assert(errors.Details(err), jc.Contains, tagToLocation["maskfTest"].String())
c.Assert(errors.Maskf(nil, "mask"), gc.IsNil)
}
func (*functionSuite) TestCause(c *gc.C) {
c.Assert(errors.Cause(nil), gc.IsNil)
c.Assert(errors.Cause(someErr), gc.Equals, someErr)
fmtErr := fmt.Errorf("simple")
c.Assert(errors.Cause(fmtErr), gc.Equals, fmtErr)
err := errors.Wrap(someErr, fmtErr)
c.Assert(errors.Cause(err), gc.Equals, fmtErr)
err = errors.Annotate(err, "annotated")
c.Assert(errors.Cause(err), gc.Equals, fmtErr)
err = errors.Maskf(err, "masked")
c.Assert(errors.Cause(err), gc.Equals, err)
// Look for a file that we know isn't there.
dir := c.MkDir()
_, err = os.Stat(filepath.Join(dir, "not-there"))
c.Assert(os.IsNotExist(err), jc.IsTrue)
err = errors.Annotatef(err, "wrap it")
// Now the error itself no longer satisfies os.IsNotExist.
c.Assert(os.IsNotExist(err), jc.IsFalse)
// However the Cause of the error still does.
c.Assert(os.IsNotExist(errors.Cause(err)), jc.IsTrue)
}
func (s *functionSuite) TestDetails(c *gc.C) {
if runtime.Compiler == "gccgo" {
c.Skip("gccgo can't determine the location")
}
c.Assert(errors.Details(nil), gc.Equals, "[]")
otherErr := fmt.Errorf("other")
checkDetails(c, otherErr, "[{other}]")
err0 := newEmbed("foo") //err TestStack#0
checkDetails(c, err0, "[{$TestStack#0$: foo}]")
err1 := errors.Annotate(err0, "bar") //err TestStack#1
checkDetails(c, err1, "[{$TestStack#1$: bar} {$TestStack#0$: foo}]")
err2 := errors.Trace(err1) //err TestStack#2
checkDetails(c, err2, "[{$TestStack#2$: } {$TestStack#1$: bar} {$TestStack#0$: foo}]")
}
type tracer interface {
StackTrace() []string
}
func (*functionSuite) TestErrorStack(c *gc.C) {
for i, test := range []struct {
message string
generator func() error
expected string
tracer bool
}{
{
message: "nil",
generator: func() error {
return nil
},
}, {
message: "raw error",
generator: func() error {
return fmt.Errorf("raw")
},
expected: "raw",
}, {
message: "single error stack",
generator: func() error {
return errors.New("first error") //err single
},
expected: "$single$: first error",
tracer: true,
}, {
message: "annotated error",
generator: func() error {
err := errors.New("first error") //err annotated-0
return errors.Annotate(err, "annotation") //err annotated-1
},
expected: "" +
"$annotated-0$: first error\n" +
"$annotated-1$: annotation",
tracer: true,
}, {
message: "wrapped error",
generator: func() error {
err := errors.New("first error") //err wrapped-0
return errors.Wrap(err, newError("detailed error")) //err wrapped-1
},
expected: "" +
"$wrapped-0$: first error\n" +
"$wrapped-1$: detailed error",
tracer: true,
}, {
message: "annotated wrapped error",
generator: func() error {
err := errors.Errorf("first error") //err ann-wrap-0
err = errors.Wrap(err, fmt.Errorf("detailed error")) //err ann-wrap-1
return errors.Annotatef(err, "annotated") //err ann-wrap-2
},
expected: "" +
"$ann-wrap-0$: first error\n" +
"$ann-wrap-1$: detailed error\n" +
"$ann-wrap-2$: annotated",
tracer: true,
}, {
message: "traced, and annotated",
generator: func() error {
err := errors.New("first error") //err stack-0
err = errors.Trace(err) //err stack-1
err = errors.Annotate(err, "some context") //err stack-2
err = errors.Trace(err) //err stack-3
err = errors.Annotate(err, "more context") //err stack-4
return errors.Trace(err) //err stack-5
},
expected: "" +
"$stack-0$: first error\n" +
"$stack-1$: \n" +
"$stack-2$: some context\n" +
"$stack-3$: \n" +
"$stack-4$: more context\n" +
"$stack-5$: ",
tracer: true,
}, {
message: "uncomparable, wrapped with a value error",
generator: func() error {
err := newNonComparableError("first error") //err mixed-0
err = errors.Trace(err) //err mixed-1
err = errors.Wrap(err, newError("value error")) //err mixed-2
err = errors.Maskf(err, "masked") //err mixed-3
err = errors.Annotate(err, "more context") //err mixed-4
return errors.Trace(err) //err mixed-5
},
expected: "" +
"first error\n" +
"$mixed-1$: \n" +
"$mixed-2$: value error\n" +
"$mixed-3$: masked\n" +
"$mixed-4$: more context\n" +
"$mixed-5$: ",
tracer: true,
},
} {
c.Logf("%v: %s", i, test.message)
err := test.generator()
expected := replaceLocations(test.expected)
stack := errors.ErrorStack(err)
ok := c.Check(stack, gc.Equals, expected)
if !ok {
c.Logf("%#v", err)
}
tracer, ok := err.(tracer)
c.Check(ok, gc.Equals, test.tracer)
if ok {
stackTrace := tracer.StackTrace()
c.Check(stackTrace, gc.DeepEquals, strings.Split(stack, "\n"))
}
}
}


@@ -1,95 +0,0 @@
// Copyright 2013, 2014 Canonical Ltd.
// Licensed under the LGPLv3, see LICENCE file for details.
package errors_test
import (
"fmt"
"io/ioutil"
"strings"
"testing"
gc "gopkg.in/check.v1"
"github.com/juju/errors"
)
func Test(t *testing.T) {
gc.TestingT(t)
}
func checkDetails(c *gc.C, err error, details string) {
c.Assert(err, gc.NotNil)
expectedDetails := replaceLocations(details)
c.Assert(errors.Details(err), gc.Equals, expectedDetails)
}
func checkErr(c *gc.C, err, cause error, msg string, details string) {
c.Assert(err, gc.NotNil)
c.Assert(err.Error(), gc.Equals, msg)
c.Assert(errors.Cause(err), gc.Equals, cause)
expectedDetails := replaceLocations(details)
c.Assert(errors.Details(err), gc.Equals, expectedDetails)
}
func replaceLocations(line string) string {
result := ""
for {
i := strings.Index(line, "$")
if i == -1 {
break
}
result += line[0:i]
line = line[i+1:]
i = strings.Index(line, "$")
if i == -1 {
panic("no second $")
}
result += location(line[0:i]).String()
line = line[i+1:]
}
result += line
return result
}
func location(tag string) Location {
loc, ok := tagToLocation[tag]
if !ok {
panic(fmt.Sprintf("tag %q not found", tag))
}
return loc
}
type Location struct {
file string
line int
}
func (loc Location) String() string {
return fmt.Sprintf("%s:%d", loc.file, loc.line)
}
var tagToLocation = make(map[string]Location)
func setLocationsForErrorTags(filename string) {
data, err := ioutil.ReadFile(filename)
if err != nil {
panic(err)
}
filename = "github.com/juju/errors/" + filename
lines := strings.Split(string(data), "\n")
for i, line := range lines {
if j := strings.Index(line, "//err "); j >= 0 {
tag := line[j+len("//err "):]
if _, found := tagToLocation[tag]; found {
panic(fmt.Sprintf("tag %q already processed previously", tag))
}
tagToLocation[tag] = Location{file: filename, line: i + 1}
}
}
}
func init() {
setLocationsForErrorTags("error_test.go")
setLocationsForErrorTags("functions_test.go")
}


@@ -1,35 +0,0 @@
// Copyright 2013, 2014 Canonical Ltd.
// Licensed under the LGPLv3, see LICENCE file for details.
package errors
import (
"runtime"
"strings"
)
// prefixSize is used internally to trim the user specific path from the
// front of the returned filenames from the runtime call stack.
var prefixSize int
// goPath is the deduced path based on the location of this file as compiled.
var goPath string
func init() {
_, file, _, ok := runtime.Caller(0)
if ok {
// We know that the end of the file should be:
// github.com/juju/errors/path.go
size := len(file)
suffix := len("github.com/juju/errors/path.go")
goPath = file[:size-suffix]
prefixSize = len(goPath)
}
}
func trimGoPath(filename string) string {
if strings.HasPrefix(filename, goPath) {
return filename[prefixSize:]
}
return filename
}
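The same prefix-trimming idea as a standalone sketch (trimGoPath itself is unexported); the gopath value below is assumed purely for illustration:

package main

import (
    "fmt"
    "strings"
)

// trim strips a known source-root prefix, mirroring trimGoPath above.
func trim(gopath, filename string) string {
    if strings.HasPrefix(filename, gopath) {
        return filename[len(gopath):]
    }
    return filename
}

func main() {
    gopath := "/home/user/go/src/" // assumed source root
    fmt.Println(trim(gopath, "/home/user/go/src/github.com/juju/errors/path.go"))
    // github.com/juju/errors/path.go
    fmt.Println(trim(gopath, "/usr/share/foo/bar.go")) // unchanged
}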


@@ -1,29 +0,0 @@
// Copyright 2013, 2014 Canonical Ltd.
// Licensed under the LGPLv3, see LICENCE file for details.
package errors_test
import (
"path"
gc "gopkg.in/check.v1"
"github.com/juju/errors"
)
type pathSuite struct{}
var _ = gc.Suite(&pathSuite{})
func (*pathSuite) TestGoPathSet(c *gc.C) {
c.Assert(errors.GoPath(), gc.Not(gc.Equals), "")
}
func (*pathSuite) TestTrimGoPath(c *gc.C) {
relativeImport := "github.com/foo/bar/baz.go"
filename := path.Join(errors.GoPath(), relativeImport)
c.Assert(errors.TrimGoPath(filename), gc.Equals, relativeImport)
absoluteImport := "/usr/share/foo/bar/baz.go"
c.Assert(errors.TrimGoPath(absoluteImport), gc.Equals, absoluteImport)
}


@@ -1,74 +0,0 @@
package aws
import (
"time"
)
// AttemptStrategy represents a strategy for waiting for an action
// to complete successfully. This is an internal type used by the
// implementation of other goamz packages.
type AttemptStrategy struct {
Total time.Duration // total duration of attempt.
Delay time.Duration // interval between each try in the burst.
Min int // minimum number of retries; overrides Total
}
type Attempt struct {
strategy AttemptStrategy
last time.Time
end time.Time
force bool
count int
}
// Start begins a new sequence of attempts for the given strategy.
func (s AttemptStrategy) Start() *Attempt {
now := time.Now()
return &Attempt{
strategy: s,
last: now,
end: now.Add(s.Total),
force: true,
}
}
// Next waits until it is time to perform the next attempt or returns
// false if it is time to stop trying.
func (a *Attempt) Next() bool {
now := time.Now()
sleep := a.nextSleep(now)
if !a.force && !now.Add(sleep).Before(a.end) && a.strategy.Min <= a.count {
return false
}
a.force = false
if sleep > 0 && a.count > 0 {
time.Sleep(sleep)
now = time.Now()
}
a.count++
a.last = now
return true
}
func (a *Attempt) nextSleep(now time.Time) time.Duration {
sleep := a.strategy.Delay - now.Sub(a.last)
if sleep < 0 {
return 0
}
return sleep
}
// HasNext returns whether another attempt will be made if the current
// one fails. If it returns true, the following call to Next is
// guaranteed to return true.
func (a *Attempt) HasNext() bool {
if a.force || a.strategy.Min > a.count {
return true
}
now := time.Now()
if now.Add(a.nextSleep(now)).Before(a.end) {
a.force = true
return true
}
return false
}
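A hedged sketch of driving AttemptStrategy in a retry loop, using the import path from the accompanying tests; pingService stands in for the real operation:

package main

import (
    "fmt"
    "time"

    "github.com/mitchellh/goamz/aws"
)

// pingService is a placeholder operation that fails on the first two tries.
func pingService(try int) error {
    if try < 2 {
        return fmt.Errorf("temporary failure")
    }
    return nil
}

func main() {
    strategy := aws.AttemptStrategy{
        Total: 1 * time.Second,        // give up after roughly one second
        Delay: 100 * time.Millisecond, // pause between tries
    }

    var err error
    try := 0
    for attempt := strategy.Start(); attempt.Next(); try++ {
        if err = pingService(try); err == nil {
            break
        }
    }
    fmt.Println("attempts:", try+1, "err:", err)
}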


@@ -1,57 +0,0 @@
package aws_test
import (
"github.com/mitchellh/goamz/aws"
. "github.com/motain/gocheck"
"time"
)
func (S) TestAttemptTiming(c *C) {
testAttempt := aws.AttemptStrategy{
Total: 0.25e9,
Delay: 0.1e9,
}
want := []time.Duration{0, 0.1e9, 0.2e9, 0.2e9}
got := make([]time.Duration, 0, len(want)) // avoid allocation when testing timing
t0 := time.Now()
for a := testAttempt.Start(); a.Next(); {
got = append(got, time.Now().Sub(t0))
}
got = append(got, time.Now().Sub(t0))
c.Assert(got, HasLen, len(want))
const margin = 0.01e9
for i := range got {
lo := want[i] - margin
hi := want[i] + margin
if got[i] < lo || got[i] > hi {
c.Errorf("attempt %d want %g got %g", i, want[i].Seconds(), got[i].Seconds())
}
}
}
func (S) TestAttemptNextHasNext(c *C) {
a := aws.AttemptStrategy{}.Start()
c.Assert(a.Next(), Equals, true)
c.Assert(a.Next(), Equals, false)
a = aws.AttemptStrategy{}.Start()
c.Assert(a.Next(), Equals, true)
c.Assert(a.HasNext(), Equals, false)
c.Assert(a.Next(), Equals, false)
a = aws.AttemptStrategy{Total: 2e8}.Start()
c.Assert(a.Next(), Equals, true)
c.Assert(a.HasNext(), Equals, true)
time.Sleep(2e8)
c.Assert(a.HasNext(), Equals, true)
c.Assert(a.Next(), Equals, true)
c.Assert(a.Next(), Equals, false)
a = aws.AttemptStrategy{Total: 1e8, Min: 2}.Start()
time.Sleep(1e8)
c.Assert(a.Next(), Equals, true)
c.Assert(a.HasNext(), Equals, true)
c.Assert(a.Next(), Equals, true)
c.Assert(a.HasNext(), Equals, false)
c.Assert(a.Next(), Equals, false)
}


@@ -1,445 +0,0 @@
//
// goamz - Go packages to interact with the Amazon Web Services.
//
// https://wiki.ubuntu.com/goamz
//
// Copyright (c) 2011 Canonical Ltd.
//
// Written by Gustavo Niemeyer <gustavo.niemeyer@canonical.com>
//
package aws
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"os"
"github.com/vaughan0/go-ini"
)
// Region defines the URLs where AWS services may be accessed.
//
// See http://goo.gl/d8BP1 for more details.
type Region struct {
Name string // the canonical name of this region.
EC2Endpoint string
S3Endpoint string
S3BucketEndpoint string // Not needed by AWS S3. Use ${bucket} for bucket name.
S3LocationConstraint bool // true if this region requires a LocationConstraint declaration.
S3LowercaseBucket bool // true if the region requires bucket names to be lower case.
SDBEndpoint string
SNSEndpoint string
SQSEndpoint string
IAMEndpoint string
ELBEndpoint string
AutoScalingEndpoint string
RdsEndpoint string
Route53Endpoint string
}
var USGovWest = Region{
"us-gov-west-1",
"https://ec2.us-gov-west-1.amazonaws.com",
"https://s3-fips-us-gov-west-1.amazonaws.com",
"",
true,
true,
"",
"https://sns.us-gov-west-1.amazonaws.com",
"https://sqs.us-gov-west-1.amazonaws.com",
"https://iam.us-gov.amazonaws.com",
"https://elasticloadbalancing.us-gov-west-1.amazonaws.com",
"https://autoscaling.us-gov-west-1.amazonaws.com",
"https://rds.us-gov-west-1.amazonaws.com",
"https://route53.amazonaws.com",
}
var USEast = Region{
"us-east-1",
"https://ec2.us-east-1.amazonaws.com",
"https://s3.amazonaws.com",
"",
false,
false,
"https://sdb.amazonaws.com",
"https://sns.us-east-1.amazonaws.com",
"https://sqs.us-east-1.amazonaws.com",
"https://iam.amazonaws.com",
"https://elasticloadbalancing.us-east-1.amazonaws.com",
"https://autoscaling.us-east-1.amazonaws.com",
"https://rds.us-east-1.amazonaws.com",
"https://route53.amazonaws.com",
}
var USWest = Region{
"us-west-1",
"https://ec2.us-west-1.amazonaws.com",
"https://s3-us-west-1.amazonaws.com",
"",
true,
true,
"https://sdb.us-west-1.amazonaws.com",
"https://sns.us-west-1.amazonaws.com",
"https://sqs.us-west-1.amazonaws.com",
"https://iam.amazonaws.com",
"https://elasticloadbalancing.us-west-1.amazonaws.com",
"https://autoscaling.us-west-1.amazonaws.com",
"https://rds.us-west-1.amazonaws.com",
"https://route53.amazonaws.com",
}
var USWest2 = Region{
"us-west-2",
"https://ec2.us-west-2.amazonaws.com",
"https://s3-us-west-2.amazonaws.com",
"",
true,
true,
"https://sdb.us-west-2.amazonaws.com",
"https://sns.us-west-2.amazonaws.com",
"https://sqs.us-west-2.amazonaws.com",
"https://iam.amazonaws.com",
"https://elasticloadbalancing.us-west-2.amazonaws.com",
"https://autoscaling.us-west-2.amazonaws.com",
"https://rds.us-west-2.amazonaws.com",
"https://route53.amazonaws.com",
}
var EUWest = Region{
"eu-west-1",
"https://ec2.eu-west-1.amazonaws.com",
"https://s3-eu-west-1.amazonaws.com",
"",
true,
true,
"https://sdb.eu-west-1.amazonaws.com",
"https://sns.eu-west-1.amazonaws.com",
"https://sqs.eu-west-1.amazonaws.com",
"https://iam.amazonaws.com",
"https://elasticloadbalancing.eu-west-1.amazonaws.com",
"https://autoscaling.eu-west-1.amazonaws.com",
"https://rds.eu-west-1.amazonaws.com",
"https://route53.amazonaws.com",
}
var EUCentral = Region{
"eu-central-1",
"https://ec2.eu-central-1.amazonaws.com",
"https://s3-eu-central-1.amazonaws.com",
"",
true,
true,
"",
"https://sns.eu-central-1.amazonaws.com",
"https://sqs.eu-central-1.amazonaws.com",
"https://iam.amazonaws.com",
"https://elasticloadbalancing.eu-central-1.amazonaws.com",
"https://autoscaling.eu-central-1.amazonaws.com",
"https://rds.eu-central-1.amazonaws.com",
"https://route53.amazonaws.com",
}
var APSoutheast = Region{
"ap-southeast-1",
"https://ec2.ap-southeast-1.amazonaws.com",
"https://s3-ap-southeast-1.amazonaws.com",
"",
true,
true,
"https://sdb.ap-southeast-1.amazonaws.com",
"https://sns.ap-southeast-1.amazonaws.com",
"https://sqs.ap-southeast-1.amazonaws.com",
"https://iam.amazonaws.com",
"https://elasticloadbalancing.ap-southeast-1.amazonaws.com",
"https://autoscaling.ap-southeast-1.amazonaws.com",
"https://rds.ap-southeast-1.amazonaws.com",
"https://route53.amazonaws.com",
}
var APSoutheast2 = Region{
"ap-southeast-2",
"https://ec2.ap-southeast-2.amazonaws.com",
"https://s3-ap-southeast-2.amazonaws.com",
"",
true,
true,
"https://sdb.ap-southeast-2.amazonaws.com",
"https://sns.ap-southeast-2.amazonaws.com",
"https://sqs.ap-southeast-2.amazonaws.com",
"https://iam.amazonaws.com",
"https://elasticloadbalancing.ap-southeast-2.amazonaws.com",
"https://autoscaling.ap-southeast-2.amazonaws.com",
"https://rds.ap-southeast-2.amazonaws.com",
"https://route53.amazonaws.com",
}
var APNortheast = Region{
"ap-northeast-1",
"https://ec2.ap-northeast-1.amazonaws.com",
"https://s3-ap-northeast-1.amazonaws.com",
"",
true,
true,
"https://sdb.ap-northeast-1.amazonaws.com",
"https://sns.ap-northeast-1.amazonaws.com",
"https://sqs.ap-northeast-1.amazonaws.com",
"https://iam.amazonaws.com",
"https://elasticloadbalancing.ap-northeast-1.amazonaws.com",
"https://autoscaling.ap-northeast-1.amazonaws.com",
"https://rds.ap-northeast-1.amazonaws.com",
"https://route53.amazonaws.com",
}
var SAEast = Region{
"sa-east-1",
"https://ec2.sa-east-1.amazonaws.com",
"https://s3-sa-east-1.amazonaws.com",
"",
true,
true,
"https://sdb.sa-east-1.amazonaws.com",
"https://sns.sa-east-1.amazonaws.com",
"https://sqs.sa-east-1.amazonaws.com",
"https://iam.amazonaws.com",
"https://elasticloadbalancing.sa-east-1.amazonaws.com",
"https://autoscaling.sa-east-1.amazonaws.com",
"https://rds.sa-east-1.amazonaws.com",
"https://route53.amazonaws.com",
}
var CNNorth = Region{
"cn-north-1",
"https://ec2.cn-north-1.amazonaws.com.cn",
"https://s3.cn-north-1.amazonaws.com.cn",
"",
true,
true,
"",
"https://sns.cn-north-1.amazonaws.com.cn",
"https://sqs.cn-north-1.amazonaws.com.cn",
"https://iam.cn-north-1.amazonaws.com.cn",
"https://elasticloadbalancing.cn-north-1.amazonaws.com.cn",
"https://autoscaling.cn-north-1.amazonaws.com.cn",
"https://rds.cn-north-1.amazonaws.com.cn",
"https://route53.amazonaws.com",
}
var Regions = map[string]Region{
APNortheast.Name: APNortheast,
APSoutheast.Name: APSoutheast,
APSoutheast2.Name: APSoutheast2,
EUWest.Name: EUWest,
EUCentral.Name: EUCentral,
USEast.Name: USEast,
USWest.Name: USWest,
USWest2.Name: USWest2,
SAEast.Name: SAEast,
USGovWest.Name: USGovWest,
CNNorth.Name: CNNorth,
}
type Auth struct {
AccessKey, SecretKey, Token string
}
var unreserved = make([]bool, 128)
var hex = "0123456789ABCDEF"
func init() {
// RFC3986
u := "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01234567890-_.~"
for _, c := range u {
unreserved[c] = true
}
}
type credentials struct {
Code string
LastUpdated string
Type string
AccessKeyId string
SecretAccessKey string
Token string
Expiration string
}
// GetMetaData retrieves instance metadata about the current machine.
//
// See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html for more details.
func GetMetaData(path string) (contents []byte, err error) {
url := "http://169.254.169.254/latest/meta-data/" + path
resp, err := RetryingClient.Get(url)
if err != nil {
return
}
defer resp.Body.Close()
if resp.StatusCode != 200 {
err = fmt.Errorf("Code %d returned for url %s", resp.StatusCode, url)
return
}
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return
}
return []byte(body), err
}
func getInstanceCredentials() (cred credentials, err error) {
credentialPath := "iam/security-credentials/"
// Get the instance role
role, err := GetMetaData(credentialPath)
if err != nil {
return
}
// Get the instance role credentials
credentialJSON, err := GetMetaData(credentialPath + string(role))
if err != nil {
return
}
err = json.Unmarshal([]byte(credentialJSON), &cred)
return
}
// GetAuth creates an Auth based on either passed in credentials,
// environment information or instance based role credentials.
func GetAuth(accessKey string, secretKey string) (auth Auth, err error) {
// First try passed in credentials
if accessKey != "" && secretKey != "" {
return Auth{accessKey, secretKey, ""}, nil
}
// Next try to get auth from the shared credentials file
auth, err = SharedAuth()
if err == nil {
// Found auth, return
return
}
// Next try to get auth from the environment
auth, err = EnvAuth()
if err == nil {
// Found auth, return
return
}
// Next try getting auth from the instance role
cred, err := getInstanceCredentials()
if err == nil {
// Found auth, return
auth.AccessKey = cred.AccessKeyId
auth.SecretKey = cred.SecretAccessKey
auth.Token = cred.Token
return
}
err = errors.New("No valid AWS authentication found")
return
}
// SharedAuth creates an Auth based on shared credentials stored in
// $HOME/.aws/credentials. The AWS_PROFILE environment variable is used to
// select the profile.
func SharedAuth() (auth Auth, err error) {
var profileName = os.Getenv("AWS_PROFILE")
if profileName == "" {
profileName = "default"
}
var credentialsFile = os.Getenv("AWS_CREDENTIAL_FILE")
if credentialsFile == "" {
var homeDir = os.Getenv("HOME")
if homeDir == "" {
err = errors.New("Could not get HOME")
return
}
credentialsFile = homeDir + "/.aws/credentials"
}
file, err := ini.LoadFile(credentialsFile)
if err != nil {
err = errors.New("Couldn't parse AWS credentials file")
return
}
var profile = file[profileName]
if profile == nil {
err = errors.New("Couldn't find profile in AWS credentials file")
return
}
auth.AccessKey = profile["aws_access_key_id"]
auth.SecretKey = profile["aws_secret_access_key"]
if auth.AccessKey == "" {
err = errors.New("AWS_ACCESS_KEY_ID not found in environment in credentials file")
}
if auth.SecretKey == "" {
err = errors.New("AWS_SECRET_ACCESS_KEY not found in credentials file")
}
return
}
// EnvAuth creates an Auth based on environment information.
// The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment
// variables are used.
// For accounts that require a security token, it is read from AWS_SECURITY_TOKEN.
func EnvAuth() (auth Auth, err error) {
auth.AccessKey = os.Getenv("AWS_ACCESS_KEY_ID")
if auth.AccessKey == "" {
auth.AccessKey = os.Getenv("AWS_ACCESS_KEY")
}
auth.SecretKey = os.Getenv("AWS_SECRET_ACCESS_KEY")
if auth.SecretKey == "" {
auth.SecretKey = os.Getenv("AWS_SECRET_KEY")
}
auth.Token = os.Getenv("AWS_SECURITY_TOKEN")
if auth.AccessKey == "" {
err = errors.New("AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY not found in environment")
}
if auth.SecretKey == "" {
err = errors.New("AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY not found in environment")
}
return
}
// Encode takes a string and URI-encodes it in a way suitable
// to be used in AWS signatures.
func Encode(s string) string {
encode := false
for i := 0; i != len(s); i++ {
c := s[i]
if c > 127 || !unreserved[c] {
encode = true
break
}
}
if !encode {
return s
}
e := make([]byte, len(s)*3)
ei := 0
for i := 0; i != len(s); i++ {
c := s[i]
if c > 127 || !unreserved[c] {
e[ei] = '%'
e[ei+1] = hex[c>>4]
e[ei+2] = hex[c&0xF]
ei += 3
} else {
e[ei] = c
ei += 1
}
}
return string(e[:ei])
}
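A hedged usage sketch of the credential and region helpers above, using the import path from the accompanying tests; the key values are illustrative and need not be valid:

package main

import (
    "fmt"

    "github.com/mitchellh/goamz/aws"
)

func main() {
    // Static credentials win; otherwise GetAuth falls back to the shared
    // credentials file, the environment, and finally instance role metadata.
    auth, err := aws.GetAuth("AKIDEXAMPLE", "secretexample")
    if err != nil {
        fmt.Println("no credentials:", err)
        return
    }
    fmt.Println("access key:", auth.AccessKey)

    // Regions maps canonical names to endpoint sets.
    region := aws.Regions["eu-west-1"]
    fmt.Println(region.S3Endpoint) // https://s3-eu-west-1.amazonaws.com

    // Encode URI-escapes values for AWS signatures.
    fmt.Println(aws.Encode("a b/c")) // a%20b%2Fc
}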


@@ -1,203 +0,0 @@
package aws_test
import (
"github.com/mitchellh/goamz/aws"
. "github.com/motain/gocheck"
"io/ioutil"
"os"
"strings"
"testing"
)
func Test(t *testing.T) {
TestingT(t)
}
var _ = Suite(&S{})
type S struct {
environ []string
}
func (s *S) SetUpSuite(c *C) {
s.environ = os.Environ()
}
func (s *S) TearDownTest(c *C) {
os.Clearenv()
for _, kv := range s.environ {
l := strings.SplitN(kv, "=", 2)
os.Setenv(l[0], l[1])
}
}
func (s *S) TestSharedAuthNoHome(c *C) {
os.Clearenv()
os.Setenv("AWS_PROFILE", "foo")
_, err := aws.SharedAuth()
c.Assert(err, ErrorMatches, "Could not get HOME")
}
func (s *S) TestSharedAuthNoCredentialsFile(c *C) {
os.Clearenv()
os.Setenv("AWS_PROFILE", "foo")
os.Setenv("HOME", "/tmp")
_, err := aws.SharedAuth()
c.Assert(err, ErrorMatches, "Couldn't parse AWS credentials file")
}
func (s *S) TestSharedAuthNoProfileInFile(c *C) {
os.Clearenv()
os.Setenv("AWS_PROFILE", "foo")
d, err := ioutil.TempDir("", "")
if err != nil {
panic(err)
}
defer os.RemoveAll(d)
err = os.Mkdir(d+"/.aws", 0755)
if err != nil {
panic(err)
}
ioutil.WriteFile(d+"/.aws/credentials", []byte("[bar]\n"), 0644)
os.Setenv("HOME", d)
_, err = aws.SharedAuth()
c.Assert(err, ErrorMatches, "Couldn't find profile in AWS credentials file")
}
func (s *S) TestSharedAuthNoKeysInProfile(c *C) {
os.Clearenv()
os.Setenv("AWS_PROFILE", "bar")
d, err := ioutil.TempDir("", "")
if err != nil {
panic(err)
}
defer os.RemoveAll(d)
err = os.Mkdir(d+"/.aws", 0755)
if err != nil {
panic(err)
}
ioutil.WriteFile(d+"/.aws/credentials", []byte("[bar]\nawsaccesskeyid = AK.."), 0644)
os.Setenv("HOME", d)
_, err = aws.SharedAuth()
c.Assert(err, ErrorMatches, "AWS_SECRET_ACCESS_KEY not found in credentials file")
}
func (s *S) TestSharedAuthDefaultCredentials(c *C) {
os.Clearenv()
d, err := ioutil.TempDir("", "")
if err != nil {
panic(err)
}
defer os.RemoveAll(d)
err = os.Mkdir(d+"/.aws", 0755)
if err != nil {
panic(err)
}
ioutil.WriteFile(d+"/.aws/credentials", []byte("[default]\naws_access_key_id = access\naws_secret_access_key = secret\n"), 0644)
os.Setenv("HOME", d)
auth, err := aws.SharedAuth()
c.Assert(err, IsNil)
c.Assert(auth, Equals, aws.Auth{SecretKey: "secret", AccessKey: "access"})
}
func (s *S) TestSharedAuth(c *C) {
os.Clearenv()
os.Setenv("AWS_PROFILE", "bar")
d, err := ioutil.TempDir("", "")
if err != nil {
panic(err)
}
defer os.RemoveAll(d)
err = os.Mkdir(d+"/.aws", 0755)
if err != nil {
panic(err)
}
ioutil.WriteFile(d+"/.aws/credentials", []byte("[bar]\naws_access_key_id = access\naws_secret_access_key = secret\n"), 0644)
os.Setenv("HOME", d)
auth, err := aws.SharedAuth()
c.Assert(err, IsNil)
c.Assert(auth, Equals, aws.Auth{SecretKey: "secret", AccessKey: "access"})
}
func (s *S) TestEnvAuthNoSecret(c *C) {
os.Clearenv()
_, err := aws.EnvAuth()
c.Assert(err, ErrorMatches, "AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY not found in environment")
}
func (s *S) TestEnvAuthNoAccess(c *C) {
os.Clearenv()
os.Setenv("AWS_SECRET_ACCESS_KEY", "foo")
_, err := aws.EnvAuth()
c.Assert(err, ErrorMatches, "AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY not found in environment")
}
func (s *S) TestEnvAuth(c *C) {
os.Clearenv()
os.Setenv("AWS_SECRET_ACCESS_KEY", "secret")
os.Setenv("AWS_ACCESS_KEY_ID", "access")
auth, err := aws.EnvAuth()
c.Assert(err, IsNil)
c.Assert(auth, Equals, aws.Auth{SecretKey: "secret", AccessKey: "access"})
}
func (s *S) TestEnvAuthWithToken(c *C) {
os.Clearenv()
os.Setenv("AWS_SECRET_ACCESS_KEY", "secret")
os.Setenv("AWS_ACCESS_KEY_ID", "access")
os.Setenv("AWS_SECURITY_TOKEN", "token")
auth, err := aws.EnvAuth()
c.Assert(err, IsNil)
c.Assert(auth, Equals, aws.Auth{SecretKey: "secret", AccessKey: "access", Token: "token"})
}
func (s *S) TestEnvAuthAlt(c *C) {
os.Clearenv()
os.Setenv("AWS_SECRET_KEY", "secret")
os.Setenv("AWS_ACCESS_KEY", "access")
auth, err := aws.EnvAuth()
c.Assert(err, IsNil)
c.Assert(auth, Equals, aws.Auth{SecretKey: "secret", AccessKey: "access"})
}
func (s *S) TestGetAuthStatic(c *C) {
auth, err := aws.GetAuth("access", "secret")
c.Assert(err, IsNil)
c.Assert(auth, Equals, aws.Auth{SecretKey: "secret", AccessKey: "access"})
}
func (s *S) TestGetAuthEnv(c *C) {
os.Clearenv()
os.Setenv("AWS_SECRET_ACCESS_KEY", "secret")
os.Setenv("AWS_ACCESS_KEY_ID", "access")
auth, err := aws.GetAuth("", "")
c.Assert(err, IsNil)
c.Assert(auth, Equals, aws.Auth{SecretKey: "secret", AccessKey: "access"})
}
func (s *S) TestEncode(c *C) {
c.Assert(aws.Encode("foo"), Equals, "foo")
c.Assert(aws.Encode("/"), Equals, "%2F")
}
func (s *S) TestRegionsAreNamed(c *C) {
for n, r := range aws.Regions {
c.Assert(n, Equals, r.Name)
}
}
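
The tests above exercise the credential helpers in a fixed order: EnvAuth reads the environment, SharedAuth reads ~/.aws/credentials for the profile named in AWS_PROFILE (or "default"), and GetAuth accepts explicit keys with an environment fallback. A minimal sketch of how a caller might combine them; the fallback chain and log messages here are illustrative choices by the caller, not behavior of the package itself:

package main

import (
    "log"

    "github.com/mitchellh/goamz/aws"
)

func main() {
    // GetAuth uses the explicit keys if given, otherwise the environment.
    auth, err := aws.GetAuth("", "")
    if err != nil {
        // Illustrative fallback: try the shared credentials file next
        // (profile taken from AWS_PROFILE, or "default").
        auth, err = aws.SharedAuth()
    }
    if err != nil {
        log.Fatalf("no AWS credentials found: %v", err)
    }
    log.Printf("using access key %s", auth.AccessKey)
}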


@@ -1,125 +0,0 @@
package aws
import (
"math"
"net"
"net/http"
"time"
)
type RetryableFunc func(*http.Request, *http.Response, error) bool
type WaitFunc func(try int)
type DeadlineFunc func() time.Time
type ResilientTransport struct {
// Timeout is the maximum amount of time a dial will wait for
// a connect to complete.
//
// The default is no timeout.
//
// With or without a timeout, the operating system may impose
// its own earlier timeout. For instance, TCP timeouts are
// often around 3 minutes.
DialTimeout time.Duration
// MaxTries, if non-zero, specifies the number of times we will retry on
// failure. Retries are only attempted for temporary network errors or known
// safe failures.
MaxTries int
Deadline DeadlineFunc
ShouldRetry RetryableFunc
Wait WaitFunc
transport *http.Transport
}
// NewClient is a convenience function for creating an http client
func NewClient(rt *ResilientTransport) *http.Client {
rt.transport = &http.Transport{
Dial: func(netw, addr string) (net.Conn, error) {
c, err := net.DialTimeout(netw, addr, rt.DialTimeout)
if err != nil {
return nil, err
}
c.SetDeadline(rt.Deadline())
return c, nil
},
DisableKeepAlives: true,
Proxy: http.ProxyFromEnvironment,
}
// TODO: Would be nice if ResilientTransport allowed clients to initialize
// with http.Transport attributes.
return &http.Client{
Transport: rt,
}
}
var retryingTransport = &ResilientTransport{
Deadline: func() time.Time {
return time.Now().Add(5 * time.Second)
},
DialTimeout: 10 * time.Second,
MaxTries: 3,
ShouldRetry: awsRetry,
Wait: ExpBackoff,
}
// Exported default client
var RetryingClient = NewClient(retryingTransport)
func (t *ResilientTransport) RoundTrip(req *http.Request) (*http.Response, error) {
return t.tries(req)
}
// Retry a request a maximum of t.MaxTries times.
// We'll only retry if the proper criteria are met.
// If a wait function is specified, wait that amount of time
// in between requests.
func (t *ResilientTransport) tries(req *http.Request) (res *http.Response, err error) {
for try := 0; try < t.MaxTries; try += 1 {
res, err = t.transport.RoundTrip(req)
if !t.ShouldRetry(req, res, err) {
break
}
if res != nil {
res.Body.Close()
}
if t.Wait != nil {
t.Wait(try)
}
}
return
}
func ExpBackoff(try int) {
time.Sleep(100 * time.Millisecond *
time.Duration(math.Exp2(float64(try))))
}
func LinearBackoff(try int) {
time.Sleep(time.Duration(try*100) * time.Millisecond)
}
// Decide if we should retry a request.
// In general, the criteria for retrying a request are described here
// http://docs.aws.amazon.com/general/latest/gr/api-retries.html
func awsRetry(req *http.Request, res *http.Response, err error) bool {
retry := false
// Retry if there's a temporary network error.
if neterr, ok := err.(net.Error); ok {
if neterr.Temporary() {
retry = true
}
}
// Retry if we get a 5xx series error.
if res != nil {
if res.StatusCode >= 500 && res.StatusCode < 600 {
retry = true
}
}
return retry
}
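
A brief sketch of how this transport is meant to be used: either through the exported RetryingClient, or by building a client with a custom policy via NewClient. The URL, the limits and the custom ShouldRetry function below are illustrative, not defaults of the package:

package main

import (
    "io/ioutil"
    "log"
    "net/http"
    "time"

    "github.com/mitchellh/goamz/aws"
)

func main() {
    // Default client: up to 3 tries with exponential backoff, retrying
    // temporary network errors and 5xx responses.
    resp, err := aws.RetryingClient.Get("https://example.com/") // placeholder URL
    if err != nil {
        log.Fatal(err)
    }
    body, _ := ioutil.ReadAll(resp.Body)
    resp.Body.Close()
    log.Printf("read %d bytes", len(body))

    // Custom policy: Deadline and ShouldRetry must be set, since the
    // transport calls both while handling a request.
    client := aws.NewClient(&aws.ResilientTransport{
        DialTimeout: 5 * time.Second,
        MaxTries:    5,
        Deadline:    func() time.Time { return time.Now().Add(10 * time.Second) },
        ShouldRetry: func(_ *http.Request, res *http.Response, err error) bool {
            // Illustrative policy: retry on any network error or 5xx response.
            return err != nil || (res != nil && res.StatusCode >= 500)
        },
        Wait: aws.LinearBackoff,
    })
    if resp, err := client.Get("https://example.com/"); err == nil {
        resp.Body.Close()
    }
}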


@@ -1,121 +0,0 @@
package aws_test
import (
"fmt"
"github.com/mitchellh/goamz/aws"
"io/ioutil"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
)
// Retrieve the response from handler using aws.RetryingClient
func serveAndGet(handler http.HandlerFunc) (body string, err error) {
ts := httptest.NewServer(handler)
defer ts.Close()
resp, err := aws.RetryingClient.Get(ts.URL)
if err != nil {
return
}
if resp.StatusCode != 200 {
return "", fmt.Errorf("Bad status code: %d", resp.StatusCode)
}
greeting, err := ioutil.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
return
}
return strings.TrimSpace(string(greeting)), nil
}
func TestClient_expected(t *testing.T) {
body := "foo bar"
resp, err := serveAndGet(func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintln(w, body)
})
if err != nil {
t.Fatal(err)
}
if resp != body {
t.Fatal("Body not as expected.")
}
}
func TestClient_delay(t *testing.T) {
body := "baz"
wait := 4
resp, err := serveAndGet(func(w http.ResponseWriter, r *http.Request) {
if wait < 0 {
// If we dipped to zero delay and still failed.
t.Fatal("Never succeeded.")
}
wait -= 1
time.Sleep(time.Second * time.Duration(wait))
fmt.Fprintln(w, body)
})
if err != nil {
t.Fatal(err)
}
if resp != body {
t.Fatal("Body not as expected.", resp)
}
}
func TestClient_no4xxRetry(t *testing.T) {
tries := 0
// Respond with a 404; client errors should not be retried.
_, err := serveAndGet(func(w http.ResponseWriter, r *http.Request) {
tries += 1
http.Error(w, "error", 404)
})
if err == nil {
t.Fatal("should have error")
}
if tries != 1 {
t.Fatalf("should only try once: %d", tries)
}
}
func TestClient_retries(t *testing.T) {
body := "biz"
failed := false
// Fail once before succeeding.
resp, err := serveAndGet(func(w http.ResponseWriter, r *http.Request) {
if !failed {
http.Error(w, "error", 500)
failed = true
} else {
fmt.Fprintln(w, body)
}
})
if failed != true {
t.Error("We didn't retry!")
}
if err != nil {
t.Fatal(err)
}
if resp != body {
t.Fatal("Body not as expected.")
}
}
func TestClient_fails(t *testing.T) {
tries := 0
// Fail 3 times and return the last error.
_, err := serveAndGet(func(w http.ResponseWriter, r *http.Request) {
tries += 1
http.Error(w, "error", 500)
})
if err == nil {
t.Fatal(err)
}
if tries != 3 {
t.Fatal("Didn't retry enough")
}
}


@@ -1,27 +0,0 @@
package s3
import (
"github.com/mitchellh/goamz/aws"
)
var originalStrategy = attempts
func SetAttemptStrategy(s *aws.AttemptStrategy) {
if s == nil {
attempts = originalStrategy
} else {
attempts = *s
}
}
func Sign(auth aws.Auth, method, path string, params, headers map[string][]string) {
sign(auth, method, path, params, headers)
}
func SetListPartsMax(n int) {
listPartsMax = n
}
func SetListMultiMax(n int) {
listMultiMax = n
}


@@ -1,409 +0,0 @@
package s3
import (
"bytes"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"encoding/xml"
"errors"
"io"
"sort"
"strconv"
)
// Multi represents an unfinished multipart upload.
//
// Multipart uploads allow sending big objects in smaller chunks.
// After all parts have been sent, the upload must be explicitly
// completed by calling Complete with the list of parts.
//
// See http://goo.gl/vJfTG for an overview of multipart uploads.
type Multi struct {
Bucket *Bucket
Key string
UploadId string
}
// That's the default. Here just for testing.
var listMultiMax = 1000
type listMultiResp struct {
NextKeyMarker string
NextUploadIdMarker string
IsTruncated bool
Upload []Multi
CommonPrefixes []string `xml:"CommonPrefixes>Prefix"`
}
// ListMulti returns the list of unfinished multipart uploads in b.
//
// The prefix parameter limits the response to keys that begin with the
// specified prefix. You can use prefixes to separate a bucket into different
// groupings of keys (to get the feeling of folders, for example).
//
// The delim parameter causes the response to group all of the keys that
// share a common prefix up to the next delimiter in a single entry within
// the CommonPrefixes field. You can use delimiters to separate a bucket
// into different groupings of keys, similar to how folders would work.
//
// See http://goo.gl/ePioY for details.
func (b *Bucket) ListMulti(prefix, delim string) (multis []*Multi, prefixes []string, err error) {
params := map[string][]string{
"uploads": {""},
"max-uploads": {strconv.FormatInt(int64(listMultiMax), 10)},
"prefix": {prefix},
"delimiter": {delim},
}
for attempt := attempts.Start(); attempt.Next(); {
req := &request{
method: "GET",
bucket: b.Name,
params: params,
}
var resp listMultiResp
err := b.S3.query(req, &resp)
if shouldRetry(err) && attempt.HasNext() {
continue
}
if err != nil {
return nil, nil, err
}
for i := range resp.Upload {
multi := &resp.Upload[i]
multi.Bucket = b
multis = append(multis, multi)
}
prefixes = append(prefixes, resp.CommonPrefixes...)
if !resp.IsTruncated {
return multis, prefixes, nil
}
params["key-marker"] = []string{resp.NextKeyMarker}
params["upload-id-marker"] = []string{resp.NextUploadIdMarker}
attempt = attempts.Start() // Last request worked.
}
panic("unreachable")
}
// Multi returns a multipart upload handler for the provided key
// inside b. If a multipart upload exists for key, it is returned,
// otherwise a new multipart upload is initiated with contType and perm.
func (b *Bucket) Multi(key, contType string, perm ACL) (*Multi, error) {
multis, _, err := b.ListMulti(key, "")
if err != nil && !hasCode(err, "NoSuchUpload") {
return nil, err
}
for _, m := range multis {
if m.Key == key {
return m, nil
}
}
return b.InitMulti(key, contType, perm)
}
// InitMulti initializes a new multipart upload at the provided
// key inside b and returns a value for manipulating it.
//
// See http://goo.gl/XP8kL for details.
func (b *Bucket) InitMulti(key string, contType string, perm ACL) (*Multi, error) {
headers := map[string][]string{
"Content-Type": {contType},
"Content-Length": {"0"},
"x-amz-acl": {string(perm)},
}
params := map[string][]string{
"uploads": {""},
}
req := &request{
method: "POST",
bucket: b.Name,
path: key,
headers: headers,
params: params,
}
var err error
var resp struct {
UploadId string `xml:"UploadId"`
}
for attempt := attempts.Start(); attempt.Next(); {
err = b.S3.query(req, &resp)
if !shouldRetry(err) {
break
}
}
if err != nil {
return nil, err
}
return &Multi{Bucket: b, Key: key, UploadId: resp.UploadId}, nil
}
// PutPart sends part n of the multipart upload, reading all the content from r.
// Each part, except for the last one, must be at least 5MB in size.
//
// See http://goo.gl/pqZer for details.
func (m *Multi) PutPart(n int, r io.ReadSeeker) (Part, error) {
partSize, _, md5b64, err := seekerInfo(r)
if err != nil {
return Part{}, err
}
return m.putPart(n, r, partSize, md5b64)
}
func (m *Multi) putPart(n int, r io.ReadSeeker, partSize int64, md5b64 string) (Part, error) {
headers := map[string][]string{
"Content-Length": {strconv.FormatInt(partSize, 10)},
"Content-MD5": {md5b64},
}
params := map[string][]string{
"uploadId": {m.UploadId},
"partNumber": {strconv.FormatInt(int64(n), 10)},
}
for attempt := attempts.Start(); attempt.Next(); {
_, err := r.Seek(0, 0)
if err != nil {
return Part{}, err
}
req := &request{
method: "PUT",
bucket: m.Bucket.Name,
path: m.Key,
headers: headers,
params: params,
payload: r,
}
err = m.Bucket.S3.prepare(req)
if err != nil {
return Part{}, err
}
resp, err := m.Bucket.S3.run(req, nil)
if shouldRetry(err) && attempt.HasNext() {
continue
}
if err != nil {
return Part{}, err
}
etag := resp.Header.Get("ETag")
if etag == "" {
return Part{}, errors.New("part upload succeeded with no ETag")
}
return Part{n, etag, partSize}, nil
}
panic("unreachable")
}
func seekerInfo(r io.ReadSeeker) (size int64, md5hex string, md5b64 string, err error) {
_, err = r.Seek(0, 0)
if err != nil {
return 0, "", "", err
}
digest := md5.New()
size, err = io.Copy(digest, r)
if err != nil {
return 0, "", "", err
}
sum := digest.Sum(nil)
md5hex = hex.EncodeToString(sum)
md5b64 = base64.StdEncoding.EncodeToString(sum)
return size, md5hex, md5b64, nil
}
type Part struct {
N int `xml:"PartNumber"`
ETag string
Size int64
}
type partSlice []Part
func (s partSlice) Len() int { return len(s) }
func (s partSlice) Less(i, j int) bool { return s[i].N < s[j].N }
func (s partSlice) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
type listPartsResp struct {
NextPartNumberMarker string
IsTruncated bool
Part []Part
}
// That's the default. Here just for testing.
var listPartsMax = 1000
// ListParts returns the list of previously uploaded parts in m,
// ordered by part number.
//
// See http://goo.gl/ePioY for details.
func (m *Multi) ListParts() ([]Part, error) {
params := map[string][]string{
"uploadId": {m.UploadId},
"max-parts": {strconv.FormatInt(int64(listPartsMax), 10)},
}
var parts partSlice
for attempt := attempts.Start(); attempt.Next(); {
req := &request{
method: "GET",
bucket: m.Bucket.Name,
path: m.Key,
params: params,
}
var resp listPartsResp
err := m.Bucket.S3.query(req, &resp)
if shouldRetry(err) && attempt.HasNext() {
continue
}
if err != nil {
return nil, err
}
parts = append(parts, resp.Part...)
if !resp.IsTruncated {
sort.Sort(parts)
return parts, nil
}
params["part-number-marker"] = []string{resp.NextPartNumberMarker}
attempt = attempts.Start() // Last request worked.
}
panic("unreachable")
}
type ReaderAtSeeker interface {
io.ReaderAt
io.ReadSeeker
}
// PutAll sends all of r via a multipart upload with parts no larger
// than partSize bytes, which must be set to at least 5MB.
// Parts previously uploaded are either reused if their checksum
// and size match the new part, or otherwise overwritten with the
// new content.
// PutAll returns all the parts of m (reused or not).
func (m *Multi) PutAll(r ReaderAtSeeker, partSize int64) ([]Part, error) {
old, err := m.ListParts()
if err != nil && !hasCode(err, "NoSuchUpload") {
return nil, err
}
reuse := 0 // Index of next old part to consider reusing.
current := 1 // Part number of latest good part handled.
totalSize, err := r.Seek(0, 2)
if err != nil {
return nil, err
}
first := true // Must send at least one empty part if the file is empty.
var result []Part
NextSection:
for offset := int64(0); offset < totalSize || first; offset += partSize {
first = false
if offset+partSize > totalSize {
partSize = totalSize - offset
}
section := io.NewSectionReader(r, offset, partSize)
_, md5hex, md5b64, err := seekerInfo(section)
if err != nil {
return nil, err
}
for reuse < len(old) && old[reuse].N <= current {
// Looks like this part was already sent.
part := &old[reuse]
etag := `"` + md5hex + `"`
if part.N == current && part.Size == partSize && part.ETag == etag {
// Checksum matches. Reuse the old part.
result = append(result, *part)
current++
continue NextSection
}
reuse++
}
// Part wasn't found or doesn't match. Send it.
part, err := m.putPart(current, section, partSize, md5b64)
if err != nil {
return nil, err
}
result = append(result, part)
current++
}
return result, nil
}
type completeUpload struct {
XMLName xml.Name `xml:"CompleteMultipartUpload"`
Parts completeParts `xml:"Part"`
}
type completePart struct {
PartNumber int
ETag string
}
type completeParts []completePart
func (p completeParts) Len() int { return len(p) }
func (p completeParts) Less(i, j int) bool { return p[i].PartNumber < p[j].PartNumber }
func (p completeParts) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
// Complete assembles the given previously uploaded parts into the
// final object. This operation may take several minutes.
//
// See http://goo.gl/2Z7Tw for details.
func (m *Multi) Complete(parts []Part) error {
params := map[string][]string{
"uploadId": {m.UploadId},
}
c := completeUpload{}
for _, p := range parts {
c.Parts = append(c.Parts, completePart{p.N, p.ETag})
}
sort.Sort(c.Parts)
data, err := xml.Marshal(&c)
if err != nil {
return err
}
for attempt := attempts.Start(); attempt.Next(); {
req := &request{
method: "POST",
bucket: m.Bucket.Name,
path: m.Key,
params: params,
payload: bytes.NewReader(data),
}
err := m.Bucket.S3.query(req, nil)
if shouldRetry(err) && attempt.HasNext() {
continue
}
return err
}
panic("unreachable")
}
// Abort deletes an unfinished multipart upload and any previously
// uploaded parts for it.
//
// After a multipart upload is aborted, no additional parts can be
// uploaded using it. However, if any part uploads are currently in
// progress, those part uploads might or might not succeed. As a result,
// it might be necessary to abort a given multipart upload multiple
// times in order to completely free all storage consumed by all parts.
//
// NOTE: If the described scenario happens to you, please report back to
// the goamz authors with details. In the future such retrying should be
// handled internally, but it's not clear what happens precisely (Is an
// error returned? Is the issue completely undetectable?).
//
// See http://goo.gl/dnyJw for details.
func (m *Multi) Abort() error {
params := map[string][]string{
"uploadId": {m.UploadId},
}
for attempt := attempts.Start(); attempt.Next(); {
req := &request{
method: "DELETE",
bucket: m.Bucket.Name,
path: m.Key,
params: params,
}
err := m.Bucket.S3.query(req, nil)
if shouldRetry(err) && attempt.HasNext() {
continue
}
return err
}
panic("unreachable")
}
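
Put together, a multipart upload with this API is an init / put-parts / complete sequence, with Abort available for cleanup. A minimal sketch under a few assumptions: the bucket, key and file names are placeholders, and the region lookup assumes aws.Regions contains a "us-east-1" entry:

package main

import (
    "log"
    "os"

    "github.com/mitchellh/goamz/aws"
    "github.com/mitchellh/goamz/s3"
)

func main() {
    auth, err := aws.EnvAuth()
    if err != nil {
        log.Fatal(err)
    }
    // The "us-east-1" key is assumed to exist in aws.Regions.
    b := s3.New(auth, aws.Regions["us-east-1"]).Bucket("example-bucket")

    f, err := os.Open("large.bin") // placeholder file
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // Resume an existing upload for this key if there is one, otherwise
    // initiate a new one.
    m, err := b.Multi("backups/large.bin", "application/octet-stream", s3.Private)
    if err != nil {
        log.Fatal(err)
    }

    // PutAll splits the file into 5 MB parts and reuses previously uploaded
    // parts whose number, size and checksum still match.
    parts, err := m.PutAll(f, 5*1024*1024)
    if err != nil {
        m.Abort() // best-effort cleanup; see the Abort caveats above
        log.Fatal(err)
    }
    if err := m.Complete(parts); err != nil {
        log.Fatal(err)
    }
}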


@@ -1,370 +0,0 @@
package s3_test
import (
"encoding/xml"
"github.com/mitchellh/goamz/s3"
. "github.com/motain/gocheck"
"io"
"io/ioutil"
"strings"
)
func (s *S) TestInitMulti(c *C) {
testServer.Response(200, nil, InitMultiResultDump)
b := s.s3.Bucket("sample")
multi, err := b.InitMulti("multi", "text/plain", s3.Private)
c.Assert(err, IsNil)
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "POST")
c.Assert(req.URL.Path, Equals, "/sample/multi")
c.Assert(req.Header["Content-Type"], DeepEquals, []string{"text/plain"})
c.Assert(req.Header["X-Amz-Acl"], DeepEquals, []string{"private"})
c.Assert(req.Form["uploads"], DeepEquals, []string{""})
c.Assert(multi.UploadId, Matches, "JNbR_[A-Za-z0-9.]+QQ--")
}
func (s *S) TestMultiNoPreviousUpload(c *C) {
// Don't retry the NoSuchUpload error.
s.DisableRetries()
testServer.Response(404, nil, NoSuchUploadErrorDump)
testServer.Response(200, nil, InitMultiResultDump)
b := s.s3.Bucket("sample")
multi, err := b.Multi("multi", "text/plain", s3.Private)
c.Assert(err, IsNil)
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/sample/")
c.Assert(req.Form["uploads"], DeepEquals, []string{""})
c.Assert(req.Form["prefix"], DeepEquals, []string{"multi"})
req = testServer.WaitRequest()
c.Assert(req.Method, Equals, "POST")
c.Assert(req.URL.Path, Equals, "/sample/multi")
c.Assert(req.Form["uploads"], DeepEquals, []string{""})
c.Assert(multi.UploadId, Matches, "JNbR_[A-Za-z0-9.]+QQ--")
}
func (s *S) TestMultiReturnOld(c *C) {
testServer.Response(200, nil, ListMultiResultDump)
b := s.s3.Bucket("sample")
multi, err := b.Multi("multi1", "text/plain", s3.Private)
c.Assert(err, IsNil)
c.Assert(multi.Key, Equals, "multi1")
c.Assert(multi.UploadId, Equals, "iUVug89pPvSswrikD")
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/sample/")
c.Assert(req.Form["uploads"], DeepEquals, []string{""})
c.Assert(req.Form["prefix"], DeepEquals, []string{"multi1"})
}
func (s *S) TestListParts(c *C) {
testServer.Response(200, nil, InitMultiResultDump)
testServer.Response(200, nil, ListPartsResultDump1)
testServer.Response(404, nil, NoSuchUploadErrorDump) // :-(
testServer.Response(200, nil, ListPartsResultDump2)
b := s.s3.Bucket("sample")
multi, err := b.InitMulti("multi", "text/plain", s3.Private)
c.Assert(err, IsNil)
parts, err := multi.ListParts()
c.Assert(err, IsNil)
c.Assert(parts, HasLen, 3)
c.Assert(parts[0].N, Equals, 1)
c.Assert(parts[0].Size, Equals, int64(5))
c.Assert(parts[0].ETag, Equals, `"ffc88b4ca90a355f8ddba6b2c3b2af5c"`)
c.Assert(parts[1].N, Equals, 2)
c.Assert(parts[1].Size, Equals, int64(5))
c.Assert(parts[1].ETag, Equals, `"d067a0fa9dc61a6e7195ca99696b5a89"`)
c.Assert(parts[2].N, Equals, 3)
c.Assert(parts[2].Size, Equals, int64(5))
c.Assert(parts[2].ETag, Equals, `"49dcd91231f801159e893fb5c6674985"`)
testServer.WaitRequest()
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/sample/multi")
c.Assert(req.Form.Get("uploadId"), Matches, "JNbR_[A-Za-z0-9.]+QQ--")
c.Assert(req.Form["max-parts"], DeepEquals, []string{"1000"})
testServer.WaitRequest() // The internal error.
req = testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/sample/multi")
c.Assert(req.Form.Get("uploadId"), Matches, "JNbR_[A-Za-z0-9.]+QQ--")
c.Assert(req.Form["max-parts"], DeepEquals, []string{"1000"})
c.Assert(req.Form["part-number-marker"], DeepEquals, []string{"2"})
}
func (s *S) TestPutPart(c *C) {
headers := map[string]string{
"ETag": `"26f90efd10d614f100252ff56d88dad8"`,
}
testServer.Response(200, nil, InitMultiResultDump)
testServer.Response(200, headers, "")
b := s.s3.Bucket("sample")
multi, err := b.InitMulti("multi", "text/plain", s3.Private)
c.Assert(err, IsNil)
part, err := multi.PutPart(1, strings.NewReader("<part 1>"))
c.Assert(err, IsNil)
c.Assert(part.N, Equals, 1)
c.Assert(part.Size, Equals, int64(8))
c.Assert(part.ETag, Equals, headers["ETag"])
testServer.WaitRequest()
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "PUT")
c.Assert(req.URL.Path, Equals, "/sample/multi")
c.Assert(req.Form.Get("uploadId"), Matches, "JNbR_[A-Za-z0-9.]+QQ--")
c.Assert(req.Form["partNumber"], DeepEquals, []string{"1"})
c.Assert(req.Header["Content-Length"], DeepEquals, []string{"8"})
c.Assert(req.Header["Content-Md5"], DeepEquals, []string{"JvkO/RDWFPEAJS/1bYja2A=="})
}
func readAll(r io.Reader) string {
data, err := ioutil.ReadAll(r)
if err != nil {
panic(err)
}
return string(data)
}
func (s *S) TestPutAllNoPreviousUpload(c *C) {
// Don't retry the NoSuchUpload error.
s.DisableRetries()
etag1 := map[string]string{"ETag": `"etag1"`}
etag2 := map[string]string{"ETag": `"etag2"`}
etag3 := map[string]string{"ETag": `"etag3"`}
testServer.Response(200, nil, InitMultiResultDump)
testServer.Response(404, nil, NoSuchUploadErrorDump)
testServer.Response(200, etag1, "")
testServer.Response(200, etag2, "")
testServer.Response(200, etag3, "")
b := s.s3.Bucket("sample")
multi, err := b.InitMulti("multi", "text/plain", s3.Private)
c.Assert(err, IsNil)
parts, err := multi.PutAll(strings.NewReader("part1part2last"), 5)
c.Assert(parts, HasLen, 3)
c.Assert(parts[0].ETag, Equals, `"etag1"`)
c.Assert(parts[1].ETag, Equals, `"etag2"`)
c.Assert(parts[2].ETag, Equals, `"etag3"`)
c.Assert(err, IsNil)
// Init
testServer.WaitRequest()
// List old parts. Won't find anything.
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/sample/multi")
// Send part 1.
req = testServer.WaitRequest()
c.Assert(req.Method, Equals, "PUT")
c.Assert(req.URL.Path, Equals, "/sample/multi")
c.Assert(req.Form["partNumber"], DeepEquals, []string{"1"})
c.Assert(req.Header["Content-Length"], DeepEquals, []string{"5"})
c.Assert(readAll(req.Body), Equals, "part1")
// Send part 2.
req = testServer.WaitRequest()
c.Assert(req.Method, Equals, "PUT")
c.Assert(req.URL.Path, Equals, "/sample/multi")
c.Assert(req.Form["partNumber"], DeepEquals, []string{"2"})
c.Assert(req.Header["Content-Length"], DeepEquals, []string{"5"})
c.Assert(readAll(req.Body), Equals, "part2")
// Send part 3 with shorter body.
req = testServer.WaitRequest()
c.Assert(req.Method, Equals, "PUT")
c.Assert(req.URL.Path, Equals, "/sample/multi")
c.Assert(req.Form["partNumber"], DeepEquals, []string{"3"})
c.Assert(req.Header["Content-Length"], DeepEquals, []string{"4"})
c.Assert(readAll(req.Body), Equals, "last")
}
func (s *S) TestPutAllZeroSizeFile(c *C) {
// Don't retry the NoSuchUpload error.
s.DisableRetries()
etag1 := map[string]string{"ETag": `"etag1"`}
testServer.Response(200, nil, InitMultiResultDump)
testServer.Response(404, nil, NoSuchUploadErrorDump)
testServer.Response(200, etag1, "")
b := s.s3.Bucket("sample")
multi, err := b.InitMulti("multi", "text/plain", s3.Private)
c.Assert(err, IsNil)
// Must send at least one part, so that completing it will work.
parts, err := multi.PutAll(strings.NewReader(""), 5)
c.Assert(parts, HasLen, 1)
c.Assert(parts[0].ETag, Equals, `"etag1"`)
c.Assert(err, IsNil)
// Init
testServer.WaitRequest()
// List old parts. Won't find anything.
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/sample/multi")
// Send empty part.
req = testServer.WaitRequest()
c.Assert(req.Method, Equals, "PUT")
c.Assert(req.URL.Path, Equals, "/sample/multi")
c.Assert(req.Form["partNumber"], DeepEquals, []string{"1"})
c.Assert(req.Header["Content-Length"], DeepEquals, []string{"0"})
c.Assert(readAll(req.Body), Equals, "")
}
func (s *S) TestPutAllResume(c *C) {
etag2 := map[string]string{"ETag": `"etag2"`}
testServer.Response(200, nil, InitMultiResultDump)
testServer.Response(200, nil, ListPartsResultDump1)
testServer.Response(200, nil, ListPartsResultDump2)
testServer.Response(200, etag2, "")
b := s.s3.Bucket("sample")
multi, err := b.InitMulti("multi", "text/plain", s3.Private)
c.Assert(err, IsNil)
// "part1" and "part3" match the checksums in ResultDump1.
// The middle one is a mismatch (it refers to "part2").
parts, err := multi.PutAll(strings.NewReader("part1partXpart3"), 5)
c.Assert(parts, HasLen, 3)
c.Assert(parts[0].N, Equals, 1)
c.Assert(parts[0].Size, Equals, int64(5))
c.Assert(parts[0].ETag, Equals, `"ffc88b4ca90a355f8ddba6b2c3b2af5c"`)
c.Assert(parts[1].N, Equals, 2)
c.Assert(parts[1].Size, Equals, int64(5))
c.Assert(parts[1].ETag, Equals, `"etag2"`)
c.Assert(parts[2].N, Equals, 3)
c.Assert(parts[2].Size, Equals, int64(5))
c.Assert(parts[2].ETag, Equals, `"49dcd91231f801159e893fb5c6674985"`)
c.Assert(err, IsNil)
// Init
testServer.WaitRequest()
// List old parts, broken in two requests.
for i := 0; i < 2; i++ {
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/sample/multi")
}
// Send part 2, as it didn't match the checksum.
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "PUT")
c.Assert(req.URL.Path, Equals, "/sample/multi")
c.Assert(req.Form["partNumber"], DeepEquals, []string{"2"})
c.Assert(req.Header["Content-Length"], DeepEquals, []string{"5"})
c.Assert(readAll(req.Body), Equals, "partX")
}
func (s *S) TestMultiComplete(c *C) {
testServer.Response(200, nil, InitMultiResultDump)
// Note the 200 response. Completing will hold the connection on some
// kind of long poll, and may return a late error even after a 200.
testServer.Response(200, nil, InternalErrorDump)
testServer.Response(200, nil, "")
b := s.s3.Bucket("sample")
multi, err := b.InitMulti("multi", "text/plain", s3.Private)
c.Assert(err, IsNil)
err = multi.Complete([]s3.Part{{2, `"ETag2"`, 32}, {1, `"ETag1"`, 64}})
c.Assert(err, IsNil)
testServer.WaitRequest()
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "POST")
c.Assert(req.URL.Path, Equals, "/sample/multi")
c.Assert(req.Form.Get("uploadId"), Matches, "JNbR_[A-Za-z0-9.]+QQ--")
var payload struct {
XMLName xml.Name
Part []struct {
PartNumber int
ETag string
}
}
dec := xml.NewDecoder(req.Body)
err = dec.Decode(&payload)
c.Assert(err, IsNil)
c.Assert(payload.XMLName.Local, Equals, "CompleteMultipartUpload")
c.Assert(len(payload.Part), Equals, 2)
c.Assert(payload.Part[0].PartNumber, Equals, 1)
c.Assert(payload.Part[0].ETag, Equals, `"ETag1"`)
c.Assert(payload.Part[1].PartNumber, Equals, 2)
c.Assert(payload.Part[1].ETag, Equals, `"ETag2"`)
}
func (s *S) TestMultiAbort(c *C) {
testServer.Response(200, nil, InitMultiResultDump)
testServer.Response(200, nil, "")
b := s.s3.Bucket("sample")
multi, err := b.InitMulti("multi", "text/plain", s3.Private)
c.Assert(err, IsNil)
err = multi.Abort()
c.Assert(err, IsNil)
testServer.WaitRequest()
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "DELETE")
c.Assert(req.URL.Path, Equals, "/sample/multi")
c.Assert(req.Form.Get("uploadId"), Matches, "JNbR_[A-Za-z0-9.]+QQ--")
}
func (s *S) TestListMulti(c *C) {
testServer.Response(200, nil, ListMultiResultDump)
b := s.s3.Bucket("sample")
multis, prefixes, err := b.ListMulti("", "/")
c.Assert(err, IsNil)
c.Assert(prefixes, DeepEquals, []string{"a/", "b/"})
c.Assert(multis, HasLen, 2)
c.Assert(multis[0].Key, Equals, "multi1")
c.Assert(multis[0].UploadId, Equals, "iUVug89pPvSswrikD")
c.Assert(multis[1].Key, Equals, "multi2")
c.Assert(multis[1].UploadId, Equals, "DkirwsSvPp98guVUi")
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/sample/")
c.Assert(req.Form["uploads"], DeepEquals, []string{""})
c.Assert(req.Form["prefix"], DeepEquals, []string{""})
c.Assert(req.Form["delimiter"], DeepEquals, []string{"/"})
c.Assert(req.Form["max-uploads"], DeepEquals, []string{"1000"})
}
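
These tests run against a stub HTTP server from the vendored testutil package instead of real S3. The same pattern can be sketched with only the standard library's httptest, pointing the client at the stub through the Region's S3Endpoint; the handler, test name and expected body below are illustrative:

package s3_test

import (
    "fmt"
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/mitchellh/goamz/aws"
    "github.com/mitchellh/goamz/s3"
)

func TestGetAgainstStub(t *testing.T) {
    // Serve a fixed body for every request the client makes.
    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, "content")
    }))
    defer ts.Close()

    auth := aws.Auth{AccessKey: "access", SecretKey: "secret"}
    cli := s3.New(auth, aws.Region{Name: "faux-region-1", S3Endpoint: ts.URL})

    data, err := cli.Bucket("bucket").Get("name")
    if err != nil {
        t.Fatal(err)
    }
    if string(data) != "content" {
        t.Fatalf("unexpected body: %q", data)
    }
}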


@@ -1,241 +0,0 @@
package s3_test
var GetObjectErrorDump = `
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message>
<BucketName>non-existent-bucket</BucketName><RequestId>3F1B667FAD71C3D8</RequestId>
<HostId>L4ee/zrm1irFXY5F45fKXIRdOf9ktsKY/8TDVawuMK2jWRb1RF84i1uBzkdNqS5D</HostId></Error>
`
var GetListResultDump1 = `
<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01">
<Name>quotes</Name>
<Prefix>N</Prefix>
<IsTruncated>false</IsTruncated>
<Contents>
<Key>Nelson</Key>
<LastModified>2006-01-01T12:00:00.000Z</LastModified>
<ETag>&quot;828ef3fdfa96f00ad9f27c383fc9ac7f&quot;</ETag>
<Size>5</Size>
<StorageClass>STANDARD</StorageClass>
<Owner>
<ID>bcaf161ca5fb16fd081034f</ID>
<DisplayName>webfile</DisplayName>
</Owner>
</Contents>
<Contents>
<Key>Neo</Key>
<LastModified>2006-01-01T12:00:00.000Z</LastModified>
<ETag>&quot;828ef3fdfa96f00ad9f27c383fc9ac7f&quot;</ETag>
<Size>4</Size>
<StorageClass>STANDARD</StorageClass>
<Owner>
<ID>bcaf1ffd86a5fb16fd081034f</ID>
<DisplayName>webfile</DisplayName>
</Owner>
</Contents>
</ListBucketResult>
`
var GetListResultDump2 = `
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Name>example-bucket</Name>
<Prefix>photos/2006/</Prefix>
<Marker>some-marker</Marker>
<MaxKeys>1000</MaxKeys>
<Delimiter>/</Delimiter>
<IsTruncated>false</IsTruncated>
<CommonPrefixes>
<Prefix>photos/2006/feb/</Prefix>
</CommonPrefixes>
<CommonPrefixes>
<Prefix>photos/2006/jan/</Prefix>
</CommonPrefixes>
</ListBucketResult>
`
var InitMultiResultDump = `
<?xml version="1.0" encoding="UTF-8"?>
<InitiateMultipartUploadResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Bucket>sample</Bucket>
<Key>multi</Key>
<UploadId>JNbR_cMdwnGiD12jKAd6WK2PUkfj2VxA7i4nCwjE6t71nI9Tl3eVDPFlU0nOixhftH7I17ZPGkV3QA.l7ZD.QQ--</UploadId>
</InitiateMultipartUploadResult>
`
var ListPartsResultDump1 = `
<?xml version="1.0" encoding="UTF-8"?>
<ListPartsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Bucket>sample</Bucket>
<Key>multi</Key>
<UploadId>JNbR_cMdwnGiD12jKAd6WK2PUkfj2VxA7i4nCwjE6t71nI9Tl3eVDPFlU0nOixhftH7I17ZPGkV3QA.l7ZD.QQ--</UploadId>
<Initiator>
<ID>bb5c0f63b0b25f2d099c</ID>
<DisplayName>joe</DisplayName>
</Initiator>
<Owner>
<ID>bb5c0f63b0b25f2d099c</ID>
<DisplayName>joe</DisplayName>
</Owner>
<StorageClass>STANDARD</StorageClass>
<PartNumberMarker>0</PartNumberMarker>
<NextPartNumberMarker>2</NextPartNumberMarker>
<MaxParts>2</MaxParts>
<IsTruncated>true</IsTruncated>
<Part>
<PartNumber>1</PartNumber>
<LastModified>2013-01-30T13:45:51.000Z</LastModified>
<ETag>&quot;ffc88b4ca90a355f8ddba6b2c3b2af5c&quot;</ETag>
<Size>5</Size>
</Part>
<Part>
<PartNumber>2</PartNumber>
<LastModified>2013-01-30T13:45:52.000Z</LastModified>
<ETag>&quot;d067a0fa9dc61a6e7195ca99696b5a89&quot;</ETag>
<Size>5</Size>
</Part>
</ListPartsResult>
`
var ListPartsResultDump2 = `
<?xml version="1.0" encoding="UTF-8"?>
<ListPartsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Bucket>sample</Bucket>
<Key>multi</Key>
<UploadId>JNbR_cMdwnGiD12jKAd6WK2PUkfj2VxA7i4nCwjE6t71nI9Tl3eVDPFlU0nOixhftH7I17ZPGkV3QA.l7ZD.QQ--</UploadId>
<Initiator>
<ID>bb5c0f63b0b25f2d099c</ID>
<DisplayName>joe</DisplayName>
</Initiator>
<Owner>
<ID>bb5c0f63b0b25f2d099c</ID>
<DisplayName>joe</DisplayName>
</Owner>
<StorageClass>STANDARD</StorageClass>
<PartNumberMarker>2</PartNumberMarker>
<NextPartNumberMarker>3</NextPartNumberMarker>
<MaxParts>2</MaxParts>
<IsTruncated>false</IsTruncated>
<Part>
<PartNumber>3</PartNumber>
<LastModified>2013-01-30T13:46:50.000Z</LastModified>
<ETag>&quot;49dcd91231f801159e893fb5c6674985&quot;</ETag>
<Size>5</Size>
</Part>
</ListPartsResult>
`
var ListMultiResultDump = `
<?xml version="1.0"?>
<ListMultipartUploadsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Bucket>goamz-test-bucket-us-east-1-akiajk3wyewhctyqbf7a</Bucket>
<KeyMarker/>
<UploadIdMarker/>
<NextKeyMarker>multi1</NextKeyMarker>
<NextUploadIdMarker>iUVug89pPvSswrikD72p8uO62EzhNtpDxRmwC5WSiWDdK9SfzmDqe3xpP1kMWimyimSnz4uzFc3waVM5ufrKYQ--</NextUploadIdMarker>
<Delimiter>/</Delimiter>
<MaxUploads>1000</MaxUploads>
<IsTruncated>false</IsTruncated>
<Upload>
<Key>multi1</Key>
<UploadId>iUVug89pPvSswrikD</UploadId>
<Initiator>
<ID>bb5c0f63b0b25f2d0</ID>
<DisplayName>gustavoniemeyer</DisplayName>
</Initiator>
<Owner>
<ID>bb5c0f63b0b25f2d0</ID>
<DisplayName>gustavoniemeyer</DisplayName>
</Owner>
<StorageClass>STANDARD</StorageClass>
<Initiated>2013-01-30T18:15:47.000Z</Initiated>
</Upload>
<Upload>
<Key>multi2</Key>
<UploadId>DkirwsSvPp98guVUi</UploadId>
<Initiator>
<ID>bb5c0f63b0b25f2d0</ID>
<DisplayName>joe</DisplayName>
</Initiator>
<Owner>
<ID>bb5c0f63b0b25f2d0</ID>
<DisplayName>joe</DisplayName>
</Owner>
<StorageClass>STANDARD</StorageClass>
<Initiated>2013-01-30T18:15:47.000Z</Initiated>
</Upload>
<CommonPrefixes>
<Prefix>a/</Prefix>
</CommonPrefixes>
<CommonPrefixes>
<Prefix>b/</Prefix>
</CommonPrefixes>
</ListMultipartUploadsResult>
`
var NoSuchUploadErrorDump = `
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>NoSuchUpload</Code>
<Message>Not relevant</Message>
<BucketName>sample</BucketName>
<RequestId>3F1B667FAD71C3D8</RequestId>
<HostId>kjhwqk</HostId>
</Error>
`
var InternalErrorDump = `
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>InternalError</Code>
<Message>Not relevant</Message>
<BucketName>sample</BucketName>
<RequestId>3F1B667FAD71C3D8</RequestId>
<HostId>kjhwqk</HostId>
</Error>
`
var GetKeyHeaderDump = map[string]string{
"x-amz-id-2": "ef8yU9AS1ed4OpIszj7UDNEHGran",
"x-amz-request-id": "318BC8BC143432E5",
"x-amz-version-id": "3HL4kqtJlcpXroDTDmjVBH40Nrjfkd",
"Date": "Wed, 28 Oct 2009 22:32:00 GMT",
"Last-Modified": "Sun, 1 Jan 2006 12:00:00 GMT",
"ETag": "fba9dede5f27731c9771645a39863328",
"Content-Length": "434234",
"Content-Type": "text/plain",
}
var GetListBucketsDump = `
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Owner>
<ID>bb5c0f63b0b25f2d0</ID>
<DisplayName>joe</DisplayName>
</Owner>
<Buckets>
<Bucket>
<Name>bucket1</Name>
<CreationDate>2012-01-01T02:03:04.000Z</CreationDate>
</Bucket>
<Bucket>
<Name>bucket2</Name>
<CreationDate>2014-01-11T02:03:04.000Z</CreationDate>
</Bucket>
</Buckets>
</ListAllMyBucketsResult>
`
var MultiDelDump = `
<?xml version="1.0" encoding="UTF-8"?>
<DeleteResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Deleted>
<Key>a.go</Key>
</Deleted>
<Deleted>
<Key>b.go</Key>
</Deleted>
</DeleteResult>
`


@@ -1,893 +0,0 @@
//
// goamz - Go packages to interact with the Amazon Web Services.
//
// https://wiki.ubuntu.com/goamz
//
// Copyright (c) 2011 Canonical Ltd.
//
// Written by Gustavo Niemeyer <gustavo.niemeyer@canonical.com>
//
package s3
import (
"bytes"
"crypto/md5"
"encoding/base64"
"encoding/xml"
"fmt"
"github.com/mitchellh/goamz/aws"
"io"
"io/ioutil"
"log"
"net"
"net/http"
"net/http/httputil"
"net/url"
"strconv"
"strings"
"time"
)
const debug = false
// The S3 type encapsulates operations with an S3 region.
type S3 struct {
aws.Auth
aws.Region
HTTPClient func() *http.Client
private byte // Reserve the right of using private data.
}
// The Bucket type encapsulates operations with an S3 bucket.
type Bucket struct {
*S3
Name string
}
// The Owner type represents the owner of the object in an S3 bucket.
type Owner struct {
ID string
DisplayName string
}
var attempts = aws.AttemptStrategy{
Min: 5,
Total: 5 * time.Second,
Delay: 200 * time.Millisecond,
}
// New creates a new S3.
func New(auth aws.Auth, region aws.Region) *S3 {
return &S3{
Auth: auth,
Region: region,
HTTPClient: func() *http.Client {
return http.DefaultClient
},
private: 0}
}
// Bucket returns a Bucket with the given name.
func (s3 *S3) Bucket(name string) *Bucket {
if s3.Region.S3BucketEndpoint != "" || s3.Region.S3LowercaseBucket {
name = strings.ToLower(name)
}
return &Bucket{s3, name}
}
var createBucketConfiguration = `<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<LocationConstraint>%s</LocationConstraint>
</CreateBucketConfiguration>`
// locationConstraint returns an io.Reader specifying a LocationConstraint if
// required for the region.
//
// See http://goo.gl/bh9Kq for details.
func (s3 *S3) locationConstraint() io.Reader {
constraint := ""
if s3.Region.S3LocationConstraint {
constraint = fmt.Sprintf(createBucketConfiguration, s3.Region.Name)
}
return strings.NewReader(constraint)
}
type ACL string
const (
Private = ACL("private")
PublicRead = ACL("public-read")
PublicReadWrite = ACL("public-read-write")
AuthenticatedRead = ACL("authenticated-read")
BucketOwnerRead = ACL("bucket-owner-read")
BucketOwnerFull = ACL("bucket-owner-full-control")
)
// The ListBucketsResp type holds the results of a List buckets operation.
type ListBucketsResp struct {
Buckets []Bucket `xml:">Bucket"`
}
// ListBuckets lists all buckets
//
// See: http://goo.gl/NqlyMN
func (s3 *S3) ListBuckets() (result *ListBucketsResp, err error) {
req := &request{
path: "/",
}
result = &ListBucketsResp{}
for attempt := attempts.Start(); attempt.Next(); {
err = s3.query(req, result)
if !shouldRetry(err) {
break
}
}
if err != nil {
return nil, err
}
// set S3 instance on buckets
for i := range result.Buckets {
result.Buckets[i].S3 = s3
}
return result, nil
}
// PutBucket creates a new bucket.
//
// See http://goo.gl/ndjnR for details.
func (b *Bucket) PutBucket(perm ACL) error {
headers := map[string][]string{
"x-amz-acl": {string(perm)},
}
req := &request{
method: "PUT",
bucket: b.Name,
path: "/",
headers: headers,
payload: b.locationConstraint(),
}
return b.S3.query(req, nil)
}
// DelBucket removes an existing S3 bucket. All objects in the bucket must
// be removed before the bucket itself can be removed.
//
// See http://goo.gl/GoBrY for details.
func (b *Bucket) DelBucket() (err error) {
req := &request{
method: "DELETE",
bucket: b.Name,
path: "/",
}
for attempt := attempts.Start(); attempt.Next(); {
err = b.S3.query(req, nil)
if !shouldRetry(err) {
break
}
}
return err
}
// Get retrieves an object from an S3 bucket.
//
// See http://goo.gl/isCO7 for details.
func (b *Bucket) Get(path string) (data []byte, err error) {
body, err := b.GetReader(path)
if err != nil {
return nil, err
}
data, err = ioutil.ReadAll(body)
body.Close()
return data, err
}
// GetReader retrieves an object from an S3 bucket.
// It is the caller's responsibility to call Close on rc when
// finished reading.
func (b *Bucket) GetReader(path string) (rc io.ReadCloser, err error) {
resp, err := b.GetResponse(path)
if resp != nil {
return resp.Body, err
}
return nil, err
}
// GetResponse retrieves an object from an S3 bucket returning the http response
// It is the caller's responsibility to call Close on rc when
// finished reading.
func (b *Bucket) GetResponse(path string) (*http.Response, error) {
return b.getResponseParams(path, nil)
}
// GetTorrent retrieves an Torrent object from an S3 bucket an io.ReadCloser.
// It is the caller's responsibility to call Close on rc when finished reading.
func (b *Bucket) GetTorrentReader(path string) (io.ReadCloser, error) {
resp, err := b.getResponseParams(path, url.Values{"torrent": {""}})
if err != nil {
return nil, err
}
return resp.Body, nil
}
// GetTorrent retrieves a torrent object from an S3 bucket, returning
// the torrent as a []byte.
func (b *Bucket) GetTorrent(path string) ([]byte, error) {
body, err := b.GetTorrentReader(path)
if err != nil {
return nil, err
}
defer body.Close()
return ioutil.ReadAll(body)
}
func (b *Bucket) getResponseParams(path string, params url.Values) (*http.Response, error) {
req := &request{
bucket: b.Name,
path: path,
params: params,
}
err := b.S3.prepare(req)
if err != nil {
return nil, err
}
for attempt := attempts.Start(); attempt.Next(); {
resp, err := b.S3.run(req, nil)
if shouldRetry(err) && attempt.HasNext() {
continue
}
if err != nil {
return nil, err
}
return resp, nil
}
panic("unreachable")
}
func (b *Bucket) Head(path string) (*http.Response, error) {
req := &request{
method: "HEAD",
bucket: b.Name,
path: path,
}
err := b.S3.prepare(req)
if err != nil {
return nil, err
}
for attempt := attempts.Start(); attempt.Next(); {
resp, err := b.S3.run(req, nil)
if shouldRetry(err) && attempt.HasNext() {
continue
}
if err != nil {
return nil, err
}
return resp, nil
}
panic("unreachable")
}
// Put inserts an object into the S3 bucket.
//
// See http://goo.gl/FEBPD for details.
func (b *Bucket) Put(path string, data []byte, contType string, perm ACL) error {
body := bytes.NewBuffer(data)
return b.PutReader(path, body, int64(len(data)), contType, perm)
}
/*
PutHeader - like Put, inserts an object into the S3 bucket.
Instead of Content-Type string, pass in custom headers to override defaults.
*/
func (b *Bucket) PutHeader(path string, data []byte, customHeaders map[string][]string, perm ACL) error {
body := bytes.NewBuffer(data)
return b.PutReaderHeader(path, body, int64(len(data)), customHeaders, perm)
}
// PutReader inserts an object into the S3 bucket by consuming data
// from r until EOF.
func (b *Bucket) PutReader(path string, r io.Reader, length int64, contType string, perm ACL) error {
headers := map[string][]string{
"Content-Length": {strconv.FormatInt(length, 10)},
"Content-Type": {contType},
"x-amz-acl": {string(perm)},
}
req := &request{
method: "PUT",
bucket: b.Name,
path: path,
headers: headers,
payload: r,
}
return b.S3.query(req, nil)
}
/*
PutReaderHeader - like PutReader, inserts an object into S3 from a reader.
Instead of Content-Type string, pass in custom headers to override defaults.
*/
func (b *Bucket) PutReaderHeader(path string, r io.Reader, length int64, customHeaders map[string][]string, perm ACL) error {
// Default headers
headers := map[string][]string{
"Content-Length": {strconv.FormatInt(length, 10)},
"Content-Type": {"application/text"},
"x-amz-acl": {string(perm)},
}
// Override with custom headers
for key, value := range customHeaders {
headers[key] = value
}
req := &request{
method: "PUT",
bucket: b.Name,
path: path,
headers: headers,
payload: r,
}
return b.S3.query(req, nil)
}
/*
Copy - copies an object within the bucket.
*/
func (b *Bucket) Copy(oldPath, newPath string, perm ACL) error {
if !strings.HasPrefix(oldPath, "/") {
oldPath = "/" + oldPath
}
req := &request{
method: "PUT",
bucket: b.Name,
path: newPath,
headers: map[string][]string{
"x-amz-copy-source": {amazonEscape("/" + b.Name + oldPath)},
"x-amz-acl": {string(perm)},
},
}
err := b.S3.prepare(req)
if err != nil {
return err
}
for attempt := attempts.Start(); attempt.Next(); {
_, err = b.S3.run(req, nil)
if shouldRetry(err) && attempt.HasNext() {
continue
}
if err != nil {
return err
}
return nil
}
panic("unreachable")
}
// Del removes an object from the S3 bucket.
//
// See http://goo.gl/APeTt for details.
func (b *Bucket) Del(path string) error {
req := &request{
method: "DELETE",
bucket: b.Name,
path: path,
}
return b.S3.query(req, nil)
}
type Object struct {
Key string
}
type MultiObjectDeleteBody struct {
XMLName xml.Name `xml:"Delete"`
Quiet bool
Object []Object
}
func base64md5(data []byte) string {
h := md5.New()
h.Write(data)
return base64.StdEncoding.EncodeToString(h.Sum(nil))
}
// MultiDel removes multiple objects from the S3 bucket efficiently.
// A maximum of 1000 keys at once may be specified.
//
// See http://goo.gl/WvA5sj for details.
func (b *Bucket) MultiDel(paths []string) error {
// create XML payload
v := MultiObjectDeleteBody{}
v.Object = make([]Object, len(paths))
for i, path := range paths {
v.Object[i] = Object{path}
}
data, _ := xml.Marshal(v)
// Content-MD5 is required
md5hash := base64md5(data)
req := &request{
method: "POST",
bucket: b.Name,
path: "/",
params: url.Values{"delete": {""}},
headers: http.Header{"Content-MD5": {md5hash}},
payload: bytes.NewReader(data),
}
return b.S3.query(req, nil)
}
// The ListResp type holds the results of a List bucket operation.
type ListResp struct {
Name string
Prefix string
Delimiter string
Marker string
NextMarker string
MaxKeys int
// IsTruncated is true if the results have been truncated because
// there are more keys and prefixes than can fit in MaxKeys.
// N.B. this is the opposite sense to that documented (incorrectly) in
// http://goo.gl/YjQTc
IsTruncated bool
Contents []Key
CommonPrefixes []string `xml:">Prefix"`
}
// The Key type represents an item stored in an S3 bucket.
type Key struct {
Key string
LastModified string
Size int64
// ETag gives the hex-encoded MD5 sum of the contents,
// surrounded with double-quotes.
ETag string
StorageClass string
Owner Owner
}
// List returns information about objects in an S3 bucket.
//
// The prefix parameter limits the response to keys that begin with the
// specified prefix.
//
// The delim parameter causes the response to group all of the keys that
// share a common prefix up to the next delimiter in a single entry within
// the CommonPrefixes field. You can use delimiters to separate a bucket
// into different groupings of keys, similar to how folders would work.
//
// The marker parameter specifies the key to start with when listing objects
// in a bucket. Amazon S3 lists objects in alphabetical order and
// will return keys alphabetically greater than the marker.
//
// The max parameter specifies how many keys + common prefixes to return in
// the response. The default is 1000.
//
// For example, given these keys in a bucket:
//
// index.html
// index2.html
// photos/2006/January/sample.jpg
// photos/2006/February/sample2.jpg
// photos/2006/February/sample3.jpg
// photos/2006/February/sample4.jpg
//
// Listing this bucket with delimiter set to "/" would yield the
// following result:
//
// &ListResp{
// Name: "sample-bucket",
// MaxKeys: 1000,
// Delimiter: "/",
// Contents: []Key{
// {Key: "index.html", "index2.html"},
// },
// CommonPrefixes: []string{
// "photos/",
// },
// }
//
// Listing the same bucket with delimiter set to "/" and prefix set to
// "photos/2006/" would yield the following result:
//
// &ListResp{
// Name: "sample-bucket",
// MaxKeys: 1000,
// Delimiter: "/",
// Prefix: "photos/2006/",
// CommonPrefixes: []string{
// "photos/2006/February/",
// "photos/2006/January/",
// },
// }
//
// See http://goo.gl/YjQTc for details.
func (b *Bucket) List(prefix, delim, marker string, max int) (result *ListResp, err error) {
params := map[string][]string{
"prefix": {prefix},
"delimiter": {delim},
"marker": {marker},
}
if max != 0 {
params["max-keys"] = []string{strconv.FormatInt(int64(max), 10)}
}
req := &request{
bucket: b.Name,
params: params,
}
result = &ListResp{}
for attempt := attempts.Start(); attempt.Next(); {
err = b.S3.query(req, result)
if !shouldRetry(err) {
break
}
}
if err != nil {
return nil, err
}
return result, nil
}
// GetBucketContents returns a mapping of all key names in this bucket to Key objects.
func (b *Bucket) GetBucketContents() (*map[string]Key, error) {
bucket_contents := map[string]Key{}
prefix := ""
path_separator := ""
marker := ""
for {
contents, err := b.List(prefix, path_separator, marker, 1000)
if err != nil {
return &bucket_contents, err
}
last_key := ""
for _, key := range contents.Contents {
bucket_contents[key.Key] = key
last_key = key.Key
}
if contents.IsTruncated {
marker = contents.NextMarker
if marker == "" {
// From the s3 docs: If response does not include the
// NextMarker and it is truncated, you can use the value of the
// last Key in the response as the marker in the subsequent
// request to get the next set of object keys.
marker = last_key
}
} else {
break
}
}
return &bucket_contents, nil
}
// GetKey retrieves metadata for the object at path without returning its content.
func (b *Bucket) GetKey(path string) (*Key, error) {
req := &request{
bucket: b.Name,
path: path,
method: "HEAD",
}
err := b.S3.prepare(req)
if err != nil {
return nil, err
}
key := &Key{}
for attempt := attempts.Start(); attempt.Next(); {
resp, err := b.S3.run(req, nil)
if shouldRetry(err) && attempt.HasNext() {
continue
}
if err != nil {
return nil, err
}
key.Key = path
key.LastModified = resp.Header.Get("Last-Modified")
key.ETag = resp.Header.Get("ETag")
contentLength := resp.Header.Get("Content-Length")
size, err := strconv.ParseInt(contentLength, 10, 64)
if err != nil {
return key, fmt.Errorf("bad s3 content-length %v: %v",
contentLength, err)
}
key.Size = size
return key, nil
}
panic("unreachable")
}
// URL returns a non-signed URL that allows retrieving the
// object at path. It only works if the object is publicly
// readable (see SignedURL).
func (b *Bucket) URL(path string) string {
req := &request{
bucket: b.Name,
path: path,
}
err := b.S3.prepare(req)
if err != nil {
panic(err)
}
u, err := req.url(true)
if err != nil {
panic(err)
}
u.RawQuery = ""
return u.String()
}
// SignedURL returns a signed URL that allows anyone holding the URL
// to retrieve the object at path. The signature is valid until expires.
func (b *Bucket) SignedURL(path string, expires time.Time) string {
req := &request{
bucket: b.Name,
path: path,
params: url.Values{"Expires": {strconv.FormatInt(expires.Unix(), 10)}},
}
err := b.S3.prepare(req)
if err != nil {
panic(err)
}
u, err := req.url(true)
if err != nil {
panic(err)
}
return u.String()
}
type request struct {
method string
bucket string
path string
signpath string
params url.Values
headers http.Header
baseurl string
payload io.Reader
prepared bool
}
// amazonShouldEscape returns true if byte should be escaped
func amazonShouldEscape(c byte) bool {
return !((c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') ||
(c >= '0' && c <= '9') || c == '_' || c == '-' || c == '~' || c == '.' || c == '/' || c == ':')
}
// amazonEscape does uri escaping exactly as Amazon does
func amazonEscape(s string) string {
hexCount := 0
for i := 0; i < len(s); i++ {
if amazonShouldEscape(s[i]) {
hexCount++
}
}
if hexCount == 0 {
return s
}
t := make([]byte, len(s)+2*hexCount)
j := 0
for i := 0; i < len(s); i++ {
if c := s[i]; amazonShouldEscape(c) {
t[j] = '%'
t[j+1] = "0123456789ABCDEF"[c>>4]
t[j+2] = "0123456789ABCDEF"[c&15]
j += 3
} else {
t[j] = s[i]
j++
}
}
return string(t)
}
// url returns the URL of the resource, either full (with host/scheme) or
// partial for HTTP request
func (req *request) url(full bool) (*url.URL, error) {
u, err := url.Parse(req.baseurl)
if err != nil {
return nil, fmt.Errorf("bad S3 endpoint URL %q: %v", req.baseurl, err)
}
u.Opaque = amazonEscape(req.path)
if full {
u.Opaque = "//" + u.Host + u.Opaque
}
u.RawQuery = req.params.Encode()
return u, nil
}
// query prepares and runs the req request.
// If resp is not nil, the XML data contained in the response
// body will be unmarshalled on it.
func (s3 *S3) query(req *request, resp interface{}) error {
err := s3.prepare(req)
if err == nil {
var httpResponse *http.Response
httpResponse, err = s3.run(req, resp)
if resp == nil && httpResponse != nil {
httpResponse.Body.Close()
}
}
return err
}
// prepare sets up req to be delivered to S3.
func (s3 *S3) prepare(req *request) error {
if !req.prepared {
req.prepared = true
if req.method == "" {
req.method = "GET"
}
// Copy so they can be mutated without affecting retries.
params := make(url.Values)
headers := make(http.Header)
for k, v := range req.params {
params[k] = v
}
for k, v := range req.headers {
headers[k] = v
}
req.params = params
req.headers = headers
if !strings.HasPrefix(req.path, "/") {
req.path = "/" + req.path
}
req.signpath = req.path
if req.bucket != "" {
req.baseurl = s3.Region.S3BucketEndpoint
if req.baseurl == "" {
// Use the path method to address the bucket.
req.baseurl = s3.Region.S3Endpoint
req.path = "/" + req.bucket + req.path
} else {
// Just in case, prevent injection.
if strings.IndexAny(req.bucket, "/:@") >= 0 {
return fmt.Errorf("bad S3 bucket: %q", req.bucket)
}
req.baseurl = strings.Replace(req.baseurl, "${bucket}", req.bucket, -1)
}
req.signpath = "/" + req.bucket + req.signpath
} else {
req.baseurl = s3.Region.S3Endpoint
}
}
// Always sign again as it's not clear how far the
// server has handled a previous attempt.
u, err := url.Parse(req.baseurl)
if err != nil {
return fmt.Errorf("bad S3 endpoint URL %q: %v", req.baseurl, err)
}
req.headers["Host"] = []string{u.Host}
req.headers["Date"] = []string{time.Now().In(time.UTC).Format(time.RFC1123)}
sign(s3.Auth, req.method, amazonEscape(req.signpath), req.params, req.headers)
return nil
}
// run sends req and returns the http response from the server.
// If resp is not nil, the XML data contained in the response
// body will be unmarshalled on it.
func (s3 *S3) run(req *request, resp interface{}) (*http.Response, error) {
if debug {
log.Printf("Running S3 request: %#v", req)
}
u, err := req.url(false)
if err != nil {
return nil, err
}
hreq := http.Request{
URL: u,
Method: req.method,
ProtoMajor: 1,
ProtoMinor: 1,
Close: true,
Header: req.headers,
}
if v, ok := req.headers["Content-Length"]; ok {
hreq.ContentLength, _ = strconv.ParseInt(v[0], 10, 64)
delete(req.headers, "Content-Length")
}
if req.payload != nil {
hreq.Body = ioutil.NopCloser(req.payload)
}
hresp, err := s3.HTTPClient().Do(&hreq)
if err != nil {
return nil, err
}
if debug {
dump, _ := httputil.DumpResponse(hresp, true)
log.Printf("} -> %s\n", dump)
}
if hresp.StatusCode != 200 && hresp.StatusCode != 204 {
defer hresp.Body.Close()
return nil, buildError(hresp)
}
if resp != nil {
err = xml.NewDecoder(hresp.Body).Decode(resp)
hresp.Body.Close()
}
return hresp, err
}
// Error represents an error in an operation with S3.
type Error struct {
StatusCode int // HTTP status code (200, 403, ...)
Code string // EC2 error code ("UnsupportedOperation", ...)
Message string // The human-oriented error message
BucketName string
RequestId string
HostId string
}
func (e *Error) Error() string {
return e.Message
}
func buildError(r *http.Response) error {
if debug {
log.Printf("got error (status code %v)", r.StatusCode)
data, err := ioutil.ReadAll(r.Body)
if err != nil {
log.Printf("\tread error: %v", err)
} else {
log.Printf("\tdata:\n%s\n\n", data)
}
r.Body = ioutil.NopCloser(bytes.NewBuffer(data))
}
err := Error{}
// TODO return error if Unmarshal fails?
xml.NewDecoder(r.Body).Decode(&err)
r.Body.Close()
err.StatusCode = r.StatusCode
if err.Message == "" {
err.Message = r.Status
}
if debug {
log.Printf("err: %#v\n", err)
}
return &err
}
func shouldRetry(err error) bool {
if err == nil {
return false
}
switch err {
case io.ErrUnexpectedEOF, io.EOF:
return true
}
switch e := err.(type) {
case *net.DNSError:
return true
case *net.OpError:
switch e.Op {
case "read", "write":
return true
}
case *Error:
switch e.Code {
case "InternalError", "NoSuchUpload", "NoSuchBucket":
return true
}
}
return false
}
func hasCode(err error, code string) bool {
s3err, ok := err.(*Error)
return ok && s3err.Code == code
}
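
In short, the package's flow is: resolve credentials, build an S3 value for a region, obtain a Bucket, and call Put/Get/SignedURL on it, with transient failures retried according to the AttemptStrategy above. A brief sketch; the bucket and key names are placeholders, and the region lookup assumes aws.Regions contains a "us-east-1" entry:

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/mitchellh/goamz/aws"
    "github.com/mitchellh/goamz/s3"
)

func main() {
    auth, err := aws.GetAuth("", "") // empty keys: fall back to the environment
    if err != nil {
        log.Fatal(err)
    }
    // The "us-east-1" key is assumed to exist in aws.Regions.
    b := s3.New(auth, aws.Regions["us-east-1"]).Bucket("example-bucket")

    if err := b.Put("hello.txt", []byte("hello world"), "text/plain", s3.PublicRead); err != nil {
        log.Fatal(err)
    }
    data, err := b.Get("hello.txt")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(data))

    // Pre-signed URL that stays valid for 15 minutes.
    fmt.Println(b.SignedURL("hello.txt", time.Now().Add(15*time.Minute)))
}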


@@ -1,435 +0,0 @@
package s3_test
import (
"bytes"
"io/ioutil"
"net/http"
"testing"
"time"
"github.com/mitchellh/goamz/aws"
"github.com/mitchellh/goamz/s3"
"github.com/mitchellh/goamz/testutil"
. "github.com/motain/gocheck"
)
func Test(t *testing.T) {
TestingT(t)
}
type S struct {
s3 *s3.S3
}
var _ = Suite(&S{})
var testServer = testutil.NewHTTPServer()
func (s *S) SetUpSuite(c *C) {
testServer.Start()
auth := aws.Auth{"abc", "123", ""}
s.s3 = s3.New(auth, aws.Region{Name: "faux-region-1", S3Endpoint: testServer.URL})
}
func (s *S) TearDownSuite(c *C) {
s3.SetAttemptStrategy(nil)
}
func (s *S) SetUpTest(c *C) {
attempts := aws.AttemptStrategy{
Total: 300 * time.Millisecond,
Delay: 100 * time.Millisecond,
}
s3.SetAttemptStrategy(&attempts)
}
func (s *S) TearDownTest(c *C) {
testServer.Flush()
}
func (s *S) DisableRetries() {
s3.SetAttemptStrategy(&aws.AttemptStrategy{})
}
// PutBucket docs: http://goo.gl/kBTCu
func (s *S) TestPutBucket(c *C) {
testServer.Response(200, nil, "")
b := s.s3.Bucket("bucket")
err := b.PutBucket(s3.Private)
c.Assert(err, IsNil)
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "PUT")
c.Assert(req.URL.Path, Equals, "/bucket/")
c.Assert(req.Header["Date"], Not(Equals), "")
}
// DeleteBucket docs: http://goo.gl/GoBrY
func (s *S) TestDelBucket(c *C) {
testServer.Response(204, nil, "")
b := s.s3.Bucket("bucket")
err := b.DelBucket()
c.Assert(err, IsNil)
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "DELETE")
c.Assert(req.URL.Path, Equals, "/bucket/")
c.Assert(req.Header["Date"], Not(Equals), "")
}
// ListBuckets: http://goo.gl/NqlyMN
func (s *S) TestListBuckets(c *C) {
testServer.Response(200, nil, GetListBucketsDump)
buckets, err := s.s3.ListBuckets()
c.Assert(err, IsNil)
c.Assert(len(buckets.Buckets), Equals, 2)
c.Assert(buckets.Buckets[0].Name, Equals, "bucket1")
c.Assert(buckets.Buckets[1].Name, Equals, "bucket2")
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/")
}
// GetObject docs: http://goo.gl/isCO7
func (s *S) TestGet(c *C) {
testServer.Response(200, nil, "content")
b := s.s3.Bucket("bucket")
data, err := b.Get("name")
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/bucket/name")
c.Assert(req.Header["Date"], Not(Equals), "")
c.Assert(err, IsNil)
c.Assert(string(data), Equals, "content")
}
func (s *S) TestHead(c *C) {
testServer.Response(200, nil, "")
b := s.s3.Bucket("bucket")
resp, err := b.Head("name")
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "HEAD")
c.Assert(req.URL.Path, Equals, "/bucket/name")
c.Assert(req.Header["Date"], Not(Equals), "")
c.Assert(err, IsNil)
body, err := ioutil.ReadAll(resp.Body)
c.Assert(err, IsNil)
c.Assert(len(body), Equals, 0)
}
func (s *S) TestURL(c *C) {
testServer.Response(200, nil, "content")
b := s.s3.Bucket("bucket")
url := b.URL("name")
r, err := http.Get(url)
c.Assert(err, IsNil)
data, err := ioutil.ReadAll(r.Body)
r.Body.Close()
c.Assert(err, IsNil)
c.Assert(string(data), Equals, "content")
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/bucket/name")
}
func (s *S) TestGetReader(c *C) {
testServer.Response(200, nil, "content")
b := s.s3.Bucket("bucket")
rc, err := b.GetReader("name")
c.Assert(err, IsNil)
data, err := ioutil.ReadAll(rc)
rc.Close()
c.Assert(err, IsNil)
c.Assert(string(data), Equals, "content")
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/bucket/name")
c.Assert(req.Header["Date"], Not(Equals), "")
}
func (s *S) TestGetNotFound(c *C) {
for i := 0; i < 10; i++ {
testServer.Response(404, nil, GetObjectErrorDump)
}
b := s.s3.Bucket("non-existent-bucket")
data, err := b.Get("non-existent")
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/non-existent-bucket/non-existent")
c.Assert(req.Header["Date"], Not(Equals), "")
s3err, _ := err.(*s3.Error)
c.Assert(s3err, NotNil)
c.Assert(s3err.StatusCode, Equals, 404)
c.Assert(s3err.BucketName, Equals, "non-existent-bucket")
c.Assert(s3err.RequestId, Equals, "3F1B667FAD71C3D8")
c.Assert(s3err.HostId, Equals, "L4ee/zrm1irFXY5F45fKXIRdOf9ktsKY/8TDVawuMK2jWRb1RF84i1uBzkdNqS5D")
c.Assert(s3err.Code, Equals, "NoSuchBucket")
c.Assert(s3err.Message, Equals, "The specified bucket does not exist")
c.Assert(s3err.Error(), Equals, "The specified bucket does not exist")
c.Assert(data, IsNil)
}
// PutObject docs: http://goo.gl/FEBPD
func (s *S) TestPutObject(c *C) {
testServer.Response(200, nil, "")
b := s.s3.Bucket("bucket")
err := b.Put("name", []byte("content"), "content-type", s3.Private)
c.Assert(err, IsNil)
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "PUT")
c.Assert(req.URL.Path, Equals, "/bucket/name")
c.Assert(req.Header["Date"], Not(DeepEquals), []string{""})
c.Assert(req.Header["Content-Type"], DeepEquals, []string{"content-type"})
c.Assert(req.Header["Content-Length"], DeepEquals, []string{"7"})
//c.Assert(req.Header["Content-MD5"], DeepEquals, "...")
c.Assert(req.Header["X-Amz-Acl"], DeepEquals, []string{"private"})
}
func (s *S) TestPutObjectHeader(c *C) {
testServer.Response(200, nil, "")
b := s.s3.Bucket("bucket")
err := b.PutHeader(
"name",
[]byte("content"),
map[string][]string{"Content-Type": {"content-type"}},
s3.Private,
)
c.Assert(err, IsNil)
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "PUT")
c.Assert(req.URL.Path, Equals, "/bucket/name")
c.Assert(req.Header["Date"], Not(DeepEquals), []string{""})
c.Assert(req.Header["Content-Type"], DeepEquals, []string{"content-type"})
c.Assert(req.Header["Content-Length"], DeepEquals, []string{"7"})
//c.Assert(req.Header["Content-MD5"], DeepEquals, "...")
c.Assert(req.Header["X-Amz-Acl"], DeepEquals, []string{"private"})
}
func (s *S) TestPutReader(c *C) {
testServer.Response(200, nil, "")
b := s.s3.Bucket("bucket")
buf := bytes.NewBufferString("content")
err := b.PutReader("name", buf, int64(buf.Len()), "content-type", s3.Private)
c.Assert(err, IsNil)
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "PUT")
c.Assert(req.URL.Path, Equals, "/bucket/name")
c.Assert(req.Header["Date"], Not(DeepEquals), []string{""})
c.Assert(req.Header["Content-Type"], DeepEquals, []string{"content-type"})
c.Assert(req.Header["Content-Length"], DeepEquals, []string{"7"})
//c.Assert(req.Header["Content-MD5"], Equals, "...")
c.Assert(req.Header["X-Amz-Acl"], DeepEquals, []string{"private"})
}
func (s *S) TestPutReaderHeader(c *C) {
testServer.Response(200, nil, "")
b := s.s3.Bucket("bucket")
buf := bytes.NewBufferString("content")
err := b.PutReaderHeader(
"name",
buf,
int64(buf.Len()),
map[string][]string{"Content-Type": {"content-type"}},
s3.Private,
)
c.Assert(err, IsNil)
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "PUT")
c.Assert(req.URL.Path, Equals, "/bucket/name")
c.Assert(req.Header["Date"], Not(DeepEquals), []string{""})
c.Assert(req.Header["Content-Type"], DeepEquals, []string{"content-type"})
c.Assert(req.Header["Content-Length"], DeepEquals, []string{"7"})
//c.Assert(req.Header["Content-MD5"], Equals, "...")
c.Assert(req.Header["X-Amz-Acl"], DeepEquals, []string{"private"})
}
func (s *S) TestCopy(c *C) {
testServer.Response(200, nil, "")
b := s.s3.Bucket("bucket")
err := b.Copy(
"old/file",
"new/file",
s3.Private,
)
c.Assert(err, IsNil)
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "PUT")
c.Assert(req.URL.Path, Equals, "/bucket/new/file")
c.Assert(req.Header["X-Amz-Copy-Source"], DeepEquals, []string{"/bucket/old/file"})
c.Assert(req.Header["X-Amz-Acl"], DeepEquals, []string{"private"})
}
func (s *S) TestPlusInURL(c *C) {
testServer.Response(200, nil, "")
b := s.s3.Bucket("bucket")
err := b.Copy(
"dir/old+f?le",
"dir/new+f?le",
s3.Private,
)
c.Assert(err, IsNil)
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "PUT")
c.Assert(req.RequestURI, Equals, "/bucket/dir/new%2Bf%3Fle")
c.Assert(req.Header["X-Amz-Copy-Source"], DeepEquals, []string{"/bucket/dir/old%2Bf%3Fle"})
c.Assert(req.Header["X-Amz-Acl"], DeepEquals, []string{"private"})
}
// DelObject docs: http://goo.gl/APeTt
func (s *S) TestDelObject(c *C) {
testServer.Response(200, nil, "")
b := s.s3.Bucket("bucket")
err := b.Del("name")
c.Assert(err, IsNil)
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "DELETE")
c.Assert(req.URL.Path, Equals, "/bucket/name")
c.Assert(req.Header["Date"], Not(Equals), "")
}
// Delete Multiple Objects docs: http://goo.gl/WvA5sj
func (s *S) TestMultiDelObject(c *C) {
testServer.Response(200, nil, "")
b := s.s3.Bucket("bucket")
err := b.MultiDel([]string{"a", "b"})
c.Assert(err, IsNil)
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "POST")
c.Assert(req.URL.Path, Equals, "/bucket/")
c.Assert(req.RequestURI, Equals, "/bucket/?delete=")
c.Assert(req.Header["Content-Md5"], DeepEquals, []string{"nos/vZNvjGs17xIyjEFlwQ=="})
data, err := ioutil.ReadAll(req.Body)
req.Body.Close()
c.Assert(err, IsNil)
c.Assert(string(data), Equals, "<Delete><Quiet>false</Quiet><Object><Key>a</Key></Object><Object><Key>b</Key></Object></Delete>")
}
// Bucket List Objects docs: http://goo.gl/YjQTc
func (s *S) TestList(c *C) {
testServer.Response(200, nil, GetListResultDump1)
b := s.s3.Bucket("quotes")
data, err := b.List("N", "", "", 0)
c.Assert(err, IsNil)
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/quotes/")
c.Assert(req.Header["Date"], Not(Equals), "")
c.Assert(req.Form["prefix"], DeepEquals, []string{"N"})
c.Assert(req.Form["delimiter"], DeepEquals, []string{""})
c.Assert(req.Form["marker"], DeepEquals, []string{""})
c.Assert(req.Form["max-keys"], DeepEquals, []string(nil))
c.Assert(data.Name, Equals, "quotes")
c.Assert(data.Prefix, Equals, "N")
c.Assert(data.IsTruncated, Equals, false)
c.Assert(len(data.Contents), Equals, 2)
c.Assert(data.Contents[0].Key, Equals, "Nelson")
c.Assert(data.Contents[0].LastModified, Equals, "2006-01-01T12:00:00.000Z")
c.Assert(data.Contents[0].ETag, Equals, `"828ef3fdfa96f00ad9f27c383fc9ac7f"`)
c.Assert(data.Contents[0].Size, Equals, int64(5))
c.Assert(data.Contents[0].StorageClass, Equals, "STANDARD")
c.Assert(data.Contents[0].Owner.ID, Equals, "bcaf161ca5fb16fd081034f")
c.Assert(data.Contents[0].Owner.DisplayName, Equals, "webfile")
c.Assert(data.Contents[1].Key, Equals, "Neo")
c.Assert(data.Contents[1].LastModified, Equals, "2006-01-01T12:00:00.000Z")
c.Assert(data.Contents[1].ETag, Equals, `"828ef3fdfa96f00ad9f27c383fc9ac7f"`)
c.Assert(data.Contents[1].Size, Equals, int64(4))
c.Assert(data.Contents[1].StorageClass, Equals, "STANDARD")
c.Assert(data.Contents[1].Owner.ID, Equals, "bcaf1ffd86a5fb16fd081034f")
c.Assert(data.Contents[1].Owner.DisplayName, Equals, "webfile")
}
func (s *S) TestListWithDelimiter(c *C) {
testServer.Response(200, nil, GetListResultDump2)
b := s.s3.Bucket("quotes")
data, err := b.List("photos/2006/", "/", "some-marker", 1000)
c.Assert(err, IsNil)
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "GET")
c.Assert(req.URL.Path, Equals, "/quotes/")
c.Assert(req.Header["Date"], Not(Equals), "")
c.Assert(req.Form["prefix"], DeepEquals, []string{"photos/2006/"})
c.Assert(req.Form["delimiter"], DeepEquals, []string{"/"})
c.Assert(req.Form["marker"], DeepEquals, []string{"some-marker"})
c.Assert(req.Form["max-keys"], DeepEquals, []string{"1000"})
c.Assert(data.Name, Equals, "example-bucket")
c.Assert(data.Prefix, Equals, "photos/2006/")
c.Assert(data.Delimiter, Equals, "/")
c.Assert(data.Marker, Equals, "some-marker")
c.Assert(data.IsTruncated, Equals, false)
c.Assert(len(data.Contents), Equals, 0)
c.Assert(data.CommonPrefixes, DeepEquals, []string{"photos/2006/feb/", "photos/2006/jan/"})
}
func (s *S) TestGetKey(c *C) {
testServer.Response(200, GetKeyHeaderDump, "")
b := s.s3.Bucket("bucket")
key, err := b.GetKey("name")
req := testServer.WaitRequest()
c.Assert(req.Method, Equals, "HEAD")
c.Assert(req.URL.Path, Equals, "/bucket/name")
c.Assert(req.Header["Date"], Not(Equals), "")
c.Assert(err, IsNil)
c.Assert(key.Key, Equals, "name")
c.Assert(key.LastModified, Equals, GetKeyHeaderDump["Last-Modified"])
c.Assert(key.Size, Equals, int64(434234))
c.Assert(key.ETag, Equals, GetKeyHeaderDump["ETag"])
}
func (s *S) TestUnescapedColon(c *C) {
b := s.s3.Bucket("bucket")
u := b.URL("foo:bar")
c.Assert(u, Equals, "http://localhost:4444/bucket/foo:bar")
}
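
These unit tests drive the client against goamz's testutil stub server: Response queues a canned reply and WaitRequest returns the request the client actually sent, so both sides of the exchange can be asserted. A minimal sketch of the same stub-and-inspect idea using only the standard library's net/http/httptest (everything here is illustrative, not part of goamz):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/http/httptest"
)

func main() {
	var gotPath string
	// Stub server: records the request path and returns a canned body.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		gotPath = r.URL.Path
		fmt.Fprint(w, "content")
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/bucket/name")
	if err != nil {
		panic(err)
	}
	body, _ := ioutil.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Println(gotPath, string(body)) // /bucket/name content
}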


@@ -1,616 +0,0 @@
package s3_test
import (
"bytes"
"crypto/md5"
"fmt"
"io/ioutil"
"net/http"
"strings"
"github.com/mitchellh/goamz/aws"
"github.com/mitchellh/goamz/s3"
"github.com/mitchellh/goamz/testutil"
. "github.com/motain/gocheck"
"net"
"sort"
"time"
)
// AmazonServer represents an Amazon S3 server.
type AmazonServer struct {
auth aws.Auth
}
func (s *AmazonServer) SetUp(c *C) {
auth, err := aws.EnvAuth()
if err != nil {
c.Fatal(err.Error())
}
s.auth = auth
}
var _ = Suite(&AmazonClientSuite{Region: aws.USEast})
var _ = Suite(&AmazonClientSuite{Region: aws.EUWest})
var _ = Suite(&AmazonClientSuite{Region: aws.EUCentral})
var _ = Suite(&AmazonDomainClientSuite{Region: aws.USEast})
// AmazonClientSuite tests the client against a live S3 server.
type AmazonClientSuite struct {
aws.Region
srv AmazonServer
ClientTests
}
func (s *AmazonClientSuite) SetUpSuite(c *C) {
if !testutil.Amazon {
c.Skip("live tests against AWS disabled (no -amazon)")
}
s.srv.SetUp(c)
s.s3 = s3.New(s.srv.auth, s.Region)
// In case a previous test run was interrupted partway through.
s.ClientTests.Cleanup()
}
func (s *AmazonClientSuite) TearDownTest(c *C) {
s.ClientTests.Cleanup()
}
// AmazonDomainClientSuite tests the client against a live S3
// server using bucket names in the endpoint domain name rather
// than the request path.
type AmazonDomainClientSuite struct {
aws.Region
srv AmazonServer
ClientTests
}
func (s *AmazonDomainClientSuite) SetUpSuite(c *C) {
if !testutil.Amazon {
c.Skip("live tests against AWS disabled (no -amazon)")
}
s.srv.SetUp(c)
region := s.Region
region.S3BucketEndpoint = "https://${bucket}.s3.amazonaws.com"
s.s3 = s3.New(s.srv.auth, region)
s.ClientTests.Cleanup()
}
func (s *AmazonDomainClientSuite) TearDownTest(c *C) {
s.ClientTests.Cleanup()
}
// ClientTests defines integration tests designed to test the client.
// It is not used as a test suite in itself, but embedded within
// another type.
type ClientTests struct {
s3 *s3.S3
authIsBroken bool
}
func (s *ClientTests) Cleanup() {
killBucket(testBucket(s.s3))
}
func testBucket(s *s3.S3) *s3.Bucket {
// Watch out! If this function is changed so that it matches a bucket
// other people own, killBucket will happily remove *everything* inside
// that bucket.
key := s.Auth.AccessKey
if len(key) >= 8 {
key = s.Auth.AccessKey[:8]
}
return s.Bucket(fmt.Sprintf("goamz-%s-%s", s.Region.Name, key))
}
var attempts = aws.AttemptStrategy{
Min: 5,
Total: 20 * time.Second,
Delay: 100 * time.Millisecond,
}
func killBucket(b *s3.Bucket) {
var err error
for attempt := attempts.Start(); attempt.Next(); {
err = b.DelBucket()
if err == nil {
return
}
if _, ok := err.(*net.DNSError); ok {
return
}
e, ok := err.(*s3.Error)
if ok && e.Code == "NoSuchBucket" {
return
}
if ok && e.Code == "BucketNotEmpty" {
// Errors are ignored here. Just retry.
resp, err := b.List("", "", "", 1000)
if err == nil {
for _, key := range resp.Contents {
_ = b.Del(key.Key)
}
}
multis, _, _ := b.ListMulti("", "")
for _, m := range multis {
_ = m.Abort()
}
}
}
message := "cannot delete test bucket"
if err != nil {
message += ": " + err.Error()
}
panic(message)
}
func get(url string) ([]byte, error) {
for attempt := attempts.Start(); attempt.Next(); {
resp, err := http.Get(url)
if err != nil {
if attempt.HasNext() {
continue
}
return nil, err
}
data, err := ioutil.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
if attempt.HasNext() {
continue
}
return nil, err
}
return data, err
}
panic("unreachable")
}
func (s *ClientTests) TestBasicFunctionality(c *C) {
b := testBucket(s.s3)
err := b.PutBucket(s3.PublicRead)
c.Assert(err, IsNil)
err = b.Put("name", []byte("yo!"), "text/plain", s3.PublicRead)
c.Assert(err, IsNil)
defer b.Del("name")
data, err := b.Get("name")
c.Assert(err, IsNil)
c.Assert(string(data), Equals, "yo!")
data, err = get(b.URL("name"))
c.Assert(err, IsNil)
c.Assert(string(data), Equals, "yo!")
buf := bytes.NewBufferString("hey!")
err = b.PutReader("name2", buf, int64(buf.Len()), "text/plain", s3.Private)
c.Assert(err, IsNil)
defer b.Del("name2")
rc, err := b.GetReader("name2")
c.Assert(err, IsNil)
data, err = ioutil.ReadAll(rc)
c.Check(err, IsNil)
c.Check(string(data), Equals, "hey!")
rc.Close()
data, err = get(b.SignedURL("name2", time.Now().Add(time.Hour)))
c.Assert(err, IsNil)
c.Assert(string(data), Equals, "hey!")
if !s.authIsBroken {
data, err = get(b.SignedURL("name2", time.Now().Add(-time.Hour)))
c.Assert(err, IsNil)
c.Assert(string(data), Matches, "(?s).*AccessDenied.*")
}
err = b.DelBucket()
c.Assert(err, NotNil)
s3err, ok := err.(*s3.Error)
c.Assert(ok, Equals, true)
c.Assert(s3err.Code, Equals, "BucketNotEmpty")
c.Assert(s3err.BucketName, Equals, b.Name)
c.Assert(s3err.Message, Equals, "The bucket you tried to delete is not empty")
err = b.Del("name")
c.Assert(err, IsNil)
err = b.Del("name2")
c.Assert(err, IsNil)
err = b.DelBucket()
c.Assert(err, IsNil)
}
func (s *ClientTests) TestCopy(c *C) {
b := testBucket(s.s3)
err := b.PutBucket(s3.PublicRead)
c.Assert(err, IsNil)
err = b.Put("name+1", []byte("yo!"), "text/plain", s3.PublicRead)
c.Assert(err, IsNil)
defer b.Del("name+1")
err = b.Copy("name+1", "name+2", s3.PublicRead)
c.Assert(err, IsNil)
defer b.Del("name+2")
data, err := b.Get("name+2")
c.Assert(err, IsNil)
c.Assert(string(data), Equals, "yo!")
err = b.Del("name+1")
c.Assert(err, IsNil)
err = b.Del("name+2")
c.Assert(err, IsNil)
err = b.DelBucket()
c.Assert(err, IsNil)
}
func (s *ClientTests) TestGetNotFound(c *C) {
b := s.s3.Bucket("goamz-" + s.s3.Auth.AccessKey)
data, err := b.Get("non-existent")
s3err, _ := err.(*s3.Error)
c.Assert(s3err, NotNil)
c.Assert(s3err.StatusCode, Equals, 404)
c.Assert(s3err.Code, Equals, "NoSuchBucket")
c.Assert(s3err.Message, Equals, "The specified bucket does not exist")
c.Assert(data, IsNil)
}
// Communicate with all endpoints to see if they are alive.
func (s *ClientTests) TestRegions(c *C) {
errs := make(chan error, len(aws.Regions))
for _, region := range aws.Regions {
go func(r aws.Region) {
s := s3.New(s.s3.Auth, r)
b := s.Bucket("goamz-" + s.Auth.AccessKey)
_, err := b.Get("non-existent")
errs <- err
}(region)
}
for _ = range aws.Regions {
err := <-errs
if err != nil {
s3_err, ok := err.(*s3.Error)
if ok {
c.Check(s3_err.Code, Matches, "NoSuchBucket")
} else if _, ok = err.(*net.DNSError); ok {
// Okay as well.
} else {
c.Errorf("Non-S3 error: %s", err)
}
} else {
c.Errorf("Test should have errored but it seems to have succeeded")
}
}
}
var objectNames = []string{
"index.html",
"index2.html",
"photos/2006/February/sample2.jpg",
"photos/2006/February/sample3.jpg",
"photos/2006/February/sample4.jpg",
"photos/2006/January/sample.jpg",
"test/bar",
"test/foo",
}
func keys(names ...string) []s3.Key {
ks := make([]s3.Key, len(names))
for i, name := range names {
ks[i].Key = name
}
return ks
}
// Because ListResp also carries all the request parameters, we use it
// here to describe both the request and the expected results. Of the
// Contents field, only the key names are significant.
var listTests = []s3.ListResp{
// normal list.
{
Contents: keys(objectNames...),
}, {
Marker: objectNames[0],
Contents: keys(objectNames[1:]...),
}, {
Marker: objectNames[0] + "a",
Contents: keys(objectNames[1:]...),
}, {
Marker: "z",
},
// limited results.
{
MaxKeys: 2,
Contents: keys(objectNames[0:2]...),
IsTruncated: true,
}, {
MaxKeys: 2,
Marker: objectNames[0],
Contents: keys(objectNames[1:3]...),
IsTruncated: true,
}, {
MaxKeys: 2,
Marker: objectNames[len(objectNames)-2],
Contents: keys(objectNames[len(objectNames)-1:]...),
},
// with delimiter
{
Delimiter: "/",
CommonPrefixes: []string{"photos/", "test/"},
Contents: keys("index.html", "index2.html"),
}, {
Delimiter: "/",
Prefix: "photos/2006/",
CommonPrefixes: []string{"photos/2006/February/", "photos/2006/January/"},
}, {
Delimiter: "/",
Prefix: "t",
CommonPrefixes: []string{"test/"},
}, {
Delimiter: "/",
MaxKeys: 1,
Contents: keys("index.html"),
IsTruncated: true,
}, {
Delimiter: "/",
MaxKeys: 1,
Marker: "index2.html",
CommonPrefixes: []string{"photos/"},
IsTruncated: true,
}, {
Delimiter: "/",
MaxKeys: 1,
Marker: "photos/",
CommonPrefixes: []string{"test/"},
IsTruncated: false,
}, {
Delimiter: "Feb",
CommonPrefixes: []string{"photos/2006/Feb"},
Contents: keys("index.html", "index2.html", "photos/2006/January/sample.jpg", "test/bar", "test/foo"),
},
}
func (s *ClientTests) TestDoublePutBucket(c *C) {
b := testBucket(s.s3)
err := b.PutBucket(s3.PublicRead)
c.Assert(err, IsNil)
err = b.PutBucket(s3.PublicRead)
if err != nil {
c.Assert(err, FitsTypeOf, new(s3.Error))
c.Assert(err.(*s3.Error).Code, Equals, "BucketAlreadyOwnedByYou")
}
}
func (s *ClientTests) TestBucketList(c *C) {
b := testBucket(s.s3)
err := b.PutBucket(s3.Private)
c.Assert(err, IsNil)
objData := make(map[string][]byte)
for i, path := range objectNames {
data := []byte(strings.Repeat("a", i))
err := b.Put(path, data, "text/plain", s3.Private)
c.Assert(err, IsNil)
defer b.Del(path)
objData[path] = data
}
for i, t := range listTests {
c.Logf("test %d", i)
resp, err := b.List(t.Prefix, t.Delimiter, t.Marker, t.MaxKeys)
c.Assert(err, IsNil)
c.Check(resp.Name, Equals, b.Name)
c.Check(resp.Delimiter, Equals, t.Delimiter)
c.Check(resp.IsTruncated, Equals, t.IsTruncated)
c.Check(resp.CommonPrefixes, DeepEquals, t.CommonPrefixes)
checkContents(c, resp.Contents, objData, t.Contents)
}
}
func etag(data []byte) string {
sum := md5.New()
sum.Write(data)
return fmt.Sprintf(`"%x"`, sum.Sum(nil))
}
func checkContents(c *C, contents []s3.Key, data map[string][]byte, expected []s3.Key) {
c.Assert(contents, HasLen, len(expected))
for i, k := range contents {
c.Check(k.Key, Equals, expected[i].Key)
// TODO mtime
c.Check(k.Size, Equals, int64(len(data[k.Key])))
c.Check(k.ETag, Equals, etag(data[k.Key]))
}
}
func (s *ClientTests) TestMultiInitPutList(c *C) {
b := testBucket(s.s3)
err := b.PutBucket(s3.Private)
c.Assert(err, IsNil)
multi, err := b.InitMulti("multi", "text/plain", s3.Private)
c.Assert(err, IsNil)
c.Assert(multi.UploadId, Matches, ".+")
defer multi.Abort()
var sent []s3.Part
for i := 0; i < 5; i++ {
p, err := multi.PutPart(i+1, strings.NewReader(fmt.Sprintf("<part %d>", i+1)))
c.Assert(err, IsNil)
c.Assert(p.N, Equals, i+1)
c.Assert(p.Size, Equals, int64(8))
c.Assert(p.ETag, Matches, ".+")
sent = append(sent, p)
}
s3.SetListPartsMax(2)
parts, err := multi.ListParts()
c.Assert(err, IsNil)
c.Assert(parts, HasLen, len(sent))
for i := range parts {
c.Assert(parts[i].N, Equals, sent[i].N)
c.Assert(parts[i].Size, Equals, sent[i].Size)
c.Assert(parts[i].ETag, Equals, sent[i].ETag)
}
err = multi.Complete(parts)
s3err, failed := err.(*s3.Error)
c.Assert(failed, Equals, true)
c.Assert(s3err.Code, Equals, "EntityTooSmall")
err = multi.Abort()
c.Assert(err, IsNil)
_, err = multi.ListParts()
s3err, ok := err.(*s3.Error)
c.Assert(ok, Equals, true)
c.Assert(s3err.Code, Equals, "NoSuchUpload")
}
// This may take a minute or more due to the minimum part size S3
// accepts for multipart uploads.
func (s *ClientTests) TestMultiComplete(c *C) {
b := testBucket(s.s3)
err := b.PutBucket(s3.Private)
c.Assert(err, IsNil)
multi, err := b.InitMulti("multi", "text/plain", s3.Private)
c.Assert(err, IsNil)
c.Assert(multi.UploadId, Matches, ".+")
defer multi.Abort()
// Minimum size S3 accepts for all but the last part is 5MB.
data1 := make([]byte, 5*1024*1024)
data2 := []byte("<part 2>")
part1, err := multi.PutPart(1, bytes.NewReader(data1))
c.Assert(err, IsNil)
part2, err := multi.PutPart(2, bytes.NewReader(data2))
c.Assert(err, IsNil)
// Purposefully reversed. The order requirement must be handled.
err = multi.Complete([]s3.Part{part2, part1})
c.Assert(err, IsNil)
data, err := b.Get("multi")
c.Assert(err, IsNil)
c.Assert(len(data), Equals, len(data1)+len(data2))
for i := range data1 {
if data[i] != data1[i] {
c.Fatalf("uploaded object at byte %d: want %d, got %d", data1[i], data[i])
}
}
c.Assert(string(data[len(data1):]), Equals, string(data2))
}
type multiList []*s3.Multi
func (l multiList) Len() int { return len(l) }
func (l multiList) Less(i, j int) bool { return l[i].Key < l[j].Key }
func (l multiList) Swap(i, j int) { l[i], l[j] = l[j], l[i] }
func (s *ClientTests) TestListMulti(c *C) {
b := testBucket(s.s3)
err := b.PutBucket(s3.Private)
c.Assert(err, IsNil)
// Ensure an empty state before testing its behavior.
multis, _, err := b.ListMulti("", "")
for _, m := range multis {
err := m.Abort()
c.Assert(err, IsNil)
}
keys := []string{
"a/multi2",
"a/multi3",
"b/multi4",
"multi1",
}
for _, key := range keys {
m, err := b.InitMulti(key, "", s3.Private)
c.Assert(err, IsNil)
defer m.Abort()
}
// Amazon's implementation of the multiple-request listing for
// multipart uploads in progress seems broken in multiple ways.
// (next tokens are not provided, etc).
//s3.SetListMultiMax(2)
multis, prefixes, err := b.ListMulti("", "")
c.Assert(err, IsNil)
for attempt := attempts.Start(); attempt.Next() && len(multis) < len(keys); {
multis, prefixes, err = b.ListMulti("", "")
c.Assert(err, IsNil)
}
sort.Sort(multiList(multis))
c.Assert(prefixes, IsNil)
var gotKeys []string
for _, m := range multis {
gotKeys = append(gotKeys, m.Key)
}
c.Assert(gotKeys, DeepEquals, keys)
for _, m := range multis {
c.Assert(m.Bucket, Equals, b)
c.Assert(m.UploadId, Matches, ".+")
}
multis, prefixes, err = b.ListMulti("", "/")
for attempt := attempts.Start(); attempt.Next() && len(prefixes) < 2; {
multis, prefixes, err = b.ListMulti("", "")
c.Assert(err, IsNil)
}
c.Assert(err, IsNil)
c.Assert(prefixes, DeepEquals, []string{"a/", "b/"})
c.Assert(multis, HasLen, 1)
c.Assert(multis[0].Bucket, Equals, b)
c.Assert(multis[0].Key, Equals, "multi1")
c.Assert(multis[0].UploadId, Matches, ".+")
for attempt := attempts.Start(); attempt.Next() && len(multis) < 2; {
multis, prefixes, err = b.ListMulti("", "")
c.Assert(err, IsNil)
}
multis, prefixes, err = b.ListMulti("a/", "/")
c.Assert(err, IsNil)
c.Assert(prefixes, IsNil)
c.Assert(multis, HasLen, 2)
c.Assert(multis[0].Bucket, Equals, b)
c.Assert(multis[0].Key, Equals, "a/multi2")
c.Assert(multis[0].UploadId, Matches, ".+")
c.Assert(multis[1].Bucket, Equals, b)
c.Assert(multis[1].Key, Equals, "a/multi3")
c.Assert(multis[1].UploadId, Matches, ".+")
}
func (s *ClientTests) TestMultiPutAllZeroLength(c *C) {
b := testBucket(s.s3)
err := b.PutBucket(s3.Private)
c.Assert(err, IsNil)
multi, err := b.InitMulti("multi", "text/plain", s3.Private)
c.Assert(err, IsNil)
defer multi.Abort()
// This tests an edge case. Amazon requires at least one
// part for multipart uploads to work, even if the part is empty.
parts, err := multi.PutAll(strings.NewReader(""), 5*1024*1024)
c.Assert(err, IsNil)
c.Assert(parts, HasLen, 1)
c.Assert(parts[0].Size, Equals, int64(0))
c.Assert(parts[0].ETag, Equals, `"d41d8cd98f00b204e9800998ecf8427e"`)
err = multi.Complete(parts)
c.Assert(err, IsNil)
}
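
Pieced together from the calls these integration tests exercise (InitMulti, PutPart, Complete, Abort), a hedged sketch of the happy-path multipart upload flow; the bucket name is hypothetical, error handling is kept minimal, and every part except the last must be at least 5 MB:

package main

import (
	"bytes"
	"log"

	"github.com/mitchellh/goamz/aws"
	"github.com/mitchellh/goamz/s3"
)

func main() {
	auth, err := aws.EnvAuth() // credentials from the environment, as in AmazonServer.SetUp
	if err != nil {
		log.Fatal(err)
	}
	b := s3.New(auth, aws.USEast).Bucket("goamz-multipart-sketch") // hypothetical, pre-existing bucket

	multi, err := b.InitMulti("multi-object", "text/plain", s3.Private)
	if err != nil {
		log.Fatal(err)
	}
	// Every part except the last must be at least 5 MB.
	part1, err := multi.PutPart(1, bytes.NewReader(make([]byte, 5*1024*1024)))
	if err != nil {
		log.Fatal(err)
	}
	part2, err := multi.PutPart(2, bytes.NewReader([]byte("tail")))
	if err != nil {
		log.Fatal(err)
	}
	if err := multi.Complete([]s3.Part{part1, part2}); err != nil {
		multi.Abort() // best-effort clean-up of the pending upload
		log.Fatal(err)
	}
}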

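The ClientTests type above is a shared test set: gocheck runs its methods through whichever concrete suite embeds it (path-style and virtual-hosted-style addressing in this file). A rough equivalent of that reuse pattern with the standard testing package, all names hypothetical:

package client_test

import "testing"

// clientTests is the shared test set; each top-level test wires it up with a
// different configuration, mirroring how ClientTests is embedded above.
type clientTests struct {
	endpoint string // stands in for the region / addressing-style configuration
}

func (ct clientTests) testBasicFunctionality(t *testing.T) {
	t.Logf("running shared assertions against %s", ct.endpoint)
	// ...real assertions would go here...
}

func TestPathStyle(t *testing.T) {
	ct := clientTests{endpoint: "https://s3.amazonaws.com"}
	t.Run("BasicFunctionality", ct.testBasicFunctionality)
}

func TestDomainStyle(t *testing.T) {
	ct := clientTests{endpoint: "https://${bucket}.s3.amazonaws.com"}
	t.Run("BasicFunctionality", ct.testBasicFunctionality)
}
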
Some files were not shown because too many files have changed in this diff.