I am tracking down an issue when performing object copies using my S3
library. For context:
http://bitbucket.org/jmurty/jets3t/issue/27/x-amz-copy-source-incompatibilty
When sending copy requests to S3 I URL-encode the "x-amz-copy-source"
header value, which points to the source object in that service.
Google Storage, however, does not seem to accept headers in this
format, and will fail to find the source object if the
"x-amz-copy-source" header value contains any URL-encoded characters.
Example "x-amz-copy-source" values that work:
bucketname/testing.txt
bucketname/test me.txt (note the space)
bucketname/virtual path/filename.txt (note the space and slash characters)
Example "x-amz-copy-source" values that do not work:
bucketname/test%20me.txt
bucketname/test+me.txt
bucketname/path%2Ffilename.txt (encoded slash in object key name)
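To make the mismatch concrete, percent-decoding the failing values gives back the raw keys that GS apparently expects. A Python sketch; note that `unquote` deliberately leaves "+" alone, since "+"-for-space is form-encoding rather than plain percent-encoding, which is part of why that variant is ambiguous:

```python
from urllib.parse import unquote

failing = ["bucketname/test%20me.txt",
           "bucketname/test+me.txt",
           "bucketname/path%2Ffilename.txt"]
for value in failing:
    # GS appears to treat the header value literally, so the decoded
    # form is what actually names the object.
    print(value, "->", unquote(value))
# bucketname/test%20me.txt -> bucketname/test me.txt
# bucketname/test+me.txt -> bucketname/test+me.txt  ("+" is left as-is)
# bucketname/path%2Ffilename.txt -> bucketname/path/filename.txt
```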
I'm guessing I need to use MIME/RFC 2047 encoding for header values
instead of URL-encoding?
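For reference, an RFC 2047 "encoded-word" looks quite different from URL-encoding. A minimal sketch of the B (base64) form, using a hand-rolled helper rather than anything either service is documented to accept; whether S3 or GS would actually decode such a value is exactly what I have not had time to verify (and note RFC 2047 also caps each encoded-word at 75 characters):

```python
import base64

def rfc2047_encode(value, charset="utf-8"):
    # Wrap the value in an RFC 2047 encoded-word using B (base64)
    # encoding: =?charset?B?base64-data?=
    token = base64.b64encode(value.encode(charset)).decode("ascii")
    return "=?%s?B?%s?=" % (charset, token)

print(rfc2047_encode("bucketname/test me.txt"))
```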
Ideally GS would support the same URL-encoding that S3 accepts --
although S3 may be overly accepting in this case -- but I can switch
to another encoding provided there is one that both services support.
I've run out of time to test RFC 2047 right now, so if anyone has a
definitive answer that would be great.
- James