New issue 153 by cris.fl...@gmail.com: IAM user permissions issue
http://code.google.com/p/s3fs/issues/detail?id=153
What steps will reproduce the problem?
1. Create a user with the following permissions; I am able to mount the
bucket without a problem (I have verified that the credentials are correct):
AdminsGroupPolicy
{
  "Statement": [{
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  }]
}
2. Create a user with permissions limited to a specific S3 bucket; I
receive the following errors:
s3fs: CURLE_HTTP_RETURNED_ERROR
s3fs: HTTP Error Code: 403
s3fs: AWS Error Code: AccessDenied
s3fs: AWS Message: Access Denied
Permissions:
BackupProjectPolicy
{
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:*"],
    "Resource": ["arn:aws:s3:::data-folder/*",
                 "arn:aws:s3:::data-folder"]
  },
  {
    "Effect": "Deny",
    "Action": ["s3:*"],
    "NotResource": ["arn:aws:s3:::data-folder/*",
                    "arn:aws:s3:::data-folder"]
  }]
}
What is the expected output? What do you see instead?
Are there permissions other than S3 access to the bucket required to mount
a bucket as a file system? I have been able to use these credentials to
upload files using Ruby AWS/s3 so I believe they work correctly.
What version of the product are you using? On what operating system?
* Linux 2.6.35-25-virtual #44-Ubuntu SMP x86_64 GNU/Linux (mounted as ec2
instance)
* Amazon Simple Storage Service File System 1.35
Please provide any additional information below.
Simply changing the credentials in the .password_s3fs file from the admin
user to the other user causes the error.
Comment #1 on issue 153 by moore...@suncup.net: IAM user permissions issue
http://code.google.com/p/s3fs/issues/detail?id=153
This is probably not a defect; it has to do with the usage model.
There are several ways that you can get the credentials into s3fs (in
order of precedence):
// 1 - from a password file specified on the command line
Needs to be readable by the effective user, cannot be readable
by group/other
// 2 - from environment variables
AWSACCESSKEYID
AWSSECRETACCESSKEY
// 3 - from the user's ${HOME}/.passwd-s3fs
same permissions restrictions as #1
// 4 - from /etc/passwd-s3fs
Can be group readable, but not other readable
It looks like what you might want to do is create a special group for your
backup folks and make the /etc/passwd-s3fs that group owned and group
readable.
If you want the user to use the central /etc/passwd-s3fs file, then they
shouldn't have a $HOME/.passwd-s3fs file. They can have one, but its
credentials need to be correct -- be careful with default-format
credentials in these files; you may want to use the explicit format.
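As a concrete sketch of the per-user file setup: the keys and bucket name
below are placeholders, and the "explicit" bucket-specific line follows the
bucketName:accessKeyId:secretAccessKey format.

```shell
# Sketch of setting up ~/.passwd-s3fs; keys below are placeholders.
# Default format: a single "accessKeyId:secretAccessKey" line.
echo 'AKIAEXAMPLEKEY:exampleSecretKey' > ~/.passwd-s3fs
# Explicit format: pin credentials to one bucket (bucket:key:secret).
echo 'data-folder:AKIAEXAMPLEKEY:exampleSecretKey' >> ~/.passwd-s3fs
# Per #1/#3 above, the file must not be group/other readable.
chmod 600 ~/.passwd-s3fs
```

With the explicit format, different buckets can carry different credentials
in the same file, which avoids the ambiguity of a single default line.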
The situation relates to #3, using the .passwd-s3fs file for a single user.
1) log in
2) su to root
3) cd /root/
4) cp .admin_credentials.txt .passwd-s3fs
5) emacs .passwd-s3fs (edit so it is in correct format)
6) mount s3fs data-folder /mnt/tmp
- everything works
7) umount /mnt/tmp
8) cp .limited_user_credentials.txt .passwd-s3fs
9) emacs .passwd-s3fs (edit so it is in correct format)
10) mount s3fs data-folder /mnt/tmp
- AWS Error Code: AccessDenied
Do I need IAM permissions for anything other than the s3 bucket (as shown
above)? I have the correct format in the .passwd-s3fs file. The s3fs
setup is correct since I can access it with the admin credentials. The
limited user credentials are correct since I have used them to access the
s3 bucket from other software. I have tried a policy allowing access to
ALL of S3 with the limited user, and that works (again showing the
credentials are correct). I am looking for a solution whereby I can set a
policy for a user allowing the user to mount only one of several buckets
stored on S3.
Admittedly, I'm not up on the whole policy thing. It appears that
policies can be set on a per-bucket basis. I'm afraid I can provide
very little support in this area -- hopefully there's another user who can.
Evidently there is a call to s3:ListAllMyBuckets
(http://docs.amazonwebservices.com/IAM/latest/UserGuide/UsingWithS3.html)
that is required to determine if the bucket requested exists before
attempting to mount. Adding the following policy to the user allowed me to
mount the bucket:
{
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:ListAllMyBuckets",
    "Resource": "arn:aws:s3:::*"
  }]
}
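Putting the pieces together, a sketch of a complete limited-user policy
might combine the bucket-scoped statement from earlier in this thread with
the service-level list permission above (the bucket name is the example
used here; AWS policy evaluation details may require adjustment):

```json
{
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:ListAllMyBuckets",
    "Resource": "arn:aws:s3:::*"
  },
  {
    "Effect": "Allow",
    "Action": ["s3:*"],
    "Resource": ["arn:aws:s3:::data-folder",
                 "arn:aws:s3:::data-folder/*"]
  }]
}
```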
You may want to consider checking the list of all buckets only if mounting
a specific bucket fails. That would make it possible to mount a bucket by
granting access to that bucket alone. Additionally, it would clear the way
to allowing 'directories' on S3 (such as data-folder/person1) to be mounted
by one user and differing locations (data-folder/person2) by another.
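The suggested fallback order can be sketched roughly as follows;
head_bucket and list_all_buckets are hypothetical stand-ins for the real
S3 requests (a probe of the named bucket, and the service-level
ListAllMyBuckets call), not actual s3fs functions:

```shell
# Hypothetical probes standing in for the real S3 requests:
head_bucket() { [ "$1" = "data-folder" ]; }                   # HEAD /bucket succeeded?
list_all_buckets() { printf 'data-folder\nother-bucket\n'; }  # GET / (service-level op)

# Suggested order: probe the named bucket first, and fall back to the
# ListAllMyBuckets call only when the direct probe fails.
check_bucket() {
  if head_bucket "$1"; then
    echo mountable
  elif list_all_buckets | grep -qx "$1"; then
    echo listed-but-denied
  else
    echo not-found
  fi
}

check_bucket data-folder   # → mountable
```

With this ordering, a user whose policy covers only one bucket never
triggers the service-level call that their policy denies.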
Cris,
Let's revisit this and see if there is something in the behavior of s3fs
that can be changed that will support your use model. First assume that I
know nothing of this IAM feature or how to use it.
- should s3fs retrieve the bucket's policy and parse it for pertinent
information? If so, what info should we look for and how is it pertinent?
Dan
As far as I can tell, the only issue is an attempt to retrieve the listing
of all buckets prior to connecting to a bucket. If the bucket name is
known, is a listing of all buckets required?
Cris
I just ran into this same issue. It would be nice if s3fs supported IAM
with regard to limiting accounts to specific buckets.
Any idea when this usage model will be supported? I'm currently using 1.59
I had an immediate need to make s3fs support IAM policies. I chopped out
the code that listed all of the buckets available to the access key, and
it now works for me.
I don't really need to list all of the buckets so long as there's a
mechanism to gracefully handle permission errors. Thanks for the patch,
I'll take a look and see if we can get this pushed out in the next release.
Hi Ben, thanks for taking a look so quickly. I will check it out first
thing tomorrow. It's not just listing files, but service operations in
general: if you set a resource mask limiting operations to specific
buckets, general service operations will give access-denied errors.
Hi Ben,
I tested and it works for me!
Great! I'll leave it in for now; hopefully any issues will pop up before
the next release. Hopefully I'll get some time to fully integrate IAM
before too long.
I tested this (using r383, also r374) and about 80% of the time I
get "Input/output error" when listing a directory (although it does fix the
initial issue with getting a 403 when using IAM credentials).
These files have been in this bucket forever, so it's not an eventual
consistency thing.
Using the -d option only prints the init message to syslog:
Nov 2 23:09:55 slice4 s3fs: init $Rev: 382 $ [sic]
Using:
fuse-2.8.6 (patched to fix the --no-canonicalize issue in mount: Issue 228)
CentOS release 5.7 (Final)
What else can I do to help test?
However, creating a new bucket, with the same bucket policy and IAM user,
gives no errors after creating 50 files (the other bucket only had 33), and
listing them never gives the IO error.