Issue 153 in s3fs: IAM user permissions issue


s3...@googlecode.com

Feb 9, 2011, 5:01:15 PM
to s3fs-...@googlegroups.com
Status: New
Owner: ----
Labels: Type-Defect Priority-Medium

New issue 153 by cris.fl...@gmail.com: IAM user permissions issue
http://code.google.com/p/s3fs/issues/detail?id=153

What steps will reproduce the problem?
1. Create a user with the following permissions; I am able to mount the
bucket without problem (I have verified that the credentials are correct):
AdminsGroupPolicy
{
  "Statement": [{
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  }]
}


2. Create a user with permissions limited to a specific S3 bucket; I
receive the following errors:
s3fs: CURLE_HTTP_RETURNED_ERROR
s3fs: HTTP Error Code: 403
s3fs: AWS Error Code: AccessDenied
s3fs: AWS Message: Access Denied

Permissions:
BackupProjectPolicy
{
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:*"],
    "Resource": ["arn:aws:s3:::data-folder/*",
                 "arn:aws:s3:::data-folder"]
  },
  {
    "Effect": "Deny",
    "Action": ["s3:*"],
    "NotResource": ["arn:aws:s3:::data-folder/*",
                    "arn:aws:s3:::data-folder"]
  }]
}

What is the expected output? What do you see instead?
Are there permissions other than S3 access to the bucket required to mount
a bucket as a file system? I have been able to use these credentials to
upload files using Ruby AWS/s3, so I believe they work correctly.


What version of the product are you using? On what operating system?
* Linux 2.6.35-25-virtual #44-Ubuntu SMP x86_64 GNU/Linux (mounted as ec2
instance)
* Amazon Simple Storage Service File System 1.35


Please provide any additional information below.
Simply changing the credentials in the .password_s3fs file from the admin
user to the other user causes the error.

s3...@googlecode.com

Feb 9, 2011, 11:42:08 PM
to s3fs-...@googlegroups.com
Updates:
Labels: -Type-Defect Type-Support

Comment #1 on issue 153 by moore...@suncup.net: IAM user permissions issue
http://code.google.com/p/s3fs/issues/detail?id=153

Probably not a defect but has to do with the usage model.

There are several ways to supply credentials to s3fs (in order of
precedence):

// 1 - from a password file specified on the command line

Needs to be readable by the effective user, cannot be readable
by group/other

// 2 - from environment variables

AWSACCESSKEYID
AWSSECRETACCESSKEY


// 3 - from the user's ${HOME}/.passwd-s3fs

Same permission restrictions as #1

// 4 - from /etc/passwd-s3fs

Can be group readable, but not other readable
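A minimal sketch of these permission rules (the ACCESS_KEY:SECRET_KEY line
format is from the s3fs documentation; the key values are placeholders, and
a temp file stands in for the real paths):

```shell
# Sketch of the credential-file permission rules, using a temp file in
# place of ${HOME}/.passwd-s3fs (key values are placeholders).
pwfile="$(mktemp)"
printf 'AKIAEXAMPLEKEY:exampleSecretKey\n' > "$pwfile"

# Per-user file: must be readable only by the owner; s3fs rejects
# group/other-readable password files.
chmod 600 "$pwfile"
stat -c '%a' "$pwfile"    # prints 600

# The central /etc/passwd-s3fs may instead be group readable (mode 640),
# e.g. owned by root and a dedicated backup group (run as root):
#   chown root:backupgroup /etc/passwd-s3fs && chmod 640 /etc/passwd-s3fs
```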


It looks like what you might want to do is create a special group for your
backup folks and make /etc/passwd-s3fs owned by that group and group
readable.

If you want a user to use the central /etc/passwd-s3fs file, then they
shouldn't have a $HOME/.passwd-s3fs file. They can have one, but its
credentials need to be correct -- be careful with default credentials in
these files; you might want to use the explicit format.


If the user executing the

s3...@googlecode.com

Feb 10, 2011, 2:59:06 PM
to s3fs-...@googlegroups.com

Comment #2 on issue 153 by cris.fl...@gmail.com: IAM user permissions issue
http://code.google.com/p/s3fs/issues/detail?id=153

The situation relates to #3, using the .passwd-s3fs file for a single user.
1) log in
2) su to root
3) cd /root/
4) cp .admin_credentials.txt .passwd-s3fs
5) emacs .passwd-s3fs (edit so it is in correct format)
6) mount s3fs data-folder /mnt/tmp
- everything works
7) umount /mnt/tmp
8) cp .limited_user_credentials.txt .passwd-s3fs
9) emacs .passwd-s3fs (edit so it is in correct format)
10) mount s3fs data-folder /mnt/tmp
- AWS Error Code: AccessDenied

Do I need IAM permissions for anything other than the s3 bucket (as shown
above)? I have the correct format in the .passwd-s3fs file. The s3fs
setup is correct since I can access it with the admin credentials. The
limited user credentials are correct since I have used them to access the
s3 bucket from other software. I have tried a policy allowing access to
ALL of S3 with the limited user, and that works (again showing the
credentials are correct). I am looking for a solution whereby I can set a
policy for a user allowing the user to mount only one of several buckets
stored on S3.

s3...@googlecode.com

Feb 10, 2011, 3:43:13 PM
to s3fs-...@googlegroups.com

Comment #3 on issue 153 by moore...@suncup.net: IAM user permissions issue
http://code.google.com/p/s3fs/issues/detail?id=153

Admittedly, I'm not up on the whole policy thing. It appears that policies
can be set on a per-bucket basis. I'm afraid I can provide very little
support in this area -- hopefully there's another user who can.

s3...@googlecode.com

Feb 10, 2011, 4:16:23 PM
to s3fs-...@googlegroups.com

Comment #4 on issue 153 by cris.fl...@gmail.com: IAM user permissions issue
http://code.google.com/p/s3fs/issues/detail?id=153

Evidently there is a call to s3:ListAllMyBuckets
(http://docs.amazonwebservices.com/IAM/latest/UserGuide/UsingWithS3.html)
that is required to determine whether the requested bucket exists before
attempting to mount it. Adding the following policy to the user allowed me
to mount the bucket:

{
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:ListAllMyBuckets",
    "Resource": "arn:aws:s3:::*"
  }]
}
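For reference, a single policy combining this statement with the
bucket-scoped statements from the original report might look like the
following (the bucket name data-folder is taken from the report; this is a
sketch of what worked in this thread, not an officially documented minimum):

```
{
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:ListAllMyBuckets",
    "Resource": "arn:aws:s3:::*"
  },
  {
    "Effect": "Allow",
    "Action": ["s3:*"],
    "Resource": ["arn:aws:s3:::data-folder",
                 "arn:aws:s3:::data-folder/*"]
  }]
}
```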


s3...@googlecode.com

Feb 10, 2011, 4:20:24 PM
to s3fs-...@googlegroups.com

Comment #5 on issue 153 by cris.fl...@gmail.com: IAM user permissions issue
http://code.google.com/p/s3fs/issues/detail?id=153

You may want to consider checking the list of all buckets only when
mounting a specific bucket fails. This would allow a grant on a single
bucket to be sufficient for mounting that bucket. Additionally, it would
clear the way to allowing 'directories' on S3 (such as data-folder/person1)
to be mounted by one user and differing locations (data-folder/person2) by
another person.

s3...@googlecode.com

Apr 7, 2011, 10:50:43 AM
to s3fs-...@googlegroups.com

Comment #6 on issue 153 by moore...@suncup.net: IAM user permissions issue
http://code.google.com/p/s3fs/issues/detail?id=153

Cris,

Let's revisit this and see if there is something in the behavior of s3fs
that can be changed to support your usage model. First, assume that I know
nothing of this IAM feature or how to use it.

- should s3fs retrieve the bucket's policy and parse it for pertinent
information? If so, what info should we look for, and how is it pertinent?

Dan

s3...@googlecode.com

Apr 7, 2011, 8:09:08 PM
to s3fs-...@googlegroups.com

Comment #7 on issue 153 by cris.fl...@gmail.com: IAM user permissions issue
http://code.google.com/p/s3fs/issues/detail?id=153

As far as I can tell, the only issue is the attempt to retrieve the listing
of all buckets prior to connecting to a bucket. If the bucket name is
known, is a listing of all buckets required?

Cris

s3...@googlecode.com

Aug 18, 2011, 3:52:03 PM
to s3fs-...@googlegroups.com

Comment #8 on issue 153 by dennis.p...@zemoga.com: IAM user permissions
issue
http://code.google.com/p/s3fs/issues/detail?id=153

I just ran into this same issue. It would be nice if s3fs supported IAM
policies that limit accounts to specific buckets.

Any idea when this usage model will be supported? I'm currently using 1.59.

s3...@googlecode.com

Aug 30, 2011, 2:50:53 PM
to s3fs-...@googlegroups.com

Comment #9 on issue 153 by dennis.p...@zemoga.com: IAM user permissions
issue
http://code.google.com/p/s3fs/issues/detail?id=153

I had an immediate need to make s3fs support IAM policies. I chopped out
the code that listed all of the buckets available to the access key, and it
now works for me.

https://s3.amazonaws.com/dportello/s3fs-1.61.iam.patch.bz2

s3...@googlecode.com

Aug 30, 2011, 3:05:51 PM
to s3fs-...@googlegroups.com

Comment #10 on issue 153 by ben.lema...@gmail.com: IAM user permissions
issue
http://code.google.com/p/s3fs/issues/detail?id=153

I don't really need to list all of the buckets so long as there's a
mechanism to gracefully handle permission errors. Thanks for the patch,
I'll take a look and see if we can get this pushed out in the next release.

s3...@googlecode.com

Aug 30, 2011, 10:38:24 PM
to s3fs-...@googlegroups.com

Comment #12 on issue 153 by dennis.p...@zemoga.com: IAM user permissions
issue
http://code.google.com/p/s3fs/issues/detail?id=153

Hi Ben, thanks for taking a look so quickly. I will check it out first
thing tomorrow. It's not just listing files, but service-level operations
in general: if you set a resource mask limiting operations to specific
buckets, general service operations will give access-denied errors.

s3...@googlecode.com

Aug 31, 2011, 4:50:28 PM
to s3fs-...@googlegroups.com

Comment #13 on issue 153 by dennis.p...@zemoga.com: IAM user permissions
issue
http://code.google.com/p/s3fs/issues/detail?id=153

Hi Ben,

I tested and it works for me!

s3...@googlecode.com

Aug 31, 2011, 4:54:30 PM
to s3fs-...@googlegroups.com

Comment #14 on issue 153 by ben.lema...@gmail.com: IAM user permissions
issue
http://code.google.com/p/s3fs/issues/detail?id=153

Great! I'll leave it in for now; hopefully any issues will pop up before
the next release. I hope to get some time to fully integrate IAM before
too long.

s3...@googlecode.com

Nov 2, 2011, 7:18:10 PM
to s3fs-...@googlegroups.com

Comment #15 on issue 153 by darkcont...@gmail.com: IAM user permissions
issue
http://code.google.com/p/s3fs/issues/detail?id=153

I tested this (using r383, also r374) and about 80% of the time I
get "Input/output error" when listing a directory (although it does fix the
initial issue with getting a 403 when using IAM credentials).

These files have been in this bucket forever, so it's not an eventual
consistency thing.

Using the -d option only prints the init message to syslog:
Nov 2 23:09:55 slice4 s3fs: init $Rev: 382 $ [sic]

Using:
fuse-2.8.6 (patched to fix the --no-canonicalize issue in mount: Issue 228)
CentOS release 5.7 (Final)

What else can I do to help test?

s3...@googlecode.com

Nov 2, 2011, 7:25:40 PM
to s3fs-...@googlegroups.com

Comment #16 on issue 153 by darkcont...@gmail.com: IAM user permissions
issue
http://code.google.com/p/s3fs/issues/detail?id=153

However, creating a new bucket, with the same bucket policy and IAM user,
gives no errors after creating 50 files (the other bucket only had 33), and
listing them never gives the IO error.


s3...@googlecode.com

Oct 11, 2012, 9:32:02 AM
to s3fs-...@googlegroups.com

Comment #18 on issue 153 by emcl...@gmail.com: IAM user permissions issue
http://code.google.com/p/s3fs/issues/detail?id=153

I'm getting the "Input/output" error mentioned above. Without the newest
release, however, I cannot mount at all due to restricted IAM permissions.

s3...@googlecode.com

Aug 29, 2013, 4:57:56 AM
to s3fs-...@googlegroups.com
Updates:
Status: Done

Comment #19 on issue 153 by ggta...@gmail.com: IAM user permissions issue
http://code.google.com/p/s3fs/issues/detail?id=153

Hi,

Were you able to solve this problem?
I will close this issue because it concerns an old version and there have
been no replies for a while.

Some bugs are fixed in the latest version, so please use the latest
version. Please open a new issue if your problem still isn't fixed.

Thanks in advance for your help.
