Issue 329 in s3fs: Use s3fs mount amazon S3 bucket as FTP server backend

s3...@googlecode.com

Apr 9, 2013, 6:36:45 AM
to s3fs-...@googlegroups.com
Status: New
Owner: ----
Labels: Type-Support Priority-Medium

New issue 329 by monta...@gmail.com: Use s3fs mount amazon S3 bucket as FTP
server backend
http://code.google.com/p/s3fs/issues/detail?id=329

First, I mounted the S3 bucket named "buck" with the command:
s3fs -o allow_other -o default_acl="public-read" -o use_cache=/mnt/loopdev/ buck /mnt/ftp

Second, I set up an FTP server whose home directory is /mnt/ftp.
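
(For reference, a minimal configuration sketch for an FTP daemon such as
ProFTPD, the server used later in this thread; the directive values are
assumptions, since the actual config was not posted:)

# /etc/proftpd.conf (sketch)
DefaultRoot /mnt/ftp        # chroot FTP sessions into the s3fs mount
RequireValidShell off       # allow FTP-only accounts without a login shell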

The problem occurs when I download a file that exists in /mnt/ftp but is
not present in /mnt/loopdev/ (the cache folder).

I don't know whether the correct behavior is that, when I download a new
file that is not in the cache folder, s3fs sends a request to S3 to get the
file and puts it into the cache folder.




===================================================================
The following information is very important in order to help us to help
you. Omission of the following details may delay your support request or
cause it to receive no attention at all.
===================================================================
Version of s3fs being used (s3fs --version):

Version of fuse being used (pkg-config --modversion fuse):

System information (uname -a):

Distro (cat /etc/issue):

s3fs command line used (if applicable):

/etc/fstab entry (if applicable):

s3fs syslog messages (grep s3fs /var/log/syslog):


s3...@googlecode.com

Apr 10, 2013, 2:09:59 AM
to s3fs-...@googlegroups.com

Comment #1 on issue 329 by monta...@gmail.com: Use s3fs mount amazon S3
Sorry, I am adding the information below to clarify my situation.

#s3fs --version
Amazon Simple Storage Service File System 1.61
Copyright (C) 2010 Randy Rizun <rri...@gmail.com>
License GPL2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
# pkg-config --modversion fuse
2.8.6
# uname -a
Linux chttl-83e6aec4f47f2bdc 2.6.32-28-server #55-Ubuntu SMP Mon Jan 10
23:57:16 UTC 2011 x86_64 GNU/Linux
# cat /etc/issue
Ubuntu 10.04.2 LTS \n \l

Thx.

s3...@googlecode.com

Apr 10, 2013, 10:58:34 PM
to s3fs-...@googlegroups.com
Updates:
Status: Accepted

Comment #2 on issue 329 by ggta...@gmail.com: Use s3fs mount amazon S3
Hi

I don't know with which user's authority the FTP server accesses the
target file. So that I can learn the error details, please run the test
below if you can:

$ su - user <--- user is ftp process user or ftp user
$ cat /mnt/ftp/targetfile

If this issue is about file permissions, you will see the error message on
your display.

Regarding your second question: the cache file is not created in the cache
directory when you don't have permission to access the target file. (s3fs
checks permissions before creating/downloading the target file in the
cache directory.)
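
As an illustration, with the mount from the first message you could confirm
this behavior as follows (s3fs keeps cache files under <cache_dir>/<bucket>;
the user and file names here are examples):

$ sudo -u ftpuser cat /mnt/ftp/targetfile > /dev/null  # read through s3fs as the FTP user
$ ls /mnt/loopdev/buck/targetfile   # cache copy appears only if the read was permitted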

Regards,

s3...@googlecode.com

Apr 12, 2013, 6:18:15 AM
to s3fs-...@googlegroups.com

Comment #3 on issue 329 by monta...@gmail.com: Use s3fs mount amazon S3
Another question: I set up an FTP server with ProFTPD Version 1.3.2c.
The FTP home directory is /mnt/ftpup, which is mounted by s3fs
(Command: s3fs -o url=http://s3.hicloud.net.tw/ -o allow_other -o default_acl="public-read" caasftpup /mnt/ftpup)

When I used the FileZilla client to upload a small file (<50MB), it
succeeded. But when I upload a large file (>50MB) to the s3fs bucket
through FileZilla, the client resends the whole file again when the first
transfer finishes.

Thx.

s3...@googlecode.com

Apr 15, 2013, 1:47:06 AM
to s3fs-...@googlegroups.com

Comment #4 on issue 329 by ggta...@gmail.com: Use s3fs mount amazon S3
Hi, montaler

Are you getting an error like "permission denied" when you run the commands
manually? (see #2)
If you get an error message, please check your file (and directory)
permissions.

About the question in #3: I don't know the "url=http://s3.hicloud.net.tw/"
service; is it an S3 (or S3-compatible) service?
And FileZilla resends because it got an error from the server; is that
correct?

Please let me know what you want to do (or what s3fs should do).

Regards,

s3...@googlecode.com

Apr 18, 2013, 9:18:47 PM
to s3fs-...@googlegroups.com

Comment #5 on issue 329 by monta...@gmail.com: Use s3fs mount amazon S3
The s3.hicloud.net.tw service is an S3-compatible service. FileZilla
resends the file but does not get an error message.

What I want to do is set up an FTP server (ProFTPD) backed by s3fs, with
the FTP server's default home directory being the folder where the s3fs
bucket is mounted.

I found that when I upload a small file (<50MB) to the FTP server, it
succeeds, but a big file (>50MB) fails and the FTP server resends it.

Thx.

s3...@googlecode.com

Jun 4, 2013, 2:34:26 AM
to s3fs-...@googlegroups.com

Comment #6 on issue 329 by ggta...@gmail.com: Use s3fs mount amazon S3
Hi, montaler

I'm sorry for replying so late.
s3fs has been updated to v1.70, which fixes some bugs related to
connections and file descriptors.

Please try it and post the result.
And when you run s3fs, please run it manually with the "-d" option if you
can. With this option, s3fs puts more information in /var/log/messages.
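
For example, using the bucket and mount point from your first message (a
sketch):

# s3fs buck /mnt/ftp -o allow_other -d
# grep s3fs /var/log/messages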

Thanks in advance for your help.

s3...@googlecode.com

Aug 18, 2013, 8:47:45 AM
to s3fs-...@googlegroups.com

Comment #7 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
bucket as FTP server backend
http://code.google.com/p/s3fs/issues/detail?id=329

Hi,

I'm on s3fs-1.72 and am having issues with creating a user that uses the
s3 mount point as its home directory and then connecting with FTP.

mkdir backup
s3fs mybucket /backup/

It shows correctly in df -h; permissions are as follows:

drwxr-xr-x 1 root root 0 Jan 1 1970 backup

adduser -b /backup/ test
su test
bash: /backup//test/.bashrc: Permission denied

ls -la from /backup/test

-rw-r--r-- 1 test test 18 Feb 27 19:18 .bash_logout
-rw-r--r-- 1 test test 176 Feb 27 19:18 .bash_profile
-rw-r--r-- 1 test test 124 Feb 27 19:18 .bashrc
-rw-r--r-- 1 test test 121 Feb 27 19:23 .kshrc

Permissions of the files/directories all look good, but I can't su to the
user without an error, and FTP fails. If I create the user the same way but
specify a homedir anywhere that isn't on S3, FTP works fine.

/var/log/messages when connecting with FTP:

FTP session opened.
notice: unable to use '~/' [resolved to '/backup//test/']: Permission denied
Preparing to chroot to directory '~/'
test chdir("/"): No such file or directory
FTP session closed.

Any ideas?

Thanks.

s3...@googlecode.com

Aug 19, 2013, 3:09:36 AM
to s3fs-...@googlegroups.com

Comment #8 on issue 329 by ggta...@gmail.com: Use s3fs mount amazon S3
Hi, jardeath

I think this issue is a bug, which was fixed in r465.
s3fs v1.72 before r465 does not set the permission/owner/group for the
mount point.
In your case, I think the "/backup" directory (the mount point) should
have 0777 permission, but it has 0755.
s3fs after r465 probably solves this issue.
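
As a quick check, you can verify the mount point's permission before and
after mounting (GNU stat assumed):

$ stat -c '%a %U:%G' /backup   # compare against the expected 0777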

Please try a build after r465 and let me know the result.

Thanks in advance for your help.

s3...@googlecode.com

Aug 19, 2013, 4:22:03 AM
to s3fs-...@googlegroups.com

Comment #9 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
Thanks, I had already tried making /backup 0777, but I got the same error.

I've put r465 in place but am having the same problem, with both 0755 and
0777 (it wouldn't mount anymore with the default 0755, but it did work
fine with 0777).

drwxrwxrwx 1 root root 0 Jan 1 1970 backup

In /backup we have the new user dir I just made with "useradd -b /backup
test"

drwx------ 1 test test 0 Aug 19 08:11 test

When I try su to this user:

bash: /backup/test/.bashrc: Permission denied

FTP logs are the same as in my previous message.

Thanks, I am very keen to be able to FTP with a user account to a homedir
in s3. :)

s3...@googlecode.com

Aug 19, 2013, 4:29:26 AM
to s3fs-...@googlegroups.com

Comment #10 on issue 329 by ggta...@gmail.com: Use s3fs mount amazon S3
Hi,

It seems the permissions are fine, but I don't know why s3fs reports the
error (Permission denied).
If you can, please run s3fs manually with the "-f" (and "-d") options and
run the same commands again.
s3fs will then print many debugging messages to your display, which will
help us.

Thanks in advance for your help.

s3...@googlecode.com

Aug 19, 2013, 5:34:10 AM
to s3fs-...@googlegroups.com

Comment #11 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
Sure:

s3fs bucketname /backup/ -d -f
s3fs_check_service(2505): check services.
CheckBucket(1713): check a bucket.
RequestPerform(1017): connecting to URL
http://s3.amazonaws.com/bucketname
RequestPerform(1044): HTTP response code 301
s3fs_init(2450): init

It just hangs here and the mount is not there.

s3...@googlecode.com

Aug 19, 2013, 10:44:59 AM
to s3fs-...@googlegroups.com

Comment #12 on issue 329 by ggta...@gmail.com: Use s3fs mount amazon S3
Hi,

I'm sorry that I didn't explain in detail.
Please run s3fs with "&" (as a background process).
Then please open another terminal and run the commands (ls /backup/test;
ls /backup/test/.bashrc; etc.), as in the sketch below.
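
For example (a sketch; redirecting the debug output to a file is optional):

# s3fs mybucket /backup -f -d > /tmp/s3fs-debug.log 2>&1 &
# ls /backup/test
# ls /backup/test/.bashrc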

Thanks in advance for your help.


s3...@googlecode.com

Aug 19, 2013, 7:59:02 PM
to s3fs-...@googlegroups.com

Comment #14 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
Thanks, I ran this a while ago and posted my output, but the comment was
deleted; perhaps the text was too long? I've uploaded it instead at the
link below.

http://rootusers.com/output.txt

The commands I ran to produce this were:

ls /backup
ls /backup/test99
ls /backup/test99/.bashrc

s3...@googlecode.com

Aug 20, 2013, 1:44:47 AM
to s3fs-...@googlegroups.com

Comment #15 on issue 329 by ggta...@gmail.com: Use s3fs mount amazon S3
Hi

Thanks for the log file.
I looked at it, and it seems s3fs works without problems.

Do you get the results of those commands (ls) without error?
(Or did you get a Permission denied error?)
If you got permission errors, please run the ls commands as the root user.

Since s3fs probably succeeds in reading the permissions for these
directories/files, I think the objects' permissions do not allow the
command's user.

Sorry for taking up your time.

Regards,

s3...@googlecode.com

Aug 20, 2013, 2:23:26 AM
to s3fs-...@googlegroups.com

Comment #16 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
I am able to run the ls fine without error as root; however, if I do 'su
test99' and use this user:

[root@ip-172-31-20-138 backup]# su test99
s3fs_getattr(667): [path=/]
bash: /backup/test99/.bashrc: Permission denied

It's strange that this only happens on the s3fs mount; anywhere else works
fine, and I can replicate this.

/backup is 777 and /backup/test99 is still 700 and owned by test99:test99;
/backup/test99/.bashrc is also owned by the user, with 644 permissions.

s3...@googlecode.com

Aug 20, 2013, 2:35:31 AM
to s3fs-...@googlegroups.com

Comment #17 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
Additionally when I try to FTP, the debug logs show:

notice: unable to use '~/' [resolved to '/backup/test99/']: Permission
denied
unable to chdir to /backup/test99 (No such file or directory), defaulting
to chroot directory /
test99 chdir("/"): No such file or directory

The permissions look fine though, and are the same as for an account I
create with useradd on non-s3fs space, which works fine... Really strange.

s3...@googlecode.com

Aug 20, 2013, 3:03:08 AM
to s3fs-...@googlegroups.com

Comment #19 on issue 329 by ggta...@gmail.com: Use s3fs mount amazon S3
Hi

It is strange...
If you can, please check the metadata for the objects below:

1) "/test99/" (note: the object name ends with "/")
2) "/test99/.bashrc"

You can see these objects' metadata in "x-amz-meta-gid" and "x-amz-meta-uid".
You can use the S3 console to check object (2), but you cannot see object
(1)'s metadata in the S3 console. For that you have to use another tool;
one such tool is s3cmd.
(An example command line is "s3cmd info s3://<bucket>/test99/ -d"; please
don't forget "-d", as the metadata appears in the debugging messages.)

These results will probably help us.

Regards

s3...@googlecode.com

Aug 20, 2013, 4:17:54 AM
to s3fs-...@googlegroups.com

Comment #20 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
Hi,

Metadata for .bashrc is:

x-amz-meta-gid 505
x-amz-meta-mode 33188
x-amz-meta-mtime 1361992691
x-amz-meta-uid 504
Content-Type application/octet-stream

As for s3cmd, I've just installed and configured it, and I ran:

s3cmd info s3://<bucket>/test99/ -d

This shows:

'x-amz-meta-uid': '504'
'x-amz-meta-gid': '505'

If you need more information please let me know.

Thanks

s3...@googlecode.com

Aug 21, 2013, 9:43:59 PM
to s3fs-...@googlegroups.com

Comment #21 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
I've tried swapping from ProFTPD to vsftpd; it's definitely something to do
with the permissions, or with s3fs thinking there is an issue with the
permissions, as the permissions themselves look fine.

vsftpd shows on login: Response: 500 OOPS: cannot change
directory:/backup/test99

In /backup this is as follows:

drwx------ 1 test99 test99 0 Aug 22 00:53 test99

When I try to log in with the FTP client s3fs shows:

s3fs_getattr(667): [path=/]
s3fs_access(2498): [path=/][mask=X_OK ]
s3fs_access(2498): [path=/][mask=X_OK ]
s3fs_access(2498): [path=/][mask=X_OK ]
s3fs_access(2498): [path=/][mask=X_OK ]

Please let me know if you need any further information, I can help you set
up a test to replicate this if you like.

s3...@googlecode.com

Aug 21, 2013, 10:38:48 PM
to s3fs-...@googlegroups.com

Comment #22 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
I've made a bit of progress: instead of mounting with the s3fs command, I
added the following to /etc/fstab:

s3fs#mybucket /backup/ fuse allow_other,_netdev,nosuid,nodev,url=https://s3.amazonaws.com 0 0

Now when I FTP:

Status: Connection established, waiting for welcome message...
Response: 220 FTP Server ready.
Command: USER exampleuser2
Response: 331 Password required for exampleuser2
Command: PASS ******
Response: 230 User exampleuser2 logged in
Command: OPTS UTF8 ON
Response: 200 UTF8 set to on
Status: Connected
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is the current directory
Command: TYPE I
Response: 200 Type set to I
Command: PASV
Error: Connection timed out
Error: Failed to retrieve directory listing

So it's timing out there now after connecting. The connection DID
successfully create the /backup/exampleuser2 home directory this time
(that didn't happen before), so I think it's progress... Any ideas?

s3...@googlecode.com

Aug 21, 2013, 11:07:56 PM
to s3fs-...@googlegroups.com

Comment #23 on issue 329 by ggta...@gmail.com: Use s3fs mount amazon S3
Hi,

About #20 and #21: the permissions seem fine, but I could not see the
reason for this problem. One possibility is that the object's uid differs
from the user's uid, or that a communication error (e.g., a timeout) occurs
while s3fs gets the list of objects in that directory. If so, an error
message should appear via syslog (/var/log/messages).

About #22:
This log message says "Connection timed out"; was this message put out by
the FTP server? If s3fs got a communication error talking to S3, that
could cause this problem. If you can, please do not set the url option
when you run s3fs. (That is, please run s3fs over HTTP, not HTTPS.)
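
For example (the bucket name is an example; as noted above, omitting the
url option makes s3fs use plain HTTP):

# s3fs mybucket /backup -o allow_other,nosuid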

Best Regards,

s3...@googlecode.com

Aug 21, 2013, 11:40:07 PM
to s3fs-...@googlegroups.com

Comment #24 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
The message is from the FTP server, yes; it times out after 20 seconds, as
that is the default timeout value in my FTP client. Is there anything that
s3fs might be doing that could cause the delay? I tried changing the
timeout value to 300 seconds as a test, but the same thing happened: it
just timed out.

FTP server logs:

FTP session opened.
Preparing to chroot to directory '/backup/exampleuser2'
USER exampleuser2: Login successful.

After 4-5 minutes:

Client session idle timeout, disconnected.
FTP session closed.

Having the homedir on /home, which is not an S3 mount, works instantly, so
something must be causing the delay.

The FTP server logs show the connection establishing fine, just timing out
and closing.

I've changed the /etc/fstab entry to HTTP instead of HTTPS, but it has not
helped. Do you think anything else is needed in the fstab settings?

How can I test mounting the same way as fstab from the command line? I want
to be able to run -f and -d for more information, but the mount only seems
to work with allow_other,_netdev,nosuid,nodev. I was trying s3fs -o
allow_other,_netdev,nosuid,nodev,bucket=bucketname /backup based on
http://linux.die.net/man/1/s3fs but that's not working correctly.

Thanks.

s3...@googlecode.com

Aug 22, 2013, 1:01:31 AM
to s3fs-...@googlegroups.com

Comment #25 on issue 329 by ggta...@gmail.com: Use s3fs mount amazon S3
Hi,

When you ran s3fs with HTTP, did you run the commands in a terminal
manually? I would like to see that log, because I want to know whether
s3fs gets a timeout error.
(e.g., ls /backup/test99/.bashrc)

s3fs checks objects via list-bucket requests, so it sends more requests
than you might think. I therefore recommend that you specify the options
below if there are many objects in the directory (see the example mount
line after this list):
* max_stat_cache_size
* enable_noobj_cache
* use_cache
(Please see the "-h" option or the man page.)
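
A sketch of a mount line with these options (the values are examples, not
tuned recommendations):

# s3fs mybucket /backup -o allow_other,use_cache=/tmp,enable_noobj_cache,max_stat_cache_size=100000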

If you get any other error messages or logs, they will help us with this
issue.

Thanks in advance for your assistance.

s3...@googlecode.com

Aug 23, 2013, 12:53:14 AM
to s3fs-...@googlegroups.com

Comment #26 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
I tried HTTP with /etc/fstab. I also tried to run it manually with a
command as you suggested, as I showed in my last comment, but I wasn't
able to get the command syntax working correctly. Can you please provide
an example of what I should run? And how can I run it with the -o
use_cache option? It's been suggested to me to try using this.

s3...@googlecode.com

Aug 23, 2013, 11:44:41 AM
to s3fs-...@googlegroups.com

Comment #27 on issue 329 by ggta...@gmail.com: Use s3fs mount amazon S3
Hi,

Probably you can run s3fs with the options below:

# s3fs mybucket /backup -o allow_other,nosuid,use_cache=/tmp,url=https://s3.amazonaws.com,f2 -f -d

("f2", "-f", and "-d" are options for printing/logging debug messages;
specify them only if you need them.)

The use_cache option takes the directory path for saving cache files. The
example command line above means that cache files live in the
"/tmp/mybucket" directory.
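
To confirm the cache is being used, read a file through the mount and list
the cache directory (the file name is an example):

# cat /backup/somefile > /dev/null
# ls /tmp/mybucket/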

Regards,

s3...@googlecode.com

Aug 23, 2013, 8:05:39 PM
to s3fs-...@googlegroups.com

Comment #28 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
Thanks! That has helped a lot; I've been able to connect with the FTP
account now, using S3 as the home directory.

The only issue now is that when I upload a .txt file with some words in it
as a test, on the server it appears as
just "$^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^$"

When I download the file, it's a blank 0 KB file.

Here are the logs from just this single file upload by FTP to
/backup/exampleuser2:

http://rootusers.com/file_upload.txt

There are over 1000 lines of logs; is this expected for a single file
transfer? Can that be optimized? And how can I upload the file and
download it without it changing?

s3...@googlecode.com

Aug 23, 2013, 10:06:45 PM
to s3fs-...@googlegroups.com

Comment #29 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
bucket as FTP server backend
http://code.google.com/p/s3fs/issues/detail?id=329

Thanks! That has helped a lot; I've been able to connect with the FTP
account now, using S3 as the home directory.

The only issue now is that when I upload a .txt file with some words in it
as a test, on the server it appears as just ^@^@^@^@^@^@^@^@^@^@^@ when I
edit it with nano. When I use vi/vim it appears blank. When I use 'xxd' to
get a hex dump there is no output.

When I download the file through FTP after uploading it, it's a blank 0 KB
file.

The file was 15 bytes before it was uploaded, after upload it shows as 15
bytes on the server:

ls -la /backup/exampleuser2/tester.txt
-rw-r--r-- 1 ftpuser ftpgroup 15 Aug 24 00:11
/backup/exampleuser2/tester.txt

I can download the file normally through the S3 web console and it shows
the correct content, just not over s3fs.

s3...@googlecode.com

Aug 23, 2013, 11:13:29 PM
to s3fs-...@googlegroups.com

Comment #30 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
It seems I've fixed the above by doing the mount via /etc/fstab and
removing 'nodev'; at the moment I have:

s3fs#backupboxbucket /backup/ fuse allow_other,_netdev,nosuid,url=http://s3.amazonaws.com 0 0

And it appears to be working! Thanks for all the help, hopefully this can
help someone else.

s3...@googlecode.com

Aug 25, 2013, 11:01:33 PM
to s3fs-...@googlegroups.com

Comment #32 on issue 329 by ggta...@gmail.com: Use s3fs mount amazon S3
Hi,

About #29:
The latest s3fs opens a file without initializing it from a download; s3fs
gathers all the pieces and uploads them when the file is flushed. It seems
that s3fs uploaded the object before it had gathered all the pieces, or
before it had written them into the local cache file.

About #30:
I don't think unsetting the nodev option should influence the solution to
this problem. I am glad the issue is solved, but I don't understand the
reason.

About #29 again:
I looked at your log, and it seems that s3fs worked without problems.
On behalf of your FTP server, s3fs simply:
* checks the /exampleuser2/.ftpaccess file and its permissions
* checks whether /exampleuser2/test.txt exists
* creates the /exampleuser2/.in.test.txt. file (probably as a temporary file)
* renames /exampleuser2/.in.test.txt. to /exampleuser2/test.txt

So it seems that s3fs uploaded the 2-byte file completely.
As for decreasing requests: s3fs sends many requests to check permissions
and the existence of .ftpaccess and test.txt, and you can decrease these
requests by setting the enable_noobj_cache option.

So, should I close this issue?
If you have more information, please let me know.

Regards,

s3...@googlecode.com

Aug 26, 2013, 4:19:15 AM
to s3fs-...@googlegroups.com

Comment #33 on issue 329 by jarde...@gmail.com: Use s3fs mount amazon S3
With #29, I think it was an issue with how I had mounted it, as it works
fine now. I did upload and try leaving it (for up to 30 minutes) but this
didn't help, so I don't think it's the cache or something similar.

With #30, yeah, it seems to be working now. I'm just not sure that many
requests are required (doesn't S3 charge per request?). What can be done
to reduce the requests? I've already got use_cache in place.

I'll try enable_noobj_cache. How do I enable this? From what I can tell it
is a -o option in the mount, and just adding "enable_noobj_cache" is all
that's required.

Are there any downsides to adding this? Have you seen any problems with
caching too much of the data?

s3...@googlecode.com

Aug 26, 2013, 6:11:03 AM
to s3fs-...@googlegroups.com

Comment #34 on issue 329 by ggta...@gmail.com: Use s3fs mount amazon S3
Hi,

The enable_noobj_cache option is set like allow_other, for example: "-o
enable_noobj_cache".
If you set this option, the following case is a problem: the option caches
the fact that an object does not exist, so you cannot learn that the
object exists once it is uploaded anew. (This case only arises when
accessing the object directly before listing its directory.)
But you can lessen the impact of this problem by specifying
stat_cache_expire and max_stat_cache_size, since these options control the
cache life cycle.

For example, if you want to avoid sending many requests to get object
attributes, you can set the max_stat_cache_size option (e.g., "-o
max_stat_cache_size=100000").

And lastly, the problem of cache files growing in size was fixed (a sample
cleanup script was added) in v1.73. Please take a look.

Regards,