New issue 146 by semacken...@gmail.com: ls: reading directory .:
Input/output error
http://code.google.com/p/s3fs/issues/detail?id=146
What steps will reproduce the problem?
1. Install Ubuntu 10.04 from an Ubuntu AMI on EC2.
2. Follow the wiki for s3fs.
3. The problem was reproduced twice.
What is the expected output? What do you see instead?
The expected output after running /usr/bin/s3fs <insert unique s3 bucket name>
/mnt/s3 is that the mount works and files are accessible.
What happens instead is that a message shows up in tail -f /var/log/messages
("s3fs: init $Rev: 191 $"), and when one cd's into the s3/ directory and then
runs ls or tries to create a file, an error is presented:
root@host:/mnt/s3# ls -la
ls: reading directory .: Input/output error
total 0
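For reference, the setup followed the wiki roughly as sketched below; the
credentials and bucket name are placeholders, and the credentials file path is
the one the wiki describes:

# credentials file in the format accessKeyId:secretAccessKey (placeholder values)
echo "AKIAEXAMPLEKEY:exampleSecretKey" > /etc/passwd-s3fs
chmod 640 /etc/passwd-s3fs
# create the mount point and mount the bucket
mkdir -p /mnt/s3
/usr/bin/s3fs <insert unique s3 bucket name> /mnt/s3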
What version of the product are you using? On what operating system?
I am using the s3fs-1.35.tar.gz install on Ubuntu 10.04 LTS. I had to change
configure.ac to allow the installation; otherwise it errored out on the FUSE
2.8.4 requirement, since Ubuntu 10.04 LTS is patched and working effectively
on FUSE 2.8.1 and they will not change the version on the Ubuntu LTS image.
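The change was just relaxing the FUSE version check, roughly as follows (the
exact version string in configure.ac may differ by release):

# lower the required FUSE version from 2.8.4 to 2.8.1, then regenerate and rebuild
sed -i 's/fuse >= 2.8.4/fuse >= 2.8.1/' configure.ac
autoreconf --install
./configure
make
sudo make install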
Please provide any additional information below.
If you need any more information please let me know.
suntzu
Comment #1 on issue 146 by moore...@suncup.net: ls: reading directory .:
Input/output error
http://code.google.com/p/s3fs/issues/detail?id=146
s3fs-1.35 is not compatible with FUSE versions lower than 2.8.4.
This is not an s3fs problem.
Another obvious problem (and probably a bigger one) is that "$Rev: 191 $"
does not correspond to the 1.35 tarball. The revision for 1.35 in
src/s3fs.cpp is 304.
% which s3fs
% s3fs --version
should give additional clues.
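Checking which FUSE version is actually installed can also help, for example:

% pkg-config --modversion fuse
% fusermount --version
% dpkg -l | grep fuse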
I have installed Ubuntu 10.10 from an Ubuntu AMI on EC2, plus s3fs 1.35, and
I'm having a similar problem. Maybe it's related.
$ ls -la
d????????? ? ? ? ? ? mnt
$ tail -f /var/log/messages
init $Rev: 304 $
$ which s3fs
/usr/bin/s3fs
$ s3fs --version
Amazon Simple Storage Service File System 1.35
Copyright (C) 2010 Randy Rizun <rri...@gmail.com>
License GPL2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
I'm suspecting something with your /etc/fstab or command line invocation at
this point.
What is the line in your /etc/fstab or your command line?
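For comparison, a typical setup is something like the lines below (the bucket
name is a placeholder, and the s3fs#bucket fstab form is what older s3fs docs
describe, so treat it as a sketch):

# command line:
/usr/bin/s3fs mybucket /mnt/s3
# /etc/fstab:
s3fs#mybucket /mnt/s3 fuse defaults 0 0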
Is the directory that you are trying to mount onto empty? In what directory
did you run the "ls -la"? If in /, then the mnt directory is typically
not empty.
Try running s3fs in foreground mode from the command line with the -f
option; the messages can be helpful.
Do a grep for s3fs in /var/log/messages and /var/log/syslog.
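Concretely, something along these lines (bucket and mount point are
placeholders):

# unmount if already mounted, then run in the foreground so errors
# print directly to the terminal
fusermount -u /mnt/s3
/usr/bin/s3fs mybucket /mnt/s3 -f
# search both logs for s3fs messages
grep s3fs /var/log/messages /var/log/syslog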
I'm not denying that you are having an issue, but s3fs has been well tested
on EC2 using an Ubuntu 10.10 AMI.
I get this error with buckets that have names containing capital letters.
Despite being "allowed" to assign such names in the AWS console, s3cmd has
issues with them as well.
I am having the same problem. I am using
http://blog.eberly.org/2008/10/27/how-i-automated-my-backups-to-amazon-s3-using-rsync/
as a guide.
less /var/log/messages | grep s3fs
Feb 1 14:35:21 xxx-xxx s3fs: init $Rev: 304 $
less /var/log/syslog | grep s3fs
Feb 1 14:35:21 xxx-xxx s3fs: init $Rev: 304 $
$ s3fs --version
Amazon Simple Storage Service File System 1.35
Copyright (C) 2010 Randy Rizun <rri...@gmail.com>
License GPL2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ which s3fs
/usr/local/bin/s3fs
I also get this problem with s3fs 1.40/rev 312 and libfuse 2.8.4 on Ubuntu:
# ls -lh
ls: cannot access mysqlbackup.sh: Input/output error
total 0
?????????? ? ? ? ? ? mysqlbackup.sh
I suspect that some of this has to do with data generated by other S3
clients: directories uploaded via s3cmd with reduced redundancy (and possibly
other storage classes as well; I haven't tested that yet) aren't even showing
up, and the files I'm attempting to move into this directory are being moved
with s3cmd and were originally uploaded via s3cmd. The S3 buckets were mounted
with use_rrs=1 set.
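For reference, the mounts were done roughly like this (bucket and mount point
are placeholders):

# mount with reduced redundancy storage enabled for newly written objects
s3fs mybucket /mnt/s3 -o use_rrs=1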
Can data from other clients throw off s3fs this way, or am I barking up the
wrong tree?
Data created by other S3 clients probably will not be compatible with s3fs.
See issue #73, issue #21, and issue #31.
Ahhh, that makes sense about the different concepts of directories. So,
are there any workarounds, or is the best solution simply to re-upload the
data using rsync to my s3fs share?
There are workarounds, but they are tricky and easy to screw up. You'll need
to use something like JetS3t Cockpit to add the directory objects, or you
might be able to do it through the AWS console. The simplest solution is the
one you suggested, but it might not be very fast if you have a slow internet
connection or a lot of data.
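If you go the re-upload route, the straightforward way is to copy through the
s3fs mount itself, for example (paths are placeholders):

# copy local data into the mounted bucket; rsync recreates the directory
# tree through s3fs, which creates the directory objects s3fs expects
rsync -av /path/to/local/data/ /mnt/s3/data/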
Cool, I don't mind reuploading the data.... Thanks!