Cannot access the mount point as a user other than root / the person who mounted it


Sumana

Jul 29, 2011, 3:26:39 AM
to s3ql
Hello All,

I have a requirement.
I have mounted the S3QL file system as the root user, but now I want to
give another user read/write access to the file system.
I mounted with the --allow-other option, but a non-root user is still
not able to access the mount point.

I have also tried modifying the ACLs. That worked for other folders on
my system, but not for the S3QL mount point.
Running cd into the mount point gives a permission-denied error.

Please help!

Regards,
Sumana

William Blunn

Jul 29, 2011, 6:30:04 AM
to s3...@googlegroups.com
On 29/07/2011 08:26, Sumana wrote:
> I have a requirement.
> I have mounted the S3QL file system as the root user, but now I want to give another user read/write access to the file system.
> I mounted with the --allow-other option, but a non-root user is still not able to access the mount point.

As a sanity check, please provide, as the root user, the results of
"ls -ld" for each directory leading down into the S3QL filesystem.

So, say you had an S3QL volume mounted at /mnt/s3ql, and within that a
folder /mnt/s3ql/mydir, please provide the output of:

ls -ld /
ls -ld /mnt
ls -ld /mnt/s3ql
ls -ld /mnt/s3ql/mydir

Or, more succinctly:

ls -ld / /mnt /mnt/s3ql /mnt/s3ql/mydir

(The 'd' is necessary to make 'ls' list the directories themselves and
not the contents of the directories.)

Bill

Sumana

Jul 29, 2011, 6:49:46 AM
to s3ql
Hi,

As requested, here is the output of the different commands:

ls -ld /
drwxr-xr-x 29 root root 4096 Jul 25 10:06 /

ls -ld /mnt
drwxr-xr-x 23 root root 4096 Jul 27 15:51 /mnt

ls -ld /mnt/s3ql
drwxr-xr-x 1 root root 0 Apr 29 09:23 /mnt/test-50

ls -ld /mnt/s3ql/mydir
drwxr-xr-x 1 root root 0 Jul 29 16:18 /mnt/test-50/test

Please note I ran all these commands as the root user.

Thanks for the quick reply!
Sumana

Nikolaus Rath

Jul 29, 2011, 8:39:40 AM
to s3...@googlegroups.com
On 07/29/2011 03:26 AM, Sumana wrote:
> Hello All,
>
> I have a requirement.
> I have mounted the S3QL file system as the root user, but now I want to
> give another user read/write access to the file system.
> I mounted with the --allow-other option, but a non-root user is still
> not able to access the mount point.

What distribution, FUSE version, and kernel version are you using?

Normally this should not be required, but have you checked the settings
in /etc/fuse.conf?
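
(If that file exists, the setting that typically matters for
--allow-other is a line saying "user_allow_other", but with a root
mount this should not normally be needed.)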

Can you please post a complete example of the problem, with all inputs
and outputs, and the contents of mount.log and dmesg output?


Thanks,

-Nikolaus

--
"Time flies like an arrow, fruit flies like a Banana."

PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C

Sumana

Jul 29, 2011, 9:29:05 AM
to s3ql
Hi,

The versions I am using are:
s3ql - 0.30
fuse - 2.8.5
kernel - 2.6.18-238.el5

I did not find an /etc/fuse.conf file on my system. But I assure you
FUSE is installed and I can run the fusermount command :)

The problem I face is this: I have mounted the S3QL filesystem with the
--allow-other option as the root user, and I can perform all operations
on the filesystem.
I then created another user without any root privileges and tried to
access the mounted S3QL filesystem as that user.

Now, suppose my mounted filesystem is /mnt/s3ql and the new user is
user1.
When I try to execute "cd /mnt/s3ql" from user1's home directory, I get
a permission-denied error.
Similarly, I get a permission-denied error when I try to use the "cp"
command.

But I can list the contents of /mnt/s3ql, and the output looks like
this:
total 0
?--------- ? ? ? ? ? EOD.iso
?--------- ? ? ? ? ? Leopard.7z
?--------- ? ? ? ? ? Linear-run_S.rst
?---------+ ? ? ? ? ? lost+found
?--------- ? ? ? ? ? Project
Please note these directories and files were created by the root user.

Here are the contents of mount.log:

Fri, 29 Jul 2011 07:19:45 GMT
/s3ql/s3ql_passphrase
2011-07-29 12:49:45.298 [25930] MainThread: [boto] Method: HEAD
2011-07-29 12:49:45.298 [25930] MainThread: [boto] Path: /s3ql_passphrase
2011-07-29 12:49:45.299 [25930] MainThread: [boto] Data:
2011-07-29 12:49:45.299 [25930] MainThread: [boto] Headers: {'Date': 'Fri, 29 Jul 2011 07:19:45 GMT', 'Content-Length': '0', 'Authorization': 'AWS AKIAIW4KBHGSV6QD2VHA:rznHu/JkNzyaxfi7BtOGFK07DWw=', 'User-Agent': 'Boto/1.9b (linux2)'}
2011-07-29 12:49:45.299 [25930] MainThread: [boto] Host: s3ql.s3.amazonaws.com
2011-07-29 12:49:45.567 [25930] MainThread: [boto] Canonical: GET


Fri, 29 Jul 2011 07:19:45 GMT
/store-area/s3ql_passphrase
2011-07-29 12:49:45.568 [25930] MainThread: [boto] Method: GET
2011-07-29 12:49:45.568 [25930] MainThread: [boto] Path: /s3ql_passphrase
2011-07-29 12:49:45.568 [25930] MainThread: [boto] Data:
2011-07-29 12:49:45.569 [25930] MainThread: [boto] Headers: {'Date': 'Fri, 29 Jul 2011 07:19:45 GMT', 'Content-Length': '0', 'Authorization': 'AWS AKIAIW4KBHGSV6QD2VHA:sfZaeJh1Wd11GQc3A/Ffks17NCY=', 'User-Agent': 'Boto/1.9b (linux2)'}
2011-07-29 12:49:45.569 [25930] MainThread: [boto] Host: store-area.s3.amazonaws.com
2011-07-29 12:49:45.835 [25930] MainThread: [boto] Canonical: GET


Fri, 29 Jul 2011 07:19:45 GMT
/store-area/
2011-07-29 12:49:45.835 [25930] MainThread: [boto] Method: GET
2011-07-29 12:49:45.836 [25930] MainThread: [boto] Path: /?&max-keys=0
2011-07-29 12:49:45.836 [25930] MainThread: [boto] Data:
2011-07-29 12:49:45.836 [25930] MainThread: [boto] Headers: {'Date': 'Fri, 29 Jul 2011 07:19:45 GMT', 'Content-Length': '0', 'Authorization': 'AWS AKIAIW4KBHGSV6QD2VHA:pdHnDyxor6zs57KwzfMfcPOu0Hg=', 'User-Agent': 'Boto/1.9b (linux2)'}
2011-07-29 12:49:45.837 [25930] MainThread: [boto] Host: store-area.s3.amazonaws.com
2011-07-29 12:49:46.091 [25930] MainThread: [boto] <?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>store-area</Name><Prefix></Prefix><Marker></Marker><MaxKeys>0</MaxKeys><IsTruncated>false</IsTruncated></ListBucketResult>
2011-07-29 12:49:46.093 [25930] MainThread: [boto] Canonical: GET


Fri, 29 Jul 2011 07:19:46 GMT
/store-area/
2011-07-29 12:49:46.093 [25930] MainThread: [boto] Method: GET
2011-07-29 12:49:46.093 [25930] MainThread: [boto] Path: /?&prefix=s3ql_seq_no_
2011-07-29 12:49:46.094 [25930] MainThread: [boto] Data:
2011-07-29 12:49:46.094 [25930] MainThread: [boto] Headers: {'Date': 'Fri, 29 Jul 2011 07:19:46 GMT', 'Content-Length': '0', 'Authorization': 'AWS AKIAIW4KBHGSV6QD2VHA:+5YR2EoLHaxDnncIsqXOAuOZr5A=', 'User-Agent': 'Boto/1.9b (linux2)'}
2011-07-29 12:49:46.094 [25930] MainThread: [boto] Host: store-area.s3.amazonaws.com
2011-07-29 12:49:46.420 [25930] MainThread: [boto] <?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>store-area</Name><Prefix>s3ql_seq_no_</Prefix><Marker></Marker><MaxKeys>1000</MaxKeys><IsTruncated>false</IsTruncated><Contents><Key>s3ql_seq_no_0</Key><LastModified>2011-07-27T10:19:42.000Z</LastModified><ETag>&quot;23ab53fb3b9e3ee8f85aea18abc83065&quot;</ETag><Size>110</Size><Owner><ID>4e93f9fd34fd4626c75948c45bacdd4e35bf67a3676ce1f7f732fb6ea60e06d9</ID><DisplayName>jimschwaller</DisplayName></Owner><StorageClass>STANDARD</StorageClass></Contents><Contents><Key>s3ql_seq_no_1</Key><LastModified>2011-07-27T10:21:48.000Z</LastModified><ETag>&quot;6c9b6fa234caea5966bf3ef473b57669&quot;</ETag><Size>63</Size><Owner><ID>4e93f9fd34fd4626c75948c45bacdd4e35bf67a3676ce1f7f732fb6ea60e06d9</ID><DisplayName>jimschwaller</DisplayName></Owner><StorageClass>STANDARD</StorageClass></Contents><Contents><Key>s3ql_seq_no_2</Key><LastModified>2011-07-28T08:08:50.000Z</LastModified><ETag>&quot;74f30695855bad1b445b8f8cf08046a8&quot;</ETag><Size>110</Size><Owner><ID>4e93f9fd34fd4626c75948c45bacdd4e35bf67a3676ce1f7f732fb6ea60e06d9</ID><DisplayName>jimschwaller</DisplayName></Owner><StorageClass>STANDARD</StorageClass></Contents><Contents><Key>s3ql_seq_no_3</Key><LastModified>2011-07-28T09:00:44.000Z</LastModified><ETag>&quot;d8a292191d345cb4eb1cfc472173e765&quot;</ETag><Size>63</Size><Owner><ID>4e93f9fd34fd4626c75948c45bacdd4e35bf67a3676ce1f7f732fb6ea60e06d9</ID><DisplayName>jimschwaller</DisplayName></Owner><StorageClass>STANDARD</StorageClass></Contents><Contents><Key>s3ql_seq_no_4</Key><LastModified>2011-07-28T12:59:49.000Z</LastModified><ETag>&quot;4a95af4382c0b06339711a1100a92ee1&quot;</ETag><Size>63</Size><Owner><ID>4e93f9fd34fd4626c75948c45bacdd4e35bf67a3676ce1f7f732fb6ea60e06d9</ID><DisplayName>jimschwaller</DisplayName></Owner><StorageClass>STANDARD</StorageClass></Contents></ListBucketResult>
2011-07-29 12:49:46.423 [25930] MainThread: [mount] Using cached metadata.
2011-07-29 12:49:46.426 [25930] MainThread: [boto] Canonical: GET


Fri, 29 Jul 2011 07:19:46 GMT
/store-area/
2011-07-29 12:49:46.426 [25930] MainThread: [boto] Method: GET
2011-07-29 12:49:46.427 [25930] MainThread: [boto] Path: /?&max-keys=0
2011-07-29 12:49:46.427 [25930] MainThread: [boto] Data:
2011-07-29 12:49:46.427 [25930] MainThread: [boto] Headers: {'Date': 'Fri, 29 Jul 2011 07:19:46 GMT', 'Content-Length': '0', 'Authorization': 'AWS AKIAIW4KBHGSV6QD2VHA:+5YR2EoLHaxDnncIsqXOAuOZr5A=', 'User-Agent': 'Boto/1.9b (linux2)'}
2011-07-29 12:49:46.428 [25930] MainThread: [boto] Host: store-area.s3.amazonaws.com
2011-07-29 12:49:46.674 [25930] MainThread: [boto] <?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>store-area</Name><Prefix></Prefix><Marker></Marker><MaxKeys>0</MaxKeys><IsTruncated>false</IsTruncated></ListBucketResult>
2011-07-29 12:49:46.681 [25930] MainThread: [boto] Canonical: PUT
rUg/ycdm4mCbvJzsloVm6A==
application/octet-stream
Fri, 29 Jul 2011 07:19:46 GMT
x-amz-meta-compression:ZLIB
x-amz-meta-encryption:AES
x-amz-meta-meta:EVllnE5zM3FsX3NlcV9ub181V1O/5iL/i4CoKBfzIw72TDQZKAtlqG4XBknqso5F83lp8RaJ
/store-area/s3ql_seq_no_5
2011-07-29 12:49:46.682 [25930] MainThread: [boto] Method: PUT
2011-07-29 12:49:46.682 [25930] MainThread: [boto] Path: /s3ql_seq_no_5
2011-07-29 12:49:46.682 [25930] MainThread: [boto] Data:
2011-07-29 12:49:46.683 [25930] MainThread: [boto] Headers: {'x-amz-meta-meta': 'EVllnE5zM3FsX3NlcV9ub181V1O/5iL/i4CoKBfzIw72TDQZKAtlqG4XBknqso5F83lp8RaJ', 'x-amz-meta-compression': 'ZLIB', 'Content-Length': '63', 'x-amz-meta-encryption': 'AES', 'User-Agent': 'Boto/1.9b (linux2)', 'Expect': '100-Continue', 'Date': 'Fri, 29 Jul 2011 07:19:46 GMT', 'Content-MD5': 'rUg/ycdm4mCbvJzsloVm6A==', 'Content-Type': 'application/octet-stream', 'Authorization': 'AWS AKIAIW4KBHGSV6QD2VHA:uiPymUqMs0PS2wZv0TWg0Ig1yH4='}
2011-07-29 12:49:46.683 [25930] MainThread: [boto] Host: store-area.s3.amazonaws.com
2011-07-29 12:49:47.191 [25930] MainThread: [BlockCache] Initializing
2011-07-29 12:49:47.191 [25930] MainThread: [mount] Mounting filesystem...
2011-07-29 12:49:47.192 [25930] MainThread: [fuse] Initializing llfuse
2011-07-29 12:49:47.192 [25930] MainThread: [fuse] Calling fuse_mount
2011-07-29 12:49:47.222 [25930] MainThread: [fuse] Calling fuse_lowlevel_new
2011-07-29 12:49:47.222 [25930] MainThread: [fuse] Calling fuse_set_signal_handlers
2011-07-29 12:49:47.223 [25930] MainThread: [fuse] Calling fuse_session_add_chan
2011-07-29 12:49:47.227 [25935] MainThread: [daemonize] Daemonizing, new PID is 25936
2011-07-29 12:49:47.229 [25936] MainThread: [fuse] Calling fuse_session_loop_mt
2011-07-29 12:49:47.230 [25936] Metadata-Upload-Thread: [mount] MetadataUploadThread: start
2011-07-29 12:49:47.231 [25936] Dummy-3: [BlockCache] init: start
2011-07-29 12:49:47.232 [25936] CommitThread: [BlockCache] CommitThread: start
2011-07-29 12:49:47.232 [25936] Dummy-3: [BlockCache] init: end
2011-07-29 12:49:47.233 [25936] Inode Flush Thread: [fs] FlushThread: start
2011-07-29 16:09:55.390 [26371] MainThread: [common] Using backend credentials from /root/.s3ql/authinfo
2011-07-29 16:09:57.375 [26371] MainThread: [root] Bucket does not exist.
2011-07-29 16:10:07.134 [26374] MainThread: [root] Mountpoint does not exist.
2011-07-29 16:10:41.115 [26376] MainThread: [common] Using backend credentials from /root/.s3ql/authinfo
2011-07-29 16:10:45.110 [26376] MainThread: [mount] Using cached metadata.
2011-07-29 16:10:45.129 [26376] MainThread: [mount] Last file system check was more than 1 month ago, running fsck.s3ql is recommended.
2011-07-29 16:10:45.741 [26376] MainThread: [mount] Mounting filesystem...
2011-07-29 16:10:45.774 [26381] MainThread: [daemonize] Daemonizing, new PID is 26382

I am not sure whether this log output will actually help. The problem
seems to be with file permissions. Please note I have tried running
different commands to debug the error, so the log may not match the
exact description given.
Sorry, I didn't understand what the dmesg output is...

Regards,
Sumana

Nikolaus Rath

Jul 29, 2011, 10:07:27 AM
to s3...@googlegroups.com
On 07/29/2011 09:29 AM, Sumana wrote:
> Hi,
>
> The versions I am using are:
> s3ql - 0.30
> fuse - 2.8.5
> kernel - 2.6.18-238.el5

Your kernel is pretty old, but it should work (although with degraded
performance). But I meant which Linux distribution you are using
(Debian, Fedora, SuSE, CentOS, etc.).


> I did not find an /etc/fuse.conf file on my system. But I assure you
> FUSE is installed and I can run the fusermount command :)
>
> The problem I face is this: I have mounted the S3QL filesystem with the
> --allow-other option as the root user, and I can perform all operations
> on the filesystem.
> I then created another user without any root privileges and tried to
> access the mounted S3QL filesystem as that user.

[....]


> I am not sure if this log output would actually help.

No, not really :-).

> The problem
> seems to be with the file permissions.

No, it's not an issue with file permissions. This is some FUSE issue. Do
other FUSE file systems show the same behavior, e.g. sshfs?

> Sorry, I didn't understand what the dmesg output is...

Please run "dmesg | tail" after mount.s3ql and show the output.


Best,

Sumana

Jul 31, 2011, 11:18:22 PM
to s3ql
Hi,

Sorry for the delay in my response.

The distribution I am using is Red Hat Enterprise Linux Server 5.6.

I have not tried any other FUSE filesystems (S3QL is the first :)).

The output of the command "dmesg | tail" is:
bridge-eth0: enabling the bridge
bridge-eth0: up
tg3: eth0: Link is up at 1000 Mbps, full duplex.
tg3: eth0: Flow control is on for TX and on for RX.
bridge-eth0: disabling the bridge
bridge-eth0: down
bridge-eth0: enabling the bridge
bridge-eth0: up
tg3: eth0: Link is up at 1000 Mbps, full duplex.
tg3: eth0: Flow control is on for TX and on for RX.

Regards,
Sumana

Nikolaus Rath

Aug 1, 2011, 10:17:01 AM
to s3ql
Hi,

I don't see any reason why it should not work on your system. I recommend that you try sshfs or the example file system that comes with FUSE, so that we can check whether the problem is with S3QL or with FUSE.
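
For example (assuming you have SSH access to some other machine to test
against), something along these lines, and then check whether your
non-root user can enter the mount point:

mkdir /mnt/sshfs-test
sshfs -o allow_other someuser@somehost:/tmp /mnt/sshfs-test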

Best,
Niko
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


Sumana

Aug 2, 2011, 5:16:05 AM
to s3ql
Hi,

I was able to correct this problem.

As you suggested, it was indeed a problem on the FUSE side, and as I had
suspected, the file permissions were not correct.
The non-root user needs permission to access the mount point in order to
read/write data into that folder. This change had to be made in the
fuse.conf file.

This was rightly pointed out here -
http://xentek.net/articles/448/installing-fuse-s3fs-and-sshfs-on-ubuntu/

Thanks a lot for your help guys!

Regards,
Sumana

William Blunn

Aug 2, 2011, 5:35:40 AM
to s3...@googlegroups.com
On 02/08/2011 10:16, Sumana wrote:
> I was able to correct this problem.
>
> As you suggested, it was indeed a problem on the FUSE side, and as I had suspected, the file permissions were not correct.
> The non-root user needs permission to access the mount point in order to read/write data into that folder. This change had to be made in the fuse.conf file.
>
> This was rightly pointed out here -
> http://xentek.net/articles/448/installing-fuse-s3fs-and-sshfs-on-ubuntu/

Hello Sumana,

I would like to understand the problem you had so as to build knowledge
and experience.

Going back to your message of 29 July 2011:


> As requested, here is the output of the different commands:
>
> ls -ld /
> drwxr-xr-x 29 root root 4096 Jul 25 10:06 /
>
> ls -ld /mnt
> drwxr-xr-x 23 root root 4096 Jul 27 15:51 /mnt
>
> ls -ld /mnt/s3ql
> drwxr-xr-x 1 root root 0 Apr 29 09:23 /mnt/test-50
>
> ls -ld /mnt/s3ql/mydir
> drwxr-xr-x 1 root root 0 Jul 29 16:18 /mnt/test-50/test

Was it any of these directories which had incorrect permissions?

Or are you saying that it was the permissions of the directory
"underneath" the mount point?

Or something else?

Regards,

Bill

Sumana

Aug 2, 2011, 5:50:38 AM
to s3ql
Hello Bill,

> I would like to understand the problem you had so as to build knowledge
> and experience.
>
My problem was in fact a trivial one. I am new to Linux file systems,
so I didn't understand how file/folder permissions work for different
users.

> > ls -ld /mnt/s3ql
> > drwxr-xr-x 1 root root 0 Apr 29 09:23 /mnt/s3ql
>
The s3ql folder is the mount point where I am mounting the Amazon S3
storage.

> > ls -ld /mnt/s3ql/mydir
> > drwxr-xr-x 1 root root 0 Jul 29 16:18 /mnt/test-50/test
>
> Was it any of these directories which had incorrect permissions?
>
Yes, the non-root user needs access to the s3ql folder to read/write
data into it.
For example, in my case the S3QL file system was mounted by the root
user, and then I created user1 (non-root). I wanted user1 to be able to
access the mounted file system.
To do that, user1 needs access permissions on the folder.
I first tried to grant permissions to user1 by modifying the ACL on that
folder; that didn't work (I still don't know why).
So I added the user to the root group and was then able to access it (as
you can see, the root group is the owning group of the folder).

> Or are you saying that it was the permissions of the directory
> "underneath" the mount point?
>
> Or something else?
>
No

Once again, thanks for pointing me in the right direction.

Regards,
Sumana

Nikolaus Rath

Aug 2, 2011, 10:37:29 AM
to s3ql
Hi Sumana,

I'm afraid adding the user to the root group did not have any effect at all, since you already had rx permissions for everyone on the folder. It'd be better to undo that.

The important change is to add user_allow_other to fuse.conf, as I said in my first mail.
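
Roughly, assuming the default location of fuse.conf and your mount point
from earlier (run as root, and unmount/remount so the change takes
effect):

echo user_allow_other >> /etc/fuse.conf
umount.s3ql /mnt/s3ql
mount.s3ql --allow-other <storage-url> /mnt/s3ql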


Best,
Niko
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


Sumana

Aug 2, 2011, 11:55:26 PM
to s3ql
Hi Nikolaus,

You were right again!
I made the change you suggested :) Thanks!!!

I have another query. I am able to transfer data from my system to
Amazon S3, but the maximum file size it supports is only about 2 GB; I
get an input/output error after that. Is this a hardware limitation,
or do I need to configure something?

Regards,
Sumana

Nikolaus Rath

Aug 3, 2011, 9:50:38 AM
to s3...@googlegroups.com
Hi,

Does it work with other file systems on this computer?

Does it work with the fuse example fs?

How did you try to create the file? Please try it with dd if=/dev/zero


Best,
Niko
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


Sumana

Aug 4, 2011, 5:47:17 AM
to s3ql
Hi,

> Does it work with other file systems on this computer?
>
Yes, it works with other file systems like NFS and FAT32, but I am not
sure about FUSE file systems.

> Does it work with the fuse example fs?
>
I tried installing s3fs to check, but I ran into the same problem even
when I used the "ls" command.

> How did you try to create the file? Please try it with dd if=/dev/zero
>
I executed the command dd if=/dev/zero and here is the output:
33009835+0 records in
33009834+0 records out
16901035008 bytes (17 GB) copied, 2313.22 seconds, 7.3 MB/s

Please note I manually stopped the execution.

Regards,
Sumana

Sumana

Aug 4, 2011, 5:53:07 AM
to s3ql
I also tried this:
dd if=/dev/zero of=/mnt/s3ql/foobar count=10240000 bs=1024

This stopped after transferring 2.2 GB of data.
Please note /mnt/s3ql is the mount point where the S3QL file system has
been mounted.

Thanks,
Sumana

Nikolaus Rath

Aug 4, 2011, 3:11:24 PM
to s3...@googlegroups.com
Quoting Sumana <sasu...@gmail.com>:
>> Does it work with the fuse example fs?
>>
> I tried installing s3fs to check, but I ran into the same problem even
> when I used the "ls" command.

What is "the same problem"? If you want help, you need to give information.

>
>> How did you try to create the file? Please try it with dd if=/dev/zero
>>
> I executed the command dd if=/dev/zero and here is the output:
> 33009835+0 records in
> 33009834+0 records out
> 16901035008 bytes (17 GB) copied, 2313.22 seconds, 7.3 MB/s
>
> Please note I manually stopped the execution.

Ok, so your system supports large files. The culprit is either FUSE or S3QL.

Nikolaus Rath

Aug 4, 2011, 3:13:20 PM
to s3...@googlegroups.com
Quoting Sumana <sasu...@gmail.com>:

Please give details. What do you mean by "it stopped"? Please include
the complete input and output.


Tip: following http://www.catb.org/~esr/faqs/smart-questions.html
would make this much easier for the people trying to help you.

Sumana

Aug 5, 2011, 2:00:24 AM
to s3ql
Hi,

> What is "the same problem"? If you want help, you need to give information.
>
Sorry about that. Since it was a reply, I did not repeat the problem I
was facing. I shall make sure to state the problem every time I reply.
Sorry again!

> Please give details. What do you mean by "it stopped"? Please include
> the complete input and output.
>
The command I executed is:
dd if=/dev/zero of=/mnt/s3ql/foobar count=10240000 bs=1024
The output is:
dd: writing `/mnt/s3ql/foobar': Input/output error
2193641+0 records in
2193640+0 records out
2246287360 bytes (2.2 GB) copied, 3338.15 seconds, 673 kB/s

Please note /mnt/s3ql is the folder where I have mounted the S3QL
filesystem.

The problem I am facing is this:
I am able to transfer data from my system to Amazon S3, but the maximum
file size it supports is only about 2 GB. When I try to transfer a file
larger than 2 GB (say 5 GB), it transfers only 2 GB and then gives an
input/output error.

Also,
>>> How did you try to create the file? Please try it with dd if=/dev/zero
>> I executed the command dd if=/dev/zero and here is the output:
>> 33009835+0 records in
>> 33009834+0 records out
>> 16901035008 bytes (17 GB) copied, 2313.22 seconds, 7.3 MB/s
>> Please note I manually stopped the execution.
>
> Ok, so your system supports large files. The culprit is either FUSE or S3QL.
>
Please help me resolve this.

Regards,
Sumana

Nikolaus Rath

Aug 5, 2011, 9:13:32 AM
to s3...@googlegroups.com
On 08/05/2011 02:00 AM, Sumana wrote:
> Hi,
>
>> What is "the same problem"? If you want help, you need to give information.
>>
> Sorry about that. Since it was a reply, I did not repeat the problem I
> was facing. I shall make sure to state the problem every time I reply.
> Sorry again!

So what *is* the problem that you have when trying the fuse example fs?
You managed to once again omit that information.


>> Please give details. What do you mean by "it stopped"? Please include
>> the complete input and output.
>>
> The command I executed is:
> dd if=/dev/zero of=/mnt/s3ql/foobar count=10240000 bs=1024
> The output is:
> dd: writing `/mnt/s3ql/foobar': Input/output error
> 2193641+0 records in
> 2193640+0 records out
> 2246287360 bytes (2.2 GB) copied, 3338.15 seconds, 673 kB/s
>
> Please note /mnt/s3ql is the folder where I have mounted the S3QL
> filesystem.

What are the contents of ~/.s3ql/mount.log right after you get this error?
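(The last 50 lines or so should be enough, e.g. something like
"tail -n 50 ~/.s3ql/mount.log" right after dd fails.)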

Sumana

Aug 8, 2011, 2:57:47 AM
to s3ql
Hi,

> > The command I executed is:
> > dd if=/dev/zero of=/mnt/s3ql/foobar count=10240000 bs=1024
> > The output is:
> > dd: writing `/mnt/s3ql/foobar': Input/output error
> > 2193641+0 records in
> > 2193640+0 records out
> > 2246287360 bytes (2.2 GB) copied, 3338.15 seconds, 673 kB/s
>
> > Please note /mnt/s3ql is the folder where I have mounted the S3QL
> > filesystem.
>
> What are the contents of ~/.s3ql/mount.log right after you get this error?
>
...
2011-08-08 11:11:00.131 [15686] Dummy-6: [BlockCache] get(inode=1905962477, block=21): end
2011-08-08 11:11:00.131 [15686] Dummy-3: [BlockCache] get(inode=1905962477, block=21): start
2011-08-08 11:11:00.131 [15686] Dummy-3: [BlockCache] get(inode=1905962477, block=21): in cache
2011-08-08 11:11:00.132 [15686] Dummy-3: [BlockCache] get(inode=1905962477, block=21): yield
2011-08-08 11:11:00.175 [15686] Dummy-3: [fuse] operations.write() raised exception.
Traceback (most recent call last):
  File "handlers.pxi", line 314, in llfuse.fuse_write (src/llfuse.c:8615)
  File "handlers.pxi", line 314, in llfuse.fuse_write (src/llfuse.c:8547)
  File "/usr/lib/python2.6/site-packages/s3ql-0.30-py2.6.egg/s3ql/fs.py", line 912, in write
    written = self._write(fh, offset, buf)
  File "/usr/lib/python2.6/site-packages/s3ql-0.30-py2.6.egg/s3ql/fs.py", line 949, in _write
    fh.write(buf)
  File "/usr/lib/python2.6/contextlib.py", line 23, in __exit__
    self.gen.next()
  File "/usr/lib/python2.6/site-packages/s3ql-0.30-py2.6.egg/s3ql/block_cache.py", line 268, in get
    el.flush()
IOError: [Errno 28] No space left on device
2011-08-08 11:11:00.323 [15686] Dummy-3: [fs] Unexpected internal filesystem error.
Filesystem may be corrupted, run fsck.s3ql as soon as possible!
Please report this bug on http://code.google.com/p/s3ql/.
2011-08-08 11:11:13.574 [15686] CommitThread: [UploadManager] UploadManager.add(<CacheEntry, inode=1905962477, blockno=21, dirty=True, obj_id=None>): start
2011-08-08 11:11:14.662 [15686] CommitThread: [UploadManager] UploadManager(inode=1905962477, blockno=21): hashed 50069504 bytes in 1.088 seconds, 43.89 MB/s
2011-08-08 11:11:14.680 [15686] CommitThread: [UploadManager] add(inode=1905962477, blockno=21): created new object 56
2011-08-08 11:11:14.680 [15686] CommitThread: [UploadManager] add(inode=1905962477, blockno=21): no previous object
2011-08-08 11:11:14.680 [15686] CommitThread: [UploadManager] add(inode=1905962477, blockno=21): starting compression thread
2011-08-08 11:11:14.681 [15686] CommitThread: [UploadManager] add(inode=1905962477, blockno=21): end
2011-08-08 11:11:16.993 [15686] Thread-26: [UploadManager] CompressionThread(inode=1905962477, blockno=21): compressed 50069504 bytes in 2.311 seconds, 20.66 MB/s
2011-08-08 11:11:16.993 [15686] Thread-26: [UploadManager] CompressThread(<CacheEntry, inode=1905962477, blockno=21, dirty=True, obj_id=56L>): starting upload thread
2011-08-08 11:11:16.995 [15686] Thread-27: [boto] Canonical: GET


Mon, 08 Aug 2011 05:41:16 GMT
/store-area/
2011-08-08 11:11:16.995 [15686] Thread-27: [boto] Method: GET
2011-08-08 11:11:16.995 [15686] Thread-27: [boto] Path: /?&max-keys=0
2011-08-08 11:11:16.996 [15686] Thread-27: [boto] Data:
2011-08-08 11:11:16.996 [15686] Thread-27: [boto] Headers: {'Date': 'Mon, 08 Aug 2011 05:41:16 GMT', 'Content-Length': '0', 'Authorization': 'AWS AKIAIW4KBHGSV6QD2VHA:sOlp3dx5by+b+dd4xl3jLmMkYnQ=', 'User-Agent': 'Boto/1.9b (linux2)'}
2011-08-08 11:11:16.996 [15686] Thread-27: [boto] Host: store-area.s3.amazonaws.com
2011-08-08 11:11:16.997 [15686] Thread-27: [boto] Bad status line: '', retrying..
2011-08-08 11:11:16.997 [15686] Thread-26: [thread_group] thread: waiting for lock
2011-08-08 11:11:16.998 [15686] Thread-27: [boto] establishing HTTP connection
2011-08-08 11:11:16.998 [15686] Thread-26: [thread_group] thread: calling notify()
2011-08-08 11:11:19.680 [15686] Thread-27: [boto] <?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>store-area</Name><Prefix></Prefix><Marker></Marker><MaxKeys>0</MaxKeys><IsTruncated>false</IsTruncated></ListBucketResult>
2011-08-08 11:11:19.682 [15686] Thread-27: [boto] Canonical: PUT
j7jAnOhDC/6FD8alrA7eNw==
application/octet-stream
Mon, 08 Aug 2011 05:41:19 GMT
x-amz-meta-compression:ZLIB
x-amz-meta-encryption:AES
x-amz-meta-meta:EIl/nE5zM3FsX2RhdGFfNTY1cDpTWRzEXOCIz7rg6zxA0oUbFPYX6liG7QYBl3wfSTtifGE=
/store-area/s3ql_data_56
2011-08-08 11:11:19.683 [15686] Thread-27: [boto] Method: PUT
2011-08-08 11:11:19.683 [15686] Thread-27: [boto] Path: /s3ql_data_56
2011-08-08 11:11:19.683 [15686] Thread-27: [boto] Data:
2011-08-08 11:11:19.684 [15686] Thread-27: [boto] Headers: {'x-amz-meta-meta': 'EIl/nE5zM3FsX2RhdGFfNTY1cDpTWRzEXOCIz7rg6zxA0oUbFPYX6liG7QYBl3wfSTtifGE=', 'x-amz-meta-compression': 'ZLIB', 'Content-Length': '48726', 'x-amz-meta-encryption': 'AES', 'User-Agent': 'Boto/1.9b (linux2)', 'Expect': '100-Continue', 'Date': 'Mon, 08 Aug 2011 05:41:19 GMT', 'Content-MD5': 'j7jAnOhDC/6FD8alrA7eNw==', 'Content-Type': 'application/octet-stream', 'Authorization': 'AWS AKIAIW4KBHGSV6QD2VHA:VeLePL4/rJsOJF/yonYiQoI5bHc='}
2011-08-08 11:11:19.684 [15686] Thread-27: [boto] Host: store-area.s3.amazonaws.com
2011-08-08 11:11:21.273 [15686] Thread-27: [UploadManager] CompressionThread(inode=1905962477, blockno=21): compressed 48726 bytes in 4.278 seconds, 0.01 MB/s
2011-08-08 11:11:21.273 [15686] Thread-27: [thread_group] thread: waiting for lock
2011-08-08 11:11:21.274 [15686] Thread-27: [thread_group] thread: calling notify()

Please note the file system has enough space.

Regards,
Sumana

Nikolaus Rath

Aug 8, 2011, 8:48:21 AM
to s3ql
Hi,

You do not seem to have enough space left in the file system that holds the block cache (default ~/.s3ql).

Could you check that?
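
For example, something like:

df -h ~/.s3ql

should show how much space is free on the file system that holds the
cache directory.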


Best,
Niko
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
