Issue 371 in s3fs: ftruncate failed

s3...@googlecode.com

Sep 5, 2013, 7:15:11 AM
to s3fs-...@googlegroups.com
Status: New
Owner: ----
Labels: Type-Defect Priority-Medium

New issue 371 by i...@red-paint.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Detailed description of observed behavior:

Can't upload large files using RSYNC

What steps will reproduce the problem - please be very specific and
detailed. (if the developers cannot reproduce the issue, then it is
unlikely a fix will be found)?

I'm running the following command:
# rsync -v -r --size-only --inplace --progress --stats --whole-file
/var/dumps/backup_201309050207.tar /mnt/s3drive
sending incremental file list
backup_201309050207.tar
19534940160 100% 29.22MB/s 0:10:37 (xfer#1, to-check=0/1)
rsync: ftruncate failed on "/mnt/s3drive/backup_201309050207.tar":
Input/output error (5)

The file is 19GB in size, and I've tried using cp too, without any joy.
Everything worked fine last month, but something has changed, and even
after updating s3fs to the latest version it is not successfully
transferring/completing large file uploads.


===================================================================
The following information is very important in order to help us to help
you. Omitting any of the following details may delay your support request
or cause it to receive no attention at all.
===================================================================
Version of s3fs being used (s3fs --version): 1.73

Version of fuse being used (pkg-config --modversion fuse): 2.9.3

System information (uname -a): Linux xxx 3.2.46-53.art.x86_64 #1 SMP Fri
Jun 7 09:16:28 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux

Distro (cat /etc/issue): CentOS release 6.4 (Final)

s3fs command line used (if applicable):

/etc/fstab entry (if applicable):

s3fs syslog messages (grep s3fs /var/log/syslog): Sep 5 11:22:48 xxx s3fs:
init $Rev: 474 $

I'd be grateful for any advice on what might be going awry - everything was
working fine using an old version of s3fs until a few weeks ago.

Thanks




s3...@googlecode.com

Sep 5, 2013, 1:11:17 PM
to s3fs-...@googlegroups.com

Comment #1 on issue 371 by i...@red-paint.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

The full output I get when this fails is as below:

#rsync -v --size-only --inplace --progress --omit-dir-times --stats
--whole-file /var/dumps/backup_201309041403.tar /mnt/s3drive
pleskbackup_201309041403.tar
19727329280 100% 29.82MB/s 0:10:30 (xfer#1, to-check=0/1)
rsync: ftruncate failed on "/mnt/s3drive/backup_201309041403.tar":
Input/output error (5)
rsync: close failed on "/mnt/s3drive/": Operation not permitted (1)
rsync error: error in file IO (code 11) at receiver.c(730) [receiver=3.0.6]
rsync: connection unexpectedly closed (29 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600)
[sender=3.0.6]

Any advice or assistance would be very much appreciated.

s3...@googlecode.com

Sep 17, 2013, 3:22:58 AM
to s3fs-...@googlegroups.com

Comment #3 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi,

I'm sorry for the late reply.

I would like to know which s3fs version ran without problems on your
system.
(I changed some code related to the cache and file descriptors, and that
code may have a bug.)

And please let me know about the following.

* /mnt/s3drive/ permissions
The "close failed on "/mnt/s3drive/": Operation not permitted (1)" message
is printed by rsync; I would like to know the reason for it.

* about the ftruncate error
"ftruncate failed on "/mnt/s3drive/backup_201309041403.tar": Input/output
error (5)" means some error occurred while truncating the file.
The EIO error is possibly returned by s3fs.
So, if you can, please run s3fs with "-f" or "-d" (or, for example, -of2 or
-ocurldbg) and see whether any error related to EIO is displayed.

* disk space/memory
Since you do not specify "use_cache", s3fs uses a system temporary file
created by the tmpfile() function.
I think that possibly the system does not have enough space for 19GB.
(But since you said it worked fine a few weeks ago, this is probably not
the cause.)

I think this information will help us solve the problem.

Thanks in advance for your help.

s3...@googlecode.com

Sep 17, 2013, 12:37:35 PM
to s3fs-...@googlegroups.com

Comment #4 on issue 371 by i...@red-paint.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hello,

Thanks for the reply - the previous version we were running was quite old,
and it stopped working about a month ago, prompting us to upgrade. We
couldn't update some of the dependencies (fuse) via yum, so we needed to
install them from source. We were using s3fs 1.61.

# /mnt/s3drive/ permissions
drwx------ 1 root root 0 Jan 1 1970 s3drive

This is the output after running the commands in a shell:


# /usr/bin/s3fs XXX /mnt/s3drive -ocurldbg
* About to connect() to XXX.s3.amazonaws.com port 80 (#0)
* Trying 178.236.6.193... * connected
* Connected to XXX.s3.amazonaws.com (178.236.6.193) port 80 (#0)
> GET / HTTP/1.1
Host: XXX.s3.amazonaws.com
Accept: */*
Authorization: AWS XXX
Date: Tue, 17 Sep 2013 10:48:26 GMT

< HTTP/1.1 200 OK
< x-amz-id-2: XXX
< x-amz-request-id: XXX
< Date: Tue, 17 Sep 2013 10:48:32 GMT
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Server: AmazonS3
<
* Connection #0 to host XXX.s3.amazonaws.com left intact
* Closing connection #0


# rsync -vvv --size-only --inplace --progress --omit-dir-times --stats
--whole-file --no-p /var/lib/dumps/backup_201309170207.tar /mnt/s3drive
[sender] make_file(pleskbackup_201309170207.tar,*,0)
send_file_list done
send_files starting
server_recv(2) starting pid=1392
received 1 names
recv_file_list done
get_local_name count=1 /mnt/s3drive
generator starting pid=1392
delta-transmission disabled for local transfer or --whole-file
recv_generator(backup_201309170207.tar,0)
send_files(0, /var/lib/dumps/backup_201309170207.tar)
send_files mapped /var/lib/dumps/backup_201309170207.tar of size 19829237760
calling match_sums /var/lib/dumps/backup_201309170207.tar
backup_201309170207.tar
19820511232 99% 37.15MB/s 0:00:00
sending file_sum
false_alarms=0 hash_hits=0 matches=0
19829237760 100% 32.17MB/s 0:09:47 (xfer#1, to-check=0/1)
sender finished /var/lib/dumps/backup_201309170207.tar
recv_files(1) starting
send_files phase=1
generate_files phase=1
recv_files(backup_201309170207.tar)
rsync: ftruncate failed on "/mnt/s3drive/backup_201309170207.tar":
Input/output error (5)
got file_sum
rsync: close failed on "/mnt/s3drive/": Input/output error (5)
rsync error: error in file IO (code 11) at receiver.c(730) [receiver=3.0.6]
_exit_cleanup(code=11, file=receiver.c, line=730): about to call exit(11)
rsync: connection unexpectedly closed (29 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600)
[sender=3.0.6]
_exit_cleanup(code=12, file=io.c, line=600): about to call exit(12)

We've got about 500GB of free storage space so I don't *think* that's the
problem unless there is an issue writing to the tmp directory.

s3...@googlecode.com

Sep 19, 2013, 5:30:57 AM
to s3fs-...@googlegroups.com
Updates:
Status: Accepted

Comment #6 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi,

I'm sorry; this problem is simply a bug.
I have checked in a new revision, r486, which fixes this issue (bug).

The problem is that the ftruncate callback in s3fs returns an error even
though the operation succeeds.
This is because the prototype of the function called in s3fs_truncate()
(the Load function) was misunderstood.
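
(For readers following along, here is a minimal, hypothetical sketch of
that kind of bug -- it is not the actual s3fs code, and the function names
are made up. A FUSE truncate handler must return 0 on success and a
negative errno on failure, so misreading a helper whose prototype returns a
byte count can turn a successful truncate into EIO:)

#include <cerrno>
#include <sys/types.h>

// Stub standing in for the real download/staging helper; by its prototype
// it returns the number of bytes loaded, or a negative value on failure.
static ssize_t load_object(const char* /*path*/, off_t size) {
    return static_cast<ssize_t>(size);   // pretend everything was loaded
}

static int s3fs_truncate_sketch(const char* path, off_t size) {
    ssize_t result = load_object(path, size);

    // Buggy pattern: assumes load_object() returns 0 on success, so a
    // positive byte count (success) gets reported to FUSE as an error:
    //   if (0 != result) { return -EIO; }

    // Correct pattern for this prototype: only negative values are errors.
    if (result < 0) {
        return -EIO;
    }
    return 0;   // 0 tells FUSE the truncate succeeded
}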

Please check the new revision (r486), and if you have another problem
please let me know, because I could not test with a large object yet.

Thanks in advance for your help.


s3...@googlecode.com

Sep 19, 2013, 8:26:33 AM
to s3fs-...@googlegroups.com

Comment #7 on issue 371 by i...@red-paint.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hello,

Thanks for the update.

I edited the revised file, verified permissions then unmounted and
remounted the s3drive folder.

This is what I'm getting now:

#rsync -vvv --size-only --inplace --progress --omit-dir-times --stats
--whole-file --no-p /var/lib/dumps/backup_201309170207.tar /mnt/s3drive

[sender] make_file(backup_201309170207.tar,*,0)
send_file_list done
send_files starting
server_recv(2) starting pid=14503
received 1 names
recv_file_list done
get_local_name count=1 /mnt/s3drive
generator starting pid=14503
delta-transmission disabled for local transfer or --whole-file
recv_generator(backup_201309170207.tar,0)
send_files(0, /var/lib/dumps/backup_201309170207.tar)
send_files mapped /var/lib/dumps/backup_201309170207.tar of size 19829237760
calling match_sums /var/lib/dumps/backup_201309170207.tar
pleskbackup_201309170207.tar
19805700096 99% 39.09MB/s 0:00:00
sending file_sum
false_alarms=0 hash_hits=0 matches=0
19829237760 100% 33.76MB/s 0:09:20 (xfer#1, to-check=0/1)
sender finished /var/lib/dumps/backup_201309170207.tar
send_files phase=1
recv_files(1) starting
generate_files phase=1
recv_files(backup_201309170207.tar)
rsync: ftruncate failed on "/mnt/s3drive/backup_201309170207.tar":
Input/output error (5)
got file_sum
rsync: close failed on "/mnt/s3drive/": Operation not permitted (1)
rsync error: error in file IO (code 11) at receiver.c(730) [receiver=3.0.6]
_exit_cleanup(code=11, file=receiver.c, line=730): about to call exit(11)
rsync: connection unexpectedly closed (29 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600)
[sender=3.0.6]
_exit_cleanup(code=12, file=io.c, line=600): about to call exit(12)

Is there something else I should be doing to ensure the revision above
takes effect (aside from unmounting and re-mounting)?

Thanks

s3...@googlecode.com

Sep 19, 2013, 9:52:39 PM
to s3fs-...@googlegroups.com

Comment #8 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi,

Thanks for testing the new revision.

It seems that you still get the same error: EIO from the ftruncate
operation.
r486 fixed the wrong code in the s3fs_truncate() function, but this result
must have another cause.

If you can, please run s3fs with the "-f" option; it prints many debug
messages and runs s3fs in the foreground.
The debug messages will help us solve this issue.

Finally, I am worried about the temporary file.
Because you run s3fs without the "use_cache" option, s3fs creates a
temporary file with the tmpfile() function when it downloads/uploads
objects.
Since your object is large, about 20GB, s3fs makes a temporary file for
this object under your /tmp directory.
(The tmpfile() function creates files under P_tmpdir, which is defined in
stdio.h, and that is probably /tmp.)
So possibly ftruncate failed due to a lack of space there.
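
(A quick way to see where that staging data goes -- a minimal sketch, not
part of s3fs; it assumes a glibc system where P_tmpdir from <stdio.h> is
"/tmp". The filesystem holding P_tmpdir, not the S3 mount point, is the one
that needs roughly the whole object's size free:)

#include <cstdio>

int main() {
    std::printf("tmpfile() directory (P_tmpdir): %s\n", P_tmpdir);
    std::FILE* fp = std::tmpfile();   // anonymous (unlinked) file under P_tmpdir
    if (fp == nullptr) {
        std::perror("tmpfile");       // e.g. ENOSPC when that space runs out
        return 1;
    }
    std::fclose(fp);
    return 0;
}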

If you can, try running s3fs with the "use_cache" option.

Thanks a lot.

s3...@googlecode.com

Sep 23, 2013, 4:34:25 PM
to s3fs-...@googlegroups.com

Comment #9 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

I'm seeing a similar problem on 1.73 with use_cache enabled. I was seeing
issues both with rsync and when untarring a 3.9 GB tarball into my s3fs
mount.

tar: mands_ftp_test/science_data_mqfte_test/20130121.tgz: Cannot close:
Operation not permitted

s3fs mounted via /etc/fstab as:

s3fs quantumftp /mnt/quantumftp -o
rw,allow_other,use_cache=/mnt/s3cache,dev,suid

Here's some of the output when mounted with -f:

s3fs_getattr(676):
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
GetStat(158): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/][time=1379967540][hit
count=4]
is_uid_inculde_group(490): could not get group infomation.
GetStat(158): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1379967489][hit
count=16]
is_uid_inculde_group(490): could not get group infomation.
GetStat(158): stat cache hit
[path=/mands_ftp_test/][time=1379967489][hit count=19]
is_uid_inculde_group(490): could not get group infomation.
HeadRequest(1619):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
RequestPerform(1232): HTTP response code 200
AddStat(234): add stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
GetStat(158): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][time=1379967540][hit
count=0]
s3fs_flush(1925):
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][fd=6]
GetStat(158): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/][time=1379967540][hit
count=5]
is_uid_inculde_group(490): could not get group infomation.
GetStat(158): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1379967489][hit
count=17]
is_uid_inculde_group(490): could not get group infomation.
GetStat(158): stat cache hit
[path=/mands_ftp_test/][time=1379967489][hit count=20]
is_uid_inculde_group(490): could not get group infomation.
GetStat(158): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][time=1379967540][hit
count=1]
is_uid_inculde_group(490): could not get group infomation.
GetStat(158): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][time=1379967540][hit
count=2]
ParallelMultipartUploadRequest(741):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][fd=6]
PreMultipartPostRequest(2025):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
RequestPerform(1232): HTTP response code 200
UploadMultipartPostSetup(2258):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][start=0][size=10485760][part=1]
UploadMultipartPostSetup(2258):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][start=10485760][size=10485760][part=2]
UploadMultipartPostSetup(2258):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][start=20971520][size=6074368][part=3]
Request(2791): [count=3]
CompleteMultipartPostRequest(2116):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][parts=3]
CompleteMultipartPostRequest(2127): 1 file part is not finished uploading.
s3fs_release(1963):
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][fd=6]
DelStat(341): delete stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]

s3...@googlecode.com

Sep 25, 2013, 1:15:14 AM
to s3fs-...@googlegroups.com

Comment #10 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

Thanks for sending your log.

This log suggests that s3fs failed to send one of the parts of a multipart
upload.

> UploadMultipartPostSetup(2258):
> [tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][start=0][size=10485760][part=1]
> UploadMultipartPostSetup(2258):
> [tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][start=10485760][size=10485760][part=2]
> UploadMultipartPostSetup(2258):
> [tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][start=20971520][size=6074368][part=3]
> Request(2791): [count=3]
> CompleteMultipartPostRequest(2116):
> [tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][parts=3]
> CompleteMultipartPostRequest(2127): 1 file part is not finished uploading.

This means the 3 parts were sent but one of them failed, so s3fs could not
get an ETag for it in the response.
s3fs then prints this error message.
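
(For context, a rough illustration of why one failed part blocks the whole
upload -- this is a sketch, not the actual s3fs code. S3's
CompleteMultipartUpload request must list every part number together with
the ETag returned when that part was uploaded, so a part without an ETag
makes the completion request impossible to build:)

#include <string>
#include <vector>

struct PartState {
    int         number;   // 1-based part number
    std::string etag;     // filled in only when the part upload succeeded
};

// Returns the XML request body, or an empty string if any part is
// unfinished (roughly the situation behind "1 file part is not finished
// uploading.").
std::string build_complete_body(const std::vector<PartState>& parts) {
    std::string xml = "<CompleteMultipartUpload>";
    for (const PartState& part : parts) {
        if (part.etag.empty()) {
            return "";
        }
        xml += "<Part><PartNumber>" + std::to_string(part.number) +
               "</PartNumber><ETag>" + part.etag + "</ETag></Part>";
    }
    return xml + "</CompleteMultipartUpload>";
}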

If you can, please run s3fs with "-o f2" or "-d".
These options produce more information and will let us know what is
happening inside s3fs.
Please be aware that with these options you get a lot of log output. (If
you specify "-d", you can see the log in /var/log/syslog or similar.)

If you need to see the HTTP responses, you can also specify the
"-o curldbg" option.
(But you will get even more logs.)

Thanks in advance for your help.

s3...@googlecode.com

Sep 25, 2013, 3:38:54 PM
to s3fs-...@googlegroups.com

Comment #11 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

I ran a test using -o f2:

$ tar xvf /export/mands_ftp_test.tar
mands_ftp_test/
mands_ftp_test/science_data_mqfte_test/
mands_ftp_test/science_data_mqfte_test/20130121.tgz
tar: mands_ftp_test/science_data_mqfte_test/20130121.tgz: Cannot close:
Operation not permitted
mands_ftp_test/science_data_mqfte_test/tmp/
mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz

The output was huge (66 MB), so I created a public Gist for it at:

https://gist.github.com/jrosengren/6704866

Let me know if you'd like me to do any additional testing.

s3...@googlecode.com

Sep 26, 2013, 1:10:31 AM
to s3fs-...@googlegroups.com

Comment #12 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

Thanks for your report.

Looking at your log file, this problem occurred in the parallel (multipart)
upload function.
But the s3fs error message is not very helpful at the moment, because it
prints only limited information (no error code).
So I have updated s3fs as r487, which prints an error code when a curl
error occurs.

Please try running the new revision (r487) with ONLY the "-f" option and
capture the s3fs log, which will help us with this problem.

This problem may be related to multipart uploads.
If you can run another test, please also try running s3fs with the
nomultipart option.

Thanks in advance for your help.






s3...@googlecode.com

Sep 26, 2013, 4:43:52 PM
to s3fs-...@googlegroups.com

Comment #13 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

I've run a "-f" test against r487. Here's the output:

set_moutpoint_attribute(3056): PROC(uid=0, gid=0) - MountPoint(uid=0,
gid=0, mode=40755)
s3fs_init(2573): init
s3fs_check_service(2665): check services.
CheckBucket(2050): check a bucket.
RequestPerform(1354): HTTP response code 200
s3fs_getattr(676): [path=/]
s3fs_getattr(676): [path=/]
s3fs_access(2624): [path=/][mask=X_OK ]
s3fs_getattr(676): [path=/]
s3fs_getattr(676): [path=/mands_ftp_test]
HeadRequest(1742): [tpath=/mands_ftp_test]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
HeadRequest(1742): [tpath=/mands_ftp_test/]
RequestPerform(1354): HTTP response code 200
AddStat(247): add stat cache entry[path=/mands_ftp_test/]
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227965][hit count=0]
s3fs_getattr(676): [path=/mands_ftp_test/science_data_mqfte_test]
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227965][hit count=1]
is_uid_inculde_group(515): could not get group infomation.
HeadRequest(1742): [tpath=/mands_ftp_test/science_data_mqfte_test]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
HeadRequest(1742): [tpath=/mands_ftp_test/science_data_mqfte_test/]
RequestPerform(1354): HTTP response code 200
AddStat(247): add stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227965][hit
count=0]
s3fs_getattr(676):
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227965][hit
count=1]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227965][hit count=2]
is_uid_inculde_group(515): could not get group infomation.
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
RequestPerform(1354): HTTP response code 200
AddStat(247): add stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][time=1380227965][hit
count=0]
s3fs_unlink(873):
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227965][hit
count=2]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227965][hit count=3]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227965][hit
count=3]
is_uid_inculde_group(515): could not get group infomation.
DeleteRequest(1650):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
RequestPerform(1354): HTTP response code 204
DelStat(375): delete stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
s3fs_getattr(676): [path=/mands_ftp_test]
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227965][hit count=4]
s3fs_getattr(676): [path=/mands_ftp_test/science_data_mqfte_test]
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227966][hit count=5]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227965][hit
count=4]
s3fs_getattr(676):
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227966][hit
count=5]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227966][hit count=6]
is_uid_inculde_group(515): could not get group infomation.
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz/]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz_$folder$]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
list_bucket(2203):
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
ListBucketRequest(2092):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
RequestPerform(1354): HTTP response code 200
append_objects_from_xml_ex(2277): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2277): contents_xp->nodesetval is empty.
s3fs_create(783):
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][mode=100644][flags=32961]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227966][hit
count=6]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227966][hit count=7]
is_uid_inculde_group(515): could not get group infomation.
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz/]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz_$folder$]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
list_bucket(2203):
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
ListBucketRequest(2092):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
RequestPerform(1354): HTTP response code 200
append_objects_from_xml_ex(2277): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2277): contents_xp->nodesetval is empty.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227966][hit
count=7]
is_uid_inculde_group(515): could not get group infomation.
create_file_object(742):
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][mode=100644]
PutRequest(1864):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
PutRequest(1878): create zero byte file object.
PutRequest(1957): uploading...
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][fd=-1][size=0]
RequestPerform(1354): HTTP response code 200
DelStat(375): delete stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
s3fs_getattr(676):
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227966][hit
count=8]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227966][hit count=8]
is_uid_inculde_group(515): could not get group infomation.
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
RequestPerform(1354): HTTP response code 200
AddStat(247): add stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][time=1380227966][hit
count=0]
s3fs_flush(1957):
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][fd=6]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227966][hit
count=9]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227966][hit count=9]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][time=1380227966][hit
count=1]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][time=1380227978][hit
count=2]
ParallelMultipartUploadRequest(874):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][fd=6]
PreMultipartPostRequest(2147):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
RequestPerform(1354): HTTP response code 200
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=0][size=10485760][part=1]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=10485760][size=10485760][part=2]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=20971520][size=10485760][part=3]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=31457280][size=10485760][part=4]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=41943040][size=10485760][part=5]
Request(2923): [count=5]
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=52428800][size=10485760][part=6]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=62914560][size=10485760][part=7]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=73400320][size=10485760][part=8]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=83886080][size=10485760][part=9]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=94371840][size=10485760][part=10]
Request(2923): [count=5]
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=104857600][size=10485760][part=11]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=115343360][size=10485760][part=12]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=125829120][size=10485760][part=13]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=136314880][size=10485760][part=14]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=146800640][size=10485760][part=15]
Request(2923): [count=5]
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=157286400][size=10485760][part=16]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=167772160][size=10485760][part=17]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=178257920][size=10485760][part=18]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=188743680][size=10485760][part=19]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=199229440][size=10485760][part=20]
Request(2923): [count=5]
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=209715200][size=10485760][part=21]
UploadMultipartPostSetup(2380):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][start=220200960][size=9068607][part=22]
Request(2923): [count=2]
MultiRead(2886): failed a request(400: )
MultiRead(2886): failed a request(400: )
CompleteMultipartPostRequest(2238):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][parts=22]
CompleteMultipartPostRequest(2249): 1 file part is not finished uploading.
s3fs_release(1997):
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][fd=6]
DelStat(375): delete stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
s3fs_getattr(676): [path=/mands_ftp_test]
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227977][hit count=10]
s3fs_getattr(676): [path=/mands_ftp_test/science_data_mqfte_test]
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227983][hit count=11]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227977][hit
count=10]
s3fs_getattr(676):
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227983][hit
count=11]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227983][hit count=12]
is_uid_inculde_group(515): could not get group infomation.
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
RequestPerform(1354): HTTP response code 200
AddStat(247): add stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][time=1380227983][hit
count=0]
s3fs_utimens(1648):
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][mtime=1361464407]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227983][hit
count=12]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227983][hit count=13]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][time=1380227983][hit
count=1]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][time=1380227983][hit
count=2]
put_headers(637):
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][time=1380227983][hit
count=3]
PutHeadRequest(1780):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
PutHeadRequest(1849): copying...
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
RequestPerform(1354): HTTP response code 200
DelStat(375): delete stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
s3fs_getattr(676):
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227983][hit
count=13]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227983][hit count=14]
is_uid_inculde_group(515): could not get group infomation.
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
RequestPerform(1354): HTTP response code 200
AddStat(247): add stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/20130121.tgz][time=1380227983][hit
count=0]
s3fs_getattr(676): [path=/mands_ftp_test/science_data_mqfte_test/tmp]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227983][hit
count=14]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227983][hit count=15]
is_uid_inculde_group(515): could not get group infomation.
HeadRequest(1742): [tpath=/mands_ftp_test/science_data_mqfte_test/tmp]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
HeadRequest(1742): [tpath=/mands_ftp_test/science_data_mqfte_test/tmp/]
RequestPerform(1354): HTTP response code 200
AddStat(247): add stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/tmp/]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/][time=1380227983][hit
count=0]
s3fs_getattr(676):
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/][time=1380227983][hit
count=1]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227983][hit
count=15]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227983][hit count=16]
is_uid_inculde_group(515): could not get group infomation.
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
RequestPerform(1354): HTTP response code 200
AddStat(247): add stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][time=1380227983][hit
count=0]
s3fs_unlink(873):
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/][time=1380227983][hit
count=2]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227983][hit
count=16]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227983][hit count=17]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/][time=1380227983][hit
count=3]
is_uid_inculde_group(515): could not get group infomation.
DeleteRequest(1650):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
RequestPerform(1354): HTTP response code 204
DelStat(375): delete stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
s3fs_getattr(676):
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/][time=1380227983][hit
count=4]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227983][hit
count=17]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227983][hit count=18]
is_uid_inculde_group(515): could not get group infomation.
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz/]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz_$folder$]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
list_bucket(2203):
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
ListBucketRequest(2092):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
RequestPerform(1354): HTTP response code 200
append_objects_from_xml_ex(2277): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2277): contents_xp->nodesetval is empty.
s3fs_create(783):
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][mode=100644][flags=32961]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/][time=1380227983][hit
count=5]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227983][hit
count=18]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227983][hit count=19]
is_uid_inculde_group(515): could not get group infomation.
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz/]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz_$folder$]
RequestPerform(1354): HTTP response code 404
RequestPerform(1378): HTTP response code 404 was returned, returning
ENOENT
list_bucket(2203):
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
ListBucketRequest(2092):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
RequestPerform(1354): HTTP response code 200
append_objects_from_xml_ex(2277): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2277): contents_xp->nodesetval is empty.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/][time=1380227984][hit
count=6]
is_uid_inculde_group(515): could not get group infomation.
create_file_object(742):
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][mode=100644]
PutRequest(1864):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
PutRequest(1878): create zero byte file object.
PutRequest(1957): uploading...
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][fd=-1][size=0]
RequestPerform(1354): HTTP response code 200
DelStat(375): delete stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
s3fs_getattr(676):
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/][time=1380227984][hit
count=7]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/][time=1380227984][hit
count=19]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit
[path=/mands_ftp_test/][time=1380227984][hit count=20]
is_uid_inculde_group(515): could not get group infomation.
HeadRequest(1742):
[tpath=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
RequestPerform(1354): HTTP response code 200
AddStat(247): add stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
GetStat(170): stat cache hit
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][time=1380227984][hit
count=0]
s3fs_release(1997):
[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz][fd=6]
DelStat(375): delete stat cache
entry[path=/mands_ftp_test/science_data_mqfte_test/tmp/20130121.tgz]
s3fs_destroy(2606): destroy

s3...@googlecode.com

Sep 26, 2013, 10:27:46 PM
to s3fs-...@googlegroups.com

Comment #14 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

Thanks for the quick response.

This problem is caused by a "Bad Request" (400) response from S3 when s3fs
uploads using multipart (parallel) upload.
But I don't yet know why the 400 error occurs.

If you can, please try running s3fs with "-f" and "-o curldbg" again.
The curldbg option prints curl's VERBOSE messages on stdout, but it may
produce a lot of log output.

Thanks in advance for your help.

s3...@googlecode.com

Sep 26, 2013, 11:46:36 PM
to s3fs-...@googlegroups.com

Comment #15 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

I've run another test, same tarball extraction, with r487. I've attached 2
files - s3fs-session is the curldbg output and s3fs.out is the
corresponding s3fs debug output.

Thanks!

Attachments:
s3fs-session 36.6 KB
s3fs.out 15.8 KB

s3...@googlecode.com

Sep 27, 2013, 1:11:10 AM
to s3fs-...@googlegroups.com

Comment #16 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

Thanks a lot.

I looked at your logs and found that the "Content-Length" is definitely
wrong.
I think the S3 server then waited for data from s3fs, timed out, and
responded with the timeout error (400).

I'm going to check the s3fs code; please wait a while.

Regards,

s3...@googlecode.com

Sep 27, 2013, 3:42:34 AM
to s3fs-...@googlegroups.com

Comment #17 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

I have updated the code as r488, which fixes a forgotten cast for the file
size length.
Please check the new revision (r488); I hope this change fixes your
problem.
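
(For anyone curious what a missing size cast typically looks like with
libcurl and files over 2GB -- a hypothetical sketch, not the actual r488
change, with made-up function names. curl_easy_setopt() takes variable
arguments, so the *_LARGE size options must be passed an explicit
curl_off_t; a narrower type silently truncates the size and yields a wrong
Content-Length:)

#include <curl/curl.h>
#include <sys/types.h>

void set_upload_size(CURL* curl, off_t filesize) {
    // Buggy pattern: CURLOPT_INFILESIZE takes a long, so sizes above 2GB
    // are truncated where long is 32 bits, and the request then advertises
    // the wrong length:
    //   curl_easy_setopt(curl, CURLOPT_INFILESIZE, (long)filesize);

    // Safer pattern: use the *_LARGE option and cast explicitly, because
    // the variadic curl_easy_setopt() cannot convert the argument for you.
    curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE,
                     static_cast<curl_off_t>(filesize));
}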

Thanks in advance for your help.

s3...@googlecode.com

Sep 27, 2013, 12:14:14 PM
to s3fs-...@googlegroups.com

Comment #18 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

r488 appears to have resolved the issue. I've run both a "tar -xvf" and
an "rsync" and I'm no longer seeing the errors.

Thanks for your help!

s3...@googlecode.com

Sep 27, 2013, 12:45:16 PM
to s3fs-...@googlegroups.com

Comment #19 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi,

I was relieved to read your reply.
Can I close this issue?

Regards,

s3...@googlecode.com

Sep 27, 2013, 12:48:58 PM
to s3fs-...@googlegroups.com

Comment #20 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Everything on my end still appears to be working as expected, so I think
you can go ahead and close this issue.

s3...@googlecode.com

Sep 27, 2013, 12:58:29 PM
to s3fs-...@googlegroups.com

Comment #21 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

I may have spoken a bit too soon... Everything had been working correctly,
but I just saw another error while performing an rsync:

rsync: ftruncate failed on "/mnt/quantumftp/mands_ftp_test.tar": Software
caused connection abort (103)

I'll attempt to replicate and attach logs.

s3...@googlecode.com

Sep 27, 2013, 1:06:14 PM
to s3fs-...@googlegroups.com

Comment #22 on issue 371 by i...@red-paint.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

We're still seeing:

rsync: ftruncate failed on "/mnt/s3drive/backup_201309190207.tar":
Input/output error (5)

I've updated to r488, unmounted, then re-mounted the S3 drive, but haven't
had time to run with -f yet. Will do this next week. Can you please keep
the issue open until then?

Thanks

s3...@googlecode.com

Sep 27, 2013, 1:26:41 PM
to s3fs-...@googlegroups.com

Comment #23 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy and info@

Thank you for your replies.
I will not close this issue, because according to your replies the problem
has not yet been settled.

Please post details about this issue.

Thanks in advance for your help.

s3...@googlecode.com

Oct 6, 2013, 1:55:37 AM
to s3fs-...@googlegroups.com

Comment #24 on issue 371 by moods...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi.

I got almost the same error:

rsync: ftruncate failed on "/mnt/s3backup/data_201307010000.tar": operation
not permitted.


Is this the same error or should I post a new issue?


Best regards,
Ish

s3...@googlecode.com

Oct 6, 2013, 10:00:54 AM
to s3fs-...@googlegroups.com

Comment #25 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, moodsey211

What version (revision) of s3fs are you using?
v1.73 has a bug related to ftruncate, so you should use the latest revision
(r488 or later).

Please let us know which version you are using.

Thanks in advance for your help.



s3...@googlecode.com

Oct 6, 2013, 9:20:05 PM
to s3fs-...@googlegroups.com

Comment #26 on issue 371 by moods...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi. I used the latest revision, which is r489. The case is the same as
above, only my error message is different. I tried the proposed solution
above but still no luck.

Here is my rsync command:

rsync --progress --inplace --size-only -vvv --omit-dir-times --stats
--whole-file --no-p --no-o --no-g --no-X --no-A --no-D
--remove-source-files --recursive /media/data/dbbackups/* /mnt/s3backup/

My file sizes are around 250MB to 300MB each, and there is a total of 260GB
worth of files.


Best regards.

s3...@googlecode.com

Oct 7, 2013, 2:08:56 AM
to s3fs-...@googlegroups.com

Comment #27 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, moodsey211

Thanks for replying.

I have one more question: when do you get this error?
(On the first transfer, or after some files have already been transferred?)

I think this problem occurs in the s3fs_truncate() function.
But I'm sorry, from your information alone I cannot determine the reason.
So, if you can, please try running s3fs again with the "-o f2 -f" options
and make a log file.
Please send it to me.
(Please be aware that the log file will be very large.)

Thanks in advance for your help.





s3...@googlecode.com

Oct 7, 2013, 10:56:06 AM
to s3fs-...@googlegroups.com

Comment #28 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

I ran another test against r488 this morning and was able to replicate the
disconnect error. I've attached the "-f" log.

I didn't see a ton of information in the output, so I'm wondering if you'll
need another test with the curl output?

Attachments:
s3fs.out 900 KB

s3...@googlecode.com

Oct 7, 2013, 11:07:39 AM
to s3fs-...@googlegroups.com

Comment #29 on issue 371 by i...@red-paint.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hello,

I've mounted my drive as above: "/usr/bin/s3fs xxx-backups /mnt/s3drive".

After updating to r488 I'm still seeing "rsync: ftruncate failed on XXX
Input/output error (5)"

Full output below:

# rsync -v -r --size-only --inplace --progress --stats --whole-file
/var/dumps/backup_201310060207.tar /mnt/s3drive
sending incremental file list
backup_201310060207.tar
19598214316 100% 41.53MB/s 0:07:30 (xfer#1, to-check=0/1)
rsync: ftruncate failed on "/mnt/s3drive/backup_201310060207.tar":
Input/output error (5)

Number of files: 1
Number of files transferred: 1
Total file size: 19598214316 bytes
Total transferred file size: 19598214316 bytes
Literal data: 19598214316 bytes
Matched data: 0 bytes
File list size: 44
File list generation time: 0.012 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 19600606772
Total bytes received: 31

sent 19600606772 bytes received 31 bytes 6600642.13 bytes/sec
total size is 19598214316 speedup is 1.00
rsync error: some files/attrs were not transferred (see previous errors)
(code 23) at main.c(1039) [sender=3.0.6]

The difference this time is that the file upload is working but it's just
reporting a failure. Not sure why.

I've tried comparing the md5sum of the files. Going by the ETag they
aren't the same, but I'm not sure whether that's a reliable way of checking
files on S3?
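
(Side note on the ETag question: for objects uploaded via multipart, S3's
ETag is generally not the MD5 of the whole file; it is the MD5 of the
concatenated binary MD5s of the parts, followed by "-<number of parts>", so
it only matches a locally computed value if you reproduce the same part
size. A rough sketch of that calculation, not s3fs code, assuming the
10485760-byte parts seen in the debug logs above and OpenSSL for MD5:)

#include <openssl/md5.h>
#include <cstdio>
#include <string>
#include <vector>

std::string multipart_etag(const char* path, size_t part_size) {
    std::FILE* fp = std::fopen(path, "rb");
    if (fp == nullptr) {
        return "";
    }
    std::vector<unsigned char> buf(part_size);
    std::string part_digests;   // binary MD5s of each part, concatenated
    size_t parts = 0;
    size_t n = 0;
    while ((n = std::fread(buf.data(), 1, part_size, fp)) > 0) {
        unsigned char md[MD5_DIGEST_LENGTH];
        MD5(buf.data(), n, md);
        part_digests.append(reinterpret_cast<const char*>(md), MD5_DIGEST_LENGTH);
        ++parts;
    }
    std::fclose(fp);

    unsigned char final_md[MD5_DIGEST_LENGTH];
    MD5(reinterpret_cast<const unsigned char*>(part_digests.data()),
        part_digests.size(), final_md);

    std::string etag;
    char hex[3];
    for (int i = 0; i < MD5_DIGEST_LENGTH; ++i) {
        std::snprintf(hex, sizeof(hex), "%02x", final_md[i]);
        etag += hex;
    }
    return etag + "-" + std::to_string(parts);   // e.g. "...-3" for 3 parts
}

(If the ETag has no "-N" suffix, the object was uploaded in a single PUT
and a plain md5sum comparison should work.)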

s3...@googlecode.com

Oct 8, 2013, 4:21:33 AM
to s3fs-...@googlegroups.com

Comment #30 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

Thank you for the log file.

I found a bug in curl.cpp (in the function that retries multipart post
requests).
I fixed it and checked it in as r491.

Please retry the upload.

Thanks a lot.

s3...@googlecode.com

Oct 8, 2013, 4:28:25 AM
to s3fs-...@googlegroups.com

Comment #31 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, in...@red-paint.com

Thanks for checking the new revision.

I think the error on your s3fs is the same as jeremy's.
(I hope so...)

I found a bug in s3fs's retry request code and fixed it as r491.
If you can, please retry the upload (rsync) and let me know the result.

Thanks in advance for your help.




s3...@googlecode.com

Oct 8, 2013, 4:29:34 PM
to s3fs-...@googlegroups.com

Comment #32 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

I've done some testing against r491. While it does seem to be a bit more
stable overall, I was still able to trigger the ftruncate issue by rsyncing
a ~6GB file (rsync -av --progress --inplace). I *was* using a 100 GB cache
directory.

I've attached both the "-f" log (s3fs.out) as well as the curldbg log
(s3fs-session).

Attachments:
s3fs-session 1.2 MB
s3fs.out 125 KB

s3...@googlecode.com

Oct 8, 2013, 9:48:05 PM
to s3fs-...@googlegroups.com

Comment #33 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

Thank you for the new log file, and I must apologize.

I found another bug in the function that retries multipart post requests.
I updated the code and checked it in as r492.

If you can, please retry the upload again.

Thanks in advance for your help.


s3...@googlecode.com

Oct 9, 2013, 2:50:32 AM
to s3fs-...@googlegroups.com

Comment #34 on issue 371 by moods...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi Takeshi,

I have also tried testing r491. Here is the result of the command on my end.

data_201308211200.tar
274788956 100% 534.96kB/s 0:08:21 (xfer#1, to-check=0/1)
rsync: ftruncate failed on "/mnt/s3backup/data_201308211200.tar": Operation
not permitted (1)

Number of files: 1
Number of files transferred: 1
Total file size: 274788956 bytes
Total transferred file size: 274788956 bytes
Literal data: 274788956 bytes
Matched data: 0 bytes
File list size: 36
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 274822582
Total bytes received: 31

sent 274822582 bytes received 31 bytes 181821.11 bytes/sec
total size is 274788956 speedup is 1.00
rsync error: some files/attrs were not transferred (see previous errors)
(code 23) at main.c(1070) [sender=3.0.9]

Let me know if you need something else from me.


Best regards,
Ish

s3...@googlecode.com

Oct 9, 2013, 4:37:30 AM
to s3fs-...@googlegroups.com

Comment #35 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, moodsey211

I'm sorry; I have updated the code again as r492 because s3fs still had a
bug.
If you can, please retry with the latest revision.

Regards,

s3...@googlecode.com

Oct 9, 2013, 7:50:04 AM
to s3fs-...@googlegroups.com

Comment #36 on issue 371 by i...@red-paint.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hello Takeshi,

I've tried this using r492 and I'm still getting the ftruncate failure, and
it hangs there for some time.

# /usr/bin/s3fs bucket /mnt/s3drive -ouse_cache=/tmp
-ourl=https://s3.amazonaws.com

# rsync -v --size-only --inplace --progress --omit-dir-times --stats
--whole-file --no-p /var/dumps/backup_201310080207.tar /mnt/s3drive
backup_201310080207.tar
19645430956 100% 39.26MB/s 0:07:57 (xfer#1, to-check=0/1)
rsync: ftruncate failed on "/mnt/s3drive/backup_201310080207.tar":
Input/output error (5)

Number of files: 1
Number of files transferred: 1
Total file size: 19645430956 bytes
Total transferred file size: 19645430956 bytes
Literal data: 19645430956 bytes
Matched data: 0 bytes
File list size: 44
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 19647829170
Total bytes received: 31

sent 19647829170 bytes received 31 bytes 5271047.40 bytes/sec
total size is 19645430956 speedup is 1.00
rsync error: some files/attrs were not transferred (see previous errors)
(code 23) at main.c(1039) [sender=3.0.6]

What additional info do you need from me?

Thanks
Andy

s3...@googlecode.com

Oct 9, 2013, 10:48:31 AM
to s3fs-...@googlegroups.com

Comment #37 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, Andy

Thank you for checking the new revision.
But it seems that s3fs still has a bug.

I'm sorry to ask you to check so many times, but if you can, please capture
a log of this problem with the "-f" option.
I hope it will help us resolve this problem.

Best Regards,

s3...@googlecode.com

Oct 9, 2013, 10:54:04 AM
to s3fs-...@googlegroups.com

Comment #38 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

I just finished running a similar rsync test against r492 and saw the same
hanging behavior that Andy reported. I've attached the "-f" output as well
as the curldbg output.

Attachments:
s3fs-session 1.4 MB
s3fs.out 24.0 KB

s3...@googlecode.com

Oct 9, 2013, 11:19:56 AM
to s3fs-...@googlegroups.com

Comment #39 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, Jeremy

Thanks for sending the logs.
But I could not find an error line; it seems the log simply stops normally
with no error.

Please check these logs and repost them when the error occurs.

Thanks in advance.

s3...@googlecode.com

Oct 9, 2013, 11:21:27 AM
to s3fs-...@googlegroups.com

Comment #40 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

The error didn't actually occur - the rsync simply hung until I pressed
Ctrl-C to cancel it.

s3...@googlecode.com

Oct 9, 2013, 11:29:59 AM
to s3fs-...@googlegroups.com

Comment #41 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, Jeremy

I'm sorry.
I will recheck the logs and code to find the cause of the hang.

regards,

s3...@googlecode.com

unread,
Oct 11, 2013, 1:41:12 PM10/11/13
to s3fs-...@googlegroups.com

Comment #42 on issue 371 by i...@red-paint.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hello Takeshi,

I tried running # /usr/bin/s3fs bucket -f /mnt/s3drive -ouse_cache=/tmp
-ourl=https://s3.amazonaws.com

But it literally ran for 7 hours and the only output I got was:

set_moutpoint_attribute(2935): PROC(uid=0, gid=0) - MountPoint(uid=0,
gid=0, mode=40755)
s3fs_check_service(2546): check services.
CheckBucket(1927): check a bucket.
RequestPerform(1232): HTTP response code 200
s3fs_init(2491): init
s3fs_getattr(676): [path=/]
s3fs_getattr(676): [path=/]
s3fs_getattr(676): [path=/]
s3fs_getattr(676): [path=/]
[... the same s3fs_getattr line repeated for the rest of those 7 hours ...]

At that point I hit ctrl+c.

For me the files are no longer actually transferring, even though this used to
work before.

I get the following if I run rsync and leave it:

#rsync -v --size-only --inplace --progress --omit-dir-times --stats
--whole-file --no-p /var/dumps/backup_201310100207.tar /mnt/s3drive
backup_201310100207.tar
19883971756 100% 43.03MB/s 0:07:20 (xfer#1, to-check=0/1)
rsync: ftruncate failed on "/mnt/s3drive/backup_201310100207.tar":
Input/output error (5)
rsync: close failed on "/mnt/s3drive/": Operation not permitted (1)
rsync error: error in file IO (code 11) at receiver.c(730) [receiver=3.0.6]
rsync: connection unexpectedly closed (29 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600)
[sender=3.0.6]
#

At that point checking the bucket shows a 0 byte file sitting there.

As -f doesn't seem to produce much useful output, is there something else I can
try to get you debugging info? Could the problem be that I was using https?

Many thanks
Andy

s3...@googlecode.com

unread,
Oct 16, 2013, 5:56:50 PM10/16/13
to s3fs-...@googlegroups.com

Comment #43 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Has anyone been able to further troubleshoot this? Is there any additional
data or troubleshooting steps I could perform to gather more information?

Thanks!

s3...@googlecode.com

unread,
Oct 17, 2013, 3:35:21 AM10/17/13
to s3fs-...@googlegroups.com

Comment #44 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, all

I'm sorry for the slow reply.

From Jeremy's log, it seems that s3fs stopped after sending about 2GB of data.
(Probably Andy's case is the same.)
I need to confirm one thing: was s3fs deadlocked (looping) when you stopped it?

If s3fs stays deadlocked (looping), I would like to know the status of your server,
e.g. lsof output (how many sessions are open), memory usage, etc.
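
As a rough illustration (not part of the original comment; it assumes a single
s3fs process so that pidof returns one pid), that state could be captured with:

lsof -p "$(pidof s3fs)" | wc -l                    # rough count of open descriptors/sessions
ps -o pid,vsz,rss,stat,etime -p "$(pidof s3fs)"    # memory usage and process state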

And, if you can, I would like to know the results of the following checks (an
example mount line combining them follows the list):
* the case of not using https (plain http)
* the case of nosscache
* the case where readwrite_timeout is set over 30
* the case where the parallel_count option is set under 5
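
A hypothetical mount line combining those checks, using only option names that
appear in this thread (bucket and mountpoint are placeholders, values are
examples only):

/usr/bin/s3fs mybucket /mnt/s3drive -ourl=http://s3.amazonaws.com -ouse_cache=/tmp \
    -onosscache -oreadwrite_timeout=60 -oparallel_count=3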

I will try to check this issue, too.

Thanks in advance for your help.

s3...@googlecode.com

unread,
Oct 28, 2013, 4:56:12 AM10/28/13
to s3fs-...@googlegroups.com

Comment #45 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, all

I'm sorry for replying so late.

I rechecked jeremy's log from #35, and it seems that s3fs was actually running
normally.

In the logs, the "mday.tar" file was divided into 596 parts for the multipart
upload.
Jeremy tried the upload several times, and one of those attempts seems to have
succeeded.
But the last upload was stopped at part 215/596 by ctrl-c.

By default, when s3fs uploads a large file, it uses a part size of 10MB with 5
parallel uploads.
In this case I think it takes about 10 minutes on a 100Mbps link.
Please note that if s3fs gets an error in the S3 server response, it retries
uploading that part.
And if one of the part requests times out, s3fs has to wait 120 seconds before
retrying (this value is hard-coded for multipart uploading).
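
A back-of-the-envelope check of those numbers against jeremy's 6241341440-byte
mday.tar (illustration only, not part of the original comment):

awk 'BEGIN { printf "parts:   %d\n", int(6241341440 / (10 * 1024 * 1024)) + 1 }'  # -> 596 parts of 10MB
awk 'BEGIN { printf "seconds: %d\n", 6241341440 / (100 * 1000 * 1000 / 8) }'      # -> ~499s at 100Mbps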

So I think s3fs simply takes a long time for the upload.
If you can, please retry the multipart upload test and let it run to completion
(without ctrl-c).
I think you can tell from the log files whether s3fs is really deadlocked or not.

Lastly, I'm sorry to say that I rechecked the code around locking (lock objects,
flock, etc.), but I could not find a cause for a deadlock yet.

s3...@googlecode.com

unread,
Nov 11, 2013, 9:43:25 AM11/11/13
to s3fs-...@googlegroups.com

Comment #46 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, all

I'm sorry for the late reply.

I have checked in a new revision (r493) which fixes bugs related to this issue.

I found the following bugs:
* when renaming a large object, s3fs sets the wrong part number
* s3fs sets headers that are not needed when multipart uploading (renaming)
* when s3fs gets a 400 HTTP response for one part of a multipart upload, it does
not retry the request
These bugs are fixed in the new revision (r493).

Regarding your problem (the failed upload), I think one of the causes is the 400
HTTP response.
The failed truncate and the failed stat changes (mtime/owner/permission) are
probably caused by a wrong header.

The fact that rsync (via s3fs) takes a long time after rsync displays "100%" is
probably not a real problem.
When this happens, rsync likely goes through the sequence below (a way to watch it
from another terminal is sketched after the list):
1) First, rsync compares the local file and the S3 object.
2) If the object exists on S3, s3fs downloads it into a temporary file, and rsync
compares the two local files through s3fs.
(If there is no S3 object, the comparison simply fails.)
3) The files differ, so rsync copies the file through s3fs.
This means the file is copied into the temporary file.
4) After copying the file, rsync displays "100%".
5) rsync closes the copied file (probably), and then the file is flushed through
s3fs.
6) s3fs starts uploading the file.
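
A rough way to watch steps (2)-(6) from a second terminal while rsync sits
at "100%" (a sketch only; it assumes the -ouse_cache=/tmp setting used earlier in
this thread, and the <bucket> cache subdirectory layout is an assumption):

watch -n 10 'ls -lh /tmp/<bucket>/ 2>/dev/null; df -h /tmp'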

When I tested rsync with a 20GB file, steps (1) to (4) took 20-30 minutes.
After that, steps (5) and (6) took over 60 minutes.
But the 20GB rsync did succeed.

Please use the new revision (r493) and check it.

Lastly, the parts from your failed multipart uploads may still be left in the S3
bucket.
Please see issue 380, run the r493 revision with the "-u" option, and remove these
failed objects so you are not billed for them.
( # s3fs <your bucket name> -u )

s3...@googlecode.com

unread,
Nov 12, 2013, 2:05:25 PM11/12/13
to s3fs-...@googlegroups.com

Comment #47 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

I ran a test against r494 and at first the behavior looked a lot better:
The file finished rsyncing completely, and then appeared to hang while it
was uploading to s3 from cache.

After a while, though, I got an error:

[root@filedrop quantumftp]# rsync -av --progress --inplace
/export/mday.tar .
sending incremental file list
mday.tar
6241341440 100% 12.13MB/s 0:08:10 (xfer#1, to-check=0/1)
rsync: failed to set times on "/mnt/quantumftp/mday.tar": Input/output
error (5)

sent 6242103398 bytes received 31 bytes 7758985.00 bytes/sec
total size is 6241341440 speedup is 1.00
rsync error: some files/attrs were not transferred (see previous errors)
(code 23) at main.c(1042) [sender=3.0.7]
[root@filedrop quantumftp]# ls -lh
total 512
-rw------- 1 root root 4.0G Nov 12 13:43 mday.tar

The original size of the file was 5.9GB, so it appears that the full file
was not uploaded to S3.

s3...@googlecode.com

unread,
Nov 12, 2013, 2:09:20 PM11/12/13
to s3fs-...@googlegroups.com

Comment #48 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Attaching logs from my last test.

Attachments:
s3fs.out 136 KB
s3fs-session 1.4 MB

s3...@googlecode.com

unread,
Nov 13, 2013, 11:40:39 AM11/13/13
to s3fs-...@googlegroups.com

Comment #49 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

I found the reason for this problem in your logs.
I would like to know about your OS and your s3fs build options.
The problem in #47 is that the "st_size" member of the file's stat structure is
overflowing.
As a result, the roughly 6GB file is reported as 4GB and s3fs cannot determine the
correct file size.

If you need to handle objects over 4GB (the 32-bit limit, 0xFFFFFFFF), s3fs should
be built with _FILE_OFFSET_BITS=64.
(Then off_t and size_t are 8 bytes.)
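
One way to verify the flag after running ./configure (a sketch, assuming the stock
autotools build in the s3fs source tree):

grep -e '-D_FILE_OFFSET_BITS=64' config.log    # should match if configure picked up the flag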

I also found another problem in s3fs related to this issue.
I have updated s3fs as a new revision (r495).

Please build s3fs from the new revision with _FILE_OFFSET_BITS=64 and try running
rsync again.

Thanks in advance for your help.


s3...@googlecode.com

unread,
Nov 14, 2013, 10:54:59 AM11/14/13
to s3fs-...@googlegroups.com

Comment #50 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

I've run a test against r495 and am seeing similar behavior, although now
the file seems to be cutting off at just under 2 GB uploaded to s3 instead
of 4:

[jrosengren@filedrop quantumftp]$ rsync -av --progress --inplace
/export/mday.tar .
sending incremental file list
mday.tar
6241341440 100% 12.80MB/s 0:07:44 (xfer#1, to-check=0/1)
rsync: failed to set times on "/mnt/quantumftp/mday.tar": Input/output
error (5)

sent 6242103398 bytes received 31 bytes 8452408.16 bytes/sec
total size is 6241341440 speedup is 1.00
rsync error: some files/attrs were not transferred (see previous errors)
(code 23) at main.c(1042) [sender=3.0.7]
[jrosengren@filedrop quantumftp]$ ls -lh
total 512
-rw------- 1 jrosengren quantum 1.9G Nov 14 10:30 mday.tar

I'm building RPM packages for CentOS 5.x (i386) and am using the included
autogen.sh. I've looked at the config.log from the build and I do see
-D_FILE_OFFSET_BITS=64 was set. I've attached logs from my most recent
test.

Attachments:
s3fs.out 66.6 KB

s3...@googlecode.com

unread,
Nov 14, 2013, 10:02:34 PM11/14/13
to s3fs-...@googlegroups.com

Comment #51 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

Thank you for testing the new revision.
The offset-bits setting looks correct, so there seems to be no problem there.

But according to your log, rsync failed to change the mtime of the object after
uploading it (the 6GB upload itself finished).
-------------
s3fs_utimens(1663): [path=/mday.tar][mtime=1380296141]
GetStat(170): stat cache hit [path=/mday.tar][time=1384443786][hit count=1]
is_uid_inculde_group(515): could not get group infomation.
GetStat(170): stat cache hit [path=/mday.tar][time=1384443787][hit count=2]
put_headers(652): [path=/mday.tar]
GetStat(170): stat cache hit [path=/mday.tar][time=1384443787][hit count=3]
PutHeadRequest(1958): [tpath=/mday.tar]
PutHeadRequest(2027): copying... [path=/mday.tar]
RequestPerform(1483): HTTP response code 400
RequestPerform(1497): HTTP response code 400 was returned, returing EIO.
-------------

It is strange: the object is over 5GB, but s3fs did not use a multipart head
request.
(Calling PutHeadRequest here is wrong; MultipartHeadRequest would be correct.)

If you can, please run the tests below (also sketched as commands after the list):
1) check the size of /mnt/quantumftp/mday.tar with "s3cmd"
2) check the size of /mnt/quantumftp/mday.tar with the "ls" command now
3) after restarting s3fs, run the same "ls" command again
4) run "touch -t xxxxx /mnt/quantumftp/mday.tar" to change the mtime directly

I think the object size on S3 is correct (meaning s3fs successfully uploaded the
object), but s3fs itself cannot determine the correct size.
There is probably some incorrect code in s3fs.

I'll recheck the code soon.

Thanks in advance for your help.

s3...@googlecode.com

unread,
Nov 15, 2013, 2:51:59 AM11/15/13
to s3fs-...@googlegroups.com

Comment #52 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

I'm sorry, I gave the wrong file (target) path above.

We need the information for "<mount point>/mday.tar"; "/mnt/quantumftp/mday.tar"
was wrong.

Regards

s3...@googlecode.com

unread,
Nov 15, 2013, 3:08:54 AM11/15/13
to s3fs-...@googlegroups.com

Comment #53 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, Jeremy

I overlooked this part of your message:
"CentOS 5.x (i386)"

Is your OS 32-bit?
If it is, please check the size in bytes of the st_size member of "struct stat";
its type is defined as "off_t" (or size_t).
If your system's st_size is 4 bytes, you probably cannot handle objects over 4GB.
Since both s3fs and rsync use this structure, that is likely where the error
occurs.

For example, to check the size of off_t:
printf("%zu\n", sizeof(off_t));

Please check the off_t size on your OS; a complete check you can compile is
sketched below.
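
A complete, compilable version of that check (a sketch; the file name is
arbitrary):

cat > check_off_t.c <<'EOF'
#include <stdio.h>
#include <sys/types.h>
int main(void) { printf("%zu\n", sizeof(off_t)); return 0; }
EOF
gcc check_off_t.c -o check_off_t && ./check_off_t
gcc -D_FILE_OFFSET_BITS=64 check_off_t.c -o check_off_t64 && ./check_off_t64

Without the flag this typically prints 4 on a 32-bit system; with
-D_FILE_OFFSET_BITS=64 it should print 8.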

Thanks a lot.

s3...@googlecode.com

unread,
Nov 15, 2013, 11:52:52 AM11/15/13
to s3fs-...@googlegroups.com

Comment #54 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Results from checking the size of off_t: 4

This is a 32-bit system. Is there really a 4 GB file size limit for FUSE
on 32-bit systems?

s3...@googlecode.com

unread,
Nov 16, 2013, 8:50:56 AM11/16/13
to s3fs-...@googlegroups.com

Comment #55 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

I think that if FUSE and s3fs are compiled with _FILE_OFFSET_BITS=64, they can
handle objects over 4GB even on a 32-bit OS.
But your report shows 4 bytes for off_t, which means s3fs probably cannot handle
objects over 4GB.

Please check that the compiler options (CFLAGS etc.) include
"_FILE_OFFSET_BITS=64" when compiling both fuse and s3fs; two quick checks are
sketched below.
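
A sketch of two such checks (assuming a non-silent automake build so the compile
commands are echoed):

pkg-config --cflags fuse                                     # normally includes -D_FILE_OFFSET_BITS=64
make clean && make 2>&1 | grep -e '-D_FILE_OFFSET_BITS=64'   # the g++ lines should carry the flag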

Thanks a lot.

s3...@googlecode.com

unread,
Nov 16, 2013, 11:28:30 AM11/16/13
to s3fs-...@googlegroups.com

Comment #56 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

I'll go through all of the code for this problem.
Because the s3fs code uses both size_t and off_t, I need to check every use of
them.

Regards

s3...@googlecode.com

unread,
Nov 17, 2013, 3:55:58 AM11/17/13
to s3fs-...@googlegroups.com

Comment #57 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

I have updated the code as revision r496.
It changes the internal buffer type used for file sizes from size_t to off_t.
s3fs was using a size_t buffer internally in some functions, so it did not work
correctly for objects over 4GB on a 32-bit OS.
I checked the new code on a 32-bit OS (AMI) and the new s3fs seemed to work
without problems.
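
As an illustration of the old behaviour (my own arithmetic, not from the log):
storing jeremy's 6241341440-byte file size in a 32-bit size_t keeps only the low
32 bits, which is consistent with the ~1.9G listing seen in #50:

awk 'BEGIN { printf "%d\n", 6241341440 % 4294967296 }'   # -> 1946374144 bytes, about 1.9G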

Please try the new revision.

Thanks a lot.

s3...@googlecode.com

unread,
Nov 18, 2013, 11:55:50 AM11/18/13
to s3fs-...@googlegroups.com

Comment #58 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

I'm currently running a test against r497, will report back when I have
some results.

I was seeing compile errors on both r496 and r497 and after diffing those
versions and r495, I noticed that the file encoding on fdcache.h had
changed to UTF-8:

$ file *
cache.cpp: C source, ASCII text
cache.h: C++ source, ASCII text
common.h: C source, ASCII text
curl.cpp: C source, ASCII text
curl.h: C++ source, ASCII text
fdcache.cpp: C source, ASCII text
fdcache.h: C++ source, UTF-8 Unicode (with BOM) text
Makefile.am: ASCII text
s3fs.cpp: C source, ASCII text
s3fs.h: C source, ASCII text
s3fs_util.cpp: C source, ASCII text
s3fs_util.h: C++ source, ASCII text
string_util.cpp: C source, ASCII text
string_util.h: C++ source, ASCII text

I was able to fix it by running "iconv -f UTF-8 -t ASCII//TRANSLIT
fdcache.h > fdcache.h.new"

s3...@googlecode.com

unread,
Nov 18, 2013, 3:59:18 PM11/18/13
to s3fs-...@googlegroups.com

Comment #59 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Initial tests are looking good. I rsynced a 5.9 GB file over and the only error I
saw was about rsync not being able to set the time on the file - this is likely
because the file is over 5 GB, the size above which AWS isn't able to "copy"
those objects "in-place".

I also rsynced over a file that was about 3.9 GB in size and saw no issues
there, either.

s3...@googlecode.com

unread,
Nov 18, 2013, 9:05:32 PM11/18/13
to s3fs-...@googlegroups.com

Comment #60 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

Thanks a lot.

First, I have checked in a new revision addressing your report that fdcache.h had
the wrong file encoding.
The new revision also changes one more thing: the retry condition used when an
error occurs on a multipart upload (head) request.

In r497 I had narrowed the retry condition for multipart head requests (404
errors), but that was not good enough.
I have changed the code again to widen the condition, and checked it in.

I think that if rsync reported an error after changing the time, that problem (the
mtime not being updated) was probably caused by the narrowed retry condition.

Please check out the new revision and try it.

Thanks in advance for your help.

s3...@googlecode.com

unread,
Nov 24, 2013, 8:10:31 AM11/24/13
to s3fs-...@googlegroups.com

Comment #61 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

I have updated s3fs to the new version 1.74.
It fixes several bugs, including the problem in this issue.

Please try the new version and check it.

s3...@googlecode.com

unread,
Dec 9, 2013, 1:39:35 PM12/9/13
to s3fs-...@googlegroups.com

Comment #62 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

I've been using and testing 1.74 since you released it and all of my issues
appear to have been resolved. I've only been able to perform testing on
32-bit systems so far, but we do have a 64-bit use case so I'll be doing
some testing there in the next couple of weeks. So far so good!

Thanks,

-- jeremy

s3...@googlecode.com

unread,
Dec 10, 2013, 8:50:15 AM12/10/13
to s3fs-...@googlegroups.com

Comment #63 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi, jeremy

Thanks for testing the new version, and for the upcoming 64-bit testing.
If you find another problem, please let me know.

Regards,
Takeshi

s3...@googlecode.com

unread,
Dec 23, 2013, 10:34:34 AM12/23/13
to s3fs-...@googlegroups.com
Updates:
Status: Done

Comment #64 on issue 371 by ggta...@gmail.com: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

Hi,

Can I close this issue?
If you still have a problem, please let me know.
(I can re-open this issue, or you can post a new one.)

Regards,

s3...@googlecode.com

unread,
Dec 23, 2013, 1:03:35 PM12/23/13
to s3fs-...@googlegroups.com

Comment #65 on issue 371 by jer...@rosengren.org: ftruncate failed
http://code.google.com/p/s3fs/issues/detail?id=371

As far as I'm concerned, you can close this issue. Hopefully the others
that were seeing issues can post some feedback as well.