Issue 409 in s3fs: slow reading of files even when cached with getimagesize() in php

s3...@googlecode.com

Feb 2, 2014, 11:36:30 PM
to s3fs-...@googlegroups.com
Status: New
Owner: ----
Labels: Type-Defect Priority-Medium

New issue 409 by jar...@headfirst.co.nz: slow reading of files even when
cached with getimagesize() in php
http://code.google.com/p/s3fs/issues/detail?id=409

I'm using s3fs as the file system mount for a website that uses a WYSIWYG
editor with the IMCE browser plugin.

The browser is very slow at reading the images, mainly because it needs to
call getimagesize() on each file to retrieve the width/height, etc.

To eliminate all other factors, I created a simple script that reads a
directory in the S3 bucket and runs getimagesize() on each file.
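
Roughly, the script does something like this (a minimal sketch; the
directory path is a placeholder for a folder on the s3fs mount, and the
same loop can be pointed at the local cache folder for comparison):

<?php
// Point this at a folder on the s3fs mount (or at the use_cache directory
// for comparison) -- the path below is a placeholder.
$dir = '/mnt/bucket_name/some_folder';

$start = microtime(true);
$count = 0;

foreach (glob($dir . '/*') as $file) {
    if (!is_file($file)) {
        continue;
    }
    // getimagesize() reads the image headers, which forces s3fs to stat
    // and open each object.
    if (@getimagesize($file) !== false) {
        $count++;
    }
}

printf("getimagesize() on %d files took %.2f seconds\n",
    $count, microtime(true) - $start);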

The directory has 1042 files and the script takes just over 1 minute to run
initially (as it's creating the tmp images locally, I assume), then about 20
seconds thereafter.

If I run the script pointing at the tmp folder directly it's lightning fast,
which is how I would have expected it to run against the mount once all the
images had been downloaded locally and cached.

Any advice on this would be great.

I'm using the following to mount:
sudo /usr/bin/s3fs bucket_name /mnt/bucket_name
-oallow_other,default_acl=public-read-write,uid=33,gid=33,use_cache="/tmp/s3fs",max_stat_cache_size="100000"
(uid/gid 33 = www-data)

Cheers,
Jarrod.

s3...@googlecode.com

Sep 30, 2014, 11:35:42 AM
to s3fs-...@googlegroups.com

Comment #1 on issue 409 by arvids.g...@gmail.com: slow reading of files
even when cached with getimagesize() in php
https://code.google.com/p/s3fs/issues/detail?id=409

I had the same problem. It is caused by the file stat cache entry being
cleared for a given file before opening it (to solve Issue 368). As a
workaround, I commented out the stat cache item deletion (see r485),
recompiled the package and remounted the bucket. Maybe stat cache clearing
before opening files could be made configurable? Of course, this might
introduce inconsistency issues, but there are scenarios where file updates
are not so common (mostly create/delete operations).

s3...@googlecode.com

Feb 7, 2015, 10:36:06 AM
to s3fs-...@googlegroups.com
Updates:
Status: Done

Comment #2 on issue 409 by ggta...@gmail.com: slow reading of files even
when cached with getimagesize() in php
https://code.google.com/p/s3fs/issues/detail?id=409

Hi

I'm sorry for the late reply.

Even if you edit the files often, s3fs keeps the stat entries for all of
them (provided your stat cache size is large enough), so I would expect
things to be faster from the second run onward.

If you still have this issue, please open a new issue on GitHub
(https://github.com/s3fs-fuse/s3fs-fuse), since we have moved the s3fs
project there, and please use the latest version.

I'm going to close this issue.
Please see s3fs-fuse on GitHub.