Upload Speed from Local Folder (Slow)


Rory

Aug 2, 2009, 5:51:55 PM8/2/09
to ResourceSpace
I'm importing files in batch from a local folder. I've done two
uploads, both 500 images (each ~3MB).

The first upload processed 3-4 images per minute. The second upload is
taking 3-5 minutes PER image.

Does anyone know why RS would bog down like this? Any tips for speeding
up upload times?

Rory

Tom Gleason

Aug 2, 2009, 5:59:51 PM8/2/09
to resour...@googlegroups.com
what type of files are they?

Rory

Aug 2, 2009, 7:00:21 PM8/2/09
to ResourceSpace
All .jpg


Rory

Aug 2, 2009, 7:45:57 PM8/2/09
to ResourceSpace
I should add that I am running RS on a NAS running BusyBox Linux - a
stripped-down Linux variant. I'm wondering if I can do a bit more with
php.ini to make things work a bit more quickly. From the wiki install
guide I read:

# Tweak your PHP.INI settings as follows: you will probably want to
# increase the size of the "memory_limit", "post_max_size" and
# "upload_max_filesize" variables in your system's php.ini file if
# you will be handling large files.

Note that my NAS did not want to let me allocate that much memory to
memory_limit, so this might be impacting performance. Here are the
relevant settings from my php.ini:

memory_limit = 200M
post_max_size = 100M
upload_max_filesize = 128M
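One detail worth flagging in the settings above (an editorial aside, not something raised in the thread): the PHP documentation states that post_max_size must be larger than upload_max_filesize for uploads of that size to succeed, since the POST body carries the file plus form overhead. Here the two are inverted. A possible adjustment, with values illustrative rather than tuned:

```ini
; sketch of a consistent set of limits; exact values depend on the NAS's RAM
memory_limit = 200M
upload_max_filesize = 128M
; keep post_max_size >= upload_max_filesize (here with headroom for form data)
post_max_size = 150M
```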

Any thoughts appreciated.

Tom Gleason

Aug 2, 2009, 8:13:49 PM8/2/09
to resour...@googlegroups.com
it doesn't sound like you have big files, but I often use top to see what's going on and if any processes might have stalled or something.

When doing very big transformations that strain the processor, I find a restart sometimes makes things go that otherwise weren't going.
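Tom's suggestion of watching top can also be done as a one-shot snapshot, which is handy over a slow NAS shell. This sketch uses portable BSD-style ps flags rather than BusyBox top, so it's an approximation, not the exact command from the thread:

```shell
# show the five busiest processes, sorted by %CPU (column 3 of `ps aux`);
# a stalled or runaway convert will sit near the top of this list
ps aux | sort -rnk 3 | head -n 5
```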

Rory

Aug 2, 2009, 8:27:57 PM8/2/09
to ResourceSpace
Here are a few of the processes running and the load they are putting
on the system:

12693 guest R 42M 10986 95.9 17.1 convert
12677 admin R 748 12582 1.1 0.2 top
10982 guest S 4908 10981 0.0 1.9 apache
11593 guest S 4796 10981 0.0 1.8 apache
1927 admin S 4564 1893 0.0 1.7 mysqld

Notice that convert is taking 95.9% of the CPU capacity. It seems that
is probably what's bogging things down. Is there any way to make this
work better (besides upgrading to a more robust server)?
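One general way to keep convert from starving everything else (a standard ImageMagick/Unix technique, not something proposed in the thread) is to run it at low scheduling priority and cap its resource usage with ImageMagick's -limit option. The file names and sizes below are illustrative:

```shell
# run convert at the lowest CPU priority so apache/mysqld stay responsive,
# and cap ImageMagick's memory use so it spills to disk instead of swapping
nice -n 19 convert input.jpg \
    -limit memory 64MiB -limit map 128MiB \
    -resize 800x800 output_preview.jpg
```

Lowering priority won't make preview generation finish faster, but it keeps the NAS usable while a batch runs.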



Tom Gleason

Aug 2, 2009, 9:10:57 PM8/2/09
to resour...@googlegroups.com
convert will always use as much as it can. The problem for me (on the Mac at least) was when a process stalled. But that happened when doing files over half a gig, not the sizes you're talking about.

I am not really an expert on how the memory is managed, but perhaps it has something to do with the amount of RAM or swap space you have?

Rory

Aug 3, 2009, 7:24:41 PM8/3/09
to ResourceSpace
Yeah, it seems pretty clear that the resources required are too much
for my NAS. The thing is, it is plenty fast at serving files once they
are uploaded. I'm wondering if there isn't another way to do this. Does
anyone know if it is possible to run two installs that share one
database and filestore? What if I do this:

1. Install locally on my Mac with filestore remote on the NAS under
the webroot/.
2. Load all files into the system letting my Mac do the crunching. It
should be considerably faster.
3. Replicate the database on the NAS (from the Mac)
4. Use the NAS install for browsing and serving images but not for
managing resources.

My goal is to make viewing and downloading independent of my Mac (a
laptop which is not here very often). Obviously I would need to keep
tight control of who can write to the database when it is being
accessed through the NAS interface. Edits to the resources could only
be done via the Mac, unless I were to also replicate from the NAS back
to the Mac, which would get confusing. It would be like having a
production version and a live version of RS.

I'm guessing that this won't work because user-generated themes,
collections, tagging, etc. would not be reflected on the Mac
(production site). Perhaps it could work if I were to lock down users
on the live (NAS) site so that it is only used for viewing and
downloading files.
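Step 3 of the plan above (replicating from the Mac to the NAS) could be sketched with stock tools. The hostnames, credentials, and paths below are placeholders, and a one-way push like this only stays consistent if the NAS copy really is read-only:

```shell
# dump the ResourceSpace database on the Mac and load it into the NAS's MySQL
mysqldump -u rs_user -p resourcespace | mysql -h nas.local -u rs_user -p resourcespace

# push new previews and originals; rsync only transfers files that changed
rsync -av /var/www/resourcespace/filestore/ nas.local:/var/www/resourcespace/filestore/
```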

Rory
