[AOLSERVER] large file uploads


John Buckman

Nov 18, 2009, 12:06:57 AM
to AOLS...@listserv.aol.com
I'm developing a music submission system for Magnatune.com using aolserver and I'm seeing some problems with large file uploads.

Specifically, aolserver periodically crashes due to a malloc error.

It looks like aolserver stores files being uploaded (via form enctype="multipart/form-data") in memory as they arrive. When people upload zip files of WAV-quality CD rips, that's 800 MB of RAM per upload. A few concurrent uploads, and splat.

It looks like Naviserver has changed the aolserver code in Naviserver 4.99.1 to
> - Using temporary files for large uploads when content exceeds pre-configured maxsize parameter
> - New spooler thread for handling large file uploads
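The spooling approach described in those release notes can be sketched in a few lines (an illustrative Python sketch, not server code; all names here are hypothetical): copying the request body in fixed-size chunks keeps peak memory near the chunk size regardless of how large the upload is.

```python
import io
import os
import tempfile

def spool_body(src, chunk_size=64 * 1024):
    """Copy an incoming request body to a temp file in fixed-size
    chunks; peak memory stays ~chunk_size instead of ~body size."""
    tmp = tempfile.NamedTemporaryFile(delete=False)
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        tmp.write(chunk)
    tmp.close()
    return tmp.name

# Demo with an in-memory stand-in for the client socket.
body = os.urandom(1 << 20)        # 1 MB standing in for an 800 MB upload
path = spool_body(io.BytesIO(body))
spooled = open(path, "rb").read()
os.unlink(path)
```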


Does it make sense to port this naviserver code over to aolserver?

Or, is there another (easier) way to handle large file uploads that doesn't use lots of memory?

-john


--
AOLserver - http://www.aolserver.com/

To Remove yourself from this list, simply send an email to <list...@listserv.aol.com> with the
body of "SIGNOFF AOLSERVER" in the email message. You can leave the Subject: field of your email blank.

Hossein Sharifi

Nov 18, 2009, 7:34:22 PM
to AOLS...@listserv.aol.com
Are you using AOLserver 4.5x on a 64-bit platform?  nsd will crash on 32-bit platforms when memory usage (or log files) exceeds 2GB.  I had to upgrade for a similar issue involving memory usage.

Although I think that it would be ideal to port the Naviserver code at some point.

-Hossein

John Buckman

Nov 18, 2009, 9:20:25 PM
to AOLS...@listserv.aol.com
On Nov 18, 2009, at 4:34 PM, Hossein Sharifi wrote:

> Are you using AOLserver 4.5x on a 64-bit platform? nsd will crash on 32-bit platforms when memory usage (or log files) exceed 2GB. I had to upgrade for a similar issue involving memory usage.
>
> Although I think that it would be ideal to port the Naviserver code at some point.

No, I'm on 32-bit Linux. I guess the solution is to upgrade to a 64-bit OS and have far more memory than the maximum file size.

I built naviserver today, with their spooler thread, and tested large file uploads. Naviserver is very memory-efficient while handling the file upload, but it does hold the entire uploaded file in memory as it hands the file off to my form handler. So it has the same peak memory usage as aolserver, though it needs the peak memory for less time. Still not an optimal solution.

Steve Manning

Nov 19, 2009, 3:52:07 AM
to AOLS...@listserv.aol.com

I don't think you can use it for uploads, but Gustaf's Background Delivery thread might help you to serve these files more efficiently. It's discussed here: http://www.openacs.org/xowiki/weblog-portlet?ptag=bgdelivery

    - Steve


Steve Manning
Systems Engineer
Du Maurier Ltd

Tel: +44 (0)116 284 9661
Email: st...@dumaurier.co.uk


Any views expressed in this email and any attachments are the senders own and do not represent the views of Du Maurier Limited. This email and any attachments should only be read by those persons to whom it is addressed. Accordingly, we disclaim all responsibility and accept no liability (including negligence) for the consequences of any person other than the intended recipients acting , or refraining from acting, on such information. If you have received this email in error, please accept our apologies and we simply request that you delete the email and any attachments. Any form of reproduction, dissemination, copying, disclosure, modification, distribution and/or publication of this email is strictly prohibited.

Du Maurier Limited, Tel +44 (0)116 2849661.

Gustaf Neumann

Nov 19, 2009, 5:55:40 AM
to AOLS...@listserv.aol.com
bgdelivery - as implemented - works only for downloads, not for uploads.
Uploads are technically different, since the C-level driver already
handles them. Actually, I tend to believe that naviserver does the
right thing (I have not tried it, since we have been running in 64-bit
mode for a while).

Which naviserver version have you tried? Have you set the driver
parameter maxupload to "0" to enable file spooling? Actually, whenever
the upload size > maxupload, spooling to a file happens in naviserver.
When the spooler finishes, the file is available via [ns_conn contentfile],
which returns the name of the spooled file.

Without looking into the details, I would not be surprised if the
spooling to disk works fine but the multipart processing happens in
memory (what you mentioned as the form handler). You might be able to
use an external MIME decoder such as
http://www.freesoft.org/CIE/FAQ/mimedeco.c
(possibly via nsproxy).

-gustaf neumann


Hossein Sharifi

Nov 19, 2009, 7:13:37 AM
to AOLS...@listserv.aol.com
One possible solution would be to proxy your uploads through nginx (using a recent version).   This will not only give you the ability to poll for real-time upload status (and pass that info to the client using AJAX), but it would also allow you to queue the uploads to the backend AOLserver instance one-at-a-time. 

Tom Jackson

Nov 19, 2009, 11:56:57 AM
to AOLS...@listserv.aol.com
There is a configuration setting which saves posted files to disk.
You need the ns_limits maxupload to be large enough, then maxinput
(not sure which config section) sets an upper limit for in memory
data.

This gets decided in driver.c, around line 1892:
1891     max = connPtr->roff + connPtr->contentLength + 2;
1892     if (max < connPtr->drvPtr->maxinput) {
1893         /*
1894          * Content will fit at end of request buffer.
1895          */
...
1899     } else {
1900         /*
1901          * Content must overflow to a temp file.
1902          */
1903
1904         connPtr->flags |= NS_CONN_FILECONTENT;
1905         connPtr->tfd = Ns_GetTemp();
1906         if (connPtr->tfd < 0) {
1907             return E_FDAGAIN;
1908         }

tom jackson

John Buckman

Nov 20, 2009, 3:21:10 PM
to AOLS...@listserv.aol.com
On Nov 19, 2009, at 8:56 AM, Tom Jackson wrote:

> There is a configuration setting which saves posted files to disk.
> You need the ns_limits maxupload to be large enough, then maxinput
> (not sure which config section) sets an upper limit for in memory
> data.

Yes, the naviserver people talked about that, but the problem is that you then get the raw data, which you still need to do something with, rather than temporary files.

I'm currently trying out naviserver, as they do seem to have solved the large-file-upload problem that aolserver has. It's working for me, and the nsd process size stays at 50 MB even while multiple gigabyte file uploads are being handled.

The main thing that naviserver has, and that would probably port simply to aolserver, is an ns_parseformfile function, which parses the raw upload data in a memory-efficient way. Here is what the code looks like:

set tmpfile [ns_conn contentfile]
ns_parseformfile $tmpfile $form [ns_set iget [ns_conn headers] content-type]
array set formdata [ns_set array $form]

It might be straightforward to copy the "ns_conn contentfile" and "ns_parseformfile" functions to aolserver to solve this problem.
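For comparison, here is a much-simplified sketch of what such a memory-efficient parse does (an illustrative Python sketch, not NaviServer's implementation; happy-path only): scan the spooled body line by line for MIME boundaries, keep plain fields as strings, and stream file parts straight into temp files so no whole part is ever held in memory.

```python
import io
import re
import tempfile

def parse_multipart(fp, boundary):
    """Simplified streaming multipart/form-data scan (assumes CRLF
    line endings and well-formed parts). Plain fields become strings;
    file parts are copied straight to temp files."""
    delim = b"--" + boundary
    closing = delim + b"--"
    form, files = {}, {}

    def is_boundary(l):
        return l.rstrip(b"\r\n") in (delim, closing)

    line = fp.readline()
    while line:
        if not is_boundary(line):
            line = fp.readline()
            continue
        if line.rstrip(b"\r\n") == closing:
            break
        headers = {}
        while True:                       # headers of this part
            h = fp.readline().rstrip(b"\r\n")
            if not h:
                break
            k, _, v = h.partition(b":")
            headers[k.strip().lower()] = v.strip()
        disp = headers.get(b"content-disposition", b"")
        name = re.search(rb'name="([^"]*)"', disp).group(1).decode()
        if b'filename="' in disp:
            # File part: stream lines to a temp file until the next
            # boundary; the CRLF just before it belongs to the delimiter.
            tmp = tempfile.NamedTemporaryFile(delete=False)
            prev = b""
            while True:
                l = fp.readline()
                if not l or is_boundary(l):
                    tmp.write(prev[:-2] if prev.endswith(b"\r\n") else prev)
                    break
                tmp.write(prev)
                prev = l
            tmp.close()
            files[name] = tmp.name
        else:
            val = b""
            while True:                   # ordinary field
                l = fp.readline()
                if not l or is_boundary(l):
                    break
                val += l
            form[name] = (val[:-2] if val.endswith(b"\r\n") else val).decode()
        line = l
    return form, files

# Tiny demo body with boundary "B".
body = (b"--B\r\n"
        b'Content-Disposition: form-data; name="title"\r\n'
        b"\r\n"
        b"My Album\r\n"
        b"--B\r\n"
        b'Content-Disposition: form-data; name="upload"; filename="a.zip"\r\n'
        b"Content-Type: application/zip\r\n"
        b"\r\n"
        b"ZIPDATA\x00\x01\r\n"
        b"--B--\r\n")
form, files = parse_multipart(io.BytesIO(body), b"B")
```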

Tom Jackson

Nov 21, 2009, 10:58:16 PM
to AOLS...@listserv.aol.com
On Fri, Nov 20, 2009 at 12:21 PM, John Buckman <jo...@bookmooch.com> wrote:
> On Nov 19, 2009, at 8:56 AM, Tom Jackson wrote:
>
>> There is a configuration setting which saves posted files to disk.
>> You need the ns_limits maxupload to be large enough, then maxinput
>> (not sure which config section) sets an upper limit for in memory
>> data.
>
> Yes, the naviserver people talked about that, but the problem is that then you get the raw data, that you need to do something with, rather than temporary files.
>

Are you sure? Check out NsConnContent (in driver.c) and Ns_ConnContent
(in conn.c); they abstract the location of the data, on disk or in
memory. This should be transparent to the application. Ns_ConnContent
just exports NsConnContent.

Also note that streaming to disk is done in the driver prior to
selecting a conn thread. The conn thread is where the content handler
is located. I cannot imagine how you can short-circuit content
handling.

> I'm currently trying out naviserver, as they do seem to have solved the large-file-upload problem that aolserver has.  It's working for me, and nsd process size is staying at 50mb even with multiple gig file uploads being handled.
>

Still not sure if you ever tested it on AOLserver 4.5.

If this doesn't work, it would be a real surprise; the only thing in
memory should be the headers and file offsets (see ParseMultiInput).

Are you using [ns_conn files]?

tom jackson

Tom Jackson

Nov 21, 2009, 11:28:55 PM
to AOLS...@listserv.aol.com
John,

Also, if you want to avoid extra temp file(s) on disk, avoid
ns_getform and just use [ns_conn files] directly with [fcopy]. The
example in ns_getform is one way to do it by hand. Here is my updated,
slightly safer ns_getform:

## ns_getform made saf(er)
# An improved version would return the open fp instead of the tmpfile,
# or better use a system call which returns an fp

rename ns_getform ns_getform_unsafe

#
# ns_getform --
#
# Return the connection form, copying multipart form data
# into temp files if necessary.
#

proc ns_getform {{charset ""}} {

    global _ns_form _ns_formfiles

    #
    # If a charset has been specified, use ns_urlcharset to
    # alter the current conn's urlcharset.
    # This can cause cached formsets to get flushed.
    #
    if {$charset != ""} {
        ns_urlcharset $charset
    }

    if {![info exists _ns_form]} {
        set _ns_form [ns_conn form]
        foreach {file} [ns_conn files] {
            set off [ns_conn fileoffset $file]
            set len [ns_conn filelength $file]
            set hdr [ns_conn fileheaders $file]
            set type [ns_set get $hdr content-type]
            set fp ""
            while {$fp == ""} {
                set tmpfile [ns_tmpnam]
                set fp [ns_openexcl $tmpfile]
            }
            fconfigure $fp -translation binary
            ns_conn copy $off $len $fp
            close $fp
            ns_atclose "ns_unlink -nocomplain $tmpfile"
            set _ns_formfiles($file) $tmpfile
            #ns_set put $_ns_form $file.content-type $type
            # NB: Insecure, access via ns_getformfile.
            #ns_set put $_ns_form $file.tmpfile $tmpfile
        }
    }
    return $_ns_form
}

tom jackson


John Buckman

Nov 23, 2009, 9:18:48 PM
to AOLS...@listserv.aol.com
Tom, thanks for the help!

Setting maxinput as per:

> ns_section "ns/server/$server1/module/nssock"
> ns_param maxinput 1024

does indeed avoid memory bloat during the large file upload. I used your safer ns_getform, and it works fine.

However, your ns_getform causes the nsd process to grow to the size of the uploaded file.

I can't figure out what it is in your code that uses lots of memory. Any idea?

-john

John Buckman

Nov 23, 2009, 11:47:27 PM
to AOLS...@listserv.aol.com
Tom, there is some sort of weird interaction effect in aolserver when doing Tcl work while a large upload sits in the temporary file. Some cases cause the nsd process to bloat to the size of the uploaded file. I wasn't able to figure out why, and I wasn't able to fix your ns_getform to avoid the bloat.

However, I was able to copy Vlad's ns_parseformfile proc from naviserver for use on aolserver, and it doesn't bloat.

Here is the code for handling a large file uploaded as a file, rather than in memory:

set type [ns_set iget [ns_conn headers] content-type]
set form [ns_set create]
ns_parseformfile $form $type


array set formdata [ns_set array $form]

puts "final array: [array get formdata]"


proc ns_parseformfile { form contentType } {

    set fp [ns_conn contentchannel]

    if { ![regexp -nocase {boundary=(.*)$} $contentType 1 b] } {
        puts "Warning: no MIME boundary"
        return
    }

    fconfigure $fp -encoding binary -translation binary
    set boundary "--$b"

    while { ![eof $fp] } {
        # skip past the next boundary line
        if { ![string match $boundary* [string trim [gets $fp]]] } {
            continue
        }

        # fetch the disposition line and field name
        set disposition [string trim [gets $fp]]
        if { ![string length $disposition] } {
            break
        }

        set disposition [split $disposition \;]
        set name [string trim [lindex [split [lindex $disposition 1] =] 1] \"]

        # fetch and save any field headers (usually just content-type for files)
        while { ![eof $fp] } {
            set line [string trim [gets $fp]]
            if { ![string length $line] } {
                break
            }
            set header [split $line :]
            set key [string tolower [string trim [lindex $header 0]]]
            set value [string trim [lindex $header 1]]

            ns_set put $form $name.$key $value
        }

        if { [llength $disposition] == 3 } {
            # uploaded file -- save the original filename as the value
            set filename [string trim [lindex [split [lindex $disposition 2] =] 1] \"]
            ns_set put $form $name $filename

            # read lines of data until another boundary is found
            set start [tell $fp]
            set end $start

            while { ![eof $fp] } {
                if { [string match $boundary* [string trim [gets $fp]]] } {
                    break
                }
                set end [tell $fp]
            }
            set length [expr {$end - $start - 2}]

            # create a temp file for the content, which will be deleted
            # when the connection closes. ns_openexcl can fail, hence
            # the retry loop.
            set tmp ""
            while { $tmp == "" } {
                set tmpfile [ns_tmpnam]
                set tmp [ns_openexcl $tmpfile]
            }

            catch {fconfigure $tmp -encoding binary -translation binary}

            if { $length > 0 } {
                seek $fp $start
                ns_cpfp $fp $tmp $length
            }

            close $tmp
            seek $fp $end
            ns_set put $form $name.tmpfile $tmpfile

            if { [ns_conn isconnected] } {
                ns_atclose "ns_unlink -nocomplain $tmpfile"
            }

        } else {
            # ordinary field - read lines until next boundary
            set first 1
            set value ""
            set start [tell $fp]

            while { [gets $fp line] >= 0 } {
                set line [string trimright $line \r]
                if { [string match $boundary* $line] } {
                    break
                }
                if { $first } {
                    set first 0
                } else {
                    append value \n
                }
                append value $line
                set start [tell $fp]
            }
            seek $fp $start
            ns_set put $form $name $value
        }
    }
    close $fp
}

John Buckman

Nov 24, 2009, 5:13:02 PM
to AOLS...@listserv.aol.com
Naviserver has a very nice feature that allows you (via javascript) to show a user the percentage progress of a file upload. I tried porting their progress.c file to aolserver, but it's a significant effort, as it depends on other changes naviserver has made to the aolserver code.

However, I was wondering if there wasn't a fairly simple way to implement something similar on aolserver.

What I'd need is a way to know how much data has been uploaded.

The way naviserver does it is by asking you to POST your upload to a unique URL; it then provides a tcl command that returns how many bytes have been uploaded to that unique URL. Javascript regularly polls an ADP page that uses the tcl progress command to report the number of bytes uploaded.

Is there any access (in C or Tcl) to an upload-in-progress in aolserver?

-john

John Buckman

Nov 24, 2009, 5:44:40 PM
to AOLS...@listserv.aol.com
Tom, I figured out the problem with the "memory bloat" when I used your alternative ns_getform to parse a large file upload temp file.

The problem is that if I call ns_queryget after calling your ns_getform, aolserver re-parses the large file upload, and does it in the old memory-inefficient way.

I'm not sure if there's a way for your version of ns_getform to tell the aolserver internals that the parsing of the form is done, so subsequent ns_queryget calls don't cause a re-parse.

At any rate, there's an easy workaround, which is to read things right out of the ns_set that your ns_getform populates.

-john

Tom Jackson

Nov 24, 2009, 5:04:44 PM
to AOLS...@listserv.aol.com
John,

I would ditch using ns_getform and roll your own instead. What you
seem to have proven is that you can upload large files with AOLserver
4.5 and the overflow goes to disk.

The question remains: how to deal with the data on disk? [ns_conn
files] gives you offsets, how do you copy that data somewhere else
without bloating memory?

Maybe [ns_conn copy] is the problem. If it loads $len bytes into
memory before it writes the file, that will bloat the memory. You might
verify that this isn't a memory leak, but rather a high-water mark on
memory usage. Do two or three identical uploads: does memory usage
continue to increase, or does it stop growing? If it keeps growing,
that could indicate a memory-leak bug.

tom jackson

Agustin Lopez

Nov 25, 2009, 4:37:27 AM
to AOLS...@listserv.aol.com

Hi!

We are using AOLserver 4.5.1 (OpenACS) in a production environment with many file uploads (and downloads),
and the memory usage increases continuously. We begin with nearly 4 GB of RAM, and this grows to
nearly 11 GB of virtual memory, at which point we have to restart the server. When the server is over
6 GB of memory, execs (sendmail, unzip, ...) can no longer be launched: a fork problem. I would like
to find a solution to that memory increase.

We have developed a small ns library (using mkZiplib 1.0), which we have named nsunzip, to avoid
execs of unzip. We plan to add a zip action too. If anybody is interested, I can send it.

Regards,
Agustin


Tom Jackson

Nov 24, 2009, 7:51:07 PM
to AOLS...@listserv.aol.com
John,

I'm just going to venture a guess. I hope that Jim D. or someone else
more familiar with the internals will set me straight.

The problem with upload progress monitoring is that uploads are
finished before a conn thread is allocated.

Uploads are done in the driver thread, or a worker thread before the
conn thread is given control.

One thing which could help is if AOLserver introduced chunked transfer
encoding for uploads. But to report progress back to the user would
require a different setup. You would need to monitor the progress of
some other process/thread. AOLserver has chosen to stratify io. By
stratify I mean the io handling is moved through a pipeline. The
pipeline may transfer handling between threads but also enforces the
direction of communication. There is no asynchronous io, each step in
the pipeline is focused on one-way communication. For instance, the
driver thread only reads, the conn threads only write. There is a kind
of fiction that conn threads do any reading from the client. These
threads only read from buffers or disk. The reason should be obvious:
DoS attacks. The driver thread is highly tuned to protect the
expensive conn threads.

In addition to that issue, the connio uses scatter/gather io, so the
application gets a very imperfect view of the actual progress. Of
course the io is much more efficient and progress reporting would
require an independent observation and messaging system. A first step
would be the ability to log upload progress to error.log. If you can't
do that, reporting back to the client will be impossible (unless you
think the logging system should operate with less available
information than individual conn threads).

There is one possibility. There is a pre-queue filter in AOLserver
(run inside the driver thread). It works from the Tcl level, but
creates a memory leak equal to the size of an interp, in other words a
huge memory leak. However, maybe at the C level, you could create a
handler which would do something interesting before returning control
back to the driver thread and ultimately the conn thread. I'm not sure
exactly when the pre-queue filters are activated, but if it is before
reading the message body, it might be useful.

tom jackson

Gustaf Neumann

Nov 25, 2009, 7:07:41 AM
to AOLS...@listserv.aol.com
Tom Jackson schrieb:

> John,
>
> I'm just going to venture a guess. I hope that Jim D. or someone else
> more familiar with the internals will set me straight.
>
> The problem with upload progress monitoring is that uploads are
> finished before a conn thread is allocated.
>
> Uploads are done in the driver thread, or a worker thread before the
> conn thread is given control.
>
Tom,

the typical mechanism for upload progress bars is that separate (ajax)
queries are used to query the state of a running upload, which is
identified by some unique ID (e.g. X-Progress-ID in
http://wiki.nginx.org/NginxHttpUploadProgressModule,
or by some other heuristic, e.g. based on URL and peer address).
So, one just needs a thread-safe introspection mechanism, which
checks the state of the upload in the spooler (such as "ns_upload_stats"
in naviserver) and returns it. The spooler just has to update
these statistics when it receives data.

-gustaf neumann
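The introspection mechanism described above can be sketched like this (an illustrative Python sketch under stated assumptions, not NaviServer's actual ns_upload_stats API): the spooler updates a per-upload byte count under a lock, and the thread answering the AJAX progress query reads it back by progress ID.

```python
import threading

class UploadStats:
    """Thread-safe registry of upload progress, keyed by a unique
    progress ID (e.g. an X-Progress-ID-style token)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._stats = {}              # progress-id -> (received, total)

    def start(self, pid, total):
        with self._lock:
            self._stats[pid] = (0, total)

    def update(self, pid, nbytes):    # called by the spooler per chunk
        with self._lock:
            got, total = self._stats[pid]
            self._stats[pid] = (got + nbytes, total)

    def query(self, pid):             # called by the progress endpoint
        with self._lock:
            return self._stats.get(pid, (0, 0))

stats = UploadStats()
stats.start("X-123", 800_000_000)
stats.update("X-123", 65536)
stats.update("X-123", 65536)
got, total = stats.query("X-123")
```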

John Buckman

Nov 25, 2009, 12:08:41 PM
to AOLS...@listserv.aol.com
> the typical mechanism for upload progress bars is that separate (ajax) queries
> are used to query the state of a running upload, which is identified by some
> unique ID (e.g. X-Progress-ID in http://wiki.nginx.org/NginxHttpUploadProgressModule,
> or by some other heuristic, e.g. based on URL and peer address).
> So, one just needs a thread-safe introspection mechanism, which
> checks the state of the upload in the spooler (such as "ns_upload_stats"
> in naviserver) and returns it. The spooler just has to update
> these statistics when it receives data.

Exactly - what is needed in aolserver is simply a tcl command for introspection, specifically one reporting how many bytes have been received for a particular upload. That data must be somewhere; what I was wondering is whether anyone had an idea *where*.

-j

John Buckman

Nov 25, 2009, 12:16:23 PM
to AOLS...@listserv.aol.com
> We are using AOL 4.5.1 (OpenACS) in production environment with many file uploads (and downloads)
> and the memory usage increases continously. We begin with near 4 GB RAM and the this increase to
> near 11 GB of virtual memory and we have to restart the server. When the server is over 6 GB of
> memory, the execs (sendmail, unzipos, ...) can not be launched. Fork problem. I will like find any
> solution to that mem increase.
>
> We have developed a small ns library (using mkZiplib1.0) that we have named nsunzip, to avoid the execs
> of unzip. We plan add too zip action. If anybody are interested I can send it.

Agustin, there are a few things you should do which will solve your problem.

First, I recommend having fewer threads running, as that will lower your bloat significantly. I run with 10.

Then, switch to using temporary files (instead of memory), by changing your config to:


> ns_section "ns/server/$server1/module/nssock"
> ns_param maxinput 1024

and at the bottom:
> ns_limits set default -maxupload "2048000000"


that will get rid of the nsd process bloat while the file is uploading. You'll still have bloat when the file is unzipped, which you can solve by removing any mention of ns_queryget when you receive the upload, replacing it with Tom's ns_getform (posted here a few days ago), and then reading the ns_set that proc returns.

After these changes, my nsd process is stable at 16mb of RAM, despite receiving 800mb zip uploads that I unzip inside aolserver.

Hope that helps!

-john


Tom Jackson

Nov 25, 2009, 12:13:16 PM
to AOLS...@listserv.aol.com
Gustaf,

I've seen these working, although I'm never sure where exactly the
magic happens. It looks like the nginx idea is to work as a proxy:

"It works because Nginx acts as an accelerator of an upstream server,
storing uploaded POST content on disk, before transmitting it to the
upstream server. Each individual POST upload request should contain a
progress unique identifier."

I wonder if the progress reported is from you to nginx, or from nginx to the final server.

Tom Jackson

Nov 25, 2009, 12:29:59 PM
to AOLS...@listserv.aol.com
Gustaf,

Oops, accidentally hit send.

I just started work on an event-driven HTTP client (called htclient).
It can monitor downloads just by using a variable trace. I haven't
reversed the idea for uploads yet, but it would be easy. Not so easy
is guessing the length of the encoded file prior to sending. It seems
like a better solution for large file uploads would be to do the
upload as binary data instead of multipart, and get direct local
monitoring plus the ability to cancel and/or restart a failed upload.

For clients who have tcl/tk installed, it would be easy to
auto-generate a tcl script to handle one or several uploads.

tom jackson

Jeff Rogers

Nov 25, 2009, 5:46:06 PM
to AOLS...@listserv.aol.com
It looks like the pre-queue filters are run after the message body has
been read, but before it is passed off to the conn thread, so no help
there. However, it looks like it would not be hard to add a new
callback in the middle of the read loop, though it's debatable whether
that's a good idea (for one, it would get called a *lot*).

Curious about that tcl prequeue leak. I guess no one uses or cares
about it, since the symptom is serious (more than just a really big
memory leak, it crashes the server too), the cause is pretty obvious, and
the fix appears on the surface to be pretty obvious, although it does
result in prequeue filters working differently from all the others, in
particular that they would use a different interp from the rest of the
request.

-J

Tom Jackson wrote:

> There is one possibility. There is a pre-queue filter in AOLserver
> (run inside the driver thread). It works from the Tcl level, but
> creates a memory leak equal to the size of an interp, in other words a
> huge memory leak. However, maybe at the C level, you could create a
> handler which would do something interesting before returning control
> back to the driver thread and ultimately the conn thread. I'm not sure
> exactly when the pre-queue filters are activated, but if it is before
> reading the message body, it might be useful.

Tom Jackson

Nov 24, 2009, 8:06:35 PM
to AOLS...@listserv.aol.com
John,

I think your analysis is correct. You have to avoid certain APIs or
you end up doing extra work, and maybe taking up extra space. Usually
this is unimportant, but in your case it isn't.

Anyway, sounds like you are making progress. Please let me know if I
have led you down the wrong path. This stuff isn't really documented
anywhere.

tom jackson

John Buckman

Nov 29, 2009, 1:21:56 AM
to AOLS...@listserv.aol.com
> I think your analysis is correct. You have to avoid certain API or
> you do extra work, and maybe take up extra space. Usually this is
> unimportant, but in your case it isn't.
>
> Anyway, sounds like you are making progress. Please let me know if I
> have led you down the wrong path. This stuff isn't really documented
> anywhere.

Your code works great and you didn't lead me down any wrong paths.

As long as no calls to ns_queryget are made when a large file upload is received, your form parser grabs the files out of the form in a memory-efficient way. I'm just about to put it into widespread production, but it's been working perfectly so far on my production Linux server and OS X development machines.

-john

Jim Davidson

Dec 1, 2009, 11:08:38 AM
to AOLS...@listserv.aol.com
Howdy,

Looking back at the code and trying to remember what I was thinking at the time, I ran across the header comment to "NsConnContent" which mentions the possibility of a mapping failure (see below). This reminded me of what was going on...

Originally there was no "spool to file" option -- everything was just read into big heap-based buffers, which was goofy, needing potentially lots of fragmented heap and/or being subject to denial-of-service attacks (i.e., lots of requests to upload big things overloading memory).

The "spool to file" thing came later (can't remember when), using a nifty open-temp-file trick which avoids the comparatively high cost of actually creating a file on each connection, instead simply re-using an open fd and truncating it after each request. There are a few side effects of this:

1. The open file itself is not visible on disk -- it's deleted right after open on Unix (there's some weird "secure" option on Windows which can't delete an open file). This is a security benefit but it means you can't simply "stat" the underlying file from another thread/connection to monitor progress. And, you can't simply rename the file to some permanent location -- you have to actually copy the content to a new file if that's your intention (in practice, you save a portion of the upload content in the multipart stuff anyway).

2. The original code assumed everything was in memory as a string, so, like most AOLserver improvements, this presented a backwards-compatibility problem. The compromise was that if you called the original code (e.g., with "ns_conn form"), the new code mapped the file to make it look like it was in memory. This was an improvement, since after the connection the mapping could go away, reclaiming the memory and avoiding fragmentation, but in practice you could still get enough simultaneous big uploads to run out of virtual memory. You could then switch to 64-bit or avoid those calls (in your code and possibly other filter code) using the clever incremental Tcl read/parse code in ns_parsefromfile mentioned below.

3. The file upload is pre-connection, so it's being spooled/copied in the single "driver" thread ("driver" is a vestigial name which no longer makes sense). This is efficient enough but means something needs to be done to communicate progress to another thread if you want background status updates. A quick look at SockReadContent in driver.c shows you may just want the difference between connPtr->contentLength (to upload) and connPtr->avail (already copied). But those things aren't updated under a mutex lock -- you could do a dirty read (probably safe enough) in another thread, but if you're digging around in that code you may as well add a structure to maintain progress specifically and a quick API to fetch it. Lock contention/overhead would be insignificant. You'd have one more problem of identifying the connection of interest and pondering security issues, if any, with that approach. Plus you could have the problem of a rotor of machines where the upload is on one machine and the check is on another, so alternatively you could add some "progress callback" API instead that could be arbitrarily clever, e.g., sending its update progress at certain points to some other shared store (hmm... that would be a pretty cool feature).


-Jim


/*
*----------------------------------------------------------------------
*
* NsConnContent --
*
* Return the connection content buffer, mapping the temp file
* if necessary.
*
* NB: It may make better sense to update the various content
* reading/parsing code to handle true incremental reads from
* an open file instead of risking a potential mapping failure.
* The current approach keeps the code simple and flexible.
*
* Results:
* Pointer to start of content or NULL if mapping failed.
*
* Side effects:
* If nextPtr and/or availPtr are not NULL, they are updated
* with the next byte to read and remaining content available.
*
*----------------------------------------------------------------------
*/

Jim Davidson

unread,
Dec 1, 2009, 11:48:35 AM12/1/09
to AOLS...@listserv.aol.com
Right -- the pre-queue thing operates within the driver thread only, after all content is read, before it's dispatched to a connection. The idea is that you may want to use the API to fetch using event-style network I/O some other bit of context to attach to the connection using the "connection specific storage" API. So, it won't fire during the read.

However, a progress callback API could be good -- wouldn't be called unless someone wrote something special and careful enough to not bog things down. Maybe an "onstart", "onend", and "on %" and/or "on #" callback which could do something like dump the progress in some shared (process or machine wide) thinger for other threads to check. I like the idea...


Oh, and the pre-queue leak sounds like a bug -- should just be re-using the same Tcl interp for all callbacks.

-Jim

Jeff Rogers

unread,
Dec 1, 2009, 4:45:11 PM12/1/09
to AOLS...@listserv.aol.com
Jim Davidson wrote:
> Right -- the pre-queue thing operates within the driver thread only,
> after all content is read, before it's dispatched to a connection.
> The idea is that you may want to use the API to fetch using
> event-style network I/O some other bit of context to attach to the
> connection using the "connection specific storage" API. So, it won't
> fire during the read.

Can you share any specific examples of how it has been used? It's
always struck me as an unfinished (or very specific-purpose) API since
it's undocumented and it seems that doing anything nontrivial is liable
to gum up the whole works since the driver is just a single thread.

> However, a progress callback API could be good -- wouldn't be called
> unless someone wrote something special and careful enough to not bog
> things down. Maybe an "onstart", "onend", and "on %" and/or "on #"
> callback which could do something like dump the progress in some
> shared (process or machine wide) thinger for other threads to check.
> I like the idea...
>
> Oh, and the pre-queue leak sounds like a bug -- should just be
> re-using the same Tcl interp for all callbacks.

In the case of a tcl filter proc, ProcFilter gets the server/thread
specific interpreter for the connection and expects NsFreeConnInterp to
be called later, but in the case of pre-queue filters NsFreeConnInterp
is never called in the driver thread so it allocates (PopInterp) a new
interp every time. Adding in a call to NsFreeConnInterp after the
prequeue filters looks like it fixes the problem. If a filter proc is
added into SockRead the same thing would need to happen (potentially in
the reader thread instead of the driver thread).

One thing I am confused about tho, is why without calling
NsFreeConnInterp in the driver thread it just leaks the interps rather
than crashing when it tries to use the interp in the conn thread, since
it looks like a new conn interp wouldn't get allocated in that case.

I also don't understand why there can be multiple interps per
server+thread combo in the first place (PopInterp/PushInterp); I'd
expect that only one conn can be in a thread at a time and that it
always releases the interp when it leaves the thread.

-J

Jim Davidson

unread,
Dec 1, 2009, 6:28:03 PM12/1/09
to AOLS...@listserv.aol.com
On Dec 1, 2009, at 4:45 PM, Jeff Rogers wrote:

> Jim Davidson wrote:
>> Right -- the pre-queue thing operates within the driver thread only,
>> after all content is read, before it's dispatched to a connection.
>> The idea is that you may want to use the API to fetch using
>> event-style network I/O some other bit of context to attach to the
>> connection using the "connection specific storage" API. So, it won't
>> fire during the read.
>
> Can you share any specific examples of how it has been used? It's always struck me as an unfinished (or very specific-purpose) API since it's undocumented and it seems that doing anything nontrivial is liable to gum up the whole works since the driver is just a single thread.

Nope, no examples I'm aware of. You're right it's unfinished, mostly because it's missing a curl-like simple HTTP interface on top and a Tcl interface to configure it to do the common thing of fetching something over the net via REST or SOAP and shoving the result into CLS for later use. And, there's no documentation so how would anyone know what or why it would be useful? Also, it's a bit "advanced" as it were for just the reason you mentioned -- if you mucked up your callback, you would stall all accept and read-ahead.

Frankly, I always thought it was a cool idea but could never get anyone else interested. In practice folks had various "search database here..." or "fetch xml there..." type stuff riddled throughout their existing ADP scripts and it wasn't strictly necessary or a priority to re-factor that stuff to move it all to the pre-queue interface. My goal of getting all the "context" into the connection before processing began, eliminating any potential long-waiting stalls in the connection threads was never something that got others super excited. Even so, the concept is still cool.


>
>> However, a progress callback API could be good -- wouldn't be called
>> unless someone wrote something special and careful enough to not bog
>> things down. Maybe an "onstart", "onend", and "on %" and/or "on #"
>> callback which could do something like dump the progress in some
>> shared (process or machine wide) thinger for other threads to check.
>> I like the idea...
>> Oh, and the pre-queue leak sounds like a bug -- should just be
>> re-using the same Tcl interp for all callbacks.
>
> In the case of a tcl filter proc, ProcFilter gets the server/thread specific interpreter for the connection and expects NsFreeConnInterp to be called later, but in the case of pre-queue filters NsFreeConnInterp is never called in the driver thread so it allocates (PopInterp) a new interp every time. Adding in a call to NsFreeConnInterp after the prequeue filters looks like it fixes the problem. If a filter proc is added into SockRead the same thing would need to happen (potentially in the reader thread instead of the driver thread).


Ah ... this does sound like a bug. Since the interface is undocumented and not used, I don't think it would hurt to add a call to NsFreeConnInterp as a special case of a pre-queue filter. NsFreeConnInterp does call a "free conn" Tcl trace which assumes end of connection but I guess that's ok to call pre-queue (if there has ever been one registered).

>
> One thing I am confused about tho, is why without calling NsFreeConnInterp in the driver thread it just leaks the interps rather than crashing when it tries to use the interp in the conn thread, since it looks like a new conn interp wouldn't get allocated in that case.


Hmm... appears it's working when it shouldn't. Interps are supposed to be per-thread, but maybe in this case the interp is floating from one thread to the next and still working, avoiding things that wouldn't work. Anyway, calling NsFreeConnInterp should clear this out.

>
> I also don't understand why there can be multiple interps per server+thread combo in the first place (PopInterp/PushInterp); I'd expect that only one conn can be in a thread at a time and that it always releases the interp when it leaves the thread.

I think there was an edge case that led to a need for a cache of possible interps instead of just holding one. In practice, it's always just one unless someone writes some weird code to do a pop/push directly to have some alternate interp for a special purpose.


-Jim

Tom Jackson

unread,
Dec 1, 2009, 6:29:08 PM12/1/09
to AOLS...@listserv.aol.com
Jeff,

Interps are confined to a specific thread. You can transfer the sock
around, but not the interp. But the big reason for different interps
is that they are or can be specialized. The prequeue interp could be
very simple. Conn interps tend to be big and expensive so you don't
want to use them any more than you have to. This is why we have a
driver thread handling all i/o prior to queuing the conn. The driver
thread is the upload equivalent of the download helper. (plus a bunch
of other stuff).

As an example of an additional prequeue filter, maybe the new http
client (written in C) using the ?ns_task? API. Basically seems like
you could dole out a task and return once the task (getting something
via event driven http) is complete.

Maybe a good use of the prequeue filter would be to actually return
this download progress information. Since it is a filter, it would
fire on a particular url. Maybe you could do a quick return and abort
the connection, never using a conn thread. (Note this is a separate
http request from the upload request.) It should be very fast if you
never fire up an interp.

tom jackson

Tom Jackson

unread,
Dec 1, 2009, 5:31:51 PM12/1/09
to AOLS...@listserv.aol.com
When I tested using the prequeue filter, it didn't crash the server.
The server just ran out of physical memory, which might be even worse.
But it just happened because I was doing load testing. I wanted to try
logging incoming connections before they got dumped to conn threads.
It worked, for a while.

Anyway, the prequeue filter must use a separate interp, since it is in
a different thread and conn threads are expensive. I was thinking the
prequeue filter was more likely to work with a filter written in C, so
I lost interest in it.

One possibly easy modification may entail a slightly modified ns_sock
module. All reading/writing goes through that point, and you could
fairly easily copy and modify the current module.

How to connect it to a monitoring page:

Before the content is read, all headers have been processed, so you
could use a cookie, probably necessary for sites which allow POSTed
data.

Assuming the cookie is opaque and not easily guessed unless you have
access to the packets, you could generate a monitor page using an
nsv_array/arrays. Each connection also gets a new connid, so you can
tie together connid and cookie.

One problem is if the upload is ever "upgraded" to HTTP/1.1, which
allows chunked transfer (why?). You could still track a running total,
but you have no idea of the expected size. The other problem is that
each query requires a connection thread to process.

Why not just create a client side agent which can count output bytes?

tom jackson

russell muetzelfeldt

unread,
Dec 1, 2009, 7:16:44 PM12/1/09
to AOLS...@listserv.aol.com
On 02/12/2009, at 9:31 AM, Tom Jackson wrote:

> One problem is if the upload is ever "upgraded" to HTTP/1.1, which
> allows chunked transfer (why?). You could still track a total, but you
> have no idea the expected size.

even if the server is only capable of saying "I've received X bytes", that can still be presented as meaningful feedback to the user instead of the negligible feedback ("page loading"?) that a plain HTTP POST gives in the browser...


> Why not just create a client side agent which can count output bytes?

because it requires a client-side agent of some sort in addition to the browser, and because it fails in the face of a local cache. when copying a file to a remote server over WebDAV with a local squid cache configured, the copy progress shows bytes written to the cache, since that's all the client knows about. if the cache is large and on a fast local network, we see the "progress" get up to 100% as fast as the client can write to the cache and then sit at "finishing" for as long as it takes the cache to transmit the file up over a slow DSL connection to the remote WebDAV server.

the only way to get a reliable upload progress is to ask the server how much data it's received.

Tom Jackson

unread,
Dec 1, 2009, 7:06:04 PM12/1/09
to AOLS...@listserv.aol.com
On Tue, Dec 1, 2009 at 3:28 PM, Jim Davidson <jgdav...@mac.com> wrote:
> On Dec 1, 2009, at 4:45 PM, Jeff Rogers wrote:
>>
>> I also don't understand why there can be multiple interps per server+thread combo in the first place (PopInterp/PushInterp); I'd expect that only one conn can be in a thread at a time and that it always releases the interp when it leaves the thread.
>
>
>
> I think there was an edge case that led to a need for a cache of possible interps instead of just holding one.  In practice, it's always just one unless someone writes some weird code to do a pop/push directly to have some alternate interp for a special purpose.

One thing which causes multiple interps per thread is the "default" threadpool.

If you have a default threadpool the threads are shared across
servers. So, as each server uses a thread, an interp for that server
is created.

My advice is to always register a default threadpool for each server
so that the process wide default threadpool is never used.

I'm not sure if there is any security problem, but a shared threadpool
makes each thread more expensive, and I doubt it saves resources.

tom jackson

Jim Davidson

unread,
Dec 7, 2009, 11:21:01 PM12/7/09
to AOLS...@listserv.aol.com
Hi,

I just checked in some changes to hopefully fix the pre-queue interp leak muck (and other bugs). I also added read and write filter callbacks -- the read callbacks can be used to report file upload progress somewhere. And, I added new ns_cls and ns_quewait commands to work with the curious Ns_QueueWait interface. Some man pages were updated to reflect the changes including an example in ns_quewait.n. There's not yet an HTTP protocol interface on top of ns_quewait but it could be added, letting you do some REST things, for example, in an event-driven manner before your connection gets running.

-Jim


On Dec 1, 2009, at 4:45 PM, Jeff Rogers wrote:

Tom Jackson

unread,
Dec 9, 2009, 7:06:39 PM12/9/09
to AOLS...@listserv.aol.com
Jim,

Looks like a lot of really cool stuff.

One question about the ns_quewait: the only events are NS_SOCK_READ
and NS_SOCK_WRITE, which matches up with the tcl level, and also match
up with Ns_QueueWait. Are the other possible file events handled
somewhere else? Note that tcl includes exceptional conditions into
read/write, which seems not ideal, but this interface seems to ignore
all other conditions.

If the idea is that the connection will fail by default, causing a
connection abort, then this is a great design.

Anyway, for instance, the WaitCallback function distinguishes
READ/WRITE/DROP, but the registration proc NsTclQueWaitObjCmd only
handles "readable,writable".

One huge advantage of the AOLserver "fileevent" interface (over the
tcl interface) is that the event type is more clearly defined. This
makes callback/handlers a little simpler. My only worry in this
particular case is that a connection will get stuck with an unhandled
event. We currently define:

#define NS_SOCK_READ 0x01
#define NS_SOCK_WRITE 0x02
#define NS_SOCK_EXCEPTION 0x04
#define NS_SOCK_EXIT 0x08
#define NS_SOCK_DROP 0x10
#define NS_SOCK_CANCEL 0x20
#define NS_SOCK_TIMEOUT 0x40
#define NS_SOCK_INIT 0x80
#define NS_SOCK_ANY (NS_SOCK_READ|NS_SOCK_WRITE|NS_SOCK_EXCEPTION)

Another question about use of interps.

Interps are bound to threads, so they don't move around or follow a connection.

The new filter points may create an interp. I'm not sure which thread
creates the interp. The prequeue filter runs after all content is
uploaded. In the prequeue filter you register a read/write filter
(opening a socket). This is quite new, something like a recursive
filter. (Or do these filters fire for I/O on the main conn?)

Are these interps created and destroyed for each connection, or can
they be shared?

It seems that there a lot of interesting possibilities with this new
code. It is actually difficult to compare with tcl's [fileevent]
interface because this appears much more powerful. For instance, it
seems very likely that you could turn AOLserver into a proxy server
without ever invoking connection threads, everything would be done in
high-speed C based event I/O, but the transfer would still have access
to a tcl interp.

My last question is about the initialization of the interp. One driver
thread could service multiple virtual servers. When an interp is
created for use, is there any choice? My understanding of the
conn-thread pools is that they partition interps into somewhat similar
groups. For instance, thread pools which handle static files would
tend not to grow in size over time. Threads which handle adp or tcl
files could be expected to grow as they serve unrelated dynamic
content.

tom jackson

Dossy Shiobara

unread,
Jan 17, 2010, 7:08:51 PM1/17/10
to AOLS...@listserv.aol.com
On 11/24/09 5:13 PM, John Buckman wrote:
> Is there any access (in C or Tcl) to an upload-in-progress in aolserver?

It'd be nice if we extended ns_info with [ns_info driver ...] that could
give you connection-level info from the driver thread. In its simplest
form, all we need is to expose the total bytes read/written on a socket
from the driver thread. Bytes read of the POST request's body and the
anticipated Content-Length enables us to compute a rough "progress" -
using the unique URL bit gives us an opaque handle to identify which
connection we're interested in.

Fortunately for me, I haven't built any applications where large file
upload handling has been a requirement. ;-)


--
Dossy Shiobara | do...@panoptic.com | http://dossy.org/
Panoptic Computer Network | http://panoptic.com/
"He realized the fastest way to change is to laugh at your own
folly -- then you can let go and quickly move on." (p. 70)

John Buckman

unread,
Jan 18, 2010, 5:39:39 AM1/18/10
to AOLS...@listserv.aol.com
> On 11/24/09 5:13 PM, John Buckman wrote:
>> Is there any access (in C or Tcl) to an upload-in-progress in aolserver?
>
> It'd be nice if we extended ns_info with [ns_info driver ...] that could
> give you connection-level info. from the driver thread. In its simplest
> form, all we need is to expose the total bytes read/written on a socket
> from the driver thread. Bytes read of the POST request's body and the
> anticipated Content-Length enables us to compute a rough "progress" -
> using the unique URL bit gives us an opaque handle to identify which
> connection we're interested in.

I've learned a few things by deploying a large-file-upload feature on aolserver:

1) IE times out on large file uploads over DSL, as do Chrome and Safari. Only Firefox seems to have a long enough timeout to enable 600mb file uploads over DSL.

2) All the other file upload sites use a client-side widget to upload a file in parts, not using the browser's upload feature at all. Then, they have a thin server-side program which accepts small chunks of the file upload at a time. Once the widget decides the entire file has been sent, it submits to a new web page, which then collects all the small file chunks.

So... instead of working on an upload-in-progress feature, it would make sense instead to have a client-side widget (javascript/flash/java) that sends file upload chunks to a server-side tcl script, and then have a "harvester" tcl script once the widget says the file upload is complete.

-john

Jim Davidson

unread,
Jan 18, 2010, 3:51:50 PM1/18/10
to AOLS...@listserv.aol.com
Hi,

I think we were talking about this about a month ago. I updated the source to enable upload-progress checking with a combination of ns_register_filter and nsv -- there's an example at the latest ns_register_filter man page (pasted below). This may work for you although it would require compiling from latest sources. It assumes you have some javascript thinger that makes repeated calls to check the status of the upload in progress on another thread.

-Jim

EXAMPLE
The following example uses a read filter to update status of a large HTTP POST to the
/upload/key url where key is some client-specified unique value. While the upload is in
progress, it can be monitored with repeated GET requests to the /status/key url with the same
key:

#
# Register procs to receive uploads and check status
# maintained in an nsv array.
#

ns_register_proc POST /upload upload.post
ns_register_proc GET /status upload.status

proc upload.status {} {
    set key [ns_conn urlv 1]
    if {[catch {set status [nsv_get status $key]}]} {
        set status "unknown"
    }
    ns_return 200 text/plain $status
}

proc upload.post {} {
    set key [ns_conn urlv 1]
    nsv_unset status $key
    # ... do something with content ...
    ns_return 200 text/plain received
}

#
# Register a read filter to update status
#

ns_register_filter read POST /upload/* upload.update

proc upload.update {why} {
    set key [ns_conn urlv 1]
    set expected [ns_conn contentlength]
    set received [ns_conn contentavail]
    set status [list $expected $received]
    nsv_set status $key $status
    return filter_ok
}
Tom Jackson

unread,
Jan 19, 2010, 4:16:51 PM1/19/10
to AOLS...@listserv.aol.com
This method could also have the advantage of recovery in case of a
failed upload. A client would look much like a udp application which
tracks packets at the application level.

Once the server side API is set, the client could be javascript, java,
flash or tcl.

The client-side solution also has the advantage of being tailored to
the website, and you could use more bandwidth-efficient compression
and/or binary transfer instead of encoded transfer.

But if you want to provide upload progress to your customers, it seems
counterproductive to create a client which queries the progress via a
separate url. That just greatly multiplies the number of requests the
server must handle.

tom jackson

On Mon, Jan 18, 2010 at 2:39 AM, John Buckman <jo...@bookmooch.com> wrote:
>> On 11/24/09 5:13 PM, John Buckman wrote:
>>> Is there any access (in C or Tcl) to an upload-in-progress in aolserver?
>>
>> It'd be nice if we extended ns_info with [ns_info driver ...] that could
>> give you connection-level info. from the driver thread.  In its simplest
>> form, all we need is to expose the total bytes read/written on a socket
>> from the driver thread.  Bytes read of the POST request's body and the
>> anticipated Content-Length enables us to compute a rough "progress" -
>> using the unique URL bit gives us an opaque handle to identify which
>> connection we're interested in.
>
> I've learned a few things by deploying a large-file-upload feature on aolserver:
>
> 1) IE times out on large file uploads over DSL, as does Chrome and Safari.  Only Firefox seems to have a long enough timeout to enable 600mb file uploads over DSL.
>
> 2) All the other file upload sites use a client-side widget to upload a file in parts, not using the browser's upload feature at all.  Then, they have a thin server-side program which accepts small chunks of the file upload at a time. Once the widget decides the entire file has been sent, it submits to a new web page, which then collects all the small file chunks.
>
> So... instead of working on an upload-in-progress feature, it would make sense instead to have a client-side widget (javascript/flash/java) that sends file upload chunks to a server-side tcl script, and then have a "harvester" tcl script once the widget says the file upload is complete.

Jeff Rogers

unread,
Jan 22, 2010, 6:03:59 PM1/22/10
to AOLS...@listserv.aol.com
The YUI upload control looks like a good place to start for the flash
client-upload feature. I haven't looked into it too deeply tho, so I
don't know what the server side looks like.

YUI Uploader widget: http://developer.yahoo.com/yui/uploader/

Other than that, I was pondering the plain upload issue. Since
IE/Chrome/Safari are timing out on the upload, I wonder if the
connection could be kept alive by sending something - anything - back to
the client while it is still uploading. This might be doable with Jim's
new "read" filter. Of course, the browsers might respond to data by
closing their connection, stopping sending, or crashing (you never
know with IE). And then even if it works, you still need the tcp
connection to stay up uninterrupted for however long the upload takes,
which can be iffy in the world of flaky wireless connections and ISPs.

-J

Tom Jackson

unread,
Jan 21, 2010, 9:19:38 PM1/21/10
to AOLS...@listserv.aol.com
I don't have any problem with this solution. It is superior to using a
forward proxy which uploads the entire file then reports progress to
the final server (this was the original model proposed in this thread,
by example).

In fact, I pointed out that the server thread is a proxy, handling
upload prior to allocating a conn thread. If you could peek into this
process, you get feedback.

But this is extremely inefficient. All of these solutions require a
specialized client, even if it seems somewhat transparent to the end
user. So I have pointed out several times that the best solution is a
client-side solution which tracks upload progress by knowing the total
file size and the amount of bytes sent.

This really only makes sense when the client is somewhat smaller than
the typical upload. Also, downloads are usually faster than uploads,
so the specialized client looks more attractive. The only critical
factor is ease of installation of the client.

Given the size of a tcl client, about 2 meg, any website with a
typical upload of 3+ megs would benefit from an easy to install and
use specialized client. My guess is that the javascript, flash and
java clients could be smaller, but would vary more than a simple tcl
client, which would work unchanged at the script level.

tom jackson

On Mon, Jan 18, 2010 at 12:51 PM, Jim Davidson <jgdav...@mac.com> wrote:
> Hi,
>

> I think we were talking about this about a month ago. I updated the source to enable upload-progress checking with a combination of ns_register_filter and nsv -- there's an example at the latest ns_register_filter man page (pasted below). This may work for you although it would require compiling from latest sources. It assumes you have some javascript thinger that makes repeated calls to check the status of the upload in progress on another thread.
>
> -Jim

Tom Jackson

unread,
Jan 23, 2010, 11:59:21 AM1/23/10
to AOLS...@listserv.aol.com
On Fri, Jan 22, 2010 at 3:03 PM, Jeff Rogers <dv...@diphi.com> wrote:
> The YUI upload control looks like a good place to start for the flash
> client-upload feature.  I haven't looked into it too deeply tho, so I don't
> know what the server side looks like.
>
> YUI Uploader widget: http://developer.yahoo.com/yui/uploader/
>
> Other than that, I was pondering the plain upload issue.  Since
> IE/Chrome/Safari are timing out on the upload, I wonder if the connection
> could be kept alive by sending something - anything - back to the client
> while it is still uploading.

This is just caused by a brain-damaged application. TCP/IP handles
connection timeouts all by itself. As long as packets are being sent
and acknowledged, the application should not care. But very likely
what is happening is that you have a blocking worker thread which is
being controlled by another thread using a simple timeout, without
monitoring progress. Anyone who has noticed their browser freeze while
loading google analytics, or some other ad iframe, has experienced
this poor event programming model.

Either Firefox avoids this with active monitoring, or it doesn't use a
timeout at the application level, or the timeout is very large.

> This might be doable with Jim's new "read"
> filter.  Of course, the browsers might respond to data by closing their
> connection or stopping sending, or crashing (you never know with IE).  And
> then even if it works, you have the problem of not having the tcp connection
> interrupted for however long it takes, which can be iffy in the world of
> flaky wireless connections and ISPs.

Until the entire POST is complete, you have no method of communicating
back to the client; this is the ultimate cause of no progress being
reported. To stay within the HTTP protocol, you would have to send
multiple smaller chunks and wait for the server to acknowledge, at the
application level, that it has received the data. Also, chunked
transfer encoding doesn't really help here, since proxies are
sometimes required to remove this encoding, cache the entire body, and
maybe retransmit it in chunks.

tom jackson

Jim Davidson

unread,
Jan 18, 2010, 4:18:05 PM1/18/10
to AOLS...@listserv.aol.com


Ah -- old message I didn't see at first.... replies in-line below....


Yup -- at the C level it's read and/or write and the Tcl level read or write.  But, the code could be hacked to handle more conditions, e.g., "priority" data (although I'm not sure what that is) or specific checks for dropped connections (apparently the Ns_Task interface silently sets POLLIN if it sees a POLLHUP but that's not the case elsewhere in the code).    Being consistent would be smart although perhaps we may be inviting new bugs in weird ways.



Another question about use of interps.

Interps are bound to threads, so they don't move around or follow a connection.

The new filter points may create an interp. I'm not sure which thread
creates the interp. The prequeue filter runs after all content is
uploaded.  In the prequeue filter you register a read/write filter
(opening a socket). This is quite new, something like a recursive
filter. (Or do these filters fire for I/O on the main conn?)

Are these interps created and destroyed for each connection, or can
they be shared?



Nope -- the interps are allocated/deallocated as needed just like ordinary connection interps.  But, since the connection will shuffle from one thread to the next, interps used by the connection (if any) go through the "garbage collection" phase as needed, e.g., closing all open files and clearing global vars.  This is why the "ns_cls" interface would be needed to stash per-connection context between threads.

And, this means you could have 3 interp/threads involved:

-- read callbacks in a "reader" thread (if configured, optional for ordinary sockets, required for ssl)
-- pre-queue callbacks in the "driver" thread
-- normal execution in the connection thread.

The code I checked in a month ago tried to deal with all that stuff and avoid the leak that was reported by not doing it properly.  Digging in, it was clear the interface wasn't complete -- hopefully it's complete and robust now and the manpages are close to accurate.




It seems that there are a lot of interesting possibilities with this new
code. It is actually difficult to compare with tcl's [fileevent]
interface because this appears much more powerful. For instance, it
seems very likely that you could turn AOLserver into a proxy server
without ever invoking connection threads; everything would be done in
high-speed C based event I/O, but the transfer would still have access
to a tcl interp.


Yup -- the interface is a bit arcane but it can do interesting things like that.



My last question is the initialization of the interp. One driver
thread could service multiple virtual servers. When an interp is
created for use is there any choice? My understanding of the
conn-thread pools is that they partition interps into somewhat similar
groups. For instance, thread pools which handle static files would
tend to not grow in size over time. Threads which handle adp or tcl
files could be expected to grow as they serve unrelated dynamic
content.


The interps are allocated from per-server caches just like in a connection thread, so the state should look like you expect (although, as mentioned, global vars will "disappear" between the reader/pre-queue interps and the connection interps).  As the driver thread never exits, misused interps in this case would lead to memory leaks/bloat that may be possible to mitigate in connection threads via the "die after so many connections..." options.  I suppose we could add a "die after so many uses..." config to get the same result in the driver thread -- that's a good idea; a call to Ns_TclMarkForDelete should do the trick after some counter...


BTW: I'm learning more and more about the whole LAMP stack (like most of us, I suspect).  While PHP is quite comfortable, the gymnastics AOLserver goes through to "warm-up" and "re-use" interps is absent in the LAMP world. There are "APC" caches which store and re-use the bytecodes (similar to ADP's caches) but none of the complexity around registering at-init routines, garbage collection, etc.  While all that stuff was a bit messy and confusing on AOLserver, it worked -- performance of interesting LAMP apps like Drupal seem to suffer for lack of such lower level design principles we had in AOLserver.  Interesting to see how this has evolved over 15 years (some of the first code for multithreaded AOLserver appeared in early 1995 -- I had more hair then).

-Jim

Jim Davidson

unread,
Jan 22, 2010, 5:38:09 PM1/22/10
to AOLS...@listserv.aol.com
The method of checking progress on a separate URL, similar to the
example I sent, does result in repeated requests during upload. But
they're trivial by comparison - easily in the 100's of req/sec range
for response time and throughput. A bit goofy, but over a single
keep-alive socket, for upload operations (likely rare relative to
download, e.g. what you'd expect at YouTube), probably ok.

Jim

Sent from a phone
