Sausalito timeout processing


Gunther Rademacher

Mar 23, 2012, 9:45:44 AM
to Sausalito Users
Hi,

a Sausalito application deployed to portal.28msec.com is apparently
limited to 59 seconds of server time per request. To demonstrate, I
have created timeout.my28msec.com, which contains an (intentionally
inefficient) calculation of exponentiation. When I ask it to calculate
2^25, the Squid proxy that I am behind reports, after 59 seconds, an
HTTP 502 response with an error document that says

The following error was encountered while trying to retrieve the
URL: http://timeout.my28msec.com/default/index
Zero Sized Reply
Squid did not receive any data for this request.

Without a proxy, I have seen the Firefox web console indicating a
status of "undefined".

Is this the intended behavior? Is there any way to involve the error
module of the Sausalito app in case of a timeout? If not, what would
you recommend doing on the client side to handle this?
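For reference, the client-side handling I have in mind would look something like this (a Python sketch of my own heuristic, not anything Sausalito provides; `classify_reply` is a made-up helper):

```python
def classify_reply(status, body):
    """Heuristically decide whether a reply looks like a
    load-balancer/proxy timeout rather than an application response.

    status -- HTTP status code, or None if the connection dropped
              without a status line (Squid's "Zero Sized Reply")
    body   -- response body bytes (may be empty)
    """
    if status is None or (status in (502, 504) and not body):
        # No application data came back: most likely the 59 s
        # limit was hit; the client could retry or split the work.
        return "suspected-timeout"
    if status >= 400:
        # The app itself replied with an error document, so its
        # error module did run.
        return "application-error"
    return "ok"
```

The idea is simply to treat an empty 502/504 or a dropped connection differently from a genuine error page produced by the application.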

Sorry for doing lengthy calculations, but this is what happens when I
ask for the XQuery Update 3.0 railroad diagram. By the way, that
calculation terminates in 17 seconds on my local machine when I use
zorba_sausastore.exe from the Sausalito 1.4.4 coresdk.

Thanks
Gunther

Till Westmann

Mar 23, 2012, 5:41:25 PM
to sausali...@googlegroups.com
Hi Gunther,

the 59 second time limit is the idle-connection timeout of Amazon's Elastic Load Balancer (cf. https://forums.aws.amazon.com/thread.jspa?threadID=33427).
So this works as designed, and we are aware that it is sometimes a problem.
To allow for longer-running requests, we are planning to support asynchronous requests.
Also, we will report an error for requests that exceed this limit.
However, neither of these features is available in 1.4.4.
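The asynchronous pattern would essentially be submit-then-poll, so that no single HTTP request stays open long enough to hit the idle timeout. A Python-flavored sketch (all names here are hypothetical, not a committed API; an in-memory dict stands in for the platform's job store):

```python
import threading
import time
import uuid

# In-memory job store standing in for whatever the platform will
# eventually expose; everything below is illustrative only.
_jobs = {}


def submit(work, *args):
    """Start `work` in the background and return a job id right away,
    so the submitting HTTP request finishes well under the limit."""
    job_id = str(uuid.uuid4())
    _jobs[job_id] = {"done": False, "result": None}

    def run():
        _jobs[job_id]["result"] = work(*args)
        _jobs[job_id]["done"] = True

    threading.Thread(target=run, daemon=True).start()
    return job_id


def poll(job_id):
    """Return (done, result); the client calls this repeatedly, and
    each call is a short request that cannot hit the 59 s limit."""
    job = _jobs[job_id]
    return job["done"], job["result"]
```

The client would submit once, then poll on a short interval until the job reports completion.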

Is there a way to split the calculation into smaller parts (e.g. calculate the railroad diagrams for a subset of the productions at a time)?
If so, you could split it up and store the partial results in a collection for later assembly.
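In Python-flavored pseudocode, the split-and-assemble idea might look like this (a sketch only: the production names and render function are made up, and a plain dict stands in for a Sausalito collection):

```python
def render_production(name):
    """Placeholder for the expensive per-production diagram work."""
    return f"<diagram name='{name}'/>"


collection = {}  # stand-in for a server-side collection


def process_batch(productions):
    """Handle one small batch per HTTP request, so each request
    stays well under the load balancer's idle timeout."""
    for name in productions:
        collection[name] = render_production(name)


def assemble(all_productions):
    """Final, cheap request: stitch the stored partial results
    together in grammar order."""
    return "\n".join(collection[name] for name in all_productions)
```

The client (or a driver page) would loop over the grammar in batches, issuing one `process_batch` request per batch, and fetch the assembled result at the end.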

Would this help?

Also, it is surprising that the calculation takes so much less time on your local machine.
Could you tell us a little more about
- the specification of the machine, and
- the memory consumption while it performs this calculation?

Regards,
Till

Gunther Rademacher

Mar 26, 2012, 3:35:34 PM
to Sausalito Users
Hi Till,

thank you for your response.

Yes, it could be restructured, but I was trying to avoid that: it
only times out on large grammars, and Zorba's performance will
presumably improve over time, so restructuring might become
unnecessary.

I have tested this again with a Sausalito application, both on my
local system and deployed to portal.28msec.com. Times were measured
with the Firefox web console.

The local machine runs Windows 7 on an "Intel(R) Core(TM) i7-2600
CPU @ 3.40 GHz". Single executions completed in between 16903 ms and
17242 ms. Getting just the preformatted XHTML page from a local
Apache takes less than 100 ms. Accessing the deployed app via
http://xquery-update-30-rr.my28msec.com/default/index took between
35745 ms and 39799 ms; accessed at a different time through a
different connection, it took between 35661 ms and 42847 ms. No
timeouts this time, but I may have been doing slightly more work when
I hit the timeout.

Since you asked about memory consumption: using perfmon, I found that
a single execution causes the "Private Bytes" of sausa_fcgi.exe to
grow by about 220 MB, after which it falls back to where it was (or
slightly higher?).

I can send you the complete source code for this test, if you want.

Best regards
Gunther



Till Westmann

Mar 28, 2012, 3:21:26 AM
to sausali...@googlegroups.com
Hi Gunther,

thank you for the more detailed analysis.
With these numbers it seems that there is indeed no immediate need to restructure the code.
I also agree with you that Sausalito is bound to become faster, so things will improve anyway.

Looking at it in a little more detail, your CPU seems to be quite a bit more powerful than the 1-2 EC2 compute units that the request would use, which explains the difference in processing time. The memory consumption should not be a problem.

It would indeed be helpful if you could send the source code for the test.
If e-mail is ok, could you send it to sup...@28msec.com?

Regards,
Till

Gunther Rademacher

Mar 28, 2012, 3:51:21 AM
to sausali...@googlegroups.com
Hi Till,

I will send the source code to sup...@28msec.com in a minute.

Thanks
Gunther