Jetty or Tomcat: which web container is recommended for deploying a Lift app?


Neil.Lv

Nov 19, 2009, 8:35:42 PM
to Lift
Hi all,

I have a silly question about deployment.

Which web container is recommended for deploying a Lift app: Jetty or Tomcat?

I want to use Comet to push data in the app.

* Apache + Tomcat ?
* Apache + what ?
* Nginx + what ?

Thanks for any suggestions!

Cheers,
Neil

Margaret

Nov 19, 2009, 8:44:08 PM
to lif...@googlegroups.com
* Apache + Tomcat

I deployed a comet actor demo on Tomcat.
The URL is http://maweis.com:8080; you can try it.

-----------------------------------------------------
mawe...@gmail.com
13585201588
http://maweis.com
> --
>
> You received this message because you are subscribed to the Google Groups "Lift" group.
> To post to this group, send email to lif...@googlegroups.com.
> To unsubscribe from this group, send email to liftweb+u...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/liftweb?hl=.

monty chen

Nov 19, 2009, 8:59:56 PM
to Lift
LVS + Nginx + Haproxy + Tomcat


On Nov 20, 9:44 AM, Margaret <mawei...@gmail.com> wrote:
>  * Apache + Tomcat
>
> I deploy a comet actor demo on tomcat
> the url is http://maweis.com:8080
> you can try it?

Margaret

Nov 19, 2009, 9:03:09 PM
to lif...@googlegroups.com
Perfect environment!

What will be the database?






2009/11/20 monty chen <mont...@qq.com>:

monty chen

Nov 19, 2009, 9:13:35 PM
to Lift

                                    mysql (for transaction data)
                                   /
LVS + Nginx + Haproxy + Tomcat --- memcache (for cache)
                                   \
                                    cassandra (for web 2.0 data)

monty chen

Nov 19, 2009, 9:17:42 PM
to Lift
Hi Margaret, the storage engine uses MySQL + Cassandra, plus memcached for cache.

Margaret

Nov 19, 2009, 9:19:36 PM
to lif...@googlegroups.com
I will go and look at Cassandra.


Margaret

Nov 19, 2009, 9:20:38 PM
to lif...@googlegroups.com
Would you like to give us the URL of your application website?


Neil.Lv

Nov 19, 2009, 9:36:49 PM
to Lift

If I use "LVS + Nginx + Haproxy + Tomcat" to deploy the app, can it also work with Apache simultaneously?

Cheers,
Neil

philip

Nov 19, 2009, 9:58:03 PM
to Lift
Hi Neil,

I use Maven to build, with Jetty run from Maven, so I launch Jetty from my IDE via Maven. It works well for me.
I also have the SEAM framework on my Jetty as part of the setup.

Philip
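The Maven-driven Jetty setup Philip describes is usually wired up with the Jetty 6 Maven plugin; a minimal sketch of the relevant pom.xml fragment (the group/artifact ids are the Jetty 6-era ones, version and settings illustrative):

```xml
<!-- pom.xml fragment: lets you start the app with `mvn jetty:run` -->
<build>
  <plugins>
    <plugin>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>maven-jetty-plugin</artifactId>
      <version>6.1.22</version>
      <configuration>
        <contextPath>/</contextPath>
        <!-- rescan sources every 5s for hot redeploy during development -->
        <scanIntervalSeconds>5</scanIntervalSeconds>
      </configuration>
    </plugin>
  </plugins>
</build>
```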

Margaret

Nov 19, 2009, 10:04:18 PM
to lif...@googlegroups.com
Jetty is a lightweight container and can also act as a web server. In cloud computing, Jetty is much more agile for system start and restart.




David Pollak

Nov 19, 2009, 10:27:04 PM
to lif...@googlegroups.com
I recommend Nginx + Jetty.

Apache is the worst front end for this situation... it can only support a few hundred simultaneous connections before it falls over.  Nginx, on the other hand, can proxy tens of thousands.

Jetty's continuations make it a much better choice than Tomcat.  You can have thousands of open Comet requests to a Jetty instance, whereas Tomcat is capped at a couple of hundred.

Once the Servlet 3.0 spec is implemented in Glassfish, etc., Lift will support 3.0 continuations, and any 3.0 container will have the same scaling characteristics that Jetty currently does.
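A minimal sketch of the Nginx side of this setup (server names and ports are illustrative, not from this thread); the long read timeout and disabled buffering matter for long-polled Comet requests:

```nginx
upstream jetty {
    server 127.0.0.1:8080;   # the Jetty instance running the Lift app
}

server {
    listen 80;

    location / {
        proxy_pass http://jetty;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_buffering off;       # stream Comet responses as they arrive
        proxy_read_timeout 120s;   # long polls outlive the 60s default
    }
}
```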



--
Lift, the simply functional web framework http://liftweb.net
Beginning Scala http://www.apress.com/book/view/1430219890
Follow me: http://twitter.com/dpp
Surf the harmonics

Xuefeng Wu

Nov 19, 2009, 9:01:41 PM
to lif...@googlegroups.com
Jetty is the better choice if you use Comet, which Jetty supports with continuations.

On Fri, Nov 20, 2009 at 9:35 AM, Neil.Lv <ani...@gmail.com> wrote:

--
Scala Chinese community: http://groups.google.com/group/scalacn

Margaret

Nov 19, 2009, 10:35:43 PM
to lif...@googlegroups.com
Thanks for your reply.

When we package a Scala + Lift app as a WAR:

Deployed on Tomcat, will it use Comet?
Deployed on Jetty, will it use continuations?

Is that right?




David Pollak

Nov 19, 2009, 10:51:14 PM
to lif...@googlegroups.com
On Thu, Nov 19, 2009 at 7:35 PM, Margaret <mawe...@gmail.com> wrote:
Thanks for your reply.

When we package a Scala + Lift app as a WAR:

Deployed on Tomcat, will it use Comet?
Deployed on Jetty, will it use continuations?

Comet is long polling.  In Jetty, Lift takes advantage of Jetty's continuations so that during the long poll, there's no thread consumed.  In Tomcat, 1 thread is consumed for each client that's connected to the server.
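For context, the application code is identical in both containers: a Lift Comet component is just a CometActor, and the container decides whether the pending long poll parks a thread (Tomcat) or suspends a continuation (Jetty). A minimal sketch against the Lift 1.x-era API (names are illustrative):

```scala
import net.liftweb.http.CometActor
import net.liftweb.http.js.JsCmds.SetHtml
import scala.xml.Text

// Server-push counter: any code that sends this actor a "tick"
// message causes an update to be pushed to the connected browser.
class Ticker extends CometActor {
  private var ticks = 0

  // Markup rendered on initial page load
  def render = <span id="ticks">{ticks}</span>

  // Messages arriving while the browser's long poll is pending
  // are turned into partial page updates.
  override def lowPriority = {
    case "tick" =>
      ticks += 1
      partialUpdate(SetHtml("ticks", Text(ticks.toString)))
  }
}
```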

Margaret

Nov 19, 2009, 10:53:09 PM
to lif...@googlegroups.com
I will try Jetty.




On Fri, Nov 20, 2009 at 11:51 AM, David Pollak

monty chen

Nov 20, 2009, 12:59:06 AM
to Lift
Hi, David Pollak!

Nginx only comes with a round-robin balancer and a hash-based balancer, so if a request takes a while to complete, Nginx will keep routing requests to backends that are already processing requests. As a result, some backends will be queueing up requests while others remain idle. You will get an uneven load distribution, and the unevenness will increase with the amount of load on the load balancer.

HAProxy as a load balancer offers:

1. Plenty of load-balancing algorithms, including a "least connections" strategy that picks the backend with the fewest pending connections, which happens to be just what we want.

2. Backends can be sanity- and health-checked by URL to avoid routing requests to brain-damaged backends. (It can even stagger these checks to avoid spikes.)

3. Requests can be routed based on all sorts of things: cookies, URL substrings, client IP, etc.

So I use nginx + haproxy + tomcat (or jetty).
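A sketch of what points 1 and 2 look like in an haproxy config (addresses and the health-check URL are placeholders, 1.3-era syntax):

```
listen lift_farm
    bind :8000
    balance leastconn                 # backend with fewest pending connections wins
    option httpchk GET /ping          # health-check each backend by URL
    server tomcat1 10.0.0.1:8080 check inter 2000
    server tomcat2 10.0.0.2:8080 check inter 2000
```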



Timothy Perrett

Nov 20, 2009, 4:33:19 AM
to lif...@googlegroups.com
You're missing a trick here: there is a fork of nginx, done by Ezra, that includes a fair load balancer.

Google for it and you'll find the link, as I don't have it handy. This version would remove the need for your intermediate proxy.

Cheers, Tim

Sent from my iPhone

On 20 Nov 2009, at 06:59, monty chen <mont...@qq.com> wrote:

> Hi,David Pollk!
>
> Nginx only comes with a round-robin balancer and a hash-based
> balancer, so if a request takes a while to load, Nginx will start
> routing requests to backends that are already processing requests --

monty chen

Nov 20, 2009, 6:07:45 AM
to Lift
Thanks, Timothy Perrett!

Is this the fork of nginx by Ezra Zygmuntowicz you mean:

http://github.com/gnosek/nginx-upstream-fair/

Timothy Perrett

Nov 20, 2009, 6:42:24 AM
to lif...@googlegroups.com
Ah yes, that's the one. Engine Yard put up some money for someone to build it and, if memory serves, it was done by some Russian dude. I forget, but I've been using this fair load balancer in production for about 2 years. It's solid.

Cheers, Tim

Sent from my iPhone
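For reference, once the nginx-upstream-fair module is compiled in, enabling it is a single directive in the upstream block (backend addresses illustrative):

```nginx
upstream lift_backends {
    fair;                   # from nginx-upstream-fair: least-loaded backend wins
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}
```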

David Pollak

Nov 20, 2009, 11:44:46 AM
to lif...@googlegroups.com


2009/11/19 monty chen <mont...@qq.com>
Hi, David Pollak!

Nginx only comes with a round-robin balancer and a hash-based balancer, so if a request takes a while to complete, Nginx will keep routing requests to backends that are already processing requests. As a result, some backends will be queueing up requests while others remain idle. You will get an uneven load distribution, and the unevenness will increase with the amount of load on the load balancer.

HAProxy as a load balancer offers:

Can it deal with AJP13?  That should give it statistics about back-end health.

Can it deal with 100,000 open connections?  Looking at the docs, it seems to eschew keep-alive, and I'm wondering how it deals with long polling.
 

monty chen

Nov 21, 2009, 2:09:01 AM
to Lift
David Pollak, thank you for your reply!

Let us talk about Tomcat vs. Jetty: which web container is recommended for deploying the Lift app?

I remember your reply to Derek Chen-Becker:

On May 5, 5:59 AM, David Pollak <feeder.of.the.be...@gmail.com> wrote:
> Derek,
> Please note that about half of the requests failed in Jetty. Jetty does not
> seem to be explicitly closing the NIO sockets leading to an out of IO
> descriptor problem... that's why I used Tomcat.
>
> Thanks,
>
> David
>

But why do you now recommend Nginx + Jetty?


> On Mon, May 4, 2009 at 2:47 PM, Derek Chen-Becker <dchenbec...@gmail.com> wrote:
>
>
>
> > Just to throw in another data point, I ran the tests on my AMD Phenom X2
> > 720 (3 cores, 6GB of RAM):
>
> > I generated the archetype exactly as you have it here.
>
> > Ran "mvn -Drun.mode=production -Djetty.port=9090 jetty:run"
>
> > Output from Apache Bench:
>
> > $ ab -c 10 -n 20000 http://192.168.2.254:9090/user_mgt/login
> > This is ApacheBench, Version 2.3 <$Revision: 655654 $>
> > Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
> > Licensed to The Apache Software Foundation, http://www.apache.org/
>
> > Benchmarking 192.168.2.254 (be patient)
> > Completed 2000 requests
> > Completed 4000 requests
> > Completed 6000 requests
> > Completed 8000 requests
> > Completed 10000 requests
> > Completed 12000 requests
> > Completed 14000 requests
> > Completed 16000 requests
> > Completed 18000 requests
> > Completed 20000 requests
> > Finished 20000 requests
>
> > Server Software: Jetty(6.1.16)
> > Server Hostname: 192.168.2.254
> > Server Port: 9090
>
> > Document Path: /user_mgt/login
> > Document Length: 3635 bytes
>
> > Concurrency Level: 10
> > Time taken for tests: 37.110 seconds
> > Complete requests: 20000
> > Failed requests: 10191
> > (Connect: 0, Receive: 0, Length: 10191, Exceptions: 0)
> > Write errors: 0
> > Total transferred: 79276096 bytes
> > HTML transferred: 72626584 bytes
> > Requests per second: 538.94 [#/sec] (mean)
> > Time per request: 18.555 [ms] (mean)

David Pollak

Nov 21, 2009, 9:48:01 AM
to lif...@googlegroups.com


2009/11/20 monty chen <mont...@qq.com>

David Pollak, thank you for your reply!

Let us talk about Tomcat vs. Jetty: which web container is recommended for deploying the Lift app.

I remember your reply to Derek Chen-Becker:

On May 5, 5:59 AM, David Pollak <feeder.of.the.be...@gmail.com> wrote:
> Derek,
> Please note that about half of the requests failed in Jetty.  Jetty does not
> seem to be explicitly closing the NIO sockets leading to an out of IO
> descriptor problem... that's why I used Tomcat.
>
> Thanks,
>
> David
>

But why do you now recommend Nginx + Jetty?

The particular version of Jetty I used for the test had this problem.  The problem manifested during significant churn of HTTP requests (> 2K serviced per second).  This is different from the long-polling scenario.  Further, if you're serving 2K/second sustained, you are in the top 1% of all web sites... you're in Twitter/LinkedIn territory.

Jetty is the best option for Lift Comet (long polling) apps.
 


monty chen

Nov 23, 2009, 3:32:18 AM
to Lift
I am using Lift to develop a project. I want to deploy my project using Jetty instead of Tomcat, so I tested Jetty 6.1.22.

my notebook: Lenovo Y330 (CPU: Core 2 Duo P7350 2 GHz, mem: 3 GB)
my OS: Debian Lenny for amd64
JDK: Java(TM) SE Runtime Environment (build 1.6.0_17-b04)

step 1: download Jetty 6.1.22 from http://dist.codehaus.org/jetty/jetty-6.1.22/jetty-6.1.22.zip

step 2: unzip Jetty 6.1.22 in my home dir.

step 3: set the environment variable
export JAVA_OPTS="-Drun.mode=production -server -Xmx2048m"

step 4: run Jetty (cd jetty-6.1.22/ ; java -jar start.jar )

test 1: start -----------------------------------------------------

test the "Hello World Servlet" example of jetty-6.1.22,
the example url: http://localhost:8080/hello/

monty@den:~/jetty-6.1.22$ ab -n 10000 -c 300 http://localhost:8080/hello/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software: Jetty(6.1.22)
Server Hostname: localhost
Server Port: 8080

Document Path: /hello/
Document Length: 39 bytes

Concurrency Level: 300
Time taken for tests: 10.344 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 1060212 bytes
HTML transferred: 390078 bytes
Requests per second: 966.72 [#/sec] (mean)
Time per request: 310.329 [ms] (mean)
Time per request: 1.034 [ms] (mean, across all concurrent
requests)
Transfer rate: 100.09 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 12 152.7 2 3000
Processing: 200 293 53.0 266 566
Waiting: 1 93 53.6 65 364
Total: 206 305 159.1 271 3251

Percentage of the requests served within a certain time (ms)
50% 271
66% 338
75% 345
80% 350
90% 359
95% 366
98% 378
99% 434
100% 3251 (longest request)

===the test1 result is good! =====
Failed requests: 0
Requests per second: 966.72 [#/sec] (mean)

test 1: end ################

test 2: start -----------------------------------------------------
test the "Request Dump JSP" example of jetty-6.1.22,
the example url: http://localhost:8080/snoop.jsp

monty@den:~/jetty-6.1.22$ ab -n 50 -c 3 http://localhost:8080/snoop.jsp
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient).....done


Server Software: Jetty(6.1.22)
Server Hostname: localhost
Server Port: 8080

Document Path: /snoop.jsp
Document Length: 2249 bytes

Concurrency Level: 3
Time taken for tests: 0.044 seconds
Complete requests: 50
Failed requests: 23
(Connect: 0, Receive: 0, Length: 23, Exceptions: 0)
Write errors: 0
Total transferred: 124547 bytes
HTML transferred: 114676 bytes
Requests per second: 1132.76 [#/sec] (mean)
Time per request: 2.648 [ms] (mean)
Time per request: 0.883 [ms] (mean, across all concurrent
requests)
Transfer rate: 2755.50 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 1 3 2.9 1 12
Waiting: 1 2 2.9 1 12
Total: 1 3 2.9 1 12

Percentage of the requests served within a certain time (ms)
50% 1
66% 3
75% 3
80% 3
90% 9
95% 10
98% 12
99% 12
100% 12 (longest request)

===the test2 result is bad! =====
Failed requests: 23
(Connect: 0, Receive: 0, Length: 23, Exceptions: 0)
Requests per second: 1132.76 [#/sec] (mean)

Why did the "Request Dump JSP" example test report failures, with so few requests (50) and such low concurrency (3)?

Jeremy Day

Nov 20, 2009, 5:59:49 AM
to lif...@googlegroups.com
All,

I'm admittedly quite a n00b here and I have very little Maven experience.  Can someone provide a POM for the Nginx + Jetty configuration?  I think I would find it quite helpful.  Thanks.

Jeremy

2009/11/20 Timothy Perrett <tim...@getintheloop.eu>

Timothy Perrett

Nov 23, 2009, 11:10:25 AM
to lif...@googlegroups.com
Ummm, there is no POM for this - it's not that kind of thing.

NGINX is a front-end C application - if you want the fair load balancer, just download the module and compile it into your NGINX build.

Cheers, Tim