Nginx-Clojure Lets You Deploy a Clojure Web App on Nginx Without Any Java Web Server


Xfeep Zhang

Jan 9, 2014, 10:42:31 AM
to clo...@googlegroups.com

Nginx-Clojure is an Nginx module for embedding Clojure or Java programs, typically Ring-based handlers.

Core features:

  1. Compatible with Ring, and so supports Ring-based frameworks such as Compojure.
  2. Inherits one of Nginx's benefits: worker processes are automatically restarted by the master process if they crash.
  3. Uses lazy headers and direct memory operations between Nginx and the JVM to handle dynamic content from Clojure or Java code quickly.
  4. Uses Nginx's zero-copy file-sending mechanism to serve static content controlled by Clojure or Java code quickly.
  5. Supports Linux x64, Win32, and Mac OS X.


With Nginx-Clojure, you can deploy a Clojure web app on Nginx without any Java web server. For more details, please check the Nginx-Clojure GitHub site.
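A minimal configuration sketch (directive names as I recall them from the project README; please verify against the GitHub site):

```nginx
# Sketch of embedding a Ring-style handler directly in nginx.conf.
# The `clojure` and `clojure_code` directive names are taken from the
# project README and should be checked against the GitHub site.
location /clojure {
    clojure;
    clojure_code '
        (fn [req]
          {:status 200
           :headers {"content-type" "text/plain"}
           :body "Hello, Clojure on Nginx!"})
    ';
}
```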


Julien

Jan 9, 2014, 7:23:02 PM
to clo...@googlegroups.com
Impressive!
Did you run some benchmarks? How does it compare to ring-jetty and http-kit?

Julien

Xfeep Zhang

Jan 10, 2014, 5:05:49 PM
to clo...@googlegroups.com
Thank you! I think it's useful.


I have done some simple tests, but I think general performance tests may be meaningless without real-world requirements in mind.

OS: Ubuntu 13.10 64-bit
Memory: 16 GB
CPU: Intel i7-4700MQ (4 cores, 2.4 GHz)


1. static file test

file: 29.7 KB (real content from https://groups.drupal.org/node/167984)
Ring handler:

(defn test-handler [req]
  {:status 200
   :headers {"content-type" "text/html"}
   :body (java.io.File. "resources/index.html")})


warmed up with "ab -n 400000 -c 10000 http://localhost:${port}/"


test command:

ab -n  100000 -c 10000 http://localhost:${port}/


(1) nginx-clojure-0.1.0

===================================================================

Document Path:          /
Document Length:        29686 bytes

Concurrency Level:      10000
Time taken for tests:   3.464 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      2991200000 bytes
HTML transferred:       2968600000 bytes
Requests per second:    28867.39 [#/sec] (mean)
Time per request:       346.412 [ms] (mean)
Time per request:       0.035 [ms] (mean, across all concurrent requests)
Transfer rate:          843243.63 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       18  113 208.6     74    3082
Processing:    82  219  51.6    225     748
Waiting:       13   78  37.6     70     604
Total:        102  332 216.9    310    3190

Percentage of the requests served within a certain time (ms)
  50%    310
  66%    329
  75%    337
  80%    341
  90%    348
  95%    423
  98%   1295
  99%   1309
 100%   3190 (longest request)


(2) http-kit 2.1.16
=======================================================================

Document Path:          /
Document Length:        29686 bytes

Concurrency Level:      10000
Time taken for tests:   4.104 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      2980800000 bytes
HTML transferred:       2968600000 bytes
Requests per second:    24363.92 [#/sec] (mean)
Time per request:       410.443 [ms] (mean)
Time per request:       0.041 [ms] (mean, across all concurrent requests)
Transfer rate:          709218.63 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       49  130 229.3     98    3062
Processing:   124  269  38.6    267     603
Waiting:       39   87  24.0     88     353
Total:        243  398 233.7    369    3665

Percentage of the requests served within a certain time (ms)
  50%    369
  66%    379
  75%    387
  80%    395
  90%    415
  95%    443
  98%   1310
  99%   1396
 100%   3665 (longest request)


(3) ring-jetty
=======================================================================

Document Path:          /
Document Length:        29686 bytes

Concurrency Level:      10
Time taken for tests:   4.991 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      2982200000 bytes
HTML transferred:       2968600000 bytes
Requests per second:    20037.89 [#/sec] (mean)
Time per request:       0.499 [ms] (mean)
Time per request:       0.050 [ms] (mean, across all concurrent requests)
Transfer rate:          583564.46 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0    0   0.8      0      75
Waiting:        0    0   0.8      0      75
Total:          0    0   0.8      0      75

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      1
  75%      1
  80%      1
  90%      1
  95%      1
  98%      1
  99%      1
 100%     75 (longest request)



2. simple string

warmed up with "ab -n 400000 -c 10000 http://localhost:${port}/"

Ring handler:

(defn test-handler [req]
  {:status 200
   :headers {"content-type" "text/html"}
   :body "Hello, Clojure!"})


test command:

ab -n  100000 -c 10000  http://localhost:${port}/


(1) nginx-clojure-0.1.0
======================================================================
Document Path:          /
Document Length:        15 bytes

Concurrency Level:      10
Time taken for tests:   1.952 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      17000000 bytes
HTML transferred:       1500000 bytes
Requests per second:    51241.03 [#/sec] (mean)
Time per request:       0.195 [ms] (mean)
Time per request:       0.020 [ms] (mean, across all concurrent requests)
Transfer rate:          8506.81 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0    0   0.1      0       3
Waiting:        0    0   0.1      0       3
Total:          0    0   0.1      0       4

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      0
  98%      0
  99%      0
 100%      4 (longest request)


(2) http-kit 2.1.16
=======================================================================

Document Path:          /
Document Length:        15 bytes

Concurrency Level:      10
Time taken for tests:   2.424 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      13400000 bytes
HTML transferred:       1500000 bytes
Requests per second:    41258.40 [#/sec] (mean)
Time per request:       0.242 [ms] (mean)
Time per request:       0.024 [ms] (mean, across all concurrent requests)
Transfer rate:          5399.05 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0    0   0.1      0       4
Waiting:        0    0   0.1      0       3
Total:          0    0   0.1      0       5

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      0
  98%      0
  99%      0
 100%      5 (longest request)


(3) ring-jetty
=========================================================================

Document Path:          /
Document Length:        15 bytes

Concurrency Level:      10
Time taken for tests:   3.445 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      16700000 bytes
HTML transferred:       1500000 bytes
Requests per second:    29030.44 [#/sec] (mean)
Time per request:       0.344 [ms] (mean)
Time per request:       0.034 [ms] (mean, across all concurrent requests)
Transfer rate:          4734.46 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0    0   0.1      0       4
Waiting:        0    0   0.1      0       4
Total:          0    0   0.1      0       4

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      0
  98%      1
  99%      1
 100%      4 (longest request)

Xfeep Zhang

Jan 12, 2014, 10:21:06 AM
to clo...@googlegroups.com
Sorry, I made a mistake!

1. In the static file test, the ring-jetty result was for 10 concurrent connections, NOT 10,000 ("Concurrency Level:      10" in the ab report).
2. In the small-string test, the results for all three servers were for 10 concurrent connections, NOT 10,000.

Here are the correct results for these two cases:

1. static file test

(3) ring-jetty (worse than at 10 concurrent connections)

=======================================================================
Document Path:          /
Document Length:        29686 bytes

Concurrency Level:      10000
Time taken for tests:   6.303 seconds

Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      2982200000 bytes
HTML transferred:       2968600000 bytes
Requests per second:    15864.43 [#/sec] (mean)
Time per request:       630.341 [ms] (mean)
Time per request:       0.063 [ms] (mean, across all concurrent requests)
Transfer rate:          462020.65 [Kbytes/sec] received


Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       12  328 535.0     43    3041
Processing:    25  124 112.9     96    3523
Waiting:        8   47  99.4     28    3523
Total:         52  452 544.5    157    4546


Percentage of the requests served within a certain time (ms)
  50%    157
  66%    305
  75%   1071
  80%   1102
  90%   1139
  95%   1155
  98%   1462
  99%   3100
 100%   4546 (longest request)


2. simple string (10,000 concurrent connections)

http-kit is the fastest, but nginx-clojure is still young and has vast room for growth :)

(1) nginx-clojure-0.1.0


Document Path:          /
Document Length:        15 bytes

Concurrency Level:      10000
Time taken for tests:   2.834 seconds

Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      17000000 bytes
HTML transferred:       1500000 bytes
Requests per second:    35291.16 [#/sec] (mean)
Time per request:       283.357 [ms] (mean)
Time per request:       0.028 [ms] (mean, across all concurrent requests)
Transfer rate:          5858.88 [Kbytes/sec] received


Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       51  118  21.6    118     178
Processing:    73  150  33.8    146     263
Waiting:       42  110  32.0    104     246
Total:        177  268  25.6    269     327


Percentage of the requests served within a certain time (ms)
  50%    269
  66%    278
  75%    285
  80%    288
  90%    297
  95%    309
  98%    314
  99%    318
 100%    327 (longest request)


(2) http-kit 2.1.16

Document Path:          /
Document Length:        15 bytes

Concurrency Level:      10000
Time taken for tests:   2.691 seconds

Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      13400000 bytes
HTML transferred:       1500000 bytes
Requests per second:    37165.27 [#/sec] (mean)
Time per request:       269.068 [ms] (mean)
Time per request:       0.027 [ms] (mean, across all concurrent requests)
Transfer rate:          4863.42 [Kbytes/sec] received


Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       72  118  46.2    114    1094
Processing:    31  134  26.1    136     344
Waiting:       21   81  33.5     71     273
Total:        183  252  43.8    251    1435


Percentage of the requests served within a certain time (ms)
  50%    251
  66%    258
  75%    259
  80%    261
  90%    263
  95%    263
  98%    265
  99%    266
 100%   1435 (longest request)



(3) ring-jetty


Document Path:          /
Document Length:        15 bytes

Concurrency Level:      10000
Time taken for tests:   9.740 seconds

Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      16700000 bytes
HTML transferred:       1500000 bytes
Requests per second:    10267.16 [#/sec] (mean)
Time per request:       973.979 [ms] (mean)
Time per request:       0.097 [ms] (mean, across all concurrent requests)
Transfer rate:          1674.43 [Kbytes/sec] received


Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  193 399.8     11    3006
Processing:     0   51 207.6      5    7050
Waiting:        0   39 204.5      4    7050
Total:          0  244 482.0     28    8080


Percentage of the requests served within a certain time (ms)
  50%     28
  66%     79
  75%    283
  80%    306
  90%   1009
  95%   1067
  98%   1283
  99%   1886
 100%   8080 (longest request)



Xfeep Zhang

Jan 12, 2014, 10:57:05 PM
to clo...@googlegroups.com

I have now found out why nginx-clojure is slower than http-kit at 10,000 concurrent connections (at <= 1,000 concurrent connections, nginx-clojure is faster than http-kit).
I had set too many connections per Nginx worker (worker_connections = 20000). This made Nginx use only one worker to handle the ab requests (every request is tiny).
I plan to take note of c-erlang-java-performance and fork clojure-web-server-benchmarks to do some real-world tests.
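To make the mistake concrete, here is a hedged sketch of the relevant settings (the point being that with 20,000 connections allowed per worker, a single worker could absorb the entire ab load while the others sat idle):

```nginx
# Sketch: ab opens 10000 tiny keep-alive connections. With
# worker_connections 20000, the first worker to accept can hold all
# of them, so only one CPU core does any work.
worker_processes  4;           # one per CPU core

events {
    # was 20000 in the flawed run; a smaller limit forces the
    # connections to spread across workers
    worker_connections  2048;
}
```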

Sergey Didenko

Jan 13, 2014, 1:41:50 PM
to clo...@googlegroups.com
Looks very interesting, thank you for your work!

I wonder how this will improve latency compared to nginx + http-kit in a real-world test that doesn't involve heavy DB operations.



Xfeep Zhang

Jan 13, 2014, 9:31:03 PM
to clo...@googlegroups.com
You're welcome.

I think there are several difficult phases:

(1) update the test programs in clojure-web-server-benchmarks, bring some packages up to their latest versions (e.g. http-kit from 1.3.0-alpha2 to 2.1.16), and add nginx-php testing
(2) test real-world-sized content by group, e.g. tiny, small, medium, huge
(3) test real-world connection circumstances where a lot of connections are inactive but kept open
(4) try some real asynchronous tests that fetch external resources (e.g. a REST service or a DB) before responding to the client, e.g. using libdrizzle, a non-blocking MySQL client from https://launchpad.net/drizzle

Xfeep Zhang

Jan 14, 2014, 3:44:18 AM
to clo...@googlegroups.com

I have done the first one. The result is HERE ( https://github.com/ptaoussanis/clojure-web-server-benchmarks ).
Thanks to Taoussanis for his invitation to the clojure-web-server-benchmarks project hosted on GitHub.

Feng Shen

Jan 14, 2014, 4:50:34 AM
to clo...@googlegroups.com
Hi, 

Thanks for your work on nginx-clojure. It looks great!  

As far as I know, Nginx spawns many processes (correct me if I am wrong). Does that mean there will be many JVM processes?

Xfeep Zhang

Jan 14, 2014, 8:12:16 AM
to clo...@googlegroups.com
You are welcome!

Yes, you are right. One JVM instance is embedded per Nginx worker process. The number of Nginx workers is generally the same as the number of CPUs.

If one worker crashes, the Nginx master will create a new one, so you don't need to worry about the JVM crashing accidentally.

Although there will be several JVM instances, there is only one main thread attached to each Nginx worker process, so each JVM instance uses less memory and has no thread context-switch cost.

If in some cases you can use only one JVM instance, you can set the Nginx worker number to 1 and set jvm_workers > 1; nginx-clojure will then create a thread pool with a fixed number of threads to handle requests for you.
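A hedged sketch of that single-JVM setup (the placement of jvm_workers is as I recall it from the README; verify against the project documentation):

```nginx
# Sketch: one Nginx worker -> one embedded JVM. Setting jvm_workers > 1
# makes nginx-clojure create a fixed-size thread pool inside that JVM.
worker_processes  1;

http {
    jvm_workers  8;   # fixed pool of 8 request-handling threads
    # ... server/location blocks with the clojure handler ...
}
```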

Mingli Yuan

Jan 14, 2014, 2:07:13 PM
to clo...@googlegroups.com
Hi, Xfeep,

Thanks for your contribution; the project looks interesting.

For me, the idea of driving a Ring webapp behind Nginx is not new. We use uWSGI to drive our Ring app behind Nginx in production. uWSGI has supported the JVM and Ring for almost a year, and I think the code is relatively stable now.

- it supports a native protocol between Nginx and uWSGI that is more efficient than HTTP
- it supports Unix sockets
- it provides a rich uWSGI API layer with some means for webapps to communicate with each other
- and according to the author's performance tests, it is a little bit faster than Jetty

It has been in our production for half a year, is quite stable, and coexists very harmoniously with our Python apps.

I don't want to sell the uWSGI solution, but it is worth taking a look and making some comparisons.

Regards,
Mingli

Xfeep Zhang

Jan 14, 2014, 9:10:40 PM
to clo...@googlegroups.com
Hi Mingli,

Thanks for  your suggestion.

Nginx-Clojure is quite different from uWSGI in how it supports the JVM.

Nginx-Clojure embeds the JVM into the Nginx worker process: the JVM and the Nginx worker share the same process memory space, and Nginx-Clojure makes heavy use of pointer operations, just like C, to share memory with the Nginx worker.

If I'm not wrong, does uWSGI create a new process for every request, or share JVM processes between requests?

With Nginx-Clojure there is no IPC cost, and not even thread-switch cost if jvm_workers = 0, which is the default.

That's why Nginx-Clojure is so fast!

Xfeep Zhang

Jan 14, 2014, 11:27:34 PM
to clo...@googlegroups.com
I have checked the uWSGI documentation again; the JVM integration document is HERE: http://uwsgi-docs.readthedocs.org/en/latest/JWSGI.html

When using uWSGI to integrate the JVM, the JVM process is not the same process as the Nginx worker, so it cannot avoid IPC or socket cost. It will use more system handles (at least double), more copy operations between processes, and maybe more memory.

So I think using uWSGI to integrate the JVM may not be so fast!

Roberto De Ioris

Jan 14, 2014, 11:39:57 PM
to clo...@googlegroups.com

> Hi Mingli,
>
> Thanks for your suggestion.
>
> Nginx-Clojure is quite different from uwsgi when supports JVM.
>
> Nginx-Clojure make JVM embed into Nginx worker process. JVM and Nginx
> worker have the same memory process space.
>
> Nginx-Clojure heavy uses pointer operation just like C to handle memory
> with Nginx worker.
>
> If I'm not wrong, uwsgi create every process for every request, or
> shared
> JVM processes between request?

uWSGI creates a pool of processes with a number of threads in each, and they are used for the whole server life cycle (a pretty standard behaviour).


>
> When using Nginx-Clojure there's no IPC cost even no thread switch cost
> if
> jvm_workers = 0 which is default.
>
> So it's why Nginx-Clojure is so fast!
>

I strongly suggest you avoid "performance" as a selling point. Your project is cool, but not for performance (and you are using a pipe to transfer request data from Nginx to the JVM, so there is IPC in place too, even if you can improve things with OS-specific syscalls like splice).

It is cool because it simplifies deployments, but Nginx is not an application server, so things like multi-strategy graceful reloads, stuck-request managers, offloading, and so on are not available.

Regarding the "IPC problem": you generally put a load balancer in front of your app, so there is always IPC.

Don't get me wrong; as I have already said, your project is cool. But you should focus not on performance but on better integration with the Nginx API for writing non-blocking apps (if possible), something at which uWSGI unfortunately failed (the JVM has no out-of-the-box support for switching stacks, which is required for async mode in uWSGI).


--
Roberto De Ioris
http://unbit.it

Xfeep Zhang

Jan 15, 2014, 4:51:11 AM
to clo...@googlegroups.com

Nginx is not an application server, but it can be a good application platform with modules such as nginx-lua, which is widely used in production.

> Regarding the "ipc problem", you generally put a load balancer on front of
> your app, so there is always ipc

If we use F5 or LVS (Linux Virtual Server) as the load balancer, there is no IPC :)

> Do not get me wrong, as i have already said your project is cool, but you
> should focus not on performance but on better integration with the nginx
> api for writing non-blocking apps (if possible), something at uWSGI
> unfortunately failed (the JVM has no support for switching stacks out of
> the box, something that is required for async mode in uWSGI)

Thank you very much! Really good advice! :)

Xfeep Zhang

Jan 15, 2014, 8:18:56 AM
to clo...@googlegroups.com


On Wednesday, January 15, 2014 12:39:57 PM UTC+8, Roberto De Ioris wrote:

> i strongly suggest you to avoid the "performance" as a selling point, your
> project is cool but not for performance (and you are using a pipe to
> transfer requests data from nginx to the jvm so there ipc in place too,
> even if you can improve things using OS-specific syscall like splice).

Although I have made this clear on the Nginx English mailing list, people in the Clojure group may still misunderstand, so please forgive me for repeating the message here.

On the Nginx English mailing list, I said:

"With the default settings, the pipe is not used. The pipe is only used to enable JVM thread-pool mode, i.e. when jvm_workers > 0 (jvm_workers defaults to 0). Furthermore, the pipe is never used to transfer the whole request or response message. In JVM thread-pool mode, the pipe is only used to transfer an event flag (just one pointer in size)."
 

Sergey Didenko

Jan 20, 2014, 5:25:02 PM
to clo...@googlegroups.com
Hi Xfeep,

What are good ways to handle heavy Clojure calculations when using nginx-clojure?

Under the Nginx model it's bad to block other incoming requests by holding a worker thread for too long, right?

So is it better to route complex jobs to http-kit? Or to use some kind of queue? Or maybe to use the nginx-clojure JVM pool that is off by default?



Xfeep Zhang

Jan 21, 2014, 9:17:08 AM
to clo...@googlegroups.com

On Tuesday, January 21, 2014 6:25:02 AM UTC+8, Sergey Didenko wrote:

> What are the good ways to handle some heavy Clojure calculations when using nginx-clojure?

Do you mean every request will cost too much time? If the time cost is mainly caused by blocking IO, a Java thread pool can be used to solve the problem. Otherwise you must add more machines or use faster hardware to handle those purely CPU-bound tasks.

> Under nginx model it's bad to block other incoming requests by holding a working thread for too long, right?

Yes. Typically there won't be too many Nginx worker processes; usually the same number as your CPU cores. If you cannot reduce the single-response time, all workers will be blocked by those slow tasks.

> So is it better to route complex job to http-kit? Or to use some kind of queue? Or may be to use this nginx-clojure JVM pool that is off by default?

Suppose your time cost is mainly caused by blocking IO. A Java thread pool can then be used in two ways:

(1) use pcalls, pvalues, etc. provided by Clojure to execute your tasks in parallel and reduce single request-response time, OR
(2) with nginx-clojure, simply set jvm_workers to some medium number, e.g. 100, or a bigger number if you have more memory. This thread pool will keep Nginx workers from blocking until all of its threads are exhausted.

When jvm_workers > 0, there is an additional cost to transfer an event flag over a pipe, but this cost is negligible for a slow response and is lower than using Nginx as a proxy to pass requests to a backend Java server.

A future release of Nginx-Clojure will provide synchronous non-blocking IO APIs. I think those will simplify handling IO-blocked tasks; by then jvm_workers may no longer need to be > 0.
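As a sketch of option (1), here is a handler that runs two slow, IO-bound lookups in parallel with pvalues (fetch-user and fetch-orders are hypothetical stand-ins for real DB/REST calls):

```clojure
;; Sketch of option (1): run two slow, IO-bound lookups in parallel with
;; pvalues, so the request takes roughly max(t1, t2) instead of t1 + t2.
;; fetch-user and fetch-orders are hypothetical stand-ins for real IO calls.
(defn fetch-user [id]
  (Thread/sleep 100)                       ;; pretend this is a slow DB call
  {:id id :name "alice"})

(defn fetch-orders [id]
  (Thread/sleep 100)                       ;; pretend this is a slow REST call
  [{:order 1} {:order 2}])

(defn slow-handler [req]
  ;; pvalues evaluates each expression in its own future, in parallel
  (let [[user orders] (pvalues (fetch-user 42) (fetch-orders 42))]
    {:status 200
     :headers {"content-type" "text/plain"}
     :body (str user " " orders)}))
```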

Sergey Didenko

Jan 21, 2014, 7:42:03 PM
to clo...@googlegroups.com
I see, thanks.