Besides the other comments made, one should note that even _bash_ is
faster than Go. Try this snippet, for instance:
$ nginx
--
Gustavo Niemeyer
http://niemeyer.net
http://niemeyer.net/blog
http://niemeyer.net/twitter
Not necessarily.
It's not just that Go is serving more data; it may also take more time to
generate that data. With trivially small pages, it's hard to tell where
the time is going. For all we know, it's going into some trivial
function call as part of printing the date string.
>
> But still, node.js is not really the fastest. I was hoping Go could be
> orders of magnitude faster than node.js.
--
Scott Lawrence
Orders of magnitude faster? 10 to 100 times faster? Node is supposed
to be pretty fast. I assume the node guys aren't a bunch of idiots, so
I'd be surprised if any web server is that fast.
Andrew
This sounds like a very exciting competition. For the benefit of people like me who would like to play along at home, may I suggest some ground rules:
1. The code to be benchmarked, both Go and Node, goes into your favourite DVCS repo.
2. Included in that repo is a script which will invoke (and compile, if required) the relevant code, then invoke the chosen benchmarking tool.
3. The benchmark script should assert the versions of the runtime and support libraries for consistency. Those can change over time, but they should be identifiable as benchmarking parameters.
That way contributions from both sides of the fence could be incorporated in a consistent way.
Cheers
Dave
Sent from my C64
To prove golang users are not nuts, please beat this silly benchmark.
Michael T. Jones
Chief Technology Advocate, Google Inc.
1600 Amphitheatre Parkway, Mountain View, California 94043
Email: m...@google.com Mobile: 650-335-5765 Fax: 650-649-1938
Organizing the world's information to make it universally accessible and useful
I was definitely not implying that. (just to be explicit)
> But anyway you might be surprised by looking at this:
> http://www.yesodweb.com/blog/2011/03/preliminary-warp-cross-language-benchmarks
Is that graph a quad core benchmark? If so, I'm not at all surprised
that an efficient Haskell web server running on 4 cores is roughly 4x
more efficient than an equivalent node.js server running on a single
core. I would expect Go to exhibit similar performance in future.
> As Dave Cheney suggested, for benefit of people who would like to play
> along at home,
> there is a link in that post to a benchmarking framework github
> repository.
Looks like a good place to start. Thanks!
Andrew
Go is still slower than node.js, which is built on a scripting language.
--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
I did some tests for fun:
CPU: Intel(R) Core(TM) i5-2500 CPU @ 3.30GHz, RAM: 16G

# redis, SET: 217391.30 requests per second (probably the upper limit)
redis-benchmark -q

# nginx, responding with a 1K file
# Requests per second: 148442.25 [#/sec] (mean)
ab -n 300000 -c 100 -k http://lion.mei.fm/

# http-kit (http://http-kit.org), responding "hello world"
# code: https://gist.github.com/4704565
# Requests per second: 111179.02 [#/sec] (mean)
ab -n 400000 -c 100 -k http://lion.mei.fm:8080/

# the hello world Go version, with "http" changed to "net/http"
# Requests per second: 17465.92 [#/sec] (mean)
ab -n 100000 -c 100 -k http://lion.mei.fm:8080/

# node v0.6.19
# Requests per second: 12964.05 [#/sec] (mean)
ab -n 100000 -c 100 -k http://127.0.0.1:8080
Go is quite fast; 17465.92 requests per second is a big number and does not seem likely to be the bottleneck in real-life use.
Kind of disappointing. Go is supposedly "closer" to the metal, but
I didn't expect it to be merely comparable to node.js, and I certainly
didn't expect node.js to be 45% faster than Go.
In the test, GOMAXPROCS is set to 1. Setting it to higher numbers
actually does not have much effect.
For go:
Concurrency Level: 100
Time taken for tests: 152.330 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 110000000 bytes
HTML transferred: 14000000 bytes
Requests per second: 6564.69 [#/sec] (mean)
Time per request: 15.233 [ms] (mean)
Time per request: 0.152 [ms] (mean, across all concurrent requests)
Transfer rate: 705.19 [Kbytes/sec] received
For node.js:
Concurrency Level: 100
Time taken for tests: 104.538 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 78000000 bytes
HTML transferred: 14000000 bytes
Requests per second: 9565.93 [#/sec] (mean)
Time per request: 10.454 [ms] (mean)
Time per request: 0.105 [ms] (mean, across all concurrent requests)
--
Is it possible that an HTTP server written in Go could someday beat nginx by taking advantage of Go's capability for massive concurrency?
Go does not have the same goal; instead, Go focuses on good speed, but not at the sacrifice of safety or concurrent language features. In this sense, Go is not as optimizable as C, but it is considerably more convenient and expressive, and the sum of Go's qualities outweighs any one of its characteristics.
Former node core contributor here (who is in love with Go now).

Node's http parser is indeed modeled after nginx / very fast. From the benchmarking I have done, however, it is not necessarily the "magic piece" that makes things fast.

I have a benchmark running node's low-level JS binding to the http parser against a 623-byte GET request, and it's able to parse ~130k requests/second using a single core (~600 MBit/sec). This is about ~10x faster than what you'll see in any full-stack benchmarks, as discussed in this thread.

So where does the rest of the performance go? It's hard to say for sure, but AFAIK the following are big factors:

* accept()ing new connections
* read() / write() on the sockets
* calling C/C++ functions from JS and vice versa (this is something the v8 team is working on optimizing, but last I checked it was a big cost factor)
* allocating the user-facing JS objects / garbage collecting them

I'm mentioning this because node.js, while heavily optimized, is still far from being as fast in this benchmark as it could be. So given some time and effort, I'm not worried about whether Go can catch up (I'm certainly going to look into it myself as time allows).

That being said, I feel that Go is light years ahead of node.js when it comes to http processing. Node's http stack, starting with the http parser itself [1], is a mess [2] compared to Go [3]. A lot of this is a great showcase for goroutines, as they allow keeping most of the parser state inside the functions, while node has to use a huge state machine and lots of bookkeeping to make the parser streaming/resumable.

So even if node were to dominate this vanity benchmark in the future, I'd still happily accept that in exchange for a clear and readable http stack.

--fg
On Monday, 4 February 2013 22:29:03 UTC+1, Kevin Gillette wrote:
> Architecturally, nginx, lighttpd, and similar projects are so fast specifically because they leverage OS facilities like epoll and kqueue while, importantly, avoiding the overhead of multithreading. The extensions and modules for those servers also have to be written with single-threaded operation in mind.
> Go does not have the same goal; instead, Go focuses on good speed, but not at the sacrifice of safety or concurrent language features. In this sense, Go is not as optimizable as C, but it is considerably more convenient and expressive, and the sum of Go's qualities outweighs any one of its characteristics.
--
I do have a real Go program, Weed-FS, a distributed file system whose job is basically to serve static files, just like nginx.
Making the http package faster will benefit many applications.
You can set the Content-Length header explicitly to avoid chunked encoding:

package main

import (
	"io"
	"net/http"
	"strconv"
)

func HelloServer(w http.ResponseWriter, req *http.Request) {
	msg := "Hello World!\n"
	w.Header().Add("Content-Type", "text/plain")
	w.Header().Add("Content-Length", strconv.Itoa(len(msg)))
	io.WriteString(w, msg)
}

func main() {
	http.HandleFunc("/", HelloServer)
	http.ListenAndServe(":8000", nil)
}
Additionally, why does the date string need to be regenerated in this case at all? If the content of the resource has not semantically changed, then the date should reflect the point at which this incarnation of the resource became valid. In other words, if the content of a resource won't change for 5 years, why update the date string? There's usually no value in generating a recent modification date for its own sake (that doesn't magically make the resource "dynamic", and even if it did, there's no magic value in being "dynamic" when the content may be static).
Using tip, this shouldn't be required. I can't find the commit, but it must have been two or three weeks ago that this was changed for response sizes < 2k bytes, AFAIK.
On Tuesday, 5 February 2013 13:10:09 UTC+1, Naoki INADA wrote:
--
I hope Go 1.2 is better for high performance middleware with HTTP API than Go 1.0.3.
(I'm too late to Go 1.1, maybe.)
It's been a while since this was first posted, but this isn't true at all on my system.
On Thursday, 16 June 2011 at 20:43:09 UTC+2, ChrisLu wrote:
> In the test, GOMAXPROCS is set to 1. Setting it to higher numbers
> actually does not have much effect.
I made the following changes to the Go program source
package main
import (
	"fmt"
	"io"
	"net/http"
	"runtime"
)
func HelloServer(w http.ResponseWriter, req *http.Request) {
io.WriteString(w, "hello, world!\n")
}
func main() {
fmt.Println(runtime.GOMAXPROCS(0))
http.HandleFunc("/", HelloServer)
http.ListenAndServe(":8080", nil)
}
My machine is a Core i7-2600 running 64-bit Ubuntu 12.04LTS and I'm using Go 1.0.3 and node.js is the version shipped with Ubuntu, v0.6.12
The node.js result is
Concurrency Level: 100
Time taken for tests: 7.340 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 7600000 bytes
HTML transferred: 1200000 bytes
Requests per second: 13624.73 [#/sec] (mean)
Time per request: 7.340 [ms] (mean)
Time per request: 0.073 [ms] (mean, across all concurrent requests)
Transfer rate: 1011.21 [Kbytes/sec] received
which is slightly higher than the first node.js result posted in this thread. I tried to vary the number of CPU cores node.js is allowed to use, but it does not really affect the end result. Running strace on the node.js binary suggests that it is really a single-threaded application using epoll to track all active sockets.
Go result when using GOMAXPROCS=1
Concurrency Level: 100
Time taken for tests: 6.742 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 11100000 bytes
HTML transferred: 1400000 bytes
Requests per second: 14832.05 [#/sec] (mean)
Time per request: 6.742 [ms] (mean)
Time per request: 0.067 [ms] (mean, across all concurrent requests)
Transfer rate: 1607.77 [Kbytes/sec] received
which is very close to the node.js number, but actually slightly better.
But look at the result when using GOMAXPROCS=2
Concurrency Level: 100
Time taken for tests: 3.760 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 11100000 bytes
HTML transferred: 1400000 bytes
Requests per second: 26595.79 [#/sec] (mean)
Time per request: 3.760 [ms] (mean)
Time per request: 0.038 [ms] (mean, across all concurrent requests)
Transfer rate: 2882.94 [Kbytes/sec] received
80% more connections per unit of time, which is quite impressive considering the small amount of work being done on a per session basis!
It doesn't really scale beyond that, using GOMAXPROCS=3 or GOMAXPROCS=4 yield roughly the same result.
So Go really is an order of magnitude faster as I assume that all programmers count in base 2 ;)
I'm not sure your forking-node code does what you described: if the pid is the "cluster master", then you fork -- otherwise if it's already a forked child, you listen. Unless there are magic unicorns in there somewhere, or I'm interpreting it in as wrong a fashion as possible, what would happen is that the initial invocation would fork numCPUs children and then die, and each of those children would attempt to listen, only one of which would succeed (the rest failing silently [or failing loudly to nowhere, perhaps]). If that's so, you're getting 92k rps on node via only one pid (which means you should be getting 92k rps without any of the forking code).
When you call server.listen(...) in a worker, it serializes the arguments and passes the request to the master process. If the master process already has a listening server matching the worker's requirements, then it passes the handle to the worker. If it does not already have a listening server matching that requirement, then it creates one and passes the handle to the child.
This was what happened when I took some time to invalidate an article and benchmark that were simply sporting unrealistic numbers. See the benchmark results at the bottom of the README. This particular benchmark was between machines on a LAN (some details are higher up in the readme), had basic network tuning, and was hitting Postgres for data.

The database/sql concurrency patch mentioned in the readme is here: https://codereview.appspot.com/6855102/
On Friday, February 8, 2013 3:43:45 PM UTC-8, Robotic Artichoke wrote:
> I still think we need to put our brains together and create more meaningful benchmarks. Hello-worlding with the vanilla node server is close to pointless IMO.
In short, I strongly agree with you that more meaningful benchmarks are critical, as they turn up patches like the above.
--
Kind of disappointing. Go is supposedly "closer" to the metal, but
I didn't expect it to be merely comparable to node.js, and I certainly
didn't expect node.js to be 45% faster than Go.
In the test, GOMAXPROCS is set to 1. Setting it to higher numbers
actually does not have much effect.
For go:
Concurrency Level: 100
Time taken for tests: 152.330 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 110000000 bytes
HTML transferred: 14000000 bytes
Requests per second: 6564.69 [#/sec] (mean)
Time per request: 15.233 [ms] (mean)
Time per request: 0.152 [ms] (mean, across all concurrent requests)
Transfer rate: 705.19 [Kbytes/sec] received
For node.js:
Concurrency Level: 100
Time taken for tests: 104.538 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 78000000 bytes
HTML transferred: 14000000 bytes
Requests per second: 9565.93 [#/sec] (mean)
Time per request: 10.454 [ms] (mean)
Time per request: 0.105 [ms] (mean, across all concurrent requests)
Transfer rate: 728.66 [Kbytes/sec] received
Here is the code for Go and node.js.
Go code:
package main
import (
	"http"
	"io"
	"runtime"
)
func HelloServer(w http.ResponseWriter, req *http.Request) {
io.WriteString(w, "hello, world!\n")
}
func main() {
runtime.GOMAXPROCS(1)
http.HandleFunc("/", HelloServer)
http.ListenAndServe(":8080", nil)
}
node.js code:
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('hello, world!\n');
}).listen(8080, "127.0.0.1");
console.log('Server running at http://127.0.0.1:8080/');
Unrelated, but I have found wrk [1] and weighttp [2] produce more consistent results than ab or siege.
[1]: https://github.com/wg/wrk
[2]: http://redmine.lighttpd.net/projects/weighttp/wiki
On Feb 3, 2013, at 2:08 AM, Dave Cheney <da...@cheney.net> wrote:
> Here are some results comparing tip to the current node.js release
>
> hardware: lenovo x220, ubuntu 12.10, amd64
>
> Node.js:
>
> Server Hostname: localhost
> Server Port: 1337
>
> Document Path: /
> Document Length: 12 bytes
>
> Concurrency Level: 100
> Time taken for tests: 9.523 seconds
> Complete requests: 100000
> Failed requests: 0
> Write errors: 0
> Total transferred: 11300000 bytes
> HTML transferred: 1200000 bytes
> Requests per second: 10501.06 [#/sec] (mean)
> Time per request: 9.523 [ms] (mean)
> Time per request: 0.095 [ms] (mean, across all concurrent requests)
> Transfer rate: 1158.81 [Kbytes/sec] received
>
> Connection Times (ms)
> min mean[+/-sd] median max
> Connect: 0 0 0.1 0 7
> Processing: 1 9 4.9 9 31
> Waiting: 1 9 4.9 9 31
> Total: 1 9 4.9 9 31
>
> Percentage of the requests served within a certain time (ms)
> 50% 9
> 66% 12
> 75% 13
> 80% 14
> 90% 16
> 95% 17
> 98% 20
> 99% 22
> 100% 31 (longest request)
>
> lucky(~) % go version
> go version devel +3a9a5d2901f7 Sun Feb 03 02:01:05 2013 -0500 linux/amd64
>
> lucky(~) % ab -n 100000 -c 100 localhost:1337/
> This is ApacheBench, Version 2.3 <$Revision: 655654 $>
> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
> Licensed to The Apache Software Foundation, http://www.apache.org/
>
> Benchmarking localhost (be patient)
> Completed 10000 requests
> Completed 20000 requests
> Completed 30000 requests
> Completed 40000 requests
> Completed 50000 requests
> Completed 60000 requests
> Completed 70000 requests
> Completed 80000 requests
> Completed 90000 requests
> Completed 100000 requests
> Finished 100000 requests
>
>
> Server Software:
> Server Hostname: localhost
> Server Port: 1337
>
> Document Path: /
> Document Length: 14 bytes
>
> Concurrency Level: 100
> Time taken for tests: 10.173 seconds
> Complete requests: 100000
> Failed requests: 0
> Write errors: 0
> Total transferred: 13500000 bytes
> HTML transferred: 1400000 bytes
> Requests per second: 9830.25 [#/sec] (mean)
> Time per request: 10.173 [ms] (mean)
> Time per request: 0.102 [ms] (mean, across all concurrent requests)
> Transfer rate: 1295.98 [Kbytes/sec] received
>
> Connection Times (ms)
> min mean[+/-sd] median max
> Connect: 0 0 0.2 0 8
> Processing: 2 10 3.8 9 29
> Waiting: 1 10 3.8 9 28
> Total: 6 10 3.8 9 29
>
> Percentage of the requests served within a certain time (ms)
> 50% 9
> 66% 10
> 75% 11
> 80% 12
> 90% 14
> 95% 18
> 98% 25
> 99% 26
> 100% 29 (longest request)
>
> Which is pretty damn close. However, if we compare with siege
>
> node.js:
>
> lucky(~) % siege -b -t 10s -c 100 localhost:1337/
> ** SIEGE 2.70
> ** Preparing 100 concurrent users for battle.
> The server is now under siege...
> Lifting the server siege... done.
> Transactions: 65944 hits
> Availability: 100.00 %
> Elapsed time: 9.17 secs
> Data transferred: 0.75 MB
> Response time: 0.01 secs
> Transaction rate: 7191.28 trans/sec
> Throughput: 0.08 MB/sec
> Concurrency: 99.56
> Successful transactions: 65944
> Failed transactions: 0
> Longest transaction: 0.05
> Shortest transaction: 0.00
>
> FILE: /var/log/siege.log
> You can disable this annoying message by editing
> the .siegerc file in your home directory; change
> the directive 'show-logfile' to false.
> [error] unable to create log file: Permission denied
>
> go version devel +3a9a5d2901f7 Sun Feb 03 02:01:05 2013 -0500 linux/amd64
>
> lucky(~) % siege -b -t 10s -c 100 localhost:1337/
> ** SIEGE 2.70
> ** Preparing 100 concurrent users for battle.
> The server is now under siege...
> Lifting the server siege... done.
>
> Transactions: 24215 hits
> Availability: 100.00 %
> Elapsed time: 9.93 secs
> Data transferred: 0.32 MB
> Response time: 0.04 secs
> Transaction rate: 2438.57 trans/sec
> Throughput: 0.03 MB/sec
> Concurrency: 99.35
> Successful transactions: 24215
> Failed transactions: 0
> Longest transaction: 0.06
> Shortest transaction: 0.00
>
> Benchmarks are hard.
>
> Dave
>
> On Sun, Feb 3, 2013 at 7:52 PM, Gal Ben-Haim <gben...@gmail.com> wrote:
>> https://gist.github.com/4700801
>>
>> Gal Ben-Haim
>>
>>
>> On Sun, Feb 3, 2013 at 8:39 AM, Dave Cheney <da...@cheney.net> wrote:
>>>
>>>> I'm also seeing that Node.js is much faster than Go in the hello world
>>>> test.
>>>
>>> Hello,
>>>
>>> Could you post your benchmark tests, and your benchmark results, we've
>>> made a lot of fixes in tip (not 1.0.3) recently which should have
>>> closed the gap.
>>>
>>>> what is the explanation for this ?
>>>
>>> Lack of unicorns
>>>
>>> Dave
Requests per second: 214.05 [#/sec] (mean)
Requests per second: 161.55 [#/sec] (mean)
On Feb 21, 2014 6:24 PM, "ChrisLu" <chri...@gmail.com> wrote:
>
> Your test doesn't seem to have concurrency enabled; it looks single-threaded.
>
> I am actually the original poster. What I found interesting from this unscientific test is that, at the beginning, many people called it a wrong test, not realistic, a bad testing tool, bad code, etc., or even said that node.js is a C implementation and for sure outperforms Go in this test. Every doubt had its own valid reasoning. But few accepted the fact that there was a problem in the Go implementation.
No matter how both implementations perform, the test is still wrong and invalid; that fact simply won't change.
> And after 2.5 years, with some bug fixes (maybe due to better goroutine reuse, and/or some other performance improvements), Go can now consistently beat node.js in this test. Everybody has just become quiet on this, accepting that Go is of course faster than node.js.
It's quiet on this topic because it has been beaten to death, and we finally reached a consensus to just leave it alone.
> What can we learn from it? Maybe the obvious: always create a benchmark for your own use case, don't trust other people's numbers.
What I learned from this huge thread is that people like flame wars, and especially like using contrived and invalid benchmarks to prove their point with numbers.
This is a simplified test of serving static files. Think about a CDN, where you want to serve lots of static files fast. Just calling it "invalid" was not valid.
On Fri, Feb 21, 2014 at 8:31 PM, ChrisLu <chri...@gmail.com> wrote:
> This is a simplified test of serving static files. Think about a CDN, where you want to serve lots of static files fast. Just calling it "invalid" was not valid.

So your static http server hard-codes the file content into the source code?