Re: Webserver running out of sockets


Tamás Gulácsi

unread,
Sep 20, 2012, 1:22:18 PM9/20/12
to golan...@googlegroups.com, tristan...@gmail.com
Add runtime.GOMAXPROCS(runtime.NumCPU()) and r.Body.Close():


package main

import (
    "net/http"
    "runtime"
)

func serve(w http.ResponseWriter, r *http.Request) {
    r.Body.Close() // we never read the body, so close it right away
    w.Write([]byte("Hello, world.\n"))
}

func main() {
    runtime.GOMAXPROCS(runtime.NumCPU())
    http.HandleFunc("/", serve)
    http.ListenAndServe(":5000", nil)
}


On Thursday, September 20, 2012 at 5:22:32 PM UTC+2, tristan...@gmail.com wrote:
I'm testing 'hello world' Node.js, Python-Flask, and Go web servers using 'ab' (and also siege). It seems that Go is running out of sockets (regardless of ab or siege), while Node and Python are just fine.

[This question (with code) is also posted on stackexchange.]

Any suggestions about how I can get an apples-to-apples comparison here without modifying the fd count or system variables?
How do I force the socket to close after serving the page?

I understand that I can increase the open socket threshold for the test. However, Node and Python are running just fine, so, as I see it, either the Go code needs to be modified to make it comparable or there is a problem with the http package.
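On the "force the socket to close after serving the page" part: one option, assuming the standard net/http server, is to set a Connection: close response header before writing, which tells the server to close the connection after sending the response. A rough sketch (the handler name is only illustrative):

package main

import "net/http"

// serveAndClose answers the request and asks the server to close the
// underlying TCP connection instead of keeping it alive.
func serveAndClose(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Connection", "close")
    w.Write([]byte("Hello, world.\n"))
}

func main() {
    http.HandleFunc("/", serveAndClose)
    http.ListenAndServe(":5000", nil)
}

Note that closing every connection also changes what the benchmark measures, so it may not be the apples-to-apples comparison you are after.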

Tamás Gulácsi

unread,
Sep 20, 2012, 1:34:12 PM9/20/12
to golan...@googlegroups.com, tristan...@gmail.com
package main

import (
    "fmt"
    "net/http"
    "runtime"
)

// var resp = []byte("Hello, world.\n")


func serve(w http.ResponseWriter, r *http.Request) {
    r.Body.Close()
    fmt.Fprintln(w, "Hello, world.")
    // w.Write(resp)
}

func main() {
    runtime.GOMAXPROCS(runtime.NumCPU())
    http.HandleFunc("/", serve)
    http.ListenAndServe(":5000", nil)
}


RESULT:
siege -c 1000 -t 10S 127.0.0.1:5000
** SIEGE 2.70
** Preparing 1000 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.
Transactions:               16885 hits
Availability:              100.00 %
Elapsed time:                9.53 secs
Data transferred:            0.23 MB
Response time:                0.05 secs
Transaction rate:         1771.77 trans/sec
Throughput:                0.02 MB/sec
Concurrency:               83.35
Successful transactions:       16885
Failed transactions:               0
Longest transaction:            3.01
Shortest transaction:            0.00


BUT if I run it for 20 seconds, then the errors come in:

[error] socket: -277920000 address is unavailable.: Cannot assign requested address

Without r.Body.Close() it is worse: the errors show up in the 10-second version, too.

GThomas

Chandra Sekar S

unread,
Jan 1, 2013, 12:02:28 PM1/1/13
to golan...@googlegroups.com, tristan...@gmail.com
I have the exact same problem but I use siege instead of ab.

siege -c 500 -r 100

Increasing GOMAXPROCS and closing r.Body don't help in my case.

--
Chandru

On Friday, September 21, 2012 5:27:44 AM UTC+5:30, Grant_M wrote:
Tamás made good improvements. I would suggest using ab -r too.


Dave Cheney

unread,
Jan 1, 2013, 7:14:27 PM1/1/13
to Chandra Sekar S, golan...@googlegroups.com, tristan...@gmail.com
Run your test with GOGCTRACE=1 and let us know the results.
Previously we have seen the garbage collector blocked by goroutines
that did not yield to the scheduler.
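For example, something like this (assuming the test server binary is called benchmark; the GC trace goes to stderr, and the file name is just an example):

GOGCTRACE=1 ./benchmark 2> gc_trace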

Chandru

unread,
Jan 1, 2013, 10:54:32 PM1/1/13
to golan...@googlegroups.com
I've attached the output of running it with GOGCTRACE.

--
Chandra Sekar.S
gc_trace

Dave Cheney

unread,
Jan 1, 2013, 10:58:22 PM1/1/13
to Chandru, golan...@googlegroups.com
Doh! Sorry, I sent you the instructions for 'halp, my go program is
running out of memory'

Please try siege -b instead.

If that does not work, please reply with:

1. the version of Go you are using and your operating system
2. the output of ulimit -a
3. the output of lsof -p $yourprocess during the test run (see the
example command below), preferably just before the point where it
usually fails
4. the source of your test server program
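For item 3, something like this should work (assuming the server binary is named benchmark):

lsof -p $(pgrep benchmark)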

Cheers

Dave

Dave Cheney

unread,
Jan 2, 2013, 12:25:44 AM1/2/13
to Chandru, golan...@googlegroups.com
COMMAND   PID   USER    FD  TYPE DEVICE  SIZE/OFF NODE    NAME
benchmark 13862 chandru cwd DIR  8,1     4096     1494214 /home/chandru/benchmark
benchmark 13862 chandru rtd DIR  8,1     4096     2       /
benchmark 13862 chandru txt REG  8,1     3824015  1451692 /home/chandru/benchmark/benchmark
benchmark 13862 chandru mem REG  8,1     1811160  2237275 /lib/x86_64-linux-gnu/libc-2.15.so
benchmark 13862 chandru mem REG  8,1     135398   2240850 /lib/x86_64-linux-gnu/libpthread-2.15.so
benchmark 13862 chandru mem REG  8,1     149312   2245719 /lib/x86_64-linux-gnu/ld-2.15.so
benchmark 13862 chandru 0u  CHR  136,1   0t0      4       /dev/pts/1
benchmark 13862 chandru 1u  CHR  136,1   0t0      4       /dev/pts/1
benchmark 13862 chandru 2u  CHR  136,1   0t0      4       /dev/pts/1
benchmark 13862 chandru 3u  IPv6 990902  0t0      TCP     *:8585 (LISTEN)
benchmark 13862 chandru 4r  FIFO 0,8     0t0      990903  pipe
benchmark 13862 chandru 5w  FIFO 0,8     0t0      990903  pipe
benchmark 13862 chandru 6u  0000 0,9     0        6920    anon_inode
benchmark 13862 chandru 11u IPv6 1039170 0t0      TCP     localhost:8585->localhost:45800 (ESTABLISHED)
benchmark 13862 chandru 13u IPv6 1035741 0t0      TCP     localhost:8585->localhost:35440 (ESTABLISHED)
benchmark 13862 chandru 14u IPv6 1048153 0t0      TCP     localhost:8585->localhost:59798 (ESTABLISHED)

^ there are three open sockets back to siege. Was this taken before,
during, or after the point at which you believe you are leaking file
descriptors?

How are you determining that there is a file descriptor leak?

Dave Cheney

unread,
Jan 2, 2013, 1:04:01 AM1/2/13
to Chandru, golang-nuts
+cc golang-nuts

So... the problem is with siege? You are not seeing any errors from
the Go process?

I would say once siege starts to error out, it's probably broken,
which would explain why lsof reports few connections in use.

I noticed that you have set the number of open files to 1,000,000.
This is a very large number, and depending on your kernel and
architecture could consume a very large amount of kernel resources. I
would try with a lower limit, say 4096 or 8192.
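For example, in the same shell that starts the benchmark (assuming bash or similar):

ulimit -n 8192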

If you still believe that this is a problem with the Go benchmark
process, then try substituting it with a known working webserver such as
nginx (note, I am not advocating apache). This should help you isolate
whether the problem is siege, your configuration, or the process being
benchmarked.

Dave

On Wed, Jan 2, 2013 at 4:57 PM, Chandru <chand...@gmail.com> wrote:
> I see several of these in siege output.
>
> socket: -1509542144 address is unavailable.: Cannot assign requested address
>
> I ran lsof immediately after these errors began.

Chandru

unread,
Jan 2, 2013, 1:17:19 AM1/2/13
to golang-nuts
Yes, I'm sure it has something to do with Go (or my Go code) because I tried the same benchmark on the below node.js code and it works fine every single time.


--
Chandra Sekar.S

Dave Cheney

unread,
Jan 2, 2013, 1:30:25 AM1/2/13
to Chandru, golang-nuts
Can you please run siege under strace -f and post the results.
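Something along these lines should capture it (the port is taken from the lsof output above, and the output file name is just an example):

strace -f -o siege.strace siege -c 500 -r 100 http://127.0.0.1:8585/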

Chandru

unread,
Jan 2, 2013, 3:26:59 AM1/2/13
to golang-nuts
strace output attached.

--
Chandra Sekar.S
strace_go.gz

Chandru

unread,
Jan 2, 2013, 6:36:55 AM1/2/13
to golang-nuts
Upgrading to tip did the trick. Any idea which fix could've solved this?

--
Chandra Sekar.S

Dave Cheney

unread,
Jan 2, 2013, 6:40:43 AM1/2/13
to Chandra Sekar S, golang-nuts

That is excellent news. I cannot suggest which fix resolved your problem as I do not think I properly understood the issue. You can review the change log of the http package with

   hg log $GOROOT/src/pkg/net/http | less

Go 1.0.3 was published in August (I think).


Rémy Oudompheng

unread,
Jan 2, 2013, 1:46:22 PM1/2/13
to Dave Cheney, Chandra Sekar S, golang-nuts
If the difference (leak vs. no leak) is absolutely obvious, an hg
bisect run will give the answer.
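A rough sketch of such a run, assuming the go1.0.3 release tag is the known-bad revision and tip is known-good:

cd $GOROOT
hg bisect --reset
hg bisect --bad go1.0.3
hg bisect --good tip
# rebuild (cd src && ./all.bash) and re-run the siege test at each
# revision hg checks out, marking it with hg bisect --good or --bad

hg then narrows the range down to the change that fixed the leak.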

Rémy.