When sending several thousand HTTP requests I've noticed some
irregular memory consumption.
Even after stopping the requests and manually triggering a GC, the
allocated memory stays pretty high.
I ran the process with memory profiling enabled, which led to this:
https://docs.google.com/open?id=0B3CCB7CMFJ8FNGVlZDVkYjItNzc4Yy00ZmQ4LWExYTUtNzI3MjY3OTUzYmZh
Any idea what I'm doing wrong here?
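For reference, the profiling itself is done with something along these
lines (a simplified sketch; the file name and the point at which it runs
are placeholders, not the exact code):

// Simplified sketch of how the heap profile is written; the file
// name and the trigger are placeholders.
package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

func dumpHeapProfile() {
	runtime.GC() // flush up-to-date allocation data before dumping
	f, err := os.Create("heap.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := pprof.WriteHeapProfile(f); err != nil {
		log.Fatal(err)
	}
}

func main() {
	// ... send the requests, stop, trigger a GC ...
	dumpHeapProfile()
}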
Best regards,
Sascha
Can you please post some more details, including the source?
Are you using 386 or amd64?
How are you computing the memory usage of your process?
The runtime does not currently return memory to the operating system,
but there is a change in the works to improve the situation.
http://codereview.appspot.com/5451057/
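As a rough illustration (just a sketch; the field names are from
runtime.MemStats), the difference between what your program is using
and what the runtime has obtained from the operating system looks like
this:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// HeapAlloc is what the program is currently using;
	// Sys is what the runtime has obtained from the OS and,
	// as mentioned above, does not currently hand back.
	fmt.Printf("HeapAlloc = %d bytes, Sys = %d bytes\n", m.HeapAlloc, m.Sys)
}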
Cheers
Dave
https://docs.google.com/open?id=0B3CCB7CMFJ8FZWQwOWYxYzgtYjFiOS00MDgxLWEyZTQtZGU1YzhmNDY3ZjJh
goroutine 13328 [chan receive]:
net/http.(*persistConn).readLoop(0x3660eb00, 0x35f4f3a0)
/home/sascha/Development/Go/go/src/pkg/net/http/transport.go:528 +0x1b5
created by net/http.(*Transport).getConn
/home/sascha/Development/Go/go/src/pkg/net/http/transport.go:375 +0x591
No idea why there are so many of them.
Is there anything I can do to make them go away?
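(For what it's worth, a dump like the one above can be produced with
something like this; that it was captured this way here is only an
assumption:)

package main

import (
	"os"
	"runtime/pprof"
)

func dumpGoroutines() {
	// debug=2 prints a full stack trace per goroutine,
	// similar to the output shown above
	pprof.Lookup("goroutine").WriteTo(os.Stderr, 2)
}

func main() {
	dumpGoroutines()
}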
Russ
Quite the contrary... all requests are going to the same server.
It's a simple HTTP logging proxy: it forwards each entry and, if it
can't, stores it in a file to forward later.
There's an input channel, an output channel, and a buffer file that is
used when the output channel is clogged (entries stored in the file are
sent later, once the output channel is less than 75% full).
A limited number of worker goroutines drain the output channel (four in
the example here).
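Roughly, the output side looks like this (heavily simplified sketch;
every name here is a placeholder rather than the real code), with the
actual request call shown in the snippet just below:

// Heavily simplified sketch; every identifier here is a placeholder.
package main

import "net/http"

type logEntry struct {
	payload []byte // placeholder for whatever one log entry holds
}

// startWorkers launches n goroutines that drain the output channel
// and forward each entry to the upstream server.
func startWorkers(n int, out <-chan logEntry, client *http.Client) {
	for i := 0; i < n; i++ {
		go func() {
			for e := range out {
				if err := sendRequest(client, e); err != nil {
					// the real code writes the entry to the
					// buffer file here and retries it later
					continue
				}
			}
		}()
	}
}

// sendRequest wraps the request call shown in the snippet below.
func sendRequest(client *http.Client, e logEntry) error {
	return nil // placeholder
}

func main() {
	out := make(chan logEntry, 100)
	startWorkers(4, out, &http.Client{}) // four workers, as in the example above
	// the input side, the 75%-full check and the buffer file are omitted here
	close(out)
}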
resp, reqerr := fw.httpClient.Do(req)
if reqerr != nil {
	return reqerr
}
resp.Body.Close()
I'd like to help, but it is not clear from the memory profile or the
snippet of code you provided exactly what is going on.
Are you able to provide a single-file test case that exhibits the
problem? Just strip out the parts of your app that join your federated
writer to some http.Handler and wrap a main() around it. If that
reproduces the problem then we stand a sizeable chance of getting to
the bottom of the issue.
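Something shaped roughly like this is what I mean (everything here is a
placeholder, just to illustrate the shape; adapt it to your actual
request path):

// Hypothetical shape of a single-file test case; the handler, loop
// count and output are placeholders, not the code from this thread.
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"runtime"
)

func main() {
	// stand-in for the real upstream server
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	}))
	defer srv.Close()

	client := &http.Client{}
	for i := 0; i < 10000; i++ {
		resp, err := client.Get(srv.URL)
		if err != nil {
			continue
		}
		resp.Body.Close() // mirrors the snippet above: closed without being read
	}

	runtime.GC()
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("goroutines=%d HeapAlloc=%d Sys=%d\n",
		runtime.NumGoroutine(), m.HeapAlloc, m.Sys)
}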
Cheers
Dave
Yes, that would be wonderful. I'd love to debug if I could reproduce.
> the defer is just a copy & paste error on my side (remember, in the
> original code sendRequest is a function).
>
> Removed "defer" - behavior is the same.
>
> http://pastebin.com/UKuXbSar
It seems that the issue is that the goroutine calling putIdleConn does
not exit when putIdleConn discards the connection because the per-host
idle-connection maximum (MaxIdleConnsPerHost) has been reached.
Please try the attached patch.
diff --git a/src/pkg/net/http/transport.go b/src/pkg/net/http/transport.go
--- a/src/pkg/net/http/transport.go
+++ b/src/pkg/net/http/transport.go
@@ -235,15 +235,15 @@
 	return ""
 }
 
-func (t *Transport) putIdleConn(pconn *persistConn) {
+func (t *Transport) putIdleConn(pconn *persistConn) bool {
 	t.lk.Lock()
 	defer t.lk.Unlock()
 	if t.DisableKeepAlives || t.MaxIdleConnsPerHost < 0 {
 		pconn.close()
-		return
+		return false
 	}
 	if pconn.isBroken() {
-		return
+		return false
 	}
 	key := pconn.cacheKey
 	max := t.MaxIdleConnsPerHost
@@ -252,9 +252,10 @@
 	}
 	if len(t.idleConn[key]) >= max {
 		pconn.close()
-		return
+		return false
 	}
 	t.idleConn[key] = append(t.idleConn[key], pconn)
+	return true
 }
 
 func (t *Transport) getIdleConn(cm *connectMethod) (pconn *persistConn) {
@@ -565,7 +566,9 @@
 			lastbody = resp.Body
 			waitForBodyRead = make(chan bool)
 			resp.Body.(*bodyEOFSignal).fn = func() {
-				pc.t.putIdleConn(pc)
+				if !pc.t.putIdleConn(pc) {
+					return
+				}
 				waitForBodyRead <- true
 			}
 		} else {
@@ -578,7 +581,9 @@
 			// read it (even though it'll just be 0, EOF).
 			lastbody = nil
 
-			pc.t.putIdleConn(pc)
+			if !pc.t.putIdleConn(pc) {
+				return
+			}
 		}
 	}