Possible memory leak in HTTPS connections

Rob Napier

Oct 3, 2014, 3:24:14 PM
to golan...@googlegroups.com
I seem to be leaking a small amount of memory in every HTTPS request. Here is a trivial server:

func httpHandler(w http.ResponseWriter, req *http.Request) {
    w.Header().Set("Content-Type", "text/plain")
    w.Write([]byte("This is an example server.\n"))
}

func main() {
    http.HandleFunc("/", httpHandler)
    err := http.ListenAndServeTLS(":8080", "cert.pem", "key.pem", nil)
    if err != nil {
        log.Fatal(err)
    }
}


And a trivial infinite client:

func main() {
    tlsConfig := &tls.Config{
        InsecureSkipVerify: true,
    }

    count := 0
    for {
        transport := &http.Transport{TLSClientConfig: tlsConfig}
        client := http.Client{Transport: transport}

        res, err := client.Get("https://localhost:8080/")
        if err != nil {
            panic(err)
        }

        bytes, err := ioutil.ReadAll(res.Body)
        if err != nil {
            panic(err)
        }

        fmt.Printf("(%d) Received: %s", count, bytes)
        count++
        res.Body.Close()
        transport.CloseIdleConnections()
    }
}

This will steadily leak memory.

Most of the stacks in pprof are rooted at:

#	0x39cb3		crypto/tls.(*clientHandshakeState).doFullHandshake+0x273	/usr/local/Cellar/go/1.3.3/libexec/src/pkg/crypto/tls/handshake_client.go:227
#	0x39530		crypto/tls.(*Conn).clientHandshake+0x1310			/usr/local/Cellar/go/1.3.3/libexec/src/pkg/crypto/tls/handshake_client.go:184
#	0x37c50		crypto/tls.(*Conn).Handshake+0xf0				/usr/local/Cellar/go/1.3.3/libexec/src/pkg/crypto/tls/conn.go:974
#	0x9192b		net/http.func·021+0x3b						/usr/local/Cellar/go/1.3.3/libexec/src/pkg/net/http/transport.go:577


Is there a step I'm missing to release this memory? runtime.GC() does not improve things (I didn't really expect it to). I do not leak goroutines, and the connections are closing (without CloseIdleConnections(), this stops very quickly once it exhausts connections).
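
A minimal sketch of one way to watch the goroutine count (not necessarily how I checked; the pprof goroutine profile works too):

go func() {
    // Sketch only: confirm the goroutine count stays flat while the
    // request count climbs (assumes "log", "runtime", and "time" are imported).
    for {
        time.Sleep(10 * time.Second)
        log.Printf("goroutines: %d", runtime.NumGoroutine())
    }
}()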

As noted in the stacks, this is in 1.3.3.

-Rob

Benjamin Measures

Oct 4, 2014, 4:44:54 AM
to golan...@googlegroups.com
> This will steadily leak memory.

How are you measuring this?

Rob Napier

Oct 4, 2014, 8:51:56 AM
to Benjamin Measures, golang-nuts
Pprof's heap profile keeps growing (which is where I got the stacks). Activity Monitor on the Mac shows steady memory growth. We originally discovered it when an app went into a fast-reconnect loop and, after twenty minutes, locked up, apparently unable to allocate any more memory (usage had grown to gigabytes). Since pprof indicated stacks in the TLS code, I built this simple app to troubleshoot.

I tried this without pprof and still see steady usage growth in activity monitor. I tried to test with Instruments, but I suspect it interferes with the garbage collector timing because attaching Instruments caused memory usage to balloon very quickly (rather than the slow growth without it). I haven't explored that deeply since pprof gives me better data.
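
In case anyone wants to reproduce the measurement, the profile comes from the standard net/http/pprof handlers; a minimal sketch of the wiring (the port and layout are illustrative, not necessarily what my real app uses):

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
    // Inspect the heap with:
    //   go tool pprof http://localhost:6060/debug/pprof/heap
    // (older Go releases also want the path to the binary on that command line)
    go func() {
        log.Println(http.ListenAndServe("localhost:6060", nil))
    }()

    // ... run the HTTPS client loop from the previous message here ...
    select {}
}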

Rob



On Oct 4, 2014, at 4:44 AM, Benjamin Measures <saint....@gmail.com> wrote:

>> This will steadily leak memory.
>
> How are you measuring this?

Rob Napier

Oct 4, 2014, 9:02:41 AM
to Benjamin Measures, golang-nuts
To make it easier to reproduce, I've created this gist with everything required and a full dump of the heap after 5 seconds (this will steadily grow).


-Rob

Benjamin Measures

Oct 4, 2014, 4:39:14 PM
to golan...@googlegroups.com, saint....@gmail.com
On Saturday, 4 October 2014 14:02:41 UTC+1, Rob Napier wrote:
To make it easier to reproduce, I've created this gist with everything required and a full dump of the heap after 5 seconds (this will steadily grow).


What exactly are you seeing "steadily grow" in the heap profile? All but two entries in your profile show no live objects, and even those two don't take up much of the heap.

I suggest a garbage collector trace and go from there.
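
The quickest route is GODEBUG=gctrace=1 on the command line. If you'd rather log from inside the process, something like this sketch gives similar information (interval and fields chosen arbitrarily):

go func() {
    // Sketch: periodically dump GC and heap numbers (assumes "log",
    // "runtime", and "time" are imported).
    var m runtime.MemStats
    for {
        time.Sleep(5 * time.Second)
        runtime.ReadMemStats(&m)
        log.Printf("GCs: %d  heap in use: %d  sys: %d", m.NumGC, m.HeapInuse, m.Sys)
    }
}()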

Rob Napier

Oct 4, 2014, 5:07:38 PM
to Benjamin Measures, golan...@googlegroups.com
The number of heap objects steadily increases. This program should have no memory growth at all (and the server does not). The leak is small, but after several hundred thousand to a million connections it adds up (which is the situation I'm dealing with). In my larger program, the number of bytes leaked per iteration appears larger (probably because there is more real data), but I wanted to focus on a trivial example that demonstrates the problem.

If I switch this to http there is no memory growth, and no increase in heap objects or in OS-reported memory use. It only occurs when making repeated https requests.

I'll read over the troubleshooting guide; that seems useful. But this program couldn't be much simpler. I'm looking for how to keep https requests from leaking memory, which they currently seem to be doing.

Rob

Dave Cheney

Oct 4, 2014, 8:20:14 PM
to golan...@googlegroups.com
Maybe you can make your repro simpler: you don't need a separate client and server program, just two goroutines, one serving connections and the other making them. Possibly add a flag to turn TLS on and off. That should be about a page of code and will let everyone investigate the issue more easily.
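
Roughly this shape, as a sketch (untested; httptest is just a convenient stand-in for a hand-rolled TLS listener, and the -tls flag name is made up for illustration):

package main

import (
    "crypto/tls"
    "flag"
    "io"
    "io/ioutil"
    "log"
    "net/http"
    "net/http/httptest"
)

var useTLS = flag.Bool("tls", true, "serve and fetch over TLS")

func main() {
    flag.Parse()

    // One goroutine serving connections...
    handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        io.WriteString(w, "hello\n")
    })
    var srv *httptest.Server
    if *useTLS {
        srv = httptest.NewTLSServer(handler)
    } else {
        srv = httptest.NewServer(handler)
    }
    defer srv.Close()

    // ...and the main goroutine making them.
    for {
        transport := &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }
        client := http.Client{Transport: transport}
        res, err := client.Get(srv.URL)
        if err != nil {
            log.Fatal(err)
        }
        io.Copy(ioutil.Discard, res.Body)
        res.Body.Close()
        transport.CloseIdleConnections()
    }
}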

Carlos Castillo

Oct 4, 2014, 10:30:32 PM
to golan...@googlegroups.com, saint....@gmail.com
See the output of go build -gcflags -m:

./client.go:17: func literal escapes to heap
./client.go:22: &tls.Config literal escapes to heap
./client.go:28: moved to heap: client
./client.go:30: client escapes to heap
./client.go:27: &http.Transport literal escapes to heap
./client.go:22: &tls.Config literal escapes to heap
./client.go:27: &http.Transport literal escapes to heap
./client.go:22: &tls.Config literal escapes to heap
./client.go:40: main ... argument does not escape


Your code most certainly is allocating memory. It creates a new http.Client and tls.Config value on every iteration, and since they escape they must be allocated on the heap (the memory is not reused). ioutil.ReadAll also has to allocate on the heap.

Sure, the GC will clean up for you; according to your trace it has already run 79 times. The lines above the memory statistics (not prefixed with a #) are records of the places where the profiler saw an allocation *when it sampled*. A heap profile record is created for every 512K (by default) of memory allocated, and only the stack trace of the object allocated at the moment the sample is taken is recorded. This is for speed (and space), and on average the larger allocations are more likely to be profiled and be of relevance. When you look at the profile, you are only seeing the locations where the 524288th byte since the previous sample was allocated, not all allocations. Since the number of bytes allocated per iteration of your main loop almost certainly doesn't line up exactly with the sampling interval, the samples land in different places on different iterations, so over time more and more profiling locations show up.

The uncommented lines show the following information for a single stack trace:

0: 0 [2: 2048] @ 0x24eb5 0x24998 0x260e1 0x2616a 0x294bd 0x293f3 0x3378b 0x336da 0x33bac 0x346b0 0x36496 0x3a388 0x39530 0x37c50 0x9192b 0x14e80

  1. # of active (not collected by gc/re-used) objects allocated by samples using this trace
  2. The number of bytes in those objects
  3. # of total objects allocated by samples using this trace
  4. The total bytes of those allocations
  5. The stack trace
Thus for the given trace, and for most of them in your example, nearly all sampled objects have been collected by the time the profile was generated. In fact the first line shows the summary:

heap profile: 2: 90400 [82: 171840] @ heap/1048576

At the time the profile was taken, only two of all the traced objects were still active, for a total of 90400 bytes in use; 82 objects totalling 171840 bytes had been traced since the program (and tracing) started. The values at the bottom (a printout of the memory statistics) are a far more accurate representation of what's going on (as far as Go is concerned), because most of your allocations are very small compared to the profiling rate, so most are missed by the profiler.

The important things to take away are that:
  • Profiling is not an accurate measurement of memory use, because of the coarse sampling rate, especially when the majority of memory use comes from small objects
  • Each uncommented line represents a unique stack trace at the time memory was allocated, not a single allocation
  • The traces will still exist even after the memory is collected, so the number of traces in no way represents active memory
  • Most small allocations will be missed, but over time more and more traces will be added to the profile for memory allocations in a loop, unless you are very unlucky
In fact, since the number of traces will increase over time until potentially all are eventually found, and they take up memory which won't be freed, some of the memory increase you see will be for storing the heap profile.

That being said, when you use TLS as a client you are using cgo code, whose memory use is neither tracked by Go nor garbage collected, so any leaks are final. On the server side, I don't believe that code runs at all, since I think it's only used to validate a server's certificate when connecting as a client; and even if it does, you are not recreating TLS objects for every connection, so the memory would only be leaked once.

Rob Napier

Oct 4, 2014, 11:27:03 PM
to Carlos Castillo, golan...@googlegroups.com, saint....@gmail.com
I believe I understand what you've said here, and I understand that profiling isn't perfect. The key point is that a program making an unbounded number of https connections demands an unbounded amount of memory from the OS, which is how I originally began investigating this. In cases where an error causes the server to hang up on the client for a long period of time, I eventually run out of memory and the client stops trying to reconnect (rather than eventually recovering; and of course, eating gigabytes of memory is not acceptable in an app that runs on user machines and is supposed to have low resource usage). From your helpful analysis, I take it this leak is likely happening in the cgo code? If that's true, would building without cgo help explore it more easily? Since the problem is related to TLS processing (no memory growth occurs for http), cgo code does seem a good suspect.

Tomorrow or Monday I'll build a single-file version so others can more easily reproduce the issue. I was trying to keep the leaking code to the shortest possible example.

-Rob

Carlos Castillo

Oct 5, 2014, 9:29:10 AM
to golan...@googlegroups.com, cook...@gmail.com, saint....@gmail.com
Yes, I was saying that the memory profile, and especially its size in lines, isn't the best way to get an accurate count of bytes used by the process. It's better for finding the locations in the code where allocation is more frequent than elsewhere, or where the allocations are larger (e.g. close to or bigger than the sample size). The process of taking the heap samples itself uses a bounded amount of memory, since there is usually only a finite number of possible traces, but it will approach that limit only after a long time, because each iteration of your loop takes a significant amount of time (20-25ms) and has a low chance of catching the many small allocations. If you aren't using the heap profiler, you can disable the sampling entirely by setting runtime.MemProfileRate to 0 at the top of main (or in an init function). That can get rid of some noise in the results.
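
Concretely that's just (a sketch; the opposite tweak is shown in the comment):

import "runtime"

// Turn off heap-profile sampling when the profiler isn't being used, so
// its bookkeeping doesn't add noise to the numbers.
func init() {
    runtime.MemProfileRate = 0
}

// Or, in a test build, sample (nearly) every allocation at some runtime
// cost by setting a very small rate instead:
//     runtime.MemProfileRate = 1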

More useful numbers are at the bottom of the heap profile printout (the printout of a runtime.MemStats value), whose *Sys fields show how much memory Go has requested in total from the OS for various purposes. If those numbers do not grow over an extended period of time, then Go is not tracking the memory that is leaking, which may be due either to untracked sources of memory use (e.g. cgo code) or to a bug in Go.
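
If Sys does keep growing, the per-subsystem fields in runtime.MemStats can narrow down which pool is responsible; a sketch of logging them directly (assumes "log" and "runtime" are imported):

var m runtime.MemStats
runtime.ReadMemStats(&m)
// Break Sys down by subsystem to see which pool is growing.
log.Printf("HeapSys=%d StackSys=%d MSpanSys=%d MCacheSys=%d BuckHashSys=%d GCSys=%d OtherSys=%d",
    m.HeapSys, m.StackSys, m.MSpanSys, m.MCacheSys, m.BuckHashSys, m.GCSys, m.OtherSys)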

It turns out I was incorrect about how cgo is used by the TLS code. Although on darwin it does use cgo to retrieve the root certificates from the OS for checking connections, that code is only called once, in an init function, so it wouldn't lead to the growth you experienced. Without cgo it gets the certificates by reading the output of a command at runtime, so TLS should still work if you turn off cgo. The other cgo code likely in use by your program is for DNS lookups (for which there is also a non-cgo fallback), but I considered that unlikely, since if that were the case your http test would have been equally affected by any leaks there. Checking with cgo off is still a good idea though.

Also, I agree that a single test executable acting as both client and server would be preferable for testing.

Rob Napier

Oct 23, 2014, 11:06:01 AM
to Carlos Castillo, golang-nuts, Benjamin Measures
Thanks for all the input. I've pulled the code into a single program that demonstrates what I believe is the problem and moved to using MemStats rather than the heap profiler.

IMO the following program should never request additional memory from the OS after the first few iterations, so the Sys field of MemStats should never increase. However, within a few minutes, this program reliably increases Sys. The growth is slow (256k over the first 22k connections), but it should be zero (and is zero when using http rather than https), and it continues slowly over time.

Note that Alloc does not increase over time, just Sys. I'm not certain what this suggests. I see a similar steady growth in "Real Private Memory" as reported by Activity Monitor (darwin).

I know this all sounds small, but my process runs on thousands of end-user machines and servers for potentially months at a time. I need to demonstrate zero system memory growth.

As a note, I tried moving all the := allocations out of the for-loop in the client so that the variables are reused (and to avoid recreating the client every loop). That didn't seem to change the behavior. Adding a GC() call in the client loop didn't change anything, nor did adding a call to FreeOSMemory().

I'm currently doing all my testing on darwin. The first increase in Sys happens for me reliably around 3 minutes. The second increase is much more variable, and can take anywhere from 10 minutes to half an hour.

Thanks again.
-Rob


package main

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "io/ioutil"
    "log"
    "net"
    "net/http"
    "runtime"
    "sync/atomic"
    "time"
)

const port = ":8080"
const sampleInterval = 1 * time.Minute

var numConnections uint64

const cert = `
-----BEGIN CERTIFICATE-----
MIIDHDCCAm6gAwIBAgIJAJzp0JfeVFY6MA0GCSqGSIb3DQEBBQUAMFkxCzAJBgNV
BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX
aWRnaXRzIFB0eSBMdGQxEjAQBgNVBAMTCWxvY2FsaG9zdDAeFw0xNDEwMDMxODE3
MzZaFw0xNTEwMDMxODE3MzZaMFkxCzAJBgNVBAYTAkFVMRMwEQYDVQQIEwpTb21l
LVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQxEjAQBgNV
BAMTCWxvY2FsaG9zdDCBtTANBgkqhkiG9w0BAQEFAAOBowAwgZ8CgZcNdHU/8upi
8hRasgwlUXb6m/sOnVq3pKxZfnaehJCKDlEfkJqq91maQbDNflxd41J+3r6En2x/
eKJUvPJ/Lq5PsGTl127qYGElsgdcnJXpe7e2UZv/9OgJwq2sOtI+RwVpvs9ITPRU
caUPjmdKzyr4VwUEEj3fhBqnRSppz4pKtXK46ha+7UxWqU5pnPLLsssd+6/KIJzj
AgMBAAGjgb4wgbswHQYDVR0OBBYEFO7/91VllQERGVsizu9ijcmQ2q12MIGLBgNV
HSMEgYMwgYCAFO7/91VllQERGVsizu9ijcmQ2q12oV2kWzBZMQswCQYDVQQGEwJB
VTETMBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0
cyBQdHkgTHRkMRIwEAYDVQQDEwlsb2NhbGhvc3SCCQCc6dCX3lRWOjAMBgNVHRME
BTADAQH/MA0GCSqGSIb3DQEBBQUAA4GYAAHFhLoveonNYbSYzJQ8XeyNNdaL/C9Y
U4GP19i6kmq5RN7y0qo9Z0ENpU1q3hZ79ruIQFitbiDXzECaE+YvclDfjUfiwdxe
6XYPvL/zqSwvKLXt47IrfJ+J8K5gi/bgGTJ+mdhfhSOupG3Kaz6LGZAcSh/B+I1J
tBA9k6n620VMNZQMQxooBXmkCYelSXuasRwBRSxMlAo=
-----END CERTIFICATE-----
`

const key = `
-----BEGIN RSA PRIVATE KEY-----
MIICwgIBAAKBlw10dT/y6mLyFFqyDCVRdvqb+w6dWrekrFl+dp6EkIoOUR+Qmqr3
WZpBsM1+XF3jUn7evoSfbH94olS88n8urk+wZOXXbupgYSWyB1yclel7t7ZRm//0
6AnCraw60j5HBWm+z0hM9FRxpQ+OZ0rPKvhXBQQSPd+EGqdFKmnPikq1crjqFr7t
TFapTmmc8suyyx37r8ognOMCAwEAAQKBlwH4qB0geBrbIQRQxdrJ3sbFB7mScHIr
nFzYXITJI2w2wMgBJcgayXQCX+cbtmjDH5kbBYrk2L6sbAxCSr07l6pxS7cpI0UP
rewZ810ZH6iLO8uPkKyBt/6cS8A5TPbA9E0CnD1+bHNrG08ZBejZsNoH6he2LrqO
yWbAJvgCGqha+o/KuWG3lYkEWsvS9/B7r2Pm0T5OwUECTAOyuw1GAH3Rdx5j+khI
LIiW/UEVlGeVqPkEjnwNqdOhpFcOli+S7fOs69w3sqwqo8w0LPXd5gfOFt/Tp3iW
UW9x6ZhhQDUaqAT7ztECTAOjZWZzWN9c5E5VP2/H+NVxCZ3a7LeJ5WntHJWyPTAv
YzBWQ73rz9Xk2vpablardo+26hb/dqYyhgS4TC01EOo7JUM384n1vf7WpXMCTAG3
dDZMGSxOD+IOfH4S6oEYvUP51WJjyQSWReF1ojA3ZwZ2IebBaCzlRrJ5NDnQrSm7
ymbycrWKx3lsUN+bvv9hPBJciiZcUkPF8xECTAJbutHC+RuoCfFgvsMFW513LSWe
kAyUnRmhcgLyy0jdnqzpbfXA0jKyquLXFWimsi6MAYcwxscKPub2U6KGIFXESu4c
aYfGvAZhOlMCTAGC8TM00Rz32cFJ6jGya5ILnfnF560gaKs93LiXNDwgLD2S22jq
25Kvpx/nt++36/RPxrAjnH4J0tJXu3hheqEXM06IhFXUMOBS3Qk=
-----END RSA PRIVATE KEY-----
`

func client() {
    certPool := x509.NewCertPool()
    if !certPool.AppendCertsFromPEM([]byte(cert)) {
        log.Fatal("Could not parse PEM")
    }

    tlsConfig := &tls.Config{
        RootCAs:    certPool,
        ServerName: "localhost",
    }

    log.Printf("Client starting on %s\n", port)
    for {
        transport := &http.Transport{
            TLSClientConfig: tlsConfig,
        }
        client := http.Client{Transport: transport}

        res, err := client.Get(fmt.Sprintf("https://localhost%s/", port))
        if err != nil {
            log.Fatal(err)
        }

        _, err = ioutil.ReadAll(res.Body)
        if err != nil {
            log.Fatal(err)
        }
        res.Body.Close()
        transport.CloseIdleConnections()
        atomic.AddUint64(&numConnections, 1)
    }
}

func httpHandler(w http.ResponseWriter, req *http.Request) {
    w.Write([]byte(cert))
}

func server(gochan chan interface{}) {
    mux := http.NewServeMux()
    mux.HandleFunc("/", httpHandler)
    srv := &http.Server{Addr: port, Handler: mux}

    var err error
    tlsConfig := &tls.Config{}
    tlsConfig.NextProtos = []string{"http/1.1"}
    tlsConfig.Certificates = make([]tls.Certificate, 1)
    tlsConfig.Certificates[0], err = tls.X509KeyPair([]byte(cert), []byte(key))
    if err != nil {
        log.Fatal(err)
    }

    ln, err := net.Listen("tcp", port)
    if err != nil {
        log.Fatal(err)
    }

    tlsListener := tls.NewListener(ln, tlsConfig)

    log.Printf("Server starting on %s\n", port)
    close(gochan)
    srv.Serve(tlsListener)
}

func main() {
    gochan := make(chan interface{}, 1)
    go server(gochan)
    <-gochan
    go client()

    log.Printf("Sampling every %s\n", sampleInterval)
    time.Sleep(sampleInterval)
    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    var startSys = m.Sys

    var prevDSys uint64
    for {
        time.Sleep(sampleInterval)
        runtime.ReadMemStats(&m)
        dSys := m.Sys - startSys
        var change string
        if dSys != prevDSys {
            change = "***"
        } else {
            change = ""
        }
        prevDSys = dSys
        n := atomic.LoadUint64(&numConnections)
        log.Printf("Alloc: %d\tSys: %d\tdSys: %d\tn: %d\t%s\n", m.Alloc, m.Sys, dSys, n, change)
    }
}

James Bardin

Oct 23, 2014, 11:59:00 AM
to golan...@googlegroups.com, cook...@gmail.com, saint....@gmail.com
This seems to be all garbage and heap fragmentation related. If I change your client to get rid of some confounding factors, it runs through millions of requests without increasing Sys.

// allocate a single transport and client for all connections
transport := &http.Transport{
    TLSClientConfig: tlsConfig,
}
client := http.Client{Transport: transport}

// don't make any new allocations to read the response
buff := make([]byte, 32*1024)

log.Printf("Client starting on %s\n", port)
for {
    res, err := client.Get(fmt.Sprintf("https://127.0.0.1%s/", port))
    if err != nil {
        log.Fatal(err)
    }

    _, err = res.Body.Read(buff)
    if err != nil && err != io.EOF {
        log.Fatal(err)
    }
    res.Body.Close()
    atomic.AddUint64(&numConnections, 1)
}


I only tested for about 4 minutes, but it looks like this is fairly stable:

2014/10/23 11:58:12 Alloc: 438712 Sys: 3999992 dSys: 0 n: 2051594



Nick Craig-Wood

Oct 23, 2014, 12:15:59 PM
to Rob Napier, Carlos Castillo, golang-nuts, Benjamin Measures
I tried this on my linux amd64 laptop with go 1.3.3 and tip.

With 1.3.3, the RSS according to top keeps ticking up - 100k per second initially. I kept thinking it was going to stop going up, but even after 45k iterations it still kept ticking up, just very slowly. I never once saw the RSS go down.

With tip I saw the RSS come to a halt eventually (at a much lower level
than 1.3.3) so it looks to me like this is fixed by tip. Perhaps the
fixes to sync.Pool did it? sync.Pool seems to be used in net/http quite
a bit.

NB if you set GOMAXPROCS it leaks a lot quicker!

--
Nick Craig-Wood <ni...@craig-wood.com> -- http://www.craig-wood.com/nick

Nick Craig-Wood

Oct 23, 2014, 12:19:54 PM
to James Bardin, golan...@googlegroups.com, cook...@gmail.com, saint....@gmail.com
On 23/10/14 16:59, James Bardin wrote:
> This seems to be all garbage and heap fragmentation related. If I
> change your client to get rid of some confounding factors, it runs
> through millions of requests without increasing Sys.

I think what you've shown with your changes (allowing it to re-use https connections) is that the leak is dependent on making new https connections rather than anything else.

> I only tested for about 4 minutes, but it looks like this is fairly stable:
>
> 2014/10/23 11:58:12 Alloc: 438712 Sys: 3999992 dSys: 0 n: 2051594

Looking at the RSS (on linux anyway) seems to be more sensitive - you
can see it tick up quite often whereas the dSys figure hardly ever moves
and the Alloc figure bounces around all over the place.

James Bardin

Oct 23, 2014, 12:38:23 PM
to Nick Craig-Wood, golan...@googlegroups.com, cook...@gmail.com, Benjamin Measures

On Thu, Oct 23, 2014 at 12:19 PM, Nick Craig-Wood <ni...@craig-wood.com> wrote:
I think what you've shown with your changes (allowing it to re-use https
connections) is that the leak is dependent on making new https
connections rather than anything else

Oh, right. I meant to put `DisableKeepAlives: true,` in that transport. 
I also removed the call to the OS's resolver by using 127.0.0.1 instead of localhost to get rid of another variable.

It now does slowly increase, which makes sense since we're making a lot more garbage. I'll let it sit and burn CPU for a while and see what happens. 
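
For clarity, the transport I meant looks like this (a sketch of just the changed part):

// One shared transport; keep-alives disabled so every request really is
// a fresh TLS connection, and a literal IP so the resolver stays out of it.
transport := &http.Transport{
    TLSClientConfig:   tlsConfig,
    DisableKeepAlives: true,
}
client := http.Client{Transport: transport}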

James Bardin

Oct 23, 2014, 1:45:30 PM
to Nick Craig-Wood, golan...@googlegroups.com, cook...@gmail.com, Benjamin Measures

On Thu, Oct 23, 2014 at 12:38 PM, James Bardin <j.ba...@gmail.com> wrote:
Oh, right. I meant to put `DisableKeepAlives: true,` in that transport. 
I also removed the call to the OS's resolver by using 127.0.0.1 instead of localhost to get rid of another variable.

It now does slowly increase, which makes sense since we're making a lot more garbage. I'll let it sit and burn CPU for a while and see what happens. 

OK, ran it for 15+min, with multiple client loops and GOMAXPROCS=8 using go tip. Memory had a quick increase at the start, but was stable for the rest of the run. 

Tried again with go1.3, and it looks like it would level off eventually; each bump comes at an increasing interval, but it would take a while. Watching the GC trace output, the total live objects always stays in the same range with the same number of goroutines, so I doubt there's a leak in the Go code.


Vincent Batts

Oct 23, 2014, 3:23:48 PM
to James Bardin, Nick Craig-Wood, golang-nuts, cook...@gmail.com, Benjamin Measures

From my experience, it is due to the duplicate instances of http.Transport. By default a Transport keeps idle connections around for reuse, so it is not eligible for garbage collection the moment it goes out of scope. The workaround is to set DisableKeepAlives to true.


Vincent Batts

Oct 23, 2014, 3:29:59 PM
to James Bardin, cook...@gmail.com, golang-nuts, Nick Craig-Wood, Benjamin Measures

Or just reuse a single instance of http.Transport and fiddle with the maximum idle connections if needed (the default is 2).

You may also see this "leak" expressed as too many open files, since each open connection holds its file descriptor for as long as the Transport instance sticks around.
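
Something along these lines (a sketch; the default comes from http.DefaultMaxIdleConnsPerHost, which is 2):

// Reuse one Transport for the life of the process and raise the idle
// connection limit per host if the default isn't enough.
transport := &http.Transport{
    TLSClientConfig:     tlsConfig,
    MaxIdleConnsPerHost: 10,
}
client := &http.Client{Transport: transport}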

James Bardin

Oct 23, 2014, 3:52:27 PM
to golan...@googlegroups.com, j.ba...@gmail.com, ni...@craig-wood.com, cook...@gmail.com, saint....@gmail.com


On Thursday, October 23, 2014 3:23:48 PM UTC-4, Vincent Batts wrote:

From my experience, it is due to the duplicate instances of http.Transport. By default a Transport keeps idle connections around for reuse, so it is not eligible for garbage collection the moment it goes out of scope. The workaround is to set DisableKeepAlives to true.


Not sure if you intended to respond to me or not, but the version I was experimenting with was using a single instance of Transport and Client, with DisableKeepAlives set to true.

Vincent Batts

Oct 23, 2014, 5:42:42 PM
to James Bardin, Carlos Castillo, Benjamin Measures, golan...@googlegroups.com, ni...@craig-wood.com

Replied to all. You and I are on the same page.


qihu...@quantil.com

Jun 19, 2019, 11:11:56 AM
to golang-nuts
I'm running into this problem now. How did you solve it?