Deadlock with HTTP client


edward.w...@gmail.com

Sep 29, 2014, 8:58:47 AM9/29/14
to golan...@googlegroups.com
I'm running into an issue when running at high concurrency using net/http.

I'm getting deadlocks; using strace, the program invariably blocks on:

futex(0x1282e38, FUTEX_WAIT, 0, NULL...

Essentially, I'm getting social metrics for a bunch of domains, with multiple stages of firing up goroutines. I run with a throttle, and I've tested everything from runtime.NumCPU() all the way up to 128 concurrent workers: every level of concurrency ends in the FUTEX_WAIT.

Each worker gets social data for the domain, e.g.:

// Facebook
wg.Add(1)

go func() {
    defer wg.Done()

    s.FacebookShares, s.FacebookComments = getFacebook(d)
}()

...

Individual logic:

type facebook struct {
    Shares   uint64 `json:"shares,omitempty"`
    Comments uint64 `json:"comments,omitempty"`
}

const facebookURL = "https://graph.facebook.com/"

func getFacebook(d string) (s, c uint64) {
    var wg sync.WaitGroup

    for _, url := range []string{"http://" + d, "https://" + d} {
        wg.Add(1)

        go func(url string) {
            defer wg.Done()

            b, err := getBody(facebookURL + url)
            if err != nil {
                return
            }

            var fb facebook
            if err = json.Unmarshal(b, &fb); err != nil {
                return
            }

            atomic.AddUint64(&s, fb.Shares)
            atomic.AddUint64(&c, fb.Comments)
        }(url)
    }

    wg.Wait()

    return
}

I'm using a global HTTP client:

HTTP = &http.Client{
    Timeout: 5 * time.Second,
    /*Transport: &http.Transport{
        Dial: func(network, addr string) (net.Conn, error) {
            return net.DialTimeout(network, addr, dialTimeout)
        },
    },*/
}

(I've tried several examples found online, SetDeadline(), etc., to no avail.)
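
For what it's worth, a filled-in version of that commented-out transport would look roughly like this (a sketch: dialTimeout is the constant referenced in the commented code, and ResponseHeaderTimeout is an additional knob that bounds the wait for response headers):

HTTP = &http.Client{
    Timeout: 5 * time.Second, // caps the whole request, including reading the body
    Transport: &http.Transport{
        // Bound connection establishment separately.
        Dial: func(network, addr string) (net.Conn, error) {
            return net.DialTimeout(network, addr, dialTimeout)
        },
        // Bound the wait for the server's response headers.
        ResponseHeaderTimeout: 5 * time.Second,
    },
}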

And fetching []byte content via:

func getBody(url string) ([]byte, error) {
    r, err := HTTP.Get(url)
    if err != nil {
        return nil, err
    }
    defer r.Body.Close()

    if r.StatusCode != http.StatusOK {
        return nil, errRequest
    }

    return ioutil.ReadAll(r.Body)
}

I'm using a tip build on 64-bit Linux; this behaviour is exhibited on both my desktop install and a deployment server.

Has anyone come across these issues using high concurrency + net/http requests, or am I doing something obviously stupid?

James Bardin

Sep 29, 2014, 9:51:38 AM9/29/14
to golan...@googlegroups.com
Dump a stack trace when your program deadlocks, so you can see exactly what is waiting.
SIGQUIT prints a stack and exits by default.
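
If sending SIGQUIT from outside is awkward, the stacks can also be dumped from inside the program without exiting, e.g. on SIGUSR1 (a sketch using runtime/pprof; the choice of signal is arbitrary):

package main

import (
    "os"
    "os/signal"
    "runtime/pprof"
    "syscall"
)

// dumpStacksOnSignal writes every goroutine's stack to stderr
// each time the process receives SIGUSR1, without exiting.
func dumpStacksOnSignal() {
    c := make(chan os.Signal, 1)
    signal.Notify(c, syscall.SIGUSR1)
    go func() {
        for range c {
            pprof.Lookup("goroutine").WriteTo(os.Stderr, 1)
        }
    }()
}

Call dumpStacksOnSignal() early in main, then trigger a dump with: kill -USR1 <pid>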

Dave Cheney

Sep 29, 2014, 10:07:59 AM9/29/14
to golan...@googlegroups.com


On Monday, 29 September 2014 22:58:47 UTC+10, edward.w...@gmail.com wrote:
[...]

for _, url := range []string{"http://" + d, "https://" + d} {
    wg.Add(1)

    go func(url string) {
        defer wg.Done()
[...]

wg.Add must not be called concurrently with wg.Done. http://golang.org/pkg/sync/#WaitGroup.Done 

James Bardin

Sep 29, 2014, 10:23:55 AM9/29/14
to Dave Cheney, golan...@googlegroups.com

On Mon, Sep 29, 2014 at 10:07 AM, Dave Cheney <da...@cheney.net> wrote:
wg.Add must not be called concurrently with wg.Done. http://golang.org/pkg/sync/#WaitGroup.Done 

I don't think that's a requirement.
Done() is just an alias for Add(-1), and Add is protected by a mutex.
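
For reference, the documented constraint is that an Add with a positive delta must happen before the corresponding Wait, which the posted code satisfies by calling Add on the launching goroutine before each go statement. A minimal illustration (n and work() are placeholders):

var wg sync.WaitGroup

for i := 0; i < n; i++ {
    wg.Add(1) // on the launching goroutine, before Wait can observe zero
    go func() {
        defer wg.Done() // concurrent Done calls are fine
        work()
    }()
}

wg.Wait() // returns only after every Done above has run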

edward.w...@gmail.com

Sep 29, 2014, 10:25:13 AM9/29/14
to golan...@googlegroups.com
Thanks for the response; however, I'm not quite sure I follow: my code matches the documentation example almost like-for-like?

Dave Cheney

Sep 29, 2014, 10:25:33 AM9/29/14
to James Bardin, golan...@googlegroups.com
I am sorry. You are correct; Add() and Wait() should not be called concurrently.

Dave Cheney

Sep 29, 2014, 10:30:08 AM9/29/14
to golan...@googlegroups.com
Yup. I was completely wrong, ignore me.

Can you please post the complete panic message you received? It should be pretty easy to figure it out from there.

Also, when stracing Go programs, always use -f, as goroutines move between threads constantly.

edward.w...@gmail.com

Sep 29, 2014, 10:34:15 AM9/29/14
to golan...@googlegroups.com
Thanks, I'll have a strace with -f momentarily.

For the record, there's no panic(), just the block on futex(0x1282e38, FUTEX_WAIT, 0, NULL.

Nick Craig-Wood

Sep 29, 2014, 10:48:07 AM9/29/14
to golan...@googlegroups.com
On 29/09/14 15:34, edward.w...@gmail.com wrote:
> Thanks, I'll have a strace with -f momentarily.
>
> For the record, there's no panic() just the lock on futex(0x1282e38,
> FUTEX_WAIT, 0, NULL // blocks

Did you try the race detector?

--
Nick Craig-Wood <ni...@craig-wood.com> -- http://www.craig-wood.com/nick

Edward Whitman

Sep 29, 2014, 10:52:23 AM9/29/14
to Nick Craig-Wood, golan...@googlegroups.com
I'll give the race detector a go next. In the meantime, it appears the goroutines are doing something: http://pastebin.com/77SqJhNi - they seem to be firing these endlessly whilst there's a lock elsewhere in the chain?



Dave Cheney

Sep 29, 2014, 10:54:31 AM9/29/14
to Edward Whitman, Nick Craig-Wood, golang-nuts
That output looks consistent with network I/O. Why do you think your program has deadlocked? That word has a rather specific meaning in Go circles, i.e., no panic == no deadlock.

atomly

Sep 29, 2014, 11:47:53 AM9/29/14
to Edward Whitman, Nick Craig-Wood, golang-nuts
Did you see the comments about the WaitGroup? I'd fix that first, since you're specifically violating the contract of that API and it deals with exactly the thing you're having problems with (sync/race conditions).



edward.w...@gmail.com

Sep 29, 2014, 11:52:57 AM9/29/14
to golan...@googlegroups.com, edward.w...@gmail.com, ni...@craig-wood.com
I'm just trying out 1.3.2.

Re: WaitGroup, sorry, I don't understand where the violation is; could you be more explicit? Cheers!

edward.w...@gmail.com

Sep 29, 2014, 12:24:00 PM9/29/14
to golan...@googlegroups.com, edward.w...@gmail.com, ni...@craig-wood.com
I've just tried 1.3.2 with -race and watched for some time; no race conditions to be seen.

To describe the behaviour: there's a throttle channel, throttle = make(chan struct{}, 32).

This gets saturated on the main thread, which fires up goroutines to collect metrics for each provided domain. So the first layer is limited to 32 goroutines.

To improve performance, for each domain I fire up further goroutines in an attempt to speed up data collection for the metrics (sketched below).

The second layer seems to be retrying indefinitely (http://pastebin.com/77SqJhNi), which stops any real work being done; the throttle stays full and there's no progress.
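
Roughly, the structure described above (a sketch; collectMetrics is a hypothetical stand-in for the per-domain work):

package main

import "sync"

// throttle is a counting semaphore: at most 32 domains are
// processed concurrently in the first layer.
var throttle = make(chan struct{}, 32)

func processAll(domains []string) {
    var wg sync.WaitGroup
    for _, d := range domains {
        throttle <- struct{}{} // acquire a slot; blocks once 32 are in flight
        wg.Add(1)
        go func(d string) {
            defer wg.Done()
            defer func() { <-throttle }() // release the slot
            collectMetrics(d) // fires up the second layer of goroutines
        }(d)
    }
    wg.Wait()
}

func collectMetrics(d string) { /* per-domain fetches go here */ }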

edward.w...@gmail.com

Sep 29, 2014, 4:28:54 PM9/29/14
to golan...@googlegroups.com, edward.w...@gmail.com, ni...@craig-wood.com
I've switched back to 1.4 tip + race detector and now I'm seeing this:

/bin# ./domainTool 
unexpected fault address 0x20000a79ba80
fatal error: fault
[signal 0xb code=0x1 addr=0x20000a79ba80 pc=0x42891a]

goroutine 93 [running]:
runtime.gothrow(0xc209a08120, 0x0)
/data/software/go/src/runtime/panic.go:477 +0x8e fp=0x7f296edfce00 sp=0x7f296edfcde8
created by code/domain.GetHost
/data/projects/src/code/domain/host.go:123 +0x617

Here's the majority of the code, I've labelled where the panic is: http://pastebin.com/sMgZuiET

Dmitry Vyukov

Oct 2, 2014, 4:15:54 AM10/2/14
to edward.w...@gmail.com, golang-nuts, Nick Craig-Wood
On Tue, Sep 30, 2014 at 12:28 AM, <edward.w...@gmail.com> wrote:
> I've switched back to 1.4 tip + race detector and now I'm seeing this:
>
> /bin# ./domainTool
> unexpected fault address 0x20000a79ba80
> fatal error: fault
> [signal 0xb code=0x1 addr=0x20000a79ba80 pc=0x42891a]
>
> goroutine 93 [running]:
> runtime.gothrow(0xc209a08120, 0x0)
> /data/software/go/src/runtime/panic.go:477 +0x8e fp=0x7f296edfce00
> sp=0x7f296edfcde8
> created by code/domain.GetHost
> /data/projects/src/code/domain/host.go:123 +0x617


Hi,

Does it work with tip but w/o race detector?

Please run the program with gdb, and post stack trace from gdb.

edward.w...@gmail.com

Oct 2, 2014, 7:32:29 AM10/2/14
to golan...@googlegroups.com, edward.w...@gmail.com, ni...@craig-wood.com
It works on tip without the race detector, and on release both with and without it. Both show the same behaviour when running without the race detector.

I'm not well versed with gdb, but I'm seeing:

Program received signal SIGPIPE, Broken pipe.
[Switching to Thread 0x7fff4bfff700 (LWP 27786)]
0x0000000000512bb4 in syscall.Syscall ()
(gdb) 
Continuing.

Then:

^C
Program received signal SIGINT, Interrupt.
[Switching to Thread 0x7ffff7fef740 (LWP 27668)]
0x0000000000440713 in runtime.futex ()

Finally in backtrace:

(gdb) backtrace
#0  0x0000000000440713 in runtime.futex ()
#1  0x0000000000431287 in runtime.futexsleep ()
#2  0x0000000001275e38 in runtime.m0 ()
#3  0x0000000000000000 in ?? ()

I'm going to try and replicate this in a code example I can share.

Dmitry Vyukov

Oct 2, 2014, 7:45:10 AM10/2/14
to edward.whitman80, golang-nuts, Nick Craig-Wood
On Thu, Oct 2, 2014 at 3:32 PM, <edward.w...@gmail.com> wrote:
> Works on tip without race detector, works on release with and without. Both
> show the same behaviour when running without race detector.
> [...]
> I'm going to try and replicate this in a code example I can share.

This would be very helpful.



edward.w...@gmail.com

Oct 2, 2014, 8:51:50 AM10/2/14
to golan...@googlegroups.com, edward.w...@gmail.com, ni...@craig-wood.com
Here's the Go code: http://pastebin.com/HjuML994

Example, 10K domains list from Alexa: http://pastebin.com/2yyATQF4

This snippet seems to reproduce the issue for me. Maybe with this code it's glaringly obvious.

Dave Cheney

Oct 2, 2014, 9:02:04 AM10/2/14
to Edward Whitman, golang-nuts, Nick Craig-Wood

Thanks for your code sample.

Can you please run your program to the point where you think it has stalled, then send it SIGQUIT and paste the entire output? It should be simple to figure it out from there.

Dave


edward.w...@gmail.com

Oct 2, 2014, 9:26:58 AM10/2/14
to golan...@googlegroups.com, edward.w...@gmail.com, ni...@craig-wood.com
On second thought, my test might be erroneous; please bear with me.

Erik Dubbelboer

Oct 3, 2014, 2:49:15 AM10/3/14
to golan...@googlegroups.com
Some months ago I ran into what looks like the same problem while writing my HTTP benchmark tool in Go: https://github.com/spotmx/hench

I don't remember everything exactly, but I remember my program was also hanging on the futex call. I think I used the pprof package to get stack traces and found out that it was actually the DNS resolver that kept hanging, waiting for a response. Go uses its own DNS resolver, which doesn't seem to cache any responses. When the HTTP client was making too many requests, my DNS provider seemed to start rate limiting me, which made everything grind to a halt, like you seem to be experiencing.

I fixed the problem by implementing my own dialer (https://github.com/spotmx/hench/blob/master/main.go#L122 and https://github.com/spotmx/hench/blob/master/main.go#L339), which uses my own resolver function (https://github.com/spotmx/hench/blob/master/main.go#L65) that caches the responses. After this the problem was completely gone.

I currently don't have enough time to check whether you are experiencing the exact same problem, but it seems so similar that I thought I'd post this to see if it helps. The sketch below shows the general shape of the fix.
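
For illustration (not Erik's actual code), a minimal sketch of such a caching dialler, with a simple map-based cache and no TTL or negative caching:

package main

import (
    "net"
    "net/http"
    "sync"
    "time"
)

var (
    cacheMu  sync.Mutex
    dnsCache = map[string]string{} // host -> first resolved IP
)

// resolveCached performs at most one real lookup per host for
// the life of the process and serves the rest from the cache.
func resolveCached(host string) (string, error) {
    cacheMu.Lock()
    ip, ok := dnsCache[host]
    cacheMu.Unlock()
    if ok {
        return ip, nil
    }
    addrs, err := net.LookupHost(host)
    if err != nil {
        return "", err
    }
    cacheMu.Lock()
    dnsCache[host] = addrs[0]
    cacheMu.Unlock()
    return addrs[0], nil
}

// cachingDial resolves through the cache and dials the IP directly,
// so repeated requests to the same host skip the resolver entirely.
func cachingDial(network, addr string) (net.Conn, error) {
    host, port, err := net.SplitHostPort(addr)
    if err != nil {
        return nil, err
    }
    ip, err := resolveCached(host)
    if err != nil {
        return nil, err
    }
    return net.DialTimeout(network, net.JoinHostPort(ip, port), 5*time.Second)
}

var client = &http.Client{
    Transport: &http.Transport{Dial: cachingDial},
    Timeout:   10 * time.Second,
}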

Erik Dubbelboer

Oct 3, 2014, 3:20:17 AM10/3/14
to golan...@googlegroups.com
Here is a simple program to demonstrate the problem: http://play.golang.org/p/Ask2EXAo1-
Run the program until it stops printing new lines, kill it with Ctrl+C, and you'll see that three of the four goroutines are waiting on the singleflight in net.lookupIPMerge, while one goroutine is hanging in net._C2func_getaddrinfo, which is the actual lookup that is stuck.

Example stack trace:

goroutine 5 [syscall]:
net._C2func_getaddrinfo(0x7f71680008c0, 0x0, 0xc21001e1e0, 0xc2100008e8, 0x1c, ...)
        net/_obj/_cgo_defun.c:50 +0x36
net.cgoLookupIPCNAME(0x514e90, 0xa, 0x0, 0x0, 0x0, ...)
        /home/erik/go/src/pkg/net/cgo_unix.go:96 +0x174
net.cgoLookupIP(0x514e90, 0xa, 0x0, 0x0, 0x0, ...)
        /home/erik/go/src/pkg/net/cgo_unix.go:148 +0x69
net.lookupIP(0x514e90, 0xa, 0x0, 0x0, 0x0, ...)
        /home/erik/go/src/pkg/net/lookup_unix.go:64 +0x63
net.func·022(0x61bfe0, 0xc21001e2a0, 0x514e90, 0xa)
        /home/erik/go/src/pkg/net/lookup.go:41 +0x2d
net.(*singleflight).Do(0x61bfe0, 0x514e90, 0xa, 0x7f7182780e80, 0x0, ...)
        /home/erik/go/src/pkg/net/singleflight.go:45 +0x1de
net.lookupIPMerge(0x514e90, 0xa, 0x0, 0x0, 0x0, ...)
        /home/erik/go/src/pkg/net/lookup.go:42 +0xc0
net.LookupIP(0x514e90, 0xa, 0x0, 0x0, 0x0, ...)
        /home/erik/go/src/pkg/net/lookup.go:31 +0x63
main.func·001()
        /home/erik/benchmark/test.go:14 +0x34
created by main.main
        /home/erik/benchmark/test.go:20 +0x33

goroutine 6 [semacquire]:
sync.runtime_Semacquire(0xc2100008f0)
        /home/erik/go/src/pkg/runtime/sema.goc:199 +0x30
sync.(*WaitGroup).Wait(0xc2100c9ac0)
        /home/erik/go/src/pkg/sync/waitgroup.go:127 +0x14b
net.(*singleflight).Do(0x61bfe0, 0x514e90, 0xa, 0x7f718277ee80, 0x0, ...)
        /home/erik/go/src/pkg/net/singleflight.go:37 +0x103
net.lookupIPMerge(0x514e90, 0xa, 0x0, 0x0, 0x0, ...)
        /home/erik/go/src/pkg/net/lookup.go:42 +0xc0
net.LookupIP(0x514e90, 0xa, 0x0, 0x0, 0x0, ...)
        /home/erik/go/src/pkg/net/lookup.go:31 +0x63
main.func·001()
        /home/erik/benchmark/test.go:14 +0x34
created by main.main
        /home/erik/benchmark/test.go:20 +0x33

Mikio Hara

Oct 3, 2014, 3:23:20 AM10/3/14
to Erik Dubbelboer, golang-nuts

Dmitry Vyukov

Oct 3, 2014, 5:17:41 AM10/3/14
to edward.w...@gmail.com, golang-nuts, Nick Craig-Wood
I've tried to reproduce the crash with race detector on:
go version devel +87dcbcc9371b Wed Oct 01 16:44:52 2014 -0700 darwin/amd64
and:
go version devel +920cde0a8b2d Fri Oct 03 10:36:54 2014 +1000 linux/amd64

using your domains.txt

I see no crashes.

Do I need to do anything special? What OS do you use? What does 'go version' say?

Dmitry Vyukov

Oct 3, 2014, 5:21:53 AM10/3/14
to edward.whitman80, golang-nuts, Nick Craig-Wood
On Thu, Oct 2, 2014 at 3:32 PM, <edward.w...@gmail.com> wrote:
> Works on tip without race detector, works on release with and without. Both
> show the same behaviour when running without race detector.
>
> I'm not well versed with gdb, but I'm seeing:

Run:

$ gdb ./prog_built_with_race

Then in gdb type:

handle SIGPIPE nostop noprint
// this will disable disturbing SIGPIPE signals

then:

run

Then wait until the program crashes with SIGSEGV or SIGBUS, and type 'bt' to print the crash stack and 'info registers' to print the register contents.

edward.w...@gmail.com

Oct 3, 2014, 12:48:36 PM10/3/14
to golan...@googlegroups.com, edward.w...@gmail.com, ni...@craig-wood.com
Sorry, my test wasn't any good at all.

I have had some luck with Erik's dialler + DNS cache. In fact, HTTP request throughput now looks as predictable and constant as network requests to different hosts can be.

One problem does remain: the program continues to run in an active state, but there seems to be significant CPU churn as the process ages. On an 8-core system it starts at 30% utilisation and eventually reaches 600%+. Network utilisation remains good, though.

I have noticed that with a *ridiculous* MaxIdleConnsPerHost of 2048 the program behaves impeccably, consuming only 25% CPU in top. This might be down to addresses resolving to different IPs via load balancers; for instance, you will get many different IPs on each resolve of graph.facebook.com.
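
For anyone following along, raising the idle pool is a single Transport field; the default is http.DefaultMaxIdleConnsPerHost (2). A sketch, reusing the caching dialler from earlier in the thread:

client := &http.Client{
    Transport: &http.Transport{
        Dial:                cachingDial, // the caching dialler from Erik's suggestion
        MaxIdleConnsPerHost: 2048,        // default http.DefaultMaxIdleConnsPerHost is 2
    },
    Timeout: 5 * time.Second,
}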

If I keep MaxIdleConnsPerHost at a more sane level, then as the process approaches 600%+ CPU utilisation I see lots of the original:

[pid 28836] sched_yield( <unfinished ...>
[pid 28754] futex(0xc209c220d8, FUTEX_WAKE, 1 <unfinished ...>
[pid 28836] <... sched_yield resumed> ) = 0
[pid 28754] <... futex resumed> )       = 1
[pid 28836] futex(0x1277420, FUTEX_WAKE, 1 <unfinished ...>
[pid 28734] <... futex resumed> )       = 0
[pid 28836] <... futex resumed> )       = 0
[pid 28725] select(0, NULL, NULL, NULL, {0, 10000} <unfinished ...>
[pid 28836] epoll_wait(4, {}, 128, 0)   = 0
[pid 28836] futex(0xc209b475d8, FUTEX_WAIT, 0, NULL <unfinished ...>
[pid 28725] <... select resumed> )      = 0 (Timeout)
[pid 28725] select(0, NULL, NULL, NULL, {0, 10000}) = 0 (Timeout)
[pid 28734] futex(0xc209b475d8, FUTEX_WAKE, 1 <unfinished ...>
[pid 28725] select(0, NULL, NULL, NULL, {0, 10000} <unfinished ...>
[pid 28836] <... futex resumed> )       = 0
[pid 28734] <... futex resumed> )       = 1
[pid 28836] epoll_wait(4, {}, 128, 0)   = 0
[pid 28836] futex(0xc209b475d8, FUTEX_WAIT, 0, NULL <unfinished ...>
[pid 28725] <... select resumed> )      = 0 (Timeout)
[pid 28725] select(0, NULL, NULL, NULL, {0, 10000}) = 0 (Timeout)
[pid 28817] sched_yield( <unfinished ...>
[pid 28754] futex(0xc209b475d8, FUTEX_WAKE, 1 <unfinished ...>
[pid 28817] <... sched_yield resumed> ) = 0
[pid 28725] select(0, NULL, NULL, NULL, {0, 10000} <unfinished ...>
[pid 28817] futex(0x1277420, FUTEX_WAKE, 1) = 0
[pid 28754] <... futex resumed> )       = 1
[pid 28836] <... futex resumed> )       = 0
[pid 28836] epoll_wait(4, {}, 128, 0)   = 0
[pid 28836] futex(0xc209b475d8, FUTEX_WAIT, 0, NULL <unfinished ...>
[pid 28725] <... select resumed> )      = 0 (Timeout)
[pid 28725] select(0, NULL, NULL, NULL, {0, 10000} <unfinished ...>
[pid 11551] <... connect resumed> )     = 0

I'll try my best to create a reduced example that exhibits this issue tomorrow.

edward.w...@gmail.com

Oct 3, 2014, 12:49:49 PM10/3/14
to golan...@googlegroups.com, edward.w...@gmail.com, ni...@craig-wood.com
I forgot to thank you, Erik; even with the outstanding issue it's no longer a blocking bug. Thanks!

Andrew Bursavich

Oct 4, 2014, 12:27:44 AM10/4/14
to golan...@googlegroups.com, edward.w...@gmail.com, ni...@craig-wood.com
I made a package called nett a couple of months ago while travelling and silently pushed it to GitHub. I needed a nice way to transparently add this type of DNS caching, and the most logical place to hook it in seemed to be the Dialer. Under the covers it borrows quite a bit from the standard library and provides a Dialer with matching functionality plus some tiny bells and whistles: primarily, you can specify your own Resolver. Check it out -- you may find it useful.

dialer := nett.Dialer{
    // Cache successful DNS lookups for five minutes
    // using nett.DefaultResolver to fill the cache.
    Resolver: nett.NewCacheResolver(nil, 5*time.Minute),
    // If host resolves to multiple IP addresses,
    // dial two concurrently splitting between
    // IPv4 and IPv6 addresses and return the
    // connection that is established first.
    IPFilter: nett.MaxIPFilter(2),
    // Give up after five seconds including DNS resolution.
    Timeout: 5 * time.Second,
}
client := http.Client{
    Transport: &http.Transport{
        // Use the Dialer.
        Dial: dialer.Dial,
    },
}
// ...

Cheers,
Andy