limiting http server connection count


Matt Joiner

May 23, 2013, 7:04:13 AM
to golan...@googlegroups.com
I've rewritten several services in use by my employer in Go as HTTP servers. A common theme across these services is errors of the form "too many open files", "no such host" and the like. I'm not entirely sure what causes the DNS errors, but my belief is that the root cause is accepting too many connections and then having no file descriptors left to service those requests, since my handlers open file descriptors as part of the handling logic.

Many of the services have grown ad-hoc limiting and pools to manage various resources, but these are clunky and involve a lot of guesswork. In addition, net/http.Server doesn't provide any built-in mechanism to limit active connections, and the logic that wraps the handling of each client is not exposed (net/http.Server.newConn), making it difficult to call Accept() myself and invoke the usual net/http handling routines for each connection.

What's the best approach here? 

Henry Heikkinen

May 23, 2013, 8:12:33 AM
to Matt Joiner, golan...@googlegroups.com
Hi Matt,

If you are running on Linux, you could try setting a higher maximum number of open file descriptors using ulimit[1]. This has solved similar problems for me in the past and should be good at least as a temporary solution.

Regards,
Henry Heikkinen


Matt Joiner

May 23, 2013, 8:26:56 AM
to golan...@googlegroups.com, Matt Joiner
This is precisely what I want to avoid. The server isn't keeping up with demand; if the fd limit is raised, it just fills those up too, and then the failures begin as described in the OP.

adnaan badr

May 23, 2013, 8:52:30 AM
to golan...@googlegroups.com, Matt Joiner
http://golang.org/doc/effective_go.html#channels

Look for the semaphore example. Google for "golang semaphore"
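
For reference, the shape of that pattern (a minimal sketch; maxInFlight and doWork are illustrative names, not from Effective Go, and this is the send-to-acquire form whose strict memory-model validity gets debated later in this thread):

var sem = make(chan struct{}, maxInFlight)

func limited() {
    sem <- struct{}{}        // acquire: blocks once maxInFlight calls are active
    defer func() { <-sem }() // release
    doWork()
}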

roger peppe

May 23, 2013, 9:24:49 AM
to Matt Joiner, golang-nuts
I think your best approach is probably to make a custom http.Handler
that limits the number of concurrent connections.

For instance: http://play.golang.org/p/732IdPniVi

You could probably combine that with a custom net.Listener
that only calls Accept when there's free space
(although there's an awkward window after you've returned
the net.Conn, because you really don't want to wrap it,
since the type is special-cased in net/http)
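
A sketch of such a handler-level limiter (the playground paste is the authoritative version; the names here are illustrative guesses):

package main

import "net/http"

type limitHandler struct {
    h   http.Handler
    sem chan struct{}
}

func (l *limitHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    l.sem <- struct{}{} // blocks once maxActive requests are in flight
    defer func() { <-l.sem }()
    l.h.ServeHTTP(w, r)
}

// LimitHandler wraps h so that at most maxActive requests run concurrently.
func LimitHandler(maxActive int, h http.Handler) http.Handler {
    return &limitHandler{h: h, sem: make(chan struct{}, maxActive)}
}

func main() {
    http.Handle("/", LimitHandler(100, http.FileServer(http.Dir("."))))
    http.ListenAndServe(":8080", nil)
}

Note this limits concurrent requests only after the connection has been accepted, which is exactly the objection raised below.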

Matt Joiner

May 23, 2013, 9:49:33 AM
to golan...@googlegroups.com, Matt Joiner
As mentioned, this isn't clean because the logic isn't well exposed from net/http.Server.

Matt Joiner

May 23, 2013, 9:51:14 AM
to golan...@googlegroups.com, Matt Joiner
Thanks rog, but your example limits handling *after* the connection is already accepted. It's preventing the accept in the first place that I was describing.

Henry Heikkinen

May 23, 2013, 9:57:47 AM
to Matt Joiner, golan...@googlegroups.com
Hi Matt,

From a quick look at the http package, it seems you could do what Roger suggested with http.Server so that the check happens before a connection is accepted.

Regards,
Henry Heikkinen



Gustavo Niemeyer

May 23, 2013, 10:22:11 AM
to Matt Joiner, golan...@googlegroups.com
On Thu, May 23, 2013 at 10:51 AM, Matt Joiner <anac...@gmail.com> wrote:
> Thanks rog, but your example limits handling *after* the connection is
> already accepted. it's preventing the accept in the first place that I was
> describing.

http://play.golang.org/p/r-iWfL8xYr


gustavo @ http://niemeyer.net

Gustavo Niemeyer

May 23, 2013, 10:25:02 AM
to Matt Joiner, golan...@googlegroups.com
Hmm.. not a big deal, but there should be a return on line 30:

http://play.golang.org/p/hy9ouVmtKk
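
Both pastes have since expired; minux quotes the Accept method verbatim further down the thread. A self-contained sketch of the same approach (the boundConn half is a reconstruction, since only Accept survives in the quotes):

package main

import (
    "net"
    "net/http"
)

type boundListener struct {
    net.Listener
    active chan bool
}

func NewBoundListener(maxActive int, l net.Listener) net.Listener {
    return &boundListener{l, make(chan bool, maxActive)}
}

func (l *boundListener) Accept() (net.Conn, error) {
    l.active <- true // blocks once maxActive connections are open
    c, err := l.Listener.Accept()
    if err != nil {
        <-l.active
        return nil, err
    }
    return &boundConn{c, l.active}, err
}

type boundConn struct {
    net.Conn
    active chan bool
}

// Close frees a slot. This reconstruction does not guard against
// double-Close, which would steal an extra slot.
func (c *boundConn) Close() error {
    err := c.Conn.Close()
    <-c.active
    return err
}

func main() {
    ln, err := net.Listen("tcp", ":8080")
    if err != nil {
        panic(err)
    }
    http.Serve(NewBoundListener(100, ln), http.FileServer(http.Dir(".")))
}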


gustavo @ http://niemeyer.net

roger peppe

May 23, 2013, 10:57:59 AM
to Gustavo Niemeyer, Matt Joiner, golang-nuts
Yes, this is what I was alluding to when I said "you don't really
want to wrap [the net.Conn]". There's a special case in
net/http (see conn.closeWriteAndWait) that won't be
triggered if the connection isn't of type *net.TCPConn.
But looking back over it, the issue is so minor that it's
unlikely to be a problem, and it is a nice solution.

BTW technically this example doesn't prevent the
number of connections from growing beyond
the limit, as you can't synchronise on a send on a buffered
channel. I really hope the memory model changes in this
respect, but I'm guessing it probably won't.

Better is to fill the channel first, then use {recv; operation; send}
rather than {send; operation; recv}, I think.
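
Something along these lines (a sketch with illustrative names):

// Fill the buffer first, then use {receive; operation; send}. A send
// happens-before the corresponding receive completes, so each release
// happens-before the next acquire, and the limit holds even under a
// strict reading of the memory model.
func newLimiter(n int) chan bool {
    sem := make(chan bool, n)
    for i := 0; i < n; i++ {
        sem <- true
    }
    return sem
}

func limited(sem chan bool, op func()) {
    <-sem // acquire
    op()
    sem <- true // release
}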

Gustavo Niemeyer

May 23, 2013, 11:07:16 AM
to roger peppe, Matt Joiner, golang-nuts
On Thu, May 23, 2013 at 11:57 AM, roger peppe <rogp...@gmail.com> wrote:
> BTW technically this example doesn't prevent the
> number of connections from growing beyond
> the limit, as you can't synchronise on a send on a buffered
> channel. I really hope the memory model changes in this
> respect, but I'm guessing it probably won't.

If you can put items in a channel concurrently and overflow the
capacity of the channel, then something is broken.


gustavo @ http://niemeyer.net

roger peppe

May 23, 2013, 11:13:34 AM
to Gustavo Niemeyer, Matt Joiner, golang-nuts
It seems that way, doesn't it?
Nonetheless, that's the conclusion I have seen drawn elsewhere.

If I do:

c := make(chan bool, 1)
x := 0
f := func() {
    c <- true
    x++
    <-c
}
go f()
go f()

apparently there is no guarantee that x will end up as 2.

Matt Joiner

May 23, 2013, 11:31:59 AM
to golan...@googlegroups.com, Matt Joiner
That's a very neat implementation. I'd have been happy to fire off a wrapper around the HTTP handler and signal the accepting goroutine when it completes, since my problem is HTTP-specific.

I am concerned that Close may not necessarily be called, as rog mentions. There are also special error-handling behaviours in net/http.Server.Serve that would need to be duplicated. Regardless, I might give this variant a spin; it's a very clean abstraction.

Ideally the connection-handling machinery in net/http should be exposed, and take Conn and Server interfaces (it already takes a Conn), so as to allow custom Servers. Additionally, upon accepting a connection, an exposed function ServeOne(Conn) should be called instead of all the unexposed machinery that currently runs.

On further checking of net/http.Server.Serve, I noticed it gradually backs off when temporary errors occur (which include EMFILE and EAGAIN). This should give existing requests time to fail, freeing up more file descriptors until things begin to succeed again.
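
The relevant loop in Serve (paraphrased from the net/http source of that era, not quoted exactly) looks roughly like:

var tempDelay time.Duration // how long to sleep on accept failure
for {
    rw, e := l.Accept()
    if e != nil {
        if ne, ok := e.(net.Error); ok && ne.Temporary() {
            if tempDelay == 0 {
                tempDelay = 5 * time.Millisecond
            } else {
                tempDelay *= 2
            }
            if max := 1 * time.Second; tempDelay > max {
                tempDelay = max
            }
            log.Printf("http: Accept error: %v; retrying in %v", e, tempDelay)
            time.Sleep(tempDelay)
            continue
        }
        return e
    }
    tempDelay = 0
    // ... hand rw to the unexported per-connection machinery ...
}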

Matt Joiner

May 23, 2013, 11:36:23 AM
to golan...@googlegroups.com, Gustavo Niemeyer, Matt Joiner
There is no guarantee that x will be 2. I believe it always will be with GOMAXPROCS=1, since the sole thread dedicated to running goroutines in user code hasn't handed over to the scheduler yet. Even if the thread is preempted, another goroutine won't run until the next blocking call (the channel receive that follows). With GOMAXPROCS>1, however, it's definitely possible due to kernel preemptive scheduling, and thus it is a race. That said, you shouldn't rely on GOMAXPROCS=1 to enforce expected behaviour.

Gustavo Niemeyer

May 23, 2013, 11:37:10 AM
to roger peppe, Matt Joiner, golang-nuts
On Thu, May 23, 2013 at 12:13 PM, roger peppe <rogp...@gmail.com> wrote:
> On 23 May 2013 16:07, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
>> If you can put items in a channel concurrently and overflow the
>> capacity of the channel, then something is broken.
>
> It seems that way, doesn't it?
> Nonetheless, that's the conclusion I have seen drawn elsewhere.
(...)
> apparently there is no guarantee that x will end up as 2.

That's a completely unrelated issue. You're using the channel as a
lock to synchronize unrelated memory, while the code in the paste is
simply trusting that the channel itself is properly synchronized to
not overflow, which is a pretty sane assumption.


gustavo @ http://niemeyer.net

minux

May 23, 2013, 11:39:27 AM
to Matt Joiner, golan...@googlegroups.com, Gustavo Niemeyer
On Thu, May 23, 2013 at 11:36 PM, Matt Joiner <anac...@gmail.com> wrote:
There is no guarantee that x will be 2. I believe it always will be with GOMAXPROCS=1, since the sole thread dedicated to running goroutines in user code hasn't handed over to the scheduler yet. Even if the thread is preempted, another goroutine won't run until the next blocking call (the channel receive that follows). With GOMAXPROCS>1, however, it's definitely possible due to kernel preemptive scheduling, and thus it is a race. That said, you shouldn't rely on GOMAXPROCS=1 to enforce expected behaviour.
A preemptive scheduler for Go is in the works, so this statement about GOMAXPROCS=1 may become
incorrect in Go 1.2. Don't rely on it.

Matt Joiner

May 23, 2013, 11:43:04 AM
to golan...@googlegroups.com, Gustavo Niemeyer, Matt Joiner
rog, there's a deferred closure in (c *conn) serve() that calls Close on the socket no matter what happens when the function exits. I think this guarantees that Close will be called for every socket given.

Gustavo Niemeyer

May 23, 2013, 12:08:03 PM
to Matt Joiner, golan...@googlegroups.com
On Thu, May 23, 2013 at 12:36 PM, Matt Joiner <anac...@gmail.com> wrote:
> There is no guarantee that x will be 2. I believe it will always be with
> GOMAXPROCS=1, since the sole thread dedicated to running goroutines in user

This is all unrelated to the listener code pasted. The code trusts on
the fact that this little program won't ever show x=3, which is a fine
assumption:

http://play.golang.org/p/OY8E2n-6nX


gustavo @ http://niemeyer.net

roger peppe

May 23, 2013, 12:56:55 PM
to Gustavo Niemeyer, Matt Joiner, golang-nuts
I'm not entirely sure. Let's change my x++ to something else.
Are the two sleeps guaranteed to run sequentially here?

c := make(chan bool, 1)
f := func() {
    c <- true
    go time.Sleep(time.Second)

Gustavo Niemeyer

May 23, 2013, 1:18:49 PM
to roger peppe, Matt Joiner, golang-nuts
On Thu, May 23, 2013 at 1:56 PM, roger peppe <rogp...@gmail.com> wrote:
> I'm not entirely sure. Let's change my x++ to something else.
> Are the two sleeps guaranteed to run sequentially here?
(...)
> go time.Sleep(time.Second)

Are you using Roger's laptop while he's away?


gustavo @ http://niemeyer.net

roger peppe

May 23, 2013, 2:44:57 PM
to Gustavo Niemeyer, Matt Joiner, golang-nuts
I know it seems weird, but from the discussions in golang-dev, it
*is* weird stuff.

From what I remember of Dmitry's interpretation, a sufficiently
smart compiler would be entitled to hoist the go statement
above the channel send.

We might not be synchronising memory here, but we are using
a send on a buffered channel to synchronise, and I believe the first
"go time.Sleep" is not guaranteed (by the memory
model) to happen _before_ the second one.

I agree it seems a bit bonkers, but I'd like some confirmation
from someone who really knows this stuff that I really am off
the rails here (by the *strict* interpretation of the memory model).

Please tell me I am; I will be happier then. :-)

Hector the cat.

James Bardin

May 23, 2013, 2:57:36 PM
to golan...@googlegroups.com, Gustavo Niemeyer, Matt Joiner
rog,

I don't believe he's arguing that. What he's saying is that it doesn't matter here, since we only care about the limit, not the order.


On a related note, has there been an official request to expose a raw semaphore in sync? I'm just guessing at this, but I exposed it like so, and it's of course faster:

basically adding this to sync:

type Semaphore uint32

func (s *Semaphore) Release() {
    runtime_Semrelease((*uint32)(s))
}

func (s *Semaphore) Acquire() {
    runtime_Semacquire((*uint32)(s))
}

I get:
BenchmarkSemaNoCont-2 50000000        47.3 ns/op
BenchmarkChanNoCont-2 20000000       111 ns/op
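
If something like that existed, bounded use would presumably mean priming it with permits first (hypothetical API, mirroring the sketch above; maxActive and doWork are illustrative):

var s sync.Semaphore

func init() {
    for i := 0; i < maxActive; i++ {
        s.Release() // the zero value holds no permits, so add maxActive of them
    }
}

func limited() {
    s.Acquire()       // take a permit; blocks when none remain
    defer s.Release() // return it
    doWork()
}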

-jim

minux

May 23, 2013, 2:57:55 PM
to roger peppe, Gustavo Niemeyer, Matt Joiner, golang-nuts
I agree with you.


Even the Effective Go document switched its semaphore implementation due
to this very discussion.
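
The switched version is the receive-first form, roughly (paraphrasing the Go 1.1 Effective Go example, which primes the channel to capacity during initialization):

var sem = make(chan int, MaxOutstanding)

func init() {
    for i := 0; i < MaxOutstanding; i++ {
        sem <- 1 // prime the "semaphore" to capacity
    }
}

func handle(r *Request) {
    <-sem       // Wait for a slot to free up.
    process(r)  // May take a long time.
    sem <- 1    // Done; enable the next request to run.
}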

Gustavo Niemeyer

May 23, 2013, 3:02:31 PM
to roger peppe, Matt Joiner, golang-nuts
On Thu, May 23, 2013 at 3:44 PM, roger peppe <rogp...@gmail.com> wrote:
> On 23 May 2013 18:18, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
>> On Thu, May 23, 2013 at 1:56 PM, roger peppe <rogp...@gmail.com> wrote:
>>> I'm not entirely sure. Let's change my x++ to something else.
>>> Are the two sleeps guaranteed to run sequentially here?
>> (...)
>>> go time.Sleep(time.Second)
>>
>> Are you using Roger's laptop while he's away?
>
> I know it seems weird, but from the discussions in golang-dev, it
> *is* weird stuff.

I've never seen any questions raised about what I pointed out so far,
and what you just mentioned above is obviously bogus and unrelated to
the listener implementation.

> From what I remember of Dmitry's interpretation, a sufficiently
> smart compiler would be entitled to hoist the go statement
> above the channel send.

Obviously.

> We might not be synchronising memory here, but we are using
> send on a buffered channel to synchronise, and I believe the first
> "go time.Sleep" is not guaranteed (by the memory
> model) to happen _before_ the second one.

I seriously don't get what you mean. "go" means "run this
concurrently". It's self-obvious that there isn't a guarantee of
ordering.

> I agree it seems a bit bonkers, but I'd like some confirmation
> from someone who really knows this stuff that I really am off
> the rails here (by the *strict* interpretation of the memory model).

Seems obvious, rather than bonkers. What I've pointed out so far is
completely unrelated.

http://play.golang.org/p/OY8E2n-6nX

If this shows you x=3, the channel implementation is broken.

> Please tell me I am; I will be happier then. :-)
>
> Hector the cat.

I knew it.


gustavo @ http://niemeyer.net

James Bardin

May 23, 2013, 3:34:00 PM
to golan...@googlegroups.com, roger peppe, Matt Joiner

Answered my own question:

I don't think a sync.Semaphore would be *that* bad for the language. 

These discussions about the memory model are really quite difficult for a newcomer to grasp, and hence we have people making incorrect semaphores out of channels?
(though I still agree with you in this case, gustavo ;) )

-jim

minux

May 23, 2013, 3:54:44 PM
to James Bardin, golan...@googlegroups.com, Gustavo Niemeyer, Matt Joiner
On Fri, May 24, 2013 at 2:57 AM, James Bardin <j.ba...@gmail.com> wrote:
I don't believe he's arguing that. What he's saying is that it doesn't matter here, since we only care about the limit, not the order.
No, the order is very important here (I'm talking about the order between the channel send
and the accept).
I know this is very theoretical, but what we're arguing about is exactly what the MM says, so
it matters.

The following message is a verbatim quote from https://groups.google.com/d/msg/golang-dev/ShqsqvCzkWg/9WGj2yPK9xYJ;
please do read the whole thread, and especially consider the fact that the semaphore example in Effective Go
was changed to use the receiving-as-Lock implementation (and Gustavo's program is using precisely the old
implementation of the semaphore [sending-as-Lock]).

=== Quote from Dmitry begins ===
Well, it's not a semaphore, it's a no-op. Follow with me.

Consider that each goroutine dequeues own message from the sem chan, then there is absolutely no happens-before edges between goroutines. So if there is infinite amount of concurrent calls to handle(), there is infinite amount of concurrent (non ordered by happens-before relation) calls to process(). The semaphore limits nothing.

Consider how an aggressive optimizing (but still conforming) compiler can optimize the following code:

var sem = make(chan int, MaxOutstanding)
func handle(r *Request) {
    sem <- 1    // Wait for active queue to drain.
    process(r)  // May take a long time.
    <-sem       // Done; enable next request to run.
}

Since send to a buffered chan is never a destination of happens-before arc, any operations are allowed to hoist above it. So the code can be safely transformed to:

var sem = make(chan int, MaxOutstanding)
func handle(r *Request) {
    process(r)  // May take a long time.
    sem <- 1    // Wait for active queue to drain.
    <-sem       // Done; enable next request to run.
}

Since recv from a buffered chan is never an origin of happens-before arc, any operation is allowed to sink below it. So the code can be safely transformed to:

var sem = make(chan int, MaxOutstanding)
func handle(r *Request) {
    sem <- 1    // Wait for active queue to drain.
    <-sem       // Done; enable next request to run.
    process(r)  // May take a long time.
}

WLOG, let's assume we choose the second transformation. Now, a sequence of:

buffered_chan <- X
<-buffered_chan

can be transformed to:

wait_until_chan_is_not_full(buffered_chan)  // just to handle the case when the chan is full, and the above sequence must block forever

So we can transform our code to:

var sem = make(chan int, MaxOutstanding)
func handle(r *Request) {
    wait_until_chan_is_not_full(sem)
    process(r)  // May take a long time.
}

If we see all usages of a buffered chan, and it's used only in calls to wait_until_chan_is_not_full(), we can safely remove all that calls and the channel altogether. So we end up with:

func handle(r *Request) {
    process(r)  // May take a long time.
}

James Bardin

May 23, 2013, 5:10:08 PM
to golan...@googlegroups.com, James Bardin, Gustavo Niemeyer, Matt Joiner


On Thursday, May 23, 2013 3:54:44 PM UTC-4, minux wrote:
> On Fri, May 24, 2013 at 2:57 AM, James Bardin <j.ba...@gmail.com> wrote:
>> I don't believe he's arguing that. What he's saying is that it doesn't matter here, since we only care about the limit, not the order.
> [...] it matters.
>
> The following message is a verbatim quote from https://groups.google.com/d/msg/golang-dev/ShqsqvCzkWg/9WGj2yPK9xYJ;
> please do read the whole thread, and especially consider the fact that the semaphore example in Effective Go
> was changed to use the receiving-as-Lock implementation (and Gustavo's program is using precisely the old
> implementation of the semaphore [sending-as-Lock]).


No, I totally remember that thread, and the change to the semaphore example, but I'm failing to see how the order matters in *this* case. The worst I can see is that some Accepts may get handled out of order, but no matter what, you can't Accept more than the capacity of that channel, correct?

I do think that if we're going to promote a semaphore pattern using a channel, we need to make sure we always stick to the correct version for the sake of consistency. 


Again, I'd like to give a bump to that sync.Semaphore CL I linked to. I would rather see an efficient bounded semaphore that people could abuse, rather than bumping into this confusion on a regular basis. 

Yes, you can make a semaphore with a channel, but why not make it convenient? There could be a helper function in sync to create and fill the channel, as well as an empty struct type defined; though once it's in the std lib, it would probably be logical to make it more efficient, just like WaitGroup (which could also be done with channels).
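
Such a helper might look like this (hypothetical, not a real sync API):

// NewChanSemaphore makes a channel pre-filled to capacity, so callers
// use the receive-then-send pattern without writing the priming loop.
func NewChanSemaphore(n int) chan struct{} {
    sem := make(chan struct{}, n)
    for i := 0; i < n; i++ {
        sem <- struct{}{}
    }
    return sem
}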




 

Gustavo Niemeyer

May 23, 2013, 5:11:05 PM
to minux, James Bardin, golan...@googlegroups.com, Matt Joiner
On Thu, May 23, 2013 at 4:54 PM, minux <minu...@gmail.com> wrote:
> is changed to use the receiving-as-Lock implementation (and Gustavo's
> program is using precisely the old implementation of the
> semaphore [sending-as-Lock]).

I'm not using it as a lock, as a lock has well defined memory barrier
semantics that a channel does not, and I'm not suggesting that. I'm
actually trusting on its blocking properties, as stated in the
specification:

"""
The capacity, in number of elements, sets the size of the buffer in
the channel. If the capacity is greater than zero, the channel is
asynchronous: communication operations succeed without blocking if the
buffer is not full (sends) or not empty (receives), and elements are
received in the order they are sent.
"""

This implies it blocks if the channel is full. This is also confirmed
in Effective Go:

"""
If the channel has a buffer, the sender blocks only until the value
has been copied to the buffer; if the buffer is full, this means
waiting until some receiver has retrieved a value.
"""

That's what the spec says, but even besides it, it would be extremely
ironic for the Go designers to take all the care they do to make things
understandable, and then create such subtle behavior in the core
communication primitive of the language.


gustavo @ http://niemeyer.net

Mikio Hara

May 23, 2013, 11:55:29 PM
to Matt Joiner, golang-nuts
On Fri, May 24, 2013 at 12:31 AM, Matt Joiner <anac...@gmail.com> wrote:

> Ideally the connection handling stuff in net/http should be exposed, and
> take Conn and Server interfaces (it already takes a Conn), so as to allow
> custom Servers. Additionally, upon accepting a connection, an exposed
> function ServeOne(Conn) should be called instead of the all the unexposed
> stuff that's currently going on.

Probably we will also need more support from the net package: a method
that sets the backlog length of the kernel's pending-connection queue.
(I have no strong opinion on doing that so far, though.)

IIRC, in Go 1.1 we can run multiple connection acceptors; I mean, we can
call net.Listener.Accept on the same net.Listener from multiple goroutines,
combined with your rate-limiting machinery of locks, channels, or both, as sketched below.
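
For a plain TCP server that would look something like this sketch (assuming, per the above, that concurrent Accept on one listener is safe in Go 1.1); doing the same through net/http runs into exactly the unexported serving loop complained about earlier:

// Run a fixed pool of accept+serve loops over one listener. Each
// goroutine serves one connection at a time, so at most n connections
// are ever open, with no explicit semaphore.
func serveBounded(ln net.Listener, n int, serve func(net.Conn)) {
    for i := 0; i < n; i++ {
        go func() {
            for {
                c, err := ln.Accept()
                if err != nil {
                    return // listener closed or fatal error
                }
                serve(c)
            }
        }()
    }
}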

roger peppe

May 24, 2013, 9:02:41 AM
to Gustavo Niemeyer, Matt Joiner, golang-nuts
On 23 May 2013 20:02, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
> On Thu, May 23, 2013 at 3:44 PM, roger peppe <rogp...@gmail.com> wrote:
>> On 23 May 2013 18:18, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
>>> On Thu, May 23, 2013 at 1:56 PM, roger peppe <rogp...@gmail.com> wrote:
>>>> I'm not entirely sure. Let's change my x++ to something else.
>>>> Are the two sleeps guaranteed to run sequentially here?
>>> (...)
>>>> go time.Sleep(time.Second)
>>>
>>> Are you using Roger's laptop while he's away?
>>
>> I know it seems weird, but from the discussions in golang-dev, it
>> *is* weird stuff.
>
> I've never seen any questions raised about what I pointed out so far,
> and what you just mentioned above is obviously bogus and unrelated to
> the listener implementation.
>
>> From what I remember of Dmitry's interpretation, a sufficiently
>> smart compiler would be entitled to hoist the go statement
>> above the channel send.
>
> Obviously.

If that's the case, then I can't see how this program
(very similar in approach to your original example
in this thread) is guaranteed not to panic:

http://play.golang.org/p/UeHAShi_dS
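
That paste has also expired; a hypothetical reconstruction of its shape (a buffered-channel limiter plus a sync/atomic counter that panics if the limit is ever exceeded; the constants and structure are guesses):

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

const limit = 2

func main() {
    sem := make(chan bool, limit)
    var active int32
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            sem <- true // "acquire" by sending on the buffered channel
            if n := atomic.AddInt32(&active, 1); n > limit {
                panic(fmt.Sprintf("%d goroutines active; limit is %d", n, limit))
            }
            atomic.AddInt32(&active, -1)
            <-sem // "release"
        }()
    }
    wg.Wait()
}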

> [...] it would be extremely
> ironic to have all the care the Go designers take to have things being
> understandable, and then create such subtle behavior in the core
> communication primitive of the language.

I agree entirely. I'm concerned that this is an unintended side effect
of the way that the Memory Model was phrased and I'd like
the issue cleared up definitively.

minux

May 24, 2013, 9:29:43 AM
to Gustavo Niemeyer, James Bardin, golan...@googlegroups.com, Matt Joiner
On Fri, May 24, 2013 at 5:11 AM, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
On Thu, May 23, 2013 at 4:54 PM, minux <minu...@gmail.com> wrote:
> is changed to use the receiving-as-Lock implementation (and Gustavo's
> program is using precisely the old implementation of the
> semaphore [sending-as-Lock]).

I'm not using it as a lock, as a lock has well defined memory barrier
semantics that a channel does not, and I'm not suggesting that. I'm
actually trusting on its blocking properties, as stated in the
specification:
How is your use of a buffered channel different from the example in my last reply
and the semaphore example in Effective Go?

If the Accept is hoisted above the send, then all bets are off, and your program
won't limit concurrent requests.

"""
The capacity, in number of elements, sets the size of the buffer in
the channel. If the capacity is greater than zero, the channel is
asynchronous: communication operations succeed without blocking if the
buffer is not full (sends) or not empty (receives), and elements are
received in the order they are sent.
"""

This implies it blocks if the channel is full. This is also confirmed
in Effective Go:

"""
If the channel has a buffer, the sender blocks only until the value
has been copied to the buffer; if the buffer is full, this means
waiting until some receiver has retrieved a value.
"""
I agree the send will block if the buffer is full, but that doesn't mean the Accept
is executed after the send. As explained in my quoted reply, the current memory
model doesn't guarantee that.

Or to put it another way, the MM docs don't guarantee sequential consistency,
and without that, the order of things won't be that obvious.

That's what the spec says, but even besides it, it would be extremely
ironic to have all the care the Go designers take to have things being
understandable, and then create such subtle behavior in the core
communication primitive of the language.
Well, I agree the problem is very subtle, and the Go designers surely didn't
add it intentionally, but unfortunately, this is how things are now.

The quoted thread happened almost a year ago, and the MM docs remain
unchanged in this regard. And the fact that the semaphore example in Effective
Go was changed accordingly means the authors are fully aware of the problem
and accept that it's real (at least in theory).

Gustavo Niemeyer

May 24, 2013, 9:36:04 AM
to minux, James Bardin, golan...@googlegroups.com, Matt Joiner
On Fri, May 24, 2013 at 10:29 AM, minux <minu...@gmail.com> wrote:
> how is your use of buffered channel different from the example in my last
> reply and the semaphore example in Effective Go?

I'm not implementing a semaphore. A semaphore implies memory barrier
semantics that I'm not using.

> I agree the send will block if the buffer is full, but that doesn't mean the
> Accept is executed after the send. As explained in my quoted reply, the
> current memory model doesn't guarantee that.
>
> Or to put it another way, the MM docs don't guarantee sequential consistency,
> and without that, the order of things won't be that obvious.

Yes, it actually does:

"Within a single goroutine, the happens-before order is the order
expressed by the program."

If you reorder things arbitrarily within any program at all, it will break.

> well i agree the problem is very subtle, and the Go designers must not
> add that intentionally, but unfortunately, this is how things are now.

No, it's actually not.

> the quoted thread happens almost one year ago, and the MM docs remains

I agree with the points made in that thread, but it's addressing something else.


gustavo @ http://niemeyer.net

Gustavo Niemeyer

May 24, 2013, 10:01:15 AM
to minux, James Bardin, golan...@googlegroups.com, Matt Joiner
On Fri, May 24, 2013 at 10:36 AM, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
>> I agree the send will block if the buffer is full, but that doesn't mean the
>> Accept is executed after the send. As explained in my quoted reply, the
>> current memory model doesn't guarantee that.
>>
>> Or to put it another way, the MM docs don't guarantee sequential consistency,
>> and without that, the order of things won't be that obvious.
>
> Yes, it actually does:
>
> "Within a single goroutine, the happens-before order is the order
> expressed by the program."
>
> If you reorder things arbitrarily within any program at all, it will break.

Even more clearly:

"""
That is, compilers and processors may reorder the reads and writes
executed within a single goroutine only when the reordering does not
change the behavior within that goroutine as defined by the language
specification.
"""


gustavo @ http://niemeyer.net

roger peppe

May 24, 2013, 10:27:53 AM
to Gustavo Niemeyer, minux, James Bardin, golang-nuts, Matt Joiner
Do you think that pertains to the example I just posted, where
the behaviour is only observable across more than one goroutine?

I'm still wondering if you think that program is guaranteed not to panic.

minux

May 24, 2013, 10:48:23 AM
to Gustavo Niemeyer, James Bardin, golan...@googlegroups.com, Matt Joiner
On Fri, May 24, 2013 at 9:36 PM, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
On Fri, May 24, 2013 at 10:29 AM, minux <minu...@gmail.com> wrote:
> how is your use of buffered channel different from the example in my last
> reply and the semaphore example in Effective Go?

I'm not implementing a semaphore. A semaphore implies memory barrier
semantics that I'm not using.
please reread the semaphore example in Effective Go.
"A buffered channel can be used like a semaphore, for instance to limit throughput. In this example, incoming requests are passed to handle, which receives a value from the channel, processes the request, and then sends a value back to the channel to ready the "semaphore" for the next consumer. The capacity of the channel buffer limits the number of simultaneous calls to process, so during initialization we prime the channel by filling it to capacity."

How is that different from your program?

> the quoted thread happens almost one year ago, and the MM docs remains
I agree with the points made in that thread, but it's addressing something else.
It seems you're contradicting yourself.

In that thread, Dmitry made the statement [1] that this program:

var sem = make(chan int, MaxOutstanding)
func handle(r *Request) {
    sem <- 1    // Wait for active queue to drain.
    process(r)  // May take a long time.
    <-sem       // Done; enable next request to run.
}

doesn't limit the maximum number of concurrent requests at all.
This is equivalent to (*boundListener).Accept in your proposed program [2]:

func NewBoundListener(maxActive int, l net.Listener) net.Listener {
    return &boundListener{l, make(chan bool, maxActive)}
}

func (l *boundListener) Accept() (net.Conn, error) {
    l.active <- true
    c, err := l.Listener.Accept()
    if err != nil {
        <-l.active
        return nil, err
    }
    return &boundConn{c, l.active}, err
}


Gustavo Niemeyer

May 24, 2013, 11:17:08 AM
to minux, James Bardin, golan...@googlegroups.com, Matt Joiner
On Fri, May 24, 2013 at 11:48 AM, minux <minu...@gmail.com> wrote:
>> I'm not implementing a semaphore. A semaphore implies memory barrier
>> semantics that I'm not using.
(...)
> How is that different from your program?

You already said that, and I already explained. What I implemented
requires only what the specification and memory model define as valid
and correct.

The specification implies a send blocks when the channel is full:

"""
The capacity, in number of elements, sets the size of the buffer in
the channel. If the capacity is greater than zero, the channel is
asynchronous: communication operations succeed without blocking if the
buffer is not full (sends) or not empty (receives), and elements are
received in the order they are sent.
"""

The memory model says you cannot break what the specification says by
reordering stuff arbitrarily:

"""
That is, compilers and processors may reorder the reads and writes
executed within a single goroutine only when the reordering does not
change the behavior within that goroutine as defined by the language
specification.
"""

What you're suggesting disagrees with the implementation, with the
specification, and with what is reasonable.

Feel free to argue with any of them.


gustavo @ http://niemeyer.net

minux

May 24, 2013, 11:44:06 AM
to Gustavo Niemeyer, James Bardin, golan...@googlegroups.com, Matt Joiner
On Fri, May 24, 2013 at 11:17 PM, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
On Fri, May 24, 2013 at 11:48 AM, minux <minu...@gmail.com> wrote:
>> I'm not implementing a semaphore. A semaphore implies memory barrier
>> semantics that I'm not using.
(...)
> How is that different from your program?

You already said that, and I already explained. What I implemented
requires only what the specification and memory model define as valid
and correct.
We can apply the same reasoning to the code I quoted below, but the code below is wrong
according to the current MM docs (didn't you say you agree with the points made in that thread?)
and was fixed in Go 1.1's Effective Go article (see the commit below).

var sem = make(chan int, MaxOutstanding)
func handle(r *Request) {
    sem <- 1    // Wait for active queue to drain.
    process(r)  // May take a long time.
    <-sem       // Done; enable next request to run.
}

Feel free to send a CL to revert the following change if you believe you're correct.

changeset:   16204:1a329995118c
user:        Rob Pike <r...@golang.org>
date:        Tue Mar 12 10:53:01 2013 -0700
files:       doc/effective_go.html
description:
effective_go.html: fix semaphore example
It didn't work properly according to the Go memory model.
Fixes issue 5023.

R=golang-dev, dvyukov, adg
CC=golang-dev

Or you can argue that your code is different from the code that was in Effective Go.

Steven Blenkinsop

May 24, 2013, 12:12:49 PM
to Gustavo Niemeyer, minux, James Bardin, golan...@googlegroups.com, Matt Joiner
I'm copying this reply from another thread on the topic. It might help show why the guarantees in the memory model and the spec don't imply what you think they imply:

 """
The guarantee that reordering can't change observed behaviour only applies locally. Also, the order of memory accesses observed by one goroutine can be different from the order observed by another goroutine, as long as the ordering guarantees given in the memory model are met in each. Thus, we have to consider the program from the perspective of each goroutine separately:

Because the memory model makes no guarantees about the ordering of memory accesses that locally happen after a send or before a receive on a buffered channel, each of these programs is equivalent to the original you posted from the perspective of their respective goroutines. As you can see, in each case, the accesses to `s` can happen concurrently without exceeding the buffer capacity of the channel.
"""

Gustavo Niemeyer

May 24, 2013, 12:33:24 PM
to Steven Blenkinsop, golan...@googlegroups.com
On Fri, May 24, 2013 at 1:12 PM, Steven Blenkinsop <stev...@gmail.com> wrote:
> I'm copying this reply from another thread on the topic. It might help show
> why the guarantees in the memory model and the spec don't imply what you
> think they imply:
>
> """
> The guarantee that reordering can't change observed behaviour only applies
> locally. Also, the order of memory accesses observed by one goroutine can be
> different from the order observed by another goroutine, as long as the
(...)

This is correct, unrelated, and irrelevant. I did not implement a
semaphore. It's not a memory barrier.


gustavo @ http://niemeyer.net

roger peppe

May 24, 2013, 12:43:05 PM
to Gustavo Niemeyer, golang-nuts
There is no memory barrier required in this program, but
it nonetheless uses a channel as a semaphore.

Do you think it's guaranteed not to panic?

http://play.golang.org/p/UeHAShi_dS

I haven't yet seen a convincing argument that we
can launch a new goroutine only after the old
one has finished.

If we do have that guarantee, that's surely a
"happens before" relationship as defined by the
memory model.

Or are you trying to say there are two kinds of
"happens before" relationships in Go?

Steven Blenkinsop

May 24, 2013, 1:03:24 PM
to Gustavo Niemeyer, golan...@googlegroups.com
On Friday, May 24, 2013, Gustavo Niemeyer wrote:

This is correct, unrelated, and irrelevant. I did not implement a
semaphore. It's not a memory barrier.

Either:

1) All the guarantees on reads and writes, including their limitations, apply here, in which case the said reorderings can happen according to the memory model and there is no limit on the number of concurrent calls to Accept; or
2) None of the guarantees on reads and writes apply here (because "it's not a memory barrier"), in which case the said reordering can happen without regard to the memory model, and there is no limit on the number of concurrent calls to Accept.

Gustavo Niemeyer

May 24, 2013, 1:06:07 PM
to roger peppe, golang-nuts
On Fri, May 24, 2013 at 1:43 PM, roger peppe <rogp...@gmail.com> wrote:
> There is no memory barrier required in this program, but
> it nonetheless uses a channel as a semaphore.

A semaphore implies memory barrier semantics. It's not a semaphore.

> Do you think it's guaranteed not to panic?

Yes, I believe it is, and the implementation, specification, and
memory model all seem to agree.

> I haven't yet seen a convincing argument we
> can launch a new goroutine only after the old
> one has finished.

The moment when the goroutine finishes running is a different matter
altogether.

> If we do have that guarantee, that's surely a
> "happens before" relationship as defined by the
> memory model.

Happens-before requires memory barrier semantics. If there was such a
guarantee, the implementation I pasted would be an actual semaphore,
but it's not. It relies solely on the fact the channel blocks as per
the spec, and on the fact the compiler cannot reorder statements in a
way that would change the meaning of the program, per the memory
model.

(...)
> Or are you trying to say there are two kinds of
> "happens before" relationships in Go?

That's pretty disingenuous.


gustavo @ http://niemeyer.net

Gustavo Niemeyer

May 24, 2013, 1:15:42 PM
to Steven Blenkinsop, golan...@googlegroups.com
On Fri, May 24, 2013 at 2:03 PM, Steven Blenkinsop <stev...@gmail.com> wrote:
> 1) All the guarantees on reads and writes, including their limitations,
> apply here, in which case the said reorderings can happen according to the
> memory model and there is no limit on the number of concurrent calls to
> Accept; or

"""
That is, compilers and processors may reorder the reads and writes
executed within a single goroutine only when the reordering does not
change the behavior within that goroutine as defined by the language
specification.
"""


gustavo @ http://niemeyer.net

minux

May 24, 2013, 1:20:16 PM
to Gustavo Niemeyer, roger peppe, golang-nuts
On Sat, May 25, 2013 at 1:06 AM, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
On Fri, May 24, 2013 at 1:43 PM, roger peppe <rogp...@gmail.com> wrote:
> There is no memory barrier required in this program, but
> it nonetheless uses a channel as a semaphore.

A semaphore implies memory barrier semantics. It's not a semaphore.
Why does a semaphore imply memory barrier semantics?

From http://en.wikipedia.org/wiki/Semaphore_(programming): "In computer science, a semaphore is a variable or abstract data type that provides a simple but useful abstraction for controlling access by multiple processes to a common resource in a parallel programming or multiuser environment."

A semaphore limits access to a common resource, and in your case, the common resource is network
connections.

> If we do have that guarantee, that's surely a
> "happens before" relationship as defined by the
> memory model.

Happens-before requires memory barrier semantics. If there was such a
guarantee, the implementation I pasted would be an actual semaphore,
but it's not. It relies solely on the fact the channel blocks as per
the spec, and on the fact the compiler cannot reorder statements in a
way that would change the meaning of the program, per the memory
model.
The meaning of a program is still governed by the memory model.

For example, with
    a = 1; b = 2;
is the meaning of the program "first set a to 1 and then set b to 2"? No.

You're implying the meaning of this code snippet:
    sem <- 1    // Wait for active queue to drain.
    process(r)  // May take a long time.
is that the process() call is done after sem <- 1. But why is that true?
The MM docs don't say that.

Steven Blenkinsop

May 24, 2013, 1:27:53 PM
to Gustavo Niemeyer, golan...@googlegroups.com
On Friday, 24 May 2013, Gustavo Niemeyer wrote:
"""
That is, compilers and processors may reorder the reads and writes
executed within a single goroutine only when the reordering does not
change the behavior within that goroutine as defined by the language
specification.
"""

And how would such a reordering change the behaviour within the goroutine? Also, the other guarantees in the memory model *add to* that guarantee, they don't subtract from it, so saying you don't think they apply can in no way *strengthen* the guarantees given in the memory model. And it doesn't make sense why those other guarantees wouldn't apply if that one does anyway, since they're all dealing with the ordering of reads and writes.

Nico

May 24, 2013, 1:37:21 PM
to golan...@googlegroups.com
On 24/05/13 17:43, roger peppe wrote:
> There is no memory barrier required in this program, but
> it nonetheless uses a channel as a semaphore.
>
> Do you think it's guaranteed not to panic?
>
> http://play.golang.org/p/UeHAShi_dS
>
> I haven't yet seen a convincing argument we
> can launch a new goroutine only after the old
> one has finished.
>
> If we do have that guarantee, that's surely a
> "happens before" relationship as defined by the
> memory model.
>
> Or are you trying to say there are two kinds of
> "happens before" relationships in Go?


For channel communication, the memory model only states 3 cases of
"happens before":

1) "A send on a channel happens before the corresponding receive from
that channel completes."

2) "The closing of a channel happens before a receive that returns a
zero value because the channel is closed."

3) "A receive from an unbuffered channel happens before the send on that
channel completes."


I would argue that case 3 should be rephrased to also include the case
of buffered channels:

"A receive from a channel happens before the send on that full channel
completes."

Does it make sense? I know it's not brilliantly written.

Nico

May 24, 2013, 1:40:24 PM
to golan...@googlegroups.com
"A receive from an unbuffered or a full channel happens before the send
on that channel completes."

Gustavo Niemeyer

May 24, 2013, 1:42:18 PM
to minux, roger peppe, golang-nuts
On Fri, May 24, 2013 at 2:20 PM, minux <minu...@gmail.com> wrote:
>> A semaphore implies memory barrier semantics. It's not a semaphore.
>
> Why does a semaphore imply memory barrier semantics?
>
> From http://en.wikipedia.org/wiki/Semaphore_(programming):

Have a look at the actual document from Dijkstra from 1965 and try to
convince yourself that a memory barrier is not required:

http://www.cs.utexas.edu/users/EWD/ewd01xx/EWD123.PDF

> is that the process() call is done after sem <- 1. but why that's true?

We've already covered this.


gustavo @ http://niemeyer.net

Gustavo Niemeyer

May 24, 2013, 1:56:53 PM
to roger peppe, golang-nuts
On Fri, May 24, 2013 at 10:02 AM, roger peppe <rogp...@gmail.com> wrote:
>>> From what I remember of Dmitry's interpretation, a sufficiently
>>> smart compiler would be entitled to hoist the go statement
>>> above the channel send.
>>
>> Obviously.
>
> If that's the case, then I can't see how this program

I probably misunderstood what you meant at first. A sufficiently smart
compiler cannot reorder statements in a way that changes what the spec
says about them.

> I agree entirely. I'm concerned that this is an unintended side effect
> of the way that the Memory Model was phrased and I'd like
> the issue cleared up definitively.

I wouldn't worry about it. Besides the fact the spec and memory model
agree, it would be completely nuts to do that kind of reordering.

That's all unrelated to the example you pointed out here, though:

https://groups.google.com/d/msg/golang-nuts/qt3ABSpKjzM/cqEi0IUFPJgJ

This is incorrect code.


gustavo @ http://niemeyer.net

mjy

May 24, 2013, 1:58:53 PM
to golan...@googlegroups.com
Put Varnish or nginx in front of your servers; then it is very simple to limit the number of connections. As an additional bonus, you will be able to add caching at very little cost (just set Expires: headers etc.).

Gustavo Niemeyer

May 24, 2013, 2:18:41 PM
to roger peppe, minux, James Bardin, golang-nuts, Matt Joiner
On Fri, May 24, 2013 at 11:27 AM, roger peppe <rogp...@gmail.com> wrote:
> Do you think that pertains to the example I just posted, where
> the behaviour is only observable across more than one goroutine?

No, that's unrelated. Reordering statements inside a single goroutine
cannot change the meaning of the program according to the
specification, no matter what the other goroutine is observing.

> I'm still wondering if you think that program is guaranteed not to panic.

I've already answered that.

(there's a minor detail about the fact the atomic package is in fact
not part of the memory model, but let's ignore that distraction for now)


gustavo @ http://niemeyer.net

minux

May 24, 2013, 2:21:14 PM
to Gustavo Niemeyer, roger peppe, golang-nuts
On Sat, May 25, 2013 at 1:42 AM, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
On Fri, May 24, 2013 at 2:20 PM, minux <minu...@gmail.com> wrote:
>> A semaphore implies memory barrier semantics. It's not a semaphore.
> Why semaphore implies memory barrier semantics?
> From http://en.wikipedia.org/wiki/Semaphore_(programming):

Have a look at the actual document from Dijkstra from 1965 and try to
convince yourself that a memory barrier is not required:

http://www.cs.utexas.edu/users/EWD/ewd01xx/EWD123.PDF
Note: we're not talking about implementing a semaphore, we're talking about using
one. (Even implementing a semaphore doesn't always need memory barriers
from a hardware engineer's point of view, but that's a different story entirely.)

If you only consider the shared resource to be memory, a memory barrier is needed,
but the shared resource need not be memory at all.

> is that the process() call is done after sem <- 1. but why that's true?
We've already covered this.
but apparently some people don't agree with you on this matter.

In fact, whether you're using a semaphore or not is irrelevant to this problem (you
can of course define a semaphore to always be connected with memory barriers); the
problem is whether you accept that the old example in Effective Go was not working
properly, and whether your program relies on the same fundamental assumptions as
that example.

Gustavo Niemeyer

May 24, 2013, 2:25:10 PM
to minux, James Bardin, golan...@googlegroups.com, Matt Joiner
On Fri, May 24, 2013 at 12:44 PM, minux <minu...@gmail.com> wrote:
> We can apply the same reasoning to the code I quoted below, but the code
> below is wrong according to the current MM docs (didn't you say you agree
> with the points made in that thread?) and was fixed in Go 1.1's Effective Go
> article (see the commit below).

That's because it calls it a semaphore. See the only comment from Russ
on that thread:

https://groups.google.com/d/msg/golang-dev/ShqsqvCzkWg/Kg30VPN4QmUJ

If you want a lock/semaphore, with memory barrier semantics, you need
the happens-before guarantee, and that's what it takes.

> Feel free to send a CL to revert the following change if you believe you're
> correct.

I believe I'm correct, and I believe that commit is also correct.


gustavo @ http://niemeyer.net

Gustavo Niemeyer

May 24, 2013, 2:30:16 PM
to minux, roger peppe, golang-nuts
On Fri, May 24, 2013 at 3:21 PM, minux <minu...@gmail.com> wrote:
> If you only consider the shared resource to be memory, a memory barrier is needed,
> but the shared resource need not be memory at all.

Heh.. okay. We're done I think.


gustavo @ http://niemeyer.net

minux

May 24, 2013, 2:34:19 PM
to Gustavo Niemeyer, James Bardin, golan...@googlegroups.com, Matt Joiner
These two statements contradict each other.

I don't understand why this program is incorrect (as you believe the commit is correct):
var sem = make(chan int, MaxOutstanding)
func handle(r *Request) {
    sem <- 1    // Wait for active queue to drain.
    process(r)  // May take a long time.
    <-sem       // Done; enable next request to run.
}

while your version below is correct:
func NewBoundListener(maxActive int, l net.Listener) net.Listener {
    return &boundListener{l, make(chan bool, maxActive)}
}

func (l *boundListener) Accept() (net.Conn, error) {
    l.active <- true
    c, err := l.Listener.Accept()
    if err != nil {
        <-l.active
        return nil, err
    }
    return &boundConn{c, l.active}, err
}

Both programs rely on the same assumptions:
1. sending to a full (buffered) channel will block, and
2. the call to a function (be it process() or Accept()) happens after the channel send.

You could argue that Effective Go is using the wrong word (semaphore), but that's an
entirely different issue.

Gustavo Niemeyer

May 24, 2013, 2:36:37 PM
to minux, James Bardin, golan...@googlegroups.com, Matt Joiner
On Fri, May 24, 2013 at 3:34 PM, minux <minu...@gmail.com> wrote:
> These two statements contradict each other.

We've already covered all of that. I'm done with this thread as it
stopped being useful for anyone, including ourselves.


gustavo @ http://niemeyer.net

minux

May 24, 2013, 2:58:56 PM
to Gustavo Niemeyer, golan...@googlegroups.com
On Sat, May 25, 2013 at 2:36 AM, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
On Fri, May 24, 2013 at 3:34 PM, minux <minu...@gmail.com> wrote:
> These two statements contradict each other.
We've already covered all of that. I'm done with this thread as it
stopped being useful for anyone, including ourselves.
OK, I understand your feeling. Thank you.

We could have steered the discussion toward changes to the memory model documentation,
as it's too strict and subtle in some common cases like this one.