go's memory model


Nico

unread,
May 24, 2013, 3:21:04 PM5/24/13
to golan...@googlegroups.com
In a previous post ( https://groups.google.com/d/msg/golang-nuts/qt3ABSpKjzM/c9uLkKLZ05QJ ), Rog pointed out that he hadn't seen any convincing argument that would guarantee that the following program doesn't panic:


Go's memory model for channel communications only states 3 "happens before" cases:

1) "A send on a channel happens before the corresponding receive from 
that channel completes." 

2) "The closing of a channel happens before a receive that returns a 
zero value because the channel is closed." 

3) "A receive from an unbuffered channel happens before the send on that 
channel completes." 

My understanding is that buffered channels also need a "happens before" relationship along the following lines:

4) "A receive from an full channel happens before the send on that 
channel completes."

If this relationship didn't hold, one could imagine multiple goroutines sending to a channel beyond its capacity.

Note also that case 4) would ensure that Rog's program doesn't panic.
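
To make the shape of the problem concrete, here is a minimal sketch of my own (not necessarily Rog's exact program): a goroutine blocks sending to a full buffered channel, and the question is whether the receive that frees the slot also publishes the receiver's earlier writes.

package main

import "fmt"

func main() {
  c := make(chan int, 1)
  c <- 1 // fill the buffered channel to capacity
  var x int

  go func() {
    x = 42 // write some auxiliary data...
    <-c    // ...then receive from the full channel, unblocking the send below
  }()

  c <- 2 // blocks until the goroutine's receive frees a slot
  if x != 42 {
    panic("x not visible") // case 4) would rule this out; cases 1)-3) alone do not
  }
  fmt.Println("ok")
}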


Does the document "Go's memory model" need amendment?

roger peppe

unread,
May 24, 2013, 3:29:55 PM5/24/13
to Nico, golang-nuts
In particular, the race detector reports a race on
http://play.golang.org/p/Jj81pgAjkR
but not on http://play.golang.org/p/yA-8_Z1hgI, which is unintuitive, I'd say.

I'd like to see an amendment to the memory model to make the semantics
less subtle (channels are there to make our life easier, after all).

Gustavo Niemeyer

unread,
May 24, 2013, 3:42:29 PM5/24/13
to Nico, golan...@googlegroups.com
On Fri, May 24, 2013 at 4:21 PM, Nico <nicolas...@gmail.com> wrote:
> My understanding is that buffered channels also need a "happens before"
> relationship along the following lines:
>
> 4) "A receive from an full channel happens before the send on that
> channel completes."

That sounds like a nice property. I'd suggest the wording:

"A receive from a buffered channel that is full happens before the
send on that channel completes."

> If this relationship didn't hold, one could imagine multiple goroutines
> sending to a channel beyond its capacity.

That's a different statement. We can say the channel blocks while full
without offering all the guarantees of happens-before. We've just had
a long thread about this without reaching consensus, though, so I
suggest simply avoiding getting into this topic again. We can try to
reach agreement on the prior changes without this.


gustavo @ http://niemeyer.net

Gustavo Niemeyer

unread,
May 24, 2013, 3:49:44 PM5/24/13
to Nico, golan...@googlegroups.com
On Fri, May 24, 2013 at 4:42 PM, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
> That sounds like a nice property. I'd suggest the wording:
>
> "A receive from a buffered channel that is full happens before the
> send on that channel completes."

Actually, I take that back. I can't see why this property would be
beneficial. It would mean having to add memory fences between sends and
completely unrelated receives.

I'm personally happy with the current wording in the memory model.


gustavo @ http://niemeyer.net

Nico

unread,
May 24, 2013, 3:56:29 PM5/24/13
to golan...@googlegroups.com, Nico


On Friday, May 24, 2013 8:42:29 PM UTC+1, Gustavo Niemeyer wrote:
That sounds like a nice property. I'd suggest the wording:

"A receive from a buffered channel that is full happens before the
send on that channel completes."


yes, that's a much better wording.
 
> If this relationship didn't hold, one could imagine multiple goroutines
> sending to a channel beyond its capacity.

That's a different statement. We can say the channel blocks while full
without offering all the guarantees of happens-before. We've just had
a long thread about this without reaching consensus, though, so I
suggest simply avoiding getting into this topic again. We can try to
reach agreement on the prior changes without this. 

yes, I think it's better to keep that discussion in the other thread.

Nico

unread,
May 24, 2013, 4:03:06 PM5/24/13
to golan...@googlegroups.com, Nico


On Friday, May 24, 2013 8:49:44 PM UTC+1, Gustavo Niemeyer wrote:
Actually, I take that back. I can't see why this property would be
beneficial. It would mean having to add memory fences between sends and
completely unrelated receives.

I'm personally happy with the current wording in the memory model.

As Rog pointed out, it would simplify the use of channels. It is too subtle that the two versions of his program behave differently.

We could also implement a semaphore without the subtlety of whether we first fill it up or not.
 

Nico

unread,
May 24, 2013, 4:06:20 PM5/24/13
to golan...@googlegroups.com, Nico



We could also implement a semaphore without the subtlety of whether we first fill it up or not.


sorry, again poor wording here.

With this change to the memory model, we would be able to implement a semaphore using a buffered channel without the subtlety of whether we first fill it up or not.
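
For reference, here is a rough sketch of the "fill it up first" variant, which relies only on case 1) and so is already covered by the current memory model (the names and counts are just illustrative):

package main

import "fmt"

func main() {
  const workers = 100
  sem := make(chan struct{}, 1) // one token: at most one goroutine in the critical section
  sem <- struct{}{}             // "fill it up first": make the token available
  done := make(chan struct{})
  counter := 0

  for i := 0; i < workers; i++ {
    go func() {
      <-sem // acquire: a receive, so case 1) orders the previous release before this point
      counter++
      sem <- struct{}{} // release: a send
      done <- struct{}{}
    }()
  }
  for i := 0; i < workers; i++ {
    <-done
  }
  fmt.Println(counter) // always 100, guaranteed by case 1) alone
}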

Gustavo Niemeyer

unread,
May 24, 2013, 4:07:41 PM5/24/13
to Nico, golan...@googlegroups.com
On Fri, May 24, 2013 at 5:03 PM, Nico <nicolas...@gmail.com> wrote:
> As Rog pointed out, it would simplify the use of channels. It is too subtle
> that the two versions of his program behave differently.

I agree it's subtle, disagree it'd simplify the general use of
channels, and suggest not relying on either side of that behavior.
Communicate by using channels, and use a real lock if you really want
to share memory.

> We could also implement a semaphore without the subtlety of whether we first
> fill it up or not.

I'd prefer to have a proper semaphore in sync instead of abusing
channels for that.


gustavo @ http://niemeyer.net

roger peppe

unread,
May 24, 2013, 4:43:17 PM5/24/13
to Gustavo Niemeyer, Nico, golang-nuts
On 24 May 2013 21:07, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
> Communicate by using channels, and use a real lock if you really want
> to share memory.

This seems like an odd statement to me, given Go's
"share memory by communicating" maxim.

Nico

unread,
May 24, 2013, 4:45:13 PM5/24/13
to golan...@googlegroups.com, Nico


On Friday, May 24, 2013 9:07:41 PM UTC+1, Gustavo Niemeyer wrote:

Communicate by using channels, and use a real lock if you really want
to share memory.


I think this argument has won me over.
It's better not to abuse channels.

  
I'd prefer to have a proper semaphore in sync instead of abusing
channels for that.



I'd like to see this too.

 

Gustavo Niemeyer

unread,
May 24, 2013, 4:55:36 PM5/24/13
to roger peppe, Nico, golang-nuts
Yes, share memory by communicating *via* channels, and use a real lock if
you really want to share memory *without* channels.

When you do "count--" in your example, you're not sharing memory by
communicating. You're simply using a channel as a lock to share memory
without the channel. I see no reason to improve such use cases.
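
To make the distinction concrete, here is a rough sketch (illustrative only) of sharing memory by communicating with the same kind of counter: the value itself travels through the channel, so only case 1) is involved and there is no shared variable outside it.

package main

import "fmt"

func main() {
  counter := make(chan int, 1)
  counter <- 0 // the value lives in the channel; whoever receives it owns it
  done := make(chan struct{})

  for i := 0; i < 100; i++ {
    go func() {
      v := <-counter   // take ownership of the current value
      counter <- v + 1 // hand the updated value to the next owner
      done <- struct{}{}
    }()
  }
  for i := 0; i < 100; i++ {
    <-done
  }
  fmt.Println(<-counter) // 100
}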


gustavo @ http://niemeyer.net

Dmitry Vyukov

unread,
May 25, 2013, 9:26:15 AM5/25/13
to Nico, golang-nuts
On Fri, May 24, 2013 at 11:21 PM, Nico <nicolas...@gmail.com> wrote:
In a previous post ( https://groups.google.com/d/msg/golang-nuts/qt3ABSpKjzM/c9uLkKLZ05QJ ), Rog pointed out that he hadn't seen any convincing argument that would guarantee that the following program doesn't panic:


Go's memory model for channel communications only states 3 "happens before" cases:

1) "A send on a channel happens before the corresponding receive from 
that channel completes." 

2) "The closing of a channel happens before a receive that returns a 
zero value because the channel is closed." 

3) "A receive from an unbuffered channel happens before the send on that 
channel completes." 

My understanding is that buffered channels also need a "happens before" relationship along the following lines:

4) "A receive from an full channel happens before the send on that 
channel completes."

If this relationship didn't hold, one could imagine multiple goroutines sending to a channel beyond its capacity.


The chan itself is always consistent (i.e. max capacity is respected), but it does not mean that the operations synchronize any auxiliary data.

If you are familiar with C/C++ memory model, the analog of the operations is:

void send(struct chan *c) {
  int sz = atomic_load_explicit(&c->sz, memory_order_relaxed);
  for(;;) {
    if(sz == c->cap)
      continue;
    if(atomic_compare_exchange_weak_explicit(&c->sz, &sz, sz+1, memory_order_release))
      break;
  }
}

void recv(struct chan *c) {
  int sz = atomic_load_explicit(&c->sz, memory_order_relaxed);
  for(;;) {
    if(sz == 0)
      continue;
    if(atomic_compare_exchange_weak_explicit(&c->sz, &sz, sz-1, memory_order_acquire))
      break;
  }
}

As you can see, c->sz is always properly maintained within [0..cap], but a recv() never happens before a send().


 

Note also that case 4) would ensure that Rog's program doesn't panic.


Does the document "Go's memory model" need amendment?

Nico

unread,
May 25, 2013, 3:33:24 PM5/25/13
to golan...@googlegroups.com


On Saturday, May 25, 2013 2:26:15 PM UTC+1, Dmitry Vyukov wrote:
If this relationship didn't hold, one could imagine multiple goroutines sending to a channel beyond its capacity.


The chan itself is always consistent (i.e. max capacity is respected), but it does not mean that the operations synchronize any auxiliary data.


I didn't want to question that.

What I meant is that, as a consequence of the chan itself being consistent, one could also conclude that sends from any goroutine cannot happen while a chan is full. This, if my reasoning is not flawed, is equivalent to saying goroutines are bound by a "happens before" relationship with the receive that stops that channel from being full.

That's why I proposed adding a 4th case to Go's memory model document.

But now, I feel my position is closer to Gustavo's side, and I think people should be discouraged from using channels to synchronise any data other than that going through the channel itself.

To be clear, I still believe that case 4) holds implicitly, but people shouldn't rely on it for synchronising data.

Dave Cheney

unread,
May 25, 2013, 9:18:50 PM5/25/13
to Nico, golan...@googlegroups.com
To be clear, I still believe that case 4) holds implicitly, but people shouldn't rely on it for synchronising data.

I don't understand how something can be true, but not sufficiently true to be relied upon.

Dmitry Vyukov

unread,
May 26, 2013, 4:01:00 AM5/26/13
to Nico, golang-nuts
On Sat, May 25, 2013 at 11:33 PM, Nico <nicolas...@gmail.com> wrote:


On Saturday, May 25, 2013 2:26:15 PM UTC+1, Dmitry Vyukov wrote:
If this relationship didn't hold, one could imagine multiple goroutines sending to a channel beyond its capacity.


The chan itself is always consistent (i.e. max capacity is respected), but it does not mean that the operations synchronize any auxiliary data.


I didn't want to question that.

What I meant is that, as a consequence of the chan itself being consistent, one could also conclude that sends from any goroutine cannot happen while a chan is full.

Yes, sends cannot happen while a chan is full. But this does not establish happens-before. The happens-before relation is transitive, and so it orders other operations. "Sends cannot happen while a chan is full" is not transitive; it does not affect any other operations.

Yes, the absence of "sequential consistency" is confusing.

 
This, if my reasoning is not flawed, is equivalent to saying goroutines are bound by a "happens before" relationship with the receive that stops that channel from being full.

That's why I proposed adding a 4th case to go's memory model document.


Then there is a 5th case that says that all close() operations on chans are ordered by happens-before, and it would make the following code legal:

x := 0
y := 0
c := make(chan bool)
// Both goroutines close c; whichever closes it second panics, and its
// deferred recover then prints the variable written by the other goroutine.
go func() {
  defer func() {
    if err := recover(); err != nil {
      println(y)
    }
  }()
  x = 42
  close(c)
}()
go func() {
  defer func() {
    if err := recover(); err != nil {
      println(x)
    }
  }()
  y = 42
  close(c)
}()

Are you sure you want this code to be legal?



 
But now, I feel my position is closer to Gustavo's side, and I think people should be discouraged from using channels to synchronise any data other than that going through the channel itself.

To be clear, I still believe that case 4) holds implicitly, but people shouldn't rely on it for synchronising data.


Nico

unread,
May 26, 2013, 5:02:48 AM5/26/13
to golan...@googlegroups.com, Nico


On Sunday, May 26, 2013 2:18:50 AM UTC+1, Dave Cheney wrote:
To be clear, I still believe that case 4) holds implicitly, but people shouldn't rely on it for synchronising data.

I don't understand how something can be true, but not sufficiently true enough to be relied upon. 


I thought "shouldn't" was the right choice of word there to signify that although it is possible (in the same way that it's possible to implement a WaitGroup using channels), but it isn't idiomatic in Go. 
 
I still need to go through Dmitry's post. He argues my reasoning is flawed and there is no "happens before" relationship.

roger peppe

unread,
May 26, 2013, 5:10:44 AM5/26/13
to Dmitry Vyukov, Nico, golang-nuts
Personally, I would, yes.

I think it's just as reasonable for a close operation to be "before"
another close operation that panics because of the previous one.

As you say 'absence of "sequential consistency" is confusing';
given that the whole point of channels is to make concurrent
behaviour less confusing, I'd prefer it if we made the behaviour
less so.

I might be the only one that thinks so at this point, though.

Nico

unread,
May 26, 2013, 5:37:06 AM5/26/13
to golan...@googlegroups.com, Nico


On Sunday, May 26, 2013 9:01:00 AM UTC+1, Dmitry Vyukov wrote:
Yes, sends cannot happen while a chan is full. But this does not establish happens-before. The happens-before relation is transitive, and so it orders other operations. "Sends cannot happen while a chan is full" is not transitive; it does not affect any other operations.


It does affect all the operations after the send that was initially blocked.

Gustavo mentioned that in the "Effective Go" tutorial, the implementation of a semaphore was changed so that synchronisation relied only on channel receives. Here's a question: was this change prompted because the implementation was actually found to fail, or because its correctness couldn't be proved from the document "Go's memory model"?

Because what I'm arguing is that any Go implementation that stops a buffered channel from overfilling is implicitly also doing the synchronisation described in case 4).

Is it possible to come up with an example using buffered channels where case 4) doesn't hold? That would prove my reasoning is flawed somehow.


Nico

PS: I'll go through the rest of your email later.

Dan Kortschak

unread,
May 26, 2013, 5:47:35 AM5/26/13
to Nico, golan...@googlegroups.com
On Sun, 2013-05-26 at 02:37 -0700, Nico wrote:
> Is it possible to come up with an example using buffered channels
> where case 4) doesn't hold? That would prove my reasoning is flawed
> somehow.

You may want to read https://groups.google.com/d/topic/golang-nuts/Ug1DhZGGqTk/

Nico

unread,
May 26, 2013, 5:55:42 AM5/26/13
to golan...@googlegroups.com, Nico
55 posts! I hope it's not very dense. What is the gist of it?

Dan Kortschak

unread,
May 26, 2013, 5:57:14 AM5/26/13
to Nico, golan...@googlegroups.com
Yes, it is dense. tl;dr? The world is not a nice place.

Nico

unread,
May 26, 2013, 6:11:23 AM5/26/13
to golan...@googlegroups.com, Dmitry Vyukov, Nico


On Sunday, May 26, 2013 10:10:44 AM UTC+1, rog wrote:
> Are you sure you want this code to be legal?

Personally, I would, yes.


I'm with Rog here. The program compiles and runs with the current playground implementation.
See a slightly modified version:


 
I think it's as reasonable for a close operation to be "before"
another close operation that panics because of the previous one.


I agree. Why wouldn't we want the above to hold?
Are there any implications that we are missing?
 

I might be the only one that thinks so at this point, though.


you're not the only one :), but I'd also like to hear what the drawbacks are.

Dan Kortschak

unread,
May 26, 2013, 6:25:30 AM5/26/13
to Nico, golan...@googlegroups.com, Dmitry Vyukov
I'm there, though I understand that there are optimisation benefits from
not expecting to eat before you feel full.

roger peppe

unread,
May 26, 2013, 6:52:09 AM5/26/13
to Dan Kortschak, Nico, golang-nuts, Dmitry Vyukov
On 26 May 2013 11:25, Dan Kortschak <dan.ko...@adelaide.edu.au> wrote:
> I'm there, though I understand that there are optimisation benefits from
> not expecting to eat before you feel full.

That's my only misgiving too. Are those potential optimisation benefits
perhaps worth the awkward-to-explain semantics?

Dan Kortschak

unread,
May 26, 2013, 7:03:35 AM5/26/13
to roger peppe, Nico, golang-nuts, Dmitry Vyukov
Yes. My sentiments are expressed in the linked thread. Queasy is the
only way to describe it.

Dan Kortschak

unread,
May 26, 2013, 7:04:36 AM5/26/13
to roger peppe, Nico, golang-nuts, Dmitry Vyukov
To add: sometimes travel sickness is something you need to suffer to get where you need to go.

Nico

unread,
May 27, 2013, 7:55:18 AM5/27/13
to golan...@googlegroups.com, Nico


On Saturday, May 25, 2013 2:26:15 PM UTC+1, Dmitry Vyukov wrote:

The chan itself is always consistent (i.e. max capacity is respected), but it does not mean that the operations synchronize any auxiliary data.

If you are familiar with C/C++ memory model, the analog of the operations is:

void send(struct chan *c) {
  int sz = atomic_load_explicit(&c->sz, memory_order_relaxed);
  for(;;) {
    if(sz == c->cap)
      continue;
    if(atomic_compare_exchange_weak_explicit(&c->sz, &sz, sz+1, memory_order_release))
      break;
  }
}

I'm trying to understand the code above.
If sz equals c->cap, send(c) seems to enter an infinite loop, because sz and c->cap don't get updated before iterating again.
Shouldn't sz get updated before iterating again, as happens in atomic_compare_exchange_weak()?

Dmitry Vyukov

unread,
May 27, 2013, 9:26:27 AM5/27/13
to Nico, golang-nuts
On Sun, May 26, 2013 at 1:37 PM, Nico <nicolas...@gmail.com> wrote:


On Sunday, May 26, 2013 9:01:00 AM UTC+1, Dmitry Vyukov wrote:
Yes, sends cannot happen while a chan is full. But this does not establish happens-before. The happens-before relation is transitive, and so it orders other operations. "Sends cannot happen while a chan is full" is not transitive; it does not affect any other operations.


It does affect all the operations after the send that was initially blocked.


No, they are not blocked.
E.g. if there is a load from a global var after the send, it's possible to execute it before the send.

 

Gustavo mentioned that in the "Effective Go" tutorial, the implementation of a semaphore was changed so that synchronisation only relied on channel receives. Here's a question: was this change prompted because the implementation was actually found to fail or because it couldn't be proved using the document "Go's memory model"? 

Because what I'm arguing is that any go implementation that stops a buffered channel from overloading is implicitly also doing a synchronisation as described in case 4).

Is it possible to come up with an example using buffered channels where case 4) doesn't hold? That would prove my reasoning is flawed somehow.


Yes, it is. 
The simplest thing is just to execute some operations, that are after the send in program order, before the send.

 


Nico

PS: I'll go through the rest of your email later.


Dmitry Vyukov

unread,
May 27, 2013, 9:27:52 AM5/27/13
to Nico, golang-nuts
atomic_compare_exchange_weak_explicit() updates the comparand on failure.




Gustavo Niemeyer

unread,
May 27, 2013, 9:37:58 AM5/27/13
to Dmitry Vyukov, Nico, golang-nuts
On Mon, May 27, 2013 at 10:26 AM, Dmitry Vyukov <dvy...@google.com> wrote:
> Yes, it is.
> The simplest thing is just to execute some operations, that are after the
> send in program order, before the send.

Agreed, but as long as it doesn't break the specification, which
implies that sends block if the channel is full.

In other words, doing

ch <- v
s, err := l.Accept()

As

s, err := l.Accept()
ch <- v

Is breaking the specification, and is also nuts if we expect people to
know what their programs do.

> No, they are not blocked.
> E.g. if there is a load from a a global var after the send, it's possible to
> execute it before the send.

Definitely agreed. That would imply memory synchronization in terms of
happens-before, which isn't guaranteed.


gustavo @ http://niemeyer.net

Dmitry Vyukov

unread,
May 27, 2013, 9:45:12 AM5/27/13
to Gustavo Niemeyer, Nico, golang-nuts
On Mon, May 27, 2013 at 5:37 PM, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
On Mon, May 27, 2013 at 10:26 AM, Dmitry Vyukov <dvy...@google.com> wrote:
> Yes, it is.
> The simplest thing is just to execute some operations, that are after the
> send in program order, before the send.

Agreed, but as long as it doesn't break the specification, which
implies that sends block if the channel is full.

In other words, doing

    ch <- v
    s, err := l.Accept()

As

    s, err := l.Accept()
    ch <- v

Is breaking the specification, and is also nuts if we expect people to
know what their programs do.

It's possible to hoist, above a chan send, statements that do not affect externally observable behavior (e.g. memory accesses).
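
For example (a sketch of what the memory model permits, not of what any particular compiler actually does; the identifiers are made up):

package main

import "fmt"

var g int

// As written: the load of g sits after a send that may block.
func asWritten(c chan int) int {
  c <- 1   // may block while the channel is full
  return g // nothing orders this load with respect to other goroutines under cases 1)-3)
}

// An equivalent execution: the load has no externally observable side
// effects, so it may be performed before the send.
func asHoisted(c chan int) int {
  v := g
  c <- 1
  return v
}

func main() {
  c := make(chan int, 1)
  fmt.Println(asWritten(c))
  c = make(chan int, 1)
  fmt.Println(asHoisted(c))
}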

Gustavo Niemeyer

unread,
May 27, 2013, 9:50:02 AM5/27/13
to Dmitry Vyukov, Nico, golang-nuts
On Mon, May 27, 2013 at 10:45 AM, Dmitry Vyukov <dvy...@google.com> wrote:
> It's possible to hoist, above a chan send, statements that do not affect
> externally observable behavior (e.g. memory accesses).

Agreed, thanks for clarifying that.


gustavo @ http://niemeyer.net

Dmitry Vyukov

unread,
May 27, 2013, 1:42:54 PM5/27/13
to kevinc...@gmail.com, golang-nuts, Nico
On Mon, May 27, 2013 at 8:36 PM, <kevinc...@gmail.com> wrote:

void send(struct chan *c) {
  int sz = atomic_load_explicit(&c->sz, memory_order_relaxed);
  for(;;) {
    if(sz == c->cap)
      continue;
    if(atomic_compare_exchange_weak_explicit(&c->sz, &sz, sz+1, memory_order_release))
      break;
  }
}

On Mon, May 27, 2013 at 3:55 PM, Nico <nicolas...@gmail.com> wrote:
I'm trying to understand the code above.
If sz equals c->cap, send(c) seems to enter an infinite loop, because sz and c->cap don't get updated before iterating again.
Shouldn't sz get updated before iterating again, as happens in atomic_compare_exchange_weak()?

On Monday, May 27, 2013 9:27:52 AM UTC-4, Dmitry Vyukov wrote:
atomic_compare_exchange_weak_explicit() updates comparand on failure.
 
sz is thread-local and c->cap is constant. You need some form of non-local, non-constant read inside the loop before the continue statement, or you have an infinite loop. You are either missing a line, or that first line is really supposed to be inside the loop.



Yes, it's required to reload sz... and it's probably required to do lots of other things to make it an actual queue.

N. Riesco - GMail

unread,
May 28, 2013, 7:34:46 AM5/28/13
to golan...@googlegroups.com
After some more thinking, I've realised that the case 4) I proposed is inconsistent with (as in, not analogous to) case 1) of Go's memory model.

The synchronisation properties of buffered and unbuffered channels are
completely opposite. Whereas for buffered channels, a send "happens
before" the corresponding receive, for unbuffered channels, the receive
"happens before" the send completes.