Do chans provide full sequential consistency or just what MM says?


Dmitry Vyukov

Jul 15, 2012, 2:56:26 PM
to golang-dev
Consider the following example:

        v1 := 0
        v2 := 0
        c := make(chan int, 1)
        go func() {
                v1 = 1
                select {
                case c <- 1:
                default:
                        v2 = 2
                }
        }()
        go func() {
                v2 = 1
                select {
                case c <- 1:
                default:
                        v1 = 2
                }
        }()

Here the chan is used basically as a non-blocking semaphore. If chans provide full seq cst, then one send in this example must synchronize with the other send, and there is no data race. However, the MM does not say this.
I can see arguments both ways. Namely, if the main synchronization primitive provides seq cst, it greatly simplifies reasoning about programs; on the other hand, it invites ugly and non-idiomatic code.
There are even uglier examples. For example, 2 goroutines execute close(c) with defer recover(). If your close() panics, then you know that it happens after the other goroutine's close(). That's how POSIX works; you can indeed figure out such things there.

So what is the intention?

Ian Lance Taylor

Jul 15, 2012, 9:31:58 PM
to Dmitry Vyukov, golang-dev
On Sun, Jul 15, 2012 at 11:56 AM, Dmitry Vyukov <dvy...@google.com> wrote:
> Consider the following example:
>
> v1 := 0
> v2 := 0
> c := make(chan int, 1)
> go func() {
>         v1 = 1
>         select {
>         case c <- 1:
>         default:
>                 v2 = 2
>         }
> }()
> go func() {
>         v2 = 1
>         select {
>         case c <- 1:
>         default:
>                 v1 = 2
>         }
> }()
>
> Here chan is used basically as non-blocking semaphore. If chans provide full
> seq cst, then one send in this example must synchronize with another send
> and here is no data race. However, the MM does not say this.

My personal understanding is that sends and receives on a buffered
channel do not provide any synchronization guarantees.

Ian

David Symonds

Jul 15, 2012, 9:36:47 PM
to Ian Lance Taylor, Dmitry Vyukov, golang-dev
On Mon, Jul 16, 2012 at 11:31 AM, Ian Lance Taylor <ia...@google.com> wrote:

> My personal understanding is that sends and receives on a buffered
> channel do not provide any synchronization guarantees.

... unless you count happens-before, which orders the send and receive of a particular piece of data.

Dmitry Vyukov

Jul 16, 2012, 4:17:01 AM
to Ian Lance Taylor, golang-dev
Then sending pointers/interfaces/strings/maps via buffered chans is a bug...

The MM says "A send on a channel happens before the corresponding receive from that channel completes".

roger peppe

Jul 16, 2012, 7:53:15 AM
to Dmitry Vyukov, Ian Lance Taylor, golang-dev
There is no channel receive in the code above, hence no happens-before
relationship.

Russ Cox

Jul 16, 2012, 10:33:07 AM
to Dmitry Vyukov, Ian Lance Taylor, golang-dev
On Monday, July 16, 2012 at 4:17 AM, Dmitry Vyukov wrote:
> sending pointers/interfaces/strings/maps via buffered chans is a bug...

No, it's not. I'm not sure why you say that.

Russ


Dmitry Vyukov

Jul 16, 2012, 10:37:02 AM
to Russ Cox, Ian Lance Taylor, golang-dev
That was a reply to "My personal understanding is that sends and receives on a buffered channel do not provide any synchronization guarantees."

roger peppe

Jul 16, 2012, 11:17:12 AM
to Dmitry Vyukov, Ian Lance Taylor, golang-dev
I said that, but I've just realised that I believe the code below is safe, yet I can't see anything in the memory model that implies it.
Maybe it's not.

package main

func main() {
    shared := 0
    c := make(chan bool, 1)
    go try(c, &shared)
    go try(c, &shared)
}

func try(c chan bool, shared *int) {
    select {
    case c <- true:
        *shared++
    default:
    }
}

Dmitry Vyukov

Jul 16, 2012, 11:37:46 AM
to roger peppe, Ian Lance Taylor, golang-dev
Only one goroutine ever accesses shared (except main), so there is nothing to synchronize.

Dmitry Vyukov

Jul 19, 2012, 2:09:00 PM
to golang-dev
OK, I will stick only to what the MM says for the data race detector. Using chans as semaphores or mutexes will be illegal.

Kyle Lemons

Jul 19, 2012, 2:27:55 PM
to Dmitry Vyukov, golang-dev
On Thu, Jul 19, 2012 at 11:09 AM, Dmitry Vyukov <dvy...@google.com> wrote:
OK, I will stick only to what MM says for the data race detector. Using chans as semaphores or mutexes will be illegal.
I'm not sure I understand how the MM disallows using them as semaphores/mutexes.


package main

import "fmt"

type Mutex chan bool

func (m Mutex) Lock()   { m <- true }
func (m Mutex) Unlock() { <-m }

func Exclusive(m Mutex, i *int) {
    m.Lock()
    defer m.Unlock()
    *i++
}

func main() {
    lock := make(Mutex, 1)
    val := 0
    N := 10
    done := make(chan bool, N)
    for i := 0; i < N; i++ {
        go func() {
            defer func() { done <- true }()
            Exclusive(lock, &val)
        }()
    }
    for i := 0; i < N; i++ {
        <-done
    }
    fmt.Println(val)
}

Dmitry Vyukov

Jul 19, 2012, 2:40:03 PM
to Kyle Lemons, golang-dev
This one is broken according to the MM. And that's what I meant when I said that if chans provided sequential consistency (SC), then reasoning about program behavior would be simpler. This code is correct under SC, but chans do not provide it.

For a correct mutex you need unlock() to synchronize-with the subsequent lock(). Here unlock() is a chan recv and lock() is a chan send, and they do not synchronize-with each other on buffered chans.

I believe it's possible to build some synchronization primitives on top of chans, but one must be very careful with the algorithms.

Kyle Lemons

Jul 19, 2012, 3:04:27 PM
to Dmitry Vyukov, golang-dev
On Thu, Jul 19, 2012 at 11:40 AM, Dmitry Vyukov <dvy...@google.com> wrote:
This one is broken according to the MM. And that's what I meant when I said that if chans provided sequential consistency (SC), then reasoning about program behavior would be simpler. This code is correct under SC, but chans do not provide it.

For a correct mutex you need unlock() to synchronize-with the subsequent lock(). Here unlock() is a chan recv and lock() is a chan send, and they do not synchronize-with each other on buffered chans.

I believe it's possible to build some synchronization primitives on top of chans, but one must be very careful with the algorithms.

It can be used to establish a happens-before relationship, can't it?  Let's consider two concurrent calls to Exclusive.

A1: Sends on bufchan.
A2: The channel buffer is empty, so it completes.
B1: Sends on Mutex.  The buffer is full, so it blocks.
A3: Pointer dereference and increment.
A4: Receives on bufchan.
B2: Send on bufchan completes.
B3: Pointer dereference and increment.
B4: Receives on Mutex.

I reason that 2 always happens before its corresponding 3, and 3 happens before its corresponding 4.  Maybe where it falls apart is that the MM does not state any sort of relationship between B2 and A4, but because it is buffered(1) and A1 happens before B1 (in the above scenario), B2 cannot happen before A4 (thus A4 happens before B2).  If all of that is correct, then A3 provably happens before B3.  No?

Dmitry Vyukov

Jul 19, 2012, 3:16:36 PM
to Kyle Lemons, golang-dev
On Thu, Jul 19, 2012 at 11:04 PM, Kyle Lemons <kev...@google.com> wrote:
It can be used to establish a happens-before relationship, can't it?  Let's consider two concurrent calls to Exclusive.

A1: Sends on bufchan.
A2: The channel buffer is empty, so it completes.
B1: Sends on Mutex.  The buffer is full, so it blocks.
A3: Pointer dereference and increment.
A4: Receives on bufchan.
B2: Send on bufchan completes.
B3: Pointer dereference and increment.
B4: Receives on Mutex.

I reason that 2 always happens before its corresponding 3, and 3 happens before its corresponding 4.  Maybe where it falls apart is that the MM does not state any sort of relationship between B2 and A4, but because it is buffered(1) and A1 happens before B1 (in the above scenario), B2 cannot happen before A4 (thus A4 happens before B2).  If all of that is correct, then A3 provably happens before B3.  No?


Your reasoning is based on the assumption of sequential consistency. That is not the case.
Statements like "B2 cannot happen before A4 (thus A4 happens before B2)" are not valid under relaxed consistency. A happens-before B iff the MM explicitly says so. Everything else is unordered.
Since no goroutine ever receives a message from another goroutine, all actions in different goroutines are completely unordered in this example. The output "10" is not guaranteed by the language spec.

Two actions in different goroutines can race even if they are not subsequent in any possible execution. Is that unintuitive? Yes, it is. That is the absence of sequential consistency.

Dmitry Vyukov

Jul 19, 2012, 3:17:52 PM
to Kyle Lemons, golang-dev
On Thu, Jul 19, 2012 at 11:16 PM, Dmitry Vyukov <dvy...@google.com> wrote:
Two actions in different goroutines can race even if they are not subsequent in any possible execution.

I guess I meant adjacent  instead of subsequent.

Christoph Hack

Jul 19, 2012, 4:04:10 PM
to golan...@googlegroups.com
I think it depends on what future optimizations we want to apply to those channels.

Currently I think of Go's channel implementation as a very general queue with a lot of features, one that works well in almost all cases and is extremely simple to reason about. If that's the main goal, channels should definitely provide sequential consistency, because the performance gain of using a more relaxed memory model would probably be negligible and it would just cause a lot of unnecessary confusion. Users who are interested in those performance and scalability benefits would have to use a different channel implementation anyway.

On the other hand, future compilers might do enough compile-time (or even run-time) analysis to replace Go's current channel implementation automatically with another, more specialized one if the compiler is able to prove that the channel is never used in a select statement. Those more specialized implementations might be drastically faster, and a relaxed memory model might start to make sense.

I'm not sure whether such optimizations as mentioned in the last paragraph are even possible. But if they are, it would definitely be nice to keep the memory model relaxed, to keep Go's internal channels competitive with other solutions.

Kyle Lemons

Jul 19, 2012, 4:48:46 PM
to Dmitry Vyukov, golang-dev
On Thu, Jul 19, 2012 at 12:16 PM, Dmitry Vyukov <dvy...@google.com> wrote:
Two actions in different goroutines can race even if they are not subsequent in any possible execution. Is that unintuitive? Yes, it is. That is the absence of sequential consistency.

I guess that means that I vote for sequential consistency.  I worry that if I haven't figured this out in the year+ that I've been writing Go (and have probably passed on my misunderstanding via the mailing list and IRC), it's not something that's widely understood among the community in general.

Patrick Mylund Nielsen

Jul 19, 2012, 4:53:16 PM
to Kyle Lemons, Dmitry Vyukov, golang-dev
Interestingly, a semaphore is an example in effective Go, under "Channels":  http://golang.org/doc/effective_go.html 

Dmitry Vyukov

Jul 19, 2012, 7:20:27 PM
to Patrick Mylund Nielsen, Kyle Lemons, golang-dev
On Fri, Jul 20, 2012 at 12:53 AM, Patrick Mylund Nielsen <pat...@patrickmylund.com> wrote:
Interestingly, a semaphore is an example in effective Go, under "Channels":  http://golang.org/doc/effective_go.html 


Well, it's not a semaphore, it's a no-op. Follow along with me.

Consider that each goroutine dequeues its own message from the sem chan; then there are absolutely no happens-before edges between goroutines. So if there is an unbounded number of concurrent calls to handle(), there is an unbounded number of concurrent (not ordered by the happens-before relation) calls to process(). The semaphore limits nothing.

Consider how an aggressive optimizing (but still conforming) compiler can optimize the following code:

var sem = make(chan int, MaxOutstanding)
func handle(r *Request) {
    sem <- 1    // Wait for active queue to drain.
    process(r)  // May take a long time.
    <-sem       // Done; enable next request to run.
}

Since a send to a buffered chan is never the destination of a happens-before arc, any operation is allowed to be hoisted above it. So the code can be safely transformed to:

var sem = make(chan int, MaxOutstanding)
func handle(r *Request) {
    process(r)  // May take a long time.
    sem <- 1    // Wait for active queue to drain.
    <-sem       // Done; enable next request to run.
}

Since a recv from a buffered chan is never the origin of a happens-before arc, any operation is allowed to sink below it. So the code can be safely transformed to:

var sem = make(chan int, MaxOutstanding)
func handle(r *Request) {
    sem <- 1    // Wait for active queue to drain.
    <-sem       // Done; enable next request to run.
    process(r)  // May take a long time.
}

WLOG, let's assume we choose the second transformation. Now, a sequence of:

buffered_chan <- X
<-buffered_chan

can be transformed to:

wait_until_chan_is_not_full(buffered_chan)  // just to handle the case when the chan is full, and the above sequence must block forever

So we can transform our code to:

var sem = make(chan int, MaxOutstanding)
func handle(r *Request) {
    wait_until_chan_is_not_full(sem)
    process(r)  // May take a long time.
}

If we see all usages of a buffered chan, and it's used only in calls to wait_until_chan_is_not_full(), we can safely remove all those calls and the channel altogether. So we end up with:

func handle(r *Request) {
    process(r)  // May take a long time.
}

QED

Any flaws in my reasoning?

andrey mirtchovski

Jul 19, 2012, 7:38:17 PM
to Dmitry Vyukov, Patrick Mylund Nielsen, Kyle Lemons, golang-dev
> Any flaws in my reasoning?

I think the flaw is in the assumption that a goroutine always dequeues
its own message from a channel. In the presence of multiple goroutines
_and_ external interrupts this may not hold true. I believe what
you're describing is a variation of the faulty case described by Pike
et al in "Process Sleep and Wakeup on a Shared-memory Multiprocessor".
To quote:

"Under our assumption that there is only one interrupt, the bug cannot
occur, but in the more general case of multiple interrupts
synchronising through the same condition function and rendezvous, the
process and interrupt can enter a peculiar state."

I believe the "peculiar state" in your description is that a goroutine
may not necessarily get to remove its own message. I.e., the channel
buffer can accumulate entries, making it impossible to remove
wait_until_chan_is_not_full() altogether.

my humble $0.02

roger peppe

Jul 20, 2012, 6:31:36 AM
to Dmitry Vyukov, Patrick Mylund Nielsen, Kyle Lemons, golang-dev
On 20 July 2012 00:20, Dmitry Vyukov <dvy...@google.com> wrote:
> On Fri, Jul 20, 2012 at 12:53 AM, Patrick Mylund Nielsen
> <pat...@patrickmylund.com> wrote:
>>
>> Interestingly, a semaphore is an example in effective Go, under
>> "Channels": http://golang.org/doc/effective_go.html
>
>
>
> Well, it's not a semaphore, it's a no-op. Follow with me.
>
> Consider that each goroutine dequeues own message from the sem chan, then
> there is absolutely no happens-before edges between goroutines. So if there
> is infinite amount of concurrent calls to handle(), there is infinite amount
> of concurrent (non ordered by happens-before relation) calls to process().
> The semaphore limits nothing.
>
> Consider how an aggressive optimizing (but still conforming) compiler can
> optimize the following code:
>
> var sem = make(chan int, MaxOutstanding)
> func handle(r *Request) {
>     sem <- 1    // Wait for active queue to drain.
>     process(r)  // May take a long time.
>     <-sem       // Done; enable next request to run.
> }
>
> Since send to a buffered chan is never a destination of happens-before arc,

Is that actually true?

"Each send on a particular channel is matched to a corresponding receive
from that channel, usually in a different goroutine. A send on a channel
happens before the corresponding receive from that channel completes."

This seems to apply to buffered channels - it's in addition to the
statement that "a receive from an unbuffered channel happens before the
send on that channel completes".

Within a single goroutine, the happens-before relationship is the order
of the statements, so the channel send happens before the call to process,
which happens before the receive from the channel.

If we add another goroutine, I can see two cases:
1) we can send to the buffered channel, in which case we proceed as for
the single-goroutine case.
2) the buffered channel is full, so we must block waiting for a receive
operation to correspond with our send operation. When that receive
operation takes place, it establishes that the send happened before it,
which means that the second goroutine's call to process must happen
after the first goroutine's call completed.

The only thing that's missing from the spec that I can see is that with
a buffered channel, a send is not *always* matched to a corresponding
receive - the channel can be abandoned with values still in it.

Does that make sense?

Dmitry Vyukov

Jul 20, 2012, 7:30:41 AM
to andrey mirtchovski, Patrick Mylund Nielsen, Kyle Lemons, golang-dev
On Fri, Jul 20, 2012 at 3:38 AM, andrey mirtchovski <mirtc...@gmail.com> wrote:
> Any flaws in my reasoning?

I think the flaw is in the assumption that a goroutine always dequeues
its own message from a channel. In the presence of multiple goroutines
_and_ external interrupts this may not hold true. I believe what
you're describing is a variation of the faulty case described by Pike
et al in "Process Sleep and Wakeup on a Shared-memory Multiprocessor".
To quote:


I do not get your point. I did not say that it must happen always.
Any bogus or no-op semaphore may occasionally limit the number of concurrent requests, but in order to prove that it's correct one must show that it limits in *all* cases. On the other hand, to prove that it's incorrect one must show at least one case where it fails to limit.
That one case is when goroutines receive their own messages. If that happens in your program, then nuclear missiles are spuriously launched or something.
Moreover, if the semaphore limit is 1, then it's always the case.
Well, actually, if goroutines receive messages from other goroutines, nothing changes. There will be some HB arcs, but I think they will be in the wrong places. It's just more difficult to analyze than the case with no sync at all.

Also, consider my example with a possible compiler optimization. Since "goroutine receives own message" is allowed behavior, the compiler is allowed to enforce it, so that the compiled program always behaves as if "goroutine receives own message".

Dmitry Vyukov

Jul 20, 2012, 7:39:10 AM
to roger peppe, Patrick Mylund Nielsen, Kyle Lemons, golang-dev
On Fri, Jul 20, 2012 at 2:31 PM, roger peppe <rogp...@gmail.com> wrote:
> Well, it's not a semaphore, it's a no-op. Follow with me.
>
> Consider that each goroutine dequeues own message from the sem chan, then
> there is absolutely no happens-before edges between goroutines. So if there
> is infinite amount of concurrent calls to handle(), there is infinite amount
> of concurrent (non ordered by happens-before relation) calls to process().
> The semaphore limits nothing.
>
> Consider how an aggressive optimizing (but still conforming) compiler can
> optimize the following code:
>
> var sem = make(chan int, MaxOutstanding)
> func handle(r *Request) {
>     sem <- 1    // Wait for active queue to drain.
>     process(r)  // May take a long time.
>     <-sem       // Done; enable next request to run.
> }
>
> Since send to a buffered chan is never a destination of happens-before arc,

Is that actually true?

Yes.
 

"Each send on a particular channel is matched to a corresponding receive
from that channel, usually in a different goroutine. A send on a channel
happens before the corresponding receive from that channel completes."

If a send happens-before something, then it is the *origin* of the HB arc, not the *destination*.

 

This seems to apply to buffered channels - it's in addition to the
statement that "a receive from an unbuffered channel happens before the
send on that channel completes".

That applies only to unbuffered chans.
 


Within a single goroutine, the happens-before relationship is the order
of the statements, so the channel send happens before the call to process,
which happens before the receive from the channel.

If we add another goroutine, I can see two cases:
1) we can send to the buffered channel, in which case we proceed as for
the single-goroutine case.
2) the buffered channel is full, so we must block waiting for a receive
operation to correspond with our send operation. When that receive
operation takes place, it establishes that the send happened before it,

Ooops, it's not the case for buffered chans. Send never happens before recv for buffered chans.

Dmitry Vyukov

Jul 20, 2012, 7:51:03 AM
to roger peppe, Patrick Mylund Nielsen, Kyle Lemons, golang-dev
You also reason from a sequentially consistent point of view. SC means that one may reason about a program's behavior based on a simple interleaving of all actions of all goroutines. That is, all actions form a total order. It makes reasoning much simpler.

If that were the case, then the following program would be correct as well, and must non-deterministically print either 42 or 43 (that is what you will necessarily conclude if you reason based on a single interleaving of the actions of all goroutines). But people in this thread suggest that it is *not* a correct program: two close()s on a single chan do not synchronize with each other. And then the program contains a data race and has undefined behavior.

package main

import "runtime"

func main() {
    x, y := 0, 0
    c := make(chan bool)
    go func() {
        defer func() {
            if recover() != nil {
                println(y)
            }
        }()
        x = 42
        close(c)
    }()
    go func() {
        defer func() {
            if recover() != nil {
                println(x)
            }
        }()
        y = 43
        close(c)
    }()
    runtime.Goexit()
}

roger peppe

Jul 20, 2012, 8:06:56 AM
to Dmitry Vyukov, Patrick Mylund Nielsen, Kyle Lemons, golang-dev
On 20 July 2012 12:39, Dmitry Vyukov <dvy...@google.com> wrote:
>> "Each send on a particular channel is matched to a corresponding receive
>> from that channel, usually in a different goroutine. A send on a channel
>> happens before the corresponding receive from that channel completes."
>
> If send happens-before something, then it is *origin* of HB arc, not
> *destination*.

Sorry for my naive reading of this (I'm not familiar with the territory, and appreciate that you are!), but how does this matter?
If the send happens before something, then that something happens after the send, which is the property we wish to verify, no?

>> This seems to apply to buffered channels - it's in addition to the
>> statement that "a receive from an unbuffered channel happens before the
>> send on that channel completes".
>
> That applies only to unbuffered chans.

Absolutely - I should have emphasised that.

>> Within a single goroutine, the happens-before relationship is the order
>> of the statements, so the channel send happens before the call to process,
>> which happens before the receive from the channel.
>>
>> If we add another goroutine, I can see two cases:
>> 1) we can send to the buffered channel, in which case we proceed as for
>> the single-goroutine case.
>> 2) the buffered channel is full, so we must block waiting for a receive
>> operation to correspond with our send operation. When that receive
>> operation takes place, it establishes that the send happened before it,
>
>
> Ooops, it's not the case for buffered chans. Send never happens before recv
> for buffered chans.

How is that last statement compatible with the memory model's assertion that
"a send on a channel happens before the corresponding receive from that
channel completes"?

Dmitry Vyukov

Jul 20, 2012, 8:30:38 AM
to roger peppe, Patrick Mylund Nielsen, Kyle Lemons, golang-dev
On Fri, Jul 20, 2012 at 4:06 PM, roger peppe <rogp...@gmail.com> wrote:
On 20 July 2012 12:39, Dmitry Vyukov <dvy...@google.com> wrote:
>> "Each send on a particular channel is matched to a corresponding receive
>> from that channel, usually in a different goroutine. A send on a channel
>> happens before the corresponding receive from that channel completes."
>
> If send happens-before something, then it is *origin* of HB arc, not
> *destination*.

Sorry for my naive reading of this (I'm am not familiar with the territory,
and appreciate that you are!), but how does this matter?
If send happens before something, then the something happens after the
send, which is the property we wish to verify, no?


I just said that it's allowed for a compiler to hoist any non-synchronization-related operations above a send on a buffered chan.



>> This seems to apply to buffered channels - it's in addition to the
>> statement that "a receive from an unbuffered channel happens before the
>> send on that channel completes".
>
> That applies only to unbuffered chans.

Absolutely - I should have emphasised that.

>> Within a single goroutine, the happens-before relationship is the order
>> of the statements, so the channel send happens before the call to process,
>> which happens before the receive from the channel.
>>
>> If we add another goroutine, I can see two cases:
>> 1) we can send to the buffered channel, in which case we proceed as for
>> the single-goroutine case.
>> 2) the buffered channel is full, so we must block waiting for a receive
>> operation to correspond with our send operation. When that receive
>> operation takes place, it establishes that the send happened before it,
>
>
> Ooops, it's not the case for buffered chans. Send never happens before recv
> for buffered chans.

How is that last statement compatible with the memory model's assertion that
"a send on a channel happens before the corresponding receive from that
channel completes"?

You are right. I misread your statement.
But the correspondence relation is different: it is based on which message the recv receives; it is not related to unblocking. So the recv that unblocks us corresponds to some previous send.

In any case, I do not see how your point (1) proves correctness of the sem. In order to prove that it's correct one must prove that, given the partial HB order on operations, there is no set of more than N process() calls not ordered by HB (i.e. concurrent).

Consider your point (1): if you have an unbounded number of independent chains of the form send->process->recv, you have an unbounded number of concurrent calls to process(). You need some arcs between those independent chains to limit the number of concurrent calls.

Kyle Lemons

Jul 20, 2012, 11:53:28 AM
to Dmitry Vyukov, roger peppe, Patrick Mylund Nielsen, golang-dev
Out of curiosity, what would be required to add sequential consistency to the memory model?  It sounds like it's what we all expect.

You say that the compiler can hoist non-synch-related statements before the channel send, but the MM says this:
"Within a single goroutine, the happens-before order is the order expressed by the program."
The way I read this is that the compiler cannot reorder anything that might change behavior, and to me this includes channel sends, goroutine creation, and function calls.

Dmitry Vyukov

unread,
Jul 20, 2012, 12:03:23 PM7/20/12
to Kyle Lemons, roger peppe, Patrick Mylund Nielsen, golang-dev
On Fri, Jul 20, 2012 at 7:53 PM, Kyle Lemons <kev...@google.com> wrote:
Out of curiosity, what would be required to add sequential consistency to the memory model?

It is a difficult question. I do not have a ready-made answer. I think that sequential consistency does not need the happens-before relation at all. It needs a total global order over all operations. So, in order to rule out data races, it needs to say that conflicting operations must be adjacent (in the total order) in any possible execution.


 
It sounds like it's what we all expect.

You say that the compiler can hoist non-synch-related statements before the channel send, but the MM says this:
"Within a single goroutine, the happens-before order is the order expressed by the program."

HB is not directly observable. E.g. if you have
i++
j++
they can actually be executed in any order.

 
The way I read this is that the compiler cannot reorder anything that might change behavior, and to me this includes channel sends, goroutine creation, and function calls.

Yes, compiler must not change observable behavior.
Hoisting something above a chan send on a buffered chan is non-observable (provided the send can eventually be executed, i.e. the chan is not always full).

Russ Cox

unread,
Jul 20, 2012, 4:14:55 PM7/20/12
to Dmitry Vyukov, Kyle Lemons, roger peppe, Patrick Mylund Nielsen, golang-dev
[I'll be back online for real next week.]

It is true, from the memory model, that you should not expect this to work:

type lock chan bool
func newLock() lock { return make(lock, 1) }
func (l lock) Lock() { l <- true }
func (l lock) Unlock() { <-l }

You want the property that an Unlock happens before the next Lock, and there is no value being communicated between those two operations. Unlock is moving one value out of the buffer to make room for a different value in Lock. The two are, strictly speaking, unrelated operations. I believe this is Dmitry's point.

However, you can expect this to work:

type lock chan bool
func newLock() lock { l := make(lock, 1); l <- true; return l }
func (l lock) Lock() { <-l }
func (l lock) Unlock() { l <- true }

Here, Lock is receiving the value sent by Unlock, so there's an explicit communication of a value, so the Unlock is guaranteed to happen before the Lock.

If you really care, use the second form. This all seems pretty hypothetical.

Russ


Dmitry Vyukov

unread,
Jul 20, 2012, 5:59:28 PM7/20/12
to Russ Cox, Kyle Lemons, roger peppe, Patrick Mylund Nielsen, golang-dev
On Sat, Jul 21, 2012 at 12:14 AM, Russ Cox <r...@golang.org> wrote:
[I'll be back online for real next week.]

It is true, from the memory model, that you should not expect this to work:

type lock chan bool
func newLock() lock { return make(lock, 1) }
func (l lock) Lock() { l <- true }
func (l lock) Unlock() { <-l }

You want the property that an Unlock happens before the next Lock, and there is no value being communicated between those two operations. Unlock is moving one value out of the buffer to make room for a different value in Lock. The two are, strictly speaking, unrelated operations. I believe this is Dmitry's point.

Yup. And that's what the semaphore in Effective Go uses.
 

However, you can expect this to work:

type lock chan bool
func newLock() lock { l := make(lock, 1); l <- true; return l }
func (l lock) Lock() { <-l }
func (l lock) Unlock() { l <- true }

Here, Lock is receiving the value sent by Unlock, so there's an explicit communication of a value, so the Unlock is guaranteed to happen before the Lock.

Yes. So the proper semaphore would be:

var sem = make(chan int, MaxOutstanding)

func init() {
	for i := 0; i < MaxOutstanding; i++ {
		sem <- 1
	}
}

func handle(r *Request) {
	<-sem
	process(r)
	sem <- 1
}



If you really care, use the second form. This all seems pretty hypothetical.

Well, there are several points.
It blurs the line between the standard and a particular implementation. It's like the situation with C++ now, where people write what looks like portable C++ but actually rely on a particular compiler version and hardware architecture.
Some such code will end up actually broken on ARM/POWER with a more optimized chan implementation (one that fast-paths more operations without a mutex; e.g. it's easy to fast-path a non-blocking send to a full chan, since it only needs one atomic fenceless load). And, you know, I would not vouch with my life for correct compilation of an incorrect program under gccgo -flto -O3.
If one writes such code, then automatic data race detection tools will be of no help, because they reason formally.
If one allows such code, then it's difficult to draw the line between what is allowed and what is still not; what can break hypothetically and what can break for real.

I like the following piece of code: