OK, I will stick only to what the MM says for the data race detector. Using chans as semaphores or mutexes will be considered illegal.
This one is broken according to the MM. And that's what I meant when I said that if chans provided sequential consistency (SC), then reasoning about program behavior would be simpler. This code is correct under SC, but chans do not provide it.

For a correct mutex you need unlock() to synchronize-with the subsequent lock(). For unlock() you have a chan recv, for lock() a chan send, and they do not synchronize-with each other on buffered chans.

I believe it's possible to build some synchronization primitives on top of chans, but one must be very careful with the algorithms.
On Thu, Jul 19, 2012 at 11:40 AM, Dmitry Vyukov <dvy...@google.com> wrote:
> This one is broken according to the MM. And that's what I meant when I said
> that if chans provided sequential consistency (SC), then reasoning about
> program behavior would be simpler. This code is correct under SC, but chans
> do not provide it. For a correct mutex you need unlock() to synchronize-with
> the subsequent lock(). For unlock() you have a chan recv, for lock() a chan
> send, and they do not synchronize-with each other on buffered chans. I
> believe it's possible to build some synchronization primitives on top of
> chans, but one must be very careful with the algorithms.

It can be used to establish a happens-before relationship, can't it? Let's consider two concurrent calls to Exclusive.

A1: Sends on bufchan.
A2: The channel buffer is empty, so it completes.
A3: Pointer dereference and increment.
B1: Sends on Mutex. The buffer is full, so it blocks.
A4: Receives on bufchan.
B2: Send on bufchan completes.
B3: Pointer dereference and increment.
B4: Receives on Mutex.

I reason that 2 always happens before its corresponding 3, and 3 happens before its corresponding 4. Maybe where it falls apart is that the MM does not state any sort of relationship between B2 and A4, but because the channel is buffered(1) and A1 happens before B1 (in the above scenario), B2 cannot happen before A4 (thus A4 happens before B2). If all of that is correct, then A3 provably happens before B3. No?
On Thu, Jul 19, 2012 at 11:04 PM, Kyle Lemons <kev...@google.com> wrote:
> It can be used to establish a happens-before relationship, can't it? Let's
> consider two concurrent calls to Exclusive. [...] I reason that 2 always
> happens before its corresponding 3, and 3 happens before its corresponding
> 4. Maybe where it falls apart is that the MM does not state any sort of
> relationship between B2 and A4, but because the channel is buffered(1) and
> A1 happens before B1 (in the above scenario), B2 cannot happen before A4
> (thus A4 happens before B2). If all of that is correct, then A3 provably
> happens before B3. No?

Your reasoning is based on an assumption of sequential consistency, which does not hold. Statements like "B2 cannot happen before A4 (thus A4 happens before B2)" are not valid under relaxed consistency. A happens-before B iff it is explicitly stated by the MM; everything else is unordered. Since a goroutine never receives a message sent by another goroutine, all actions in different goroutines are completely unordered in this example.
The output "10" is not guaranteed by the language spec. Two actions in different goroutines can race even if they are not subsequent in any possible execution. Is that unintuitive? Yes, it is. That is the absence of sequential consistency.
Interestingly, a semaphore is used as an example in Effective Go, under "Channels": http://golang.org/doc/effective_go.html
> Any flaws in my reasoning?

I think the flaw is in the assumption that a goroutine always dequeues
its own message from a channel. In the presence of multiple goroutines
_and_ external interrupts this may not hold true. I believe what
you're describing is a variation of the faulty case described by Pike
et al in "Process Sleep and Wakeup on a Shared-memory Multiprocessor".
To quote:
> Well, it's not a semaphore, it's a no-op. Follow with me.

Is that actually true?
>
> Consider that each goroutine dequeues own message from the sem chan, then
> there is absolutely no happens-before edges between goroutines. So if there
> is infinite amount of concurrent calls to handle(), there is infinite amount
> of concurrent (non ordered by happens-before relation) calls to process().
> The semaphore limits nothing.
>
> Consider how an aggressive optimizing (but still conforming) compiler can
> optimize the following code:
>
> var sem = make(chan int, MaxOutstanding)
> func handle(r *Request) {
>     sem <- 1    // Wait for active queue to drain.
>     process(r)  // May take a long time.
>     <-sem       // Done; enable next request to run.
> }
>
> Since send to a buffered chan is never a destination of happens-before arc,
"Each send on a particular channel is matched to a corresponding receive
from that channel, usually in a different goroutine. A send on a channel
happens before the corresponding receive from that channel completes."
This seems to apply to buffered channels - it's in addition to the
statement that "a receive from an unbuffered channel happens before the
send on that channel completes".
Within a single goroutine, the happens-before relationship is the order
of the statements, so the channel send happens before the call to process,
which happens before the receive from the channel.
If we add another goroutine, I can see two cases:
1) we can send to the buffered channel, in which case we proceed as for
the single-goroutine case.
2) the buffered channel is full, so we must block waiting for a receive
operation to correspond with our send operation. When that receive
operation takes place, it establishes that the send happened before it,
On 20 July 2012 12:39, Dmitry Vyukov <dvy...@google.com> wrote:
>> "Each send on a particular channel is matched to a corresponding receive
>> from that channel, usually in a different goroutine. A send on a channel
>> happens before the corresponding receive from that channel completes."
>
> If send happens-before something, then it is the *origin* of the HB arc, not
> the *destination*.

Sorry for my naive reading of this (I'm not familiar with the territory,
and appreciate that you are!), but how does this matter?
If send happens before something, then the something happens after the
send, which is the property we wish to verify, no?
>> This seems to apply to buffered channels - it's in addition to the
>> statement that "a receive from an unbuffered channel happens before the
>> send on that channel completes".
>
> That applies only to unbuffered chans.

Absolutely - I should have emphasised that.
>> Within a single goroutine, the happens-before relationship is the order
>> of the statements, so the channel send happens before the call to process,
>> which happens before the receive from the channel.
>>
>> If we add another goroutine, I can see two cases:
>> 1) we can send to the buffered channel, in which case we proceed as for
>> the single-goroutine case.
>> 2) the buffered channel is full, so we must block waiting for a receive
>> operation to correspond with our send operation. When that receive
>> operation takes place, it establishes that the send happened before it,
>
> Oops, it's not the case for buffered chans. Send never happens before recv
> for buffered chans.

How is that last statement compatible with the memory model's assertion that
"a send on a channel happens before the corresponding receive from that
channel completes"?
Out of curiosity, what would be required to add sequential consistency to the memory model? It sounds like it's what we all expect.

You say that the compiler can hoist non-synchronization-related statements before the channel send, but the MM says this: "Within a single goroutine, the happens-before order is the order expressed by the program."
The way I read this is that the compiler cannot reorder anything that might change behavior, and to me this includes channel sends, goroutine creation, and function calls.
[I'll be back online for real next week.]
It is true, from the memory model, that you should not expect this to work:
type lock chan bool
func newLock() lock { return make(lock, 1) }
func (l lock) Lock() { l <- true }
func (l lock) Unlock() { <-l }
You want the property that an Unlock happens before the next Lock, and there is no value being communicated between those two operations. Unlock is moving one value out of the buffer to make room for a different value in Lock. The two are, strictly speaking, unrelated operations. I believe this is Dmitry's point.
However, you can expect this to work:
type lock chan bool
func newLock() lock { l := make(lock, 1); l <- true; return l }
func (l lock) Lock() { <-l }
func (l lock) Unlock() { l <- true }
Here, Lock is receiving the value sent by Unlock, so there's an explicit communication of a value, so the Unlock is guaranteed to happen before the Lock.
If you really care, use the second form. This all seems pretty hypothetical.