That's a curious little hole in the memory model. It says:

    A receive from an unbuffered channel happens before the send on that channel completes.

but I'd like it to say:

    A receive from an unbuffered (or buffered, but full) channel happens before the send on that channel completes.
I found the discussion, it was on golang-dev *sigh*. If you want your brain to melt a little, read the whole thread from top to bottom. :)
On Wednesday, 6 March 2013, Kyle Lemons wrote:
> I found the discussion, it was on golang-dev *sigh*. If you want your brain to melt a little, read the whole thread from top to bottom. :)

That conversation didn't so much melt my brain as break my heart :(.

The required rule is basically that, given a channel of buffer capacity N >= 0, the (M-N)th receive happens before the Mth send completes for all M > N. This rule is already true for N == 0, but not otherwise.
The thing is, this ordering is required in order for "send on a buffered channel will block until there is room in the buffer" to have any meaning.
On Wednesday, March 6, 2013 6:15:10 PM UTC-6, Steven Blenkinsop wrote:
> On Wednesday, 6 March 2013, Kyle Lemons wrote:
>> I found the discussion, it was on golang-dev *sigh*. If you want your brain to melt a little, read the whole thread from top to bottom. :)
>
> That conversation didn't so much melt my brain as break my heart :(.
>
> The required rule is basically that, given a channel of buffer capacity N >= 0, the (M-N)th receive happens before the Mth send completes for all M > N. This rule is already true for N == 0, but not otherwise.
>
> The thing is, this ordering is required in order for "send on a buffered channel will block until there is room in the buffer" to have any meaning.

If the Mth send can happen before the (M-N)th receive, then the buffer size is greater than its capacity. Which means you have to assume that buffer capacity is meaningless in order to be able to do the reordering Dmitry does in that thread :\. It's not as surprising that buffer capacity ends up being irrelevant later on in his analysis when you realize that he implicitly made this assumption up front.
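To make the stakes concrete, here is a sketch (my construction, not code from the thread) of the kind of buffered-channel counting semaphore whose correctness hinges on the rule Steven states:

```go
package main

import "fmt"

// A counting semaphore built from a buffered channel. At most cap(s)
// goroutines hold it at once -- but only if the (M-N)th receive is
// guaranteed to happen before the Mth send completes.
type sem chan struct{}

func (s sem) Acquire() { s <- struct{}{} } // blocks while the buffer is full
func (s sem) Release() { <-s }             // frees a slot

func main() {
	s := make(sem, 2) // allow at most 2 concurrent holders
	done := make(chan bool)
	for i := 0; i < 5; i++ {
		go func(i int) {
			s.Acquire()
			fmt.Println("working:", i)
			s.Release()
			done <- true
		}(i)
	}
	for i := 0; i < 5; i++ {
		<-done
	}
}
```

Whether Acquire/Release also create the happens-before edges needed to protect data shared by the holders is exactly what the rest of the thread disputes.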
This is only relevant in the case of receiver and sender in the same goroutine though, correct?
On Wednesday, 6 March 2013, Kyle Lemons wrote:
> I think that's covered under the slightly-easier-to-understand rule:
>
>     Each send on a particular channel is matched to a corresponding receive from that channel, usually in a different goroutine.
;-)
No, because the Mth send is matched to the Mth receive, and the Mth send happens before the Mth receive, and in the case of unbuffered channels, the Mth receive happens before the Mth send completes (i.e. N == 0). The N == 0 case is already covered, yes, which was my point, since the N == 0 rule can be considered a special case of the N >= 0 rule I stated above, which doesn't exist in the memory model.
Was going off list intentional?
On Wednesday, 6 March 2013, Kyle Lemons wrote:
> Oh, I think what you're stating is not actually the case.

Yes. I'm stating a rule that doesn't exist in the memory model. I'm stating it because it's the rule that would be required in order to be able to guarantee that Matt's semaphore will work.

> The only happens before relationship applies to the value. If a value goes into a channel and then comes out of a channel, the operation that put it in happens before the operation that takes it out. The other happens before relationship is:
>
>     A receive from an unbuffered channel happens before the [corresponding] send on that channel completes.

This rule exists so that unbuffered channels are actually unbuffered. If a send completed before its corresponding receive happened, then the value sent on it would be buffered, so the memory model disallows that.

My point was that the same logic applies to buffered channels as well. If the Mth send completes before the (M-N)th receive happens, where N is the buffer capacity, then there are N+1 buffered values, which would exceed the buffer capacity. If the spec requires that sends to buffered channels in conforming implementations block until there is room in the buffer for the value, then the (M-N)th receive has to happen before the Mth send completes. This rule is not stated in the memory model, however.
After reading this thread, I'm not sure my understanding is correct. Here is my understanding:
In http://play.golang.org/p/rMpnZRtCwt, when the channel is full, the Unlock() can't be guaranteed to happen before the subsequent Lock(), because Unlock() receives from the buffered channel while Lock() sends on it.
No, that's not true. It's perfectly fine to allow any code to hoist above a chan send, or to sink below a chan receive. So, what you are saying regarding chans is true, but it won't limit the number of concurrently executing goroutines in any way.
Why do you think that the program is allowed to print only ab or ba?
This program contains a data race, so the behavior is undefined.
On Fri, Mar 8, 2013 at 12:41 PM, Dmitry Vyukov <dvy...@google.com> wrote:
> Why do you think that the program is allowed to print only ab or ba?

Because, as I stated, if it printed something else, that would imply a violation of the spec. E.g. if it printed "a" or "b", then we would have to assume that the statements s += "a" and s += "b" ran concurrently, meaning that two values were buffered on the chan, which has only capacity for one. Where is the flaw in this reasoning?
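The program being discussed is presumably along these lines (a reconstruction from the description; the actual playground code is not reproduced here). Note that, per Dmitry's objection, the accesses to s are a data race under the memory model as written, which is exactly the point of contention:

```go
package main

import "fmt"

func main() {
	c := make(chan int, 1) // capacity 1: at most one value buffered
	done := make(chan bool)
	s := ""
	f := func(x string) {
		c <- 1 // "lock": blocks while the buffer is full
		s += x // critical section (a data race per the memory model as written)
		<-c    // "unlock"
		done <- true
	}
	go f("a")
	go f("b")
	<-done
	<-done
	fmt.Println(s) // prints "ab" or "ba" only if the capacity rule orders the accesses
}
```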
OK, it makes sense now. Thanks for the explanation. I see now that this is already covered pretty well in http://golang.org/ref/mem . I should have given that another read. But your example was a little more enlightening.
You may not realize what it actually means to provide complete and consistent sequential consistency for chan operations.
Consider the following two programs:
http://play.golang.org/p/8eWCIhWpPv
http://play.golang.org/p/JRxL3JTtX1
If chan operations provide complete and consistent sequential consistency, then the programs are legal (should not print FAIL). But I am not sure whether we actually want to make such use cases legal.
Basically, under sequential consistency, any operation that reveals internal state in any way, shape or form (e.g. a panic from a double close tells you that somebody else has already closed the channel) synchronizes with any operation that modifies internal state in any observable way (e.g. a chan close makes a chan send panic).
And there is more to it. Under sequential consistency the following
program should not fail. Goroutine 1 modifies state of chan 1 in some
way. Concurrently G2 modifies state of chan 2 in some way. G3 and G4
independently query modifications on chan 1 and 2. G3 and G4 expect to
observe the modifications in the same order.
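The G1-G4 shape Dmitry describes is the classic IRIW (independent reads of independent writes) litmus test. A rough sketch of what such a test could look like, using channel closes as the "modifications" (my construction, not Dmitry's actual programs):

```go
package main

import "fmt"

// probe reports whether ch has been closed, without blocking.
// A receive from a closed channel yields immediately; an open,
// empty channel falls through to the default case.
func probe(ch chan int) bool {
	select {
	case <-ch:
		return true
	default:
		return false
	}
}

func main() {
	c1, c2 := make(chan int), make(chan int)
	r3, r4 := make(chan [2]bool), make(chan [2]bool)
	go close(c1)                                        // G1 modifies chan 1
	go close(c2)                                        // G2 modifies chan 2
	go func() { r3 <- [2]bool{probe(c1), probe(c2)} }() // G3 queries c1 then c2
	go func() { r4 <- [2]bool{probe(c2), probe(c1)} }() // G4 queries c2 then c1
	a, b := <-r3, <-r4
	// Under sequential consistency, G3 observing "c1 closed, c2 open"
	// while G4 observes "c2 closed, c1 open" is forbidden: the two
	// closes would have been seen in opposite orders.
	fmt.Println(a, b)
}
```

Guaranteeing that the forbidden outcome never occurs is precisely the kind of cross-channel ordering that is expensive on relaxed-memory hardware.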
The chan-based semaphore/mutex falls into the same category. It tries to observe secondary side effects of some operations in some strange way, and then infer some inter-goroutine relations from that based on a sequential consistency assumption. You do not want to go there.
Moreover, it can actually make implementations slower on architectures
with relaxed memory model (ARM/POWER) and especially on DSM
(distributed shared memory) systems where any synchronization is
extremely expensive.
Does all this mean that the examples in the memory model document (http://golang.org/ref/mem#tmp_6) are illegal, since the assignment to a does not directly involve the chan operation?
On Fri, Mar 8, 2013 at 5:38 AM, Gordon Klaus <gordon...@gmail.com> wrote:
> That makes sense. Thanks for the explanation. I guess what Russ meant was
> that the two operations (reading from and writing to a full buffered
> channel) are unrelated insofar as the purpose of channels is concerned:
> Channels are for communicating data; they also provide synchronization, but
> only enough to enable this communication.
>
> As you pointed out, the spec does in fact ensure that the write to a full
> channel will not proceed before a read makes space for it. So the proposed
> WaitGroup limiting should work in any correct implementation of the
> language.
No, that's not true. It's perfectly fine to allow any code to hoist above a chan send, or to sink below a chan receive. So, what you are saying regarding chans is true, but it won't limit the number of concurrently executing goroutines in any way.
> Sure enough, the optimization you described is present.
> http://play.golang.org/p/78ycD3sPZj On the playground, it's totally
> aggressive, completely emptying the channel before accepting any more
> values. Interestingly, the same code on my machine and the same version
> (1.0.3) is not as aggressive, in fact sometimes it perfectly interleaves the
> sends and receives.
> On Thu, Mar 7, 2013 at 6:53 PM, Kyle Lemons <kev...@google.com> wrote:
I can roll my own... but does anyone else think a limited workgroup would be useful? I.e., block on Add if the workgroup has reached a size limit. This can be useful when, for instance, each thread in a pool generates a lot of data in memory, and you know you are going to start running up against hardware limits. Curious to know how others have handled similar scenarios...
    for _, v := range urls {
        for wg.Size() > int32(9) {
            time.Sleep(500 * time.Millisecond)
        }
        go f(v)
    }
Is there any standard and safe way to do this outside of using bool channels? Secondly, if channels are the only standard way to achieve this, should WaitGroup still be used in conjunction, or is it safe enough to wait/exit based on len(chan)?