Faster channels design doc


Dmitry Vyukov

Feb 3, 2014, 10:42:01 AM
to golang-dev

Dmitry Vyukov

Feb 3, 2014, 10:43:52 AM
to golang-dev
On Mon, Feb 3, 2014 at 7:42 PM, Dmitry Vyukov <dvy...@google.com> wrote:
> Here is what I would like to implement for Go1.3:
>
> https://docs.google.com/a/google.com/document/d/1yIAYmbvL3JxOKOjuCyon7JhW4cSv1wy5hC0ApeGMV9s/pub

This one must work:
https://docs.google.com/document/d/1yIAYmbvL3JxOKOjuCyon7JhW4cSv1wy5hC0ApeGMV9s/pub

roger peppe

Feb 3, 2014, 12:26:54 PM
to Dmitry Vyukov, golang-dev
One immediate reaction to:

: Non-goals:
: - make contended synchronous channel operations faster (use buffering!)

It is not always possible to use buffering. For example,
synchronous channels are considerably easier to reason
about (buffering channels can exponentially increase the
state space of a system), and sometimes it's crucial
that we know that a message has arrived.

There have been plenty of occasions when I would have liked to make things
go faster, but where adding buffering has not been an option.

So if it were possible (I'm not saying that it is), it would be nice to make
contended synchronous channel operations faster too.

Synchronous channels are the default in Go, after all.

cheers,
rog.
> --
>
> ---
> You received this message because you are subscribed to the Google Groups "golang-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to golang-dev+...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.

Dmitry Vyukov

Feb 3, 2014, 12:41:35 PM
to roger peppe, golang-dev
On Mon, Feb 3, 2014 at 9:26 PM, roger peppe <rogp...@gmail.com> wrote:
> One immediate reaction to:
>
> : Non-goals:
> : - make contended synchronous channel operations faster (use buffering!)
>
> It is not always possible to use buffering. For example,
> synchronous channels are considerably easier to reason
> about (buffering channels can exponentially increase the
> state space of a system), and sometimes it's crucial
> that we know that a message has arrived.
>
> There have been plenty of occasions when I would have liked to make things
> go faster, but where adding buffering has not been an option.


Please provide a few concrete examples.
If you are using sync channels, you also have buffering, but it's
implicit (the wait list of blocked goroutines), and it's harder to reason
about (and it's not FIFO).
I do not understand what you mean by "know that a message has
arrived". If you are blocked on an empty async chan you will also get
to know this immediately. And if there are some messages buffered
(either in the buffer or in the wait list), or if the consumer has not yet
reached the receive expression, you don't know. I do not see how you can
do anything useful with it.


> So if it were possible (I'm not saying that it is), it would be nice to make
> contended synchronous channel operations faster too.

I would like to do it, but, yes, it is not trivial (screw you,
select!). When/if we get this in, we can try to speed up wait queues
as well, and it will equally benefit sync channels, async channels and
semaphores.


> Synchronous channels are the default in Go, after all.

Just don't do it. It's easy to specify a capacity.

roger peppe

Feb 3, 2014, 1:15:59 PM
to Dmitry Vyukov, golang-dev
On 3 February 2014 17:41, Dmitry Vyukov <dvy...@google.com> wrote:
> On Mon, Feb 3, 2014 at 9:26 PM, roger peppe <rogp...@gmail.com> wrote:
>> One immediate reaction to:
>>
>> : Non-goals:
>> : - make contended synchronous channel operations faster (use buffering!)
>>
>> It is not always possible to use buffering. For example,
>> synchronous channels are considerably easier to reason
>> about (buffering channels can exponentially increase the
>> state space of a system), and sometimes it's crucial
>> that we know that a message has arrived.
>>
>> There have been plenty of occasions when I would have liked to make things
>> go faster, but where adding buffering has not been an option.
>
>
> Please provide few concrete examples.

Here's one simplified example that demonstrates a fairly
common pattern:

http://play.golang.org/p/qy8gFoIbF2

When you manage to send a request on the requests channel,
you know that the goroutine has received and is processing it,
and that a reply will arrive in due course.

With a buffered channel, you don't get that guarantee,
and the code becomes harder to reason about.

> If you are using sync channels, you also have buffering, but it's
> implicit (wait list of blocked goroutines), and it's harder to reason
> about (and it's not FIFO).

My experience of using the Spin protocol checker was that
buffered channels add considerably to the number of reachable
states of a program. I remember a model that, with synchronous channels only
would be checked completely in milliseconds - adding buffering could
cause it to take hours for only a probabilistic verification.

My hunch is that that reduced state space actually corresponds pretty
closely to ease of human understanding and increased test suite coverage.

That's why, unless they're algorithmically necessary, I tend to
treat the use of buffered channels as an optimisation,
to be used only after due care and profiling.

Rob Pike

Feb 3, 2014, 1:48:53 PM
to roger peppe, Dmitry Vyukov, golang-dev
I too am dismayed to see that synchronous channels are getting short
shrift. Your claim that buffered channels are easier to reason about
baffles me. Roger has it right when he says that buffered channels
create more reachable states; almost by definition that makes them
harder to reason about.

Buffered and unbuffered channels are both important. We should not
improve the implementation of one at the expense of the other.

-rob

Ingo Oeser

Feb 3, 2014, 4:19:00 PM
to golan...@googlegroups.com
Why is the "lap" per element needed?
This at least doubles the size of each channel now.

What about an ever-increasing head/tail modulo the ring size, aka the "classic" lock-free FIFO?
Is it too slow? Is it still too slow after putting head and tail in different cache lines?

Ian Lance Taylor

Feb 3, 2014, 4:40:12 PM
to Ingo Oeser, golang-dev
On Mon, Feb 3, 2014 at 1:19 PM, Ingo Oeser <night...@googlemail.com> wrote:
>
> Why is the "lap" per element needed?
> This at least doubles the size of each channel now.

The "lap" is not per-element. There is one lap for sending and one
for receiving, so an additional 64 bits per channel. He needs it so
that he can use a single atomic operation to determine both where the
buffer pointer is and whether the buffer is full.

Ian

Ingo Oeser

Feb 3, 2014, 4:53:25 PM
to golan...@googlegroups.com, Ingo Oeser
hmm, 

Hchan->buf is semantically an array of size cap and of element type struct Elem.

struct Elem {
        // current lap,
        // the element is ready for writing on laps 0, 2, 4, ...
        // for reading -- on laps 1, 3, 5, ...
        uint32 lap;
        T      val;  // user data
};

This is the part I cannot reason about.

"hdr.elemcopy(hdr.elemsize-sizeof(uintgo), elem+1, ep);" in the actual implementation shows this, too.

Ian Lance Taylor

Feb 3, 2014, 5:04:00 PM
to Ingo Oeser, golang-dev
On Mon, Feb 3, 2014 at 1:53 PM, Ingo Oeser <night...@googlemail.com> wrote:
>
> Hchan->buf is semantically an array of size cap and of element type struct
> Elem.
>
> struct Elem {
> // current lap,
> // the element is ready for writing on laps 0, 2, 4, ...
> // for reading -- on laps 1, 3, 5, ...
> uint32 lap;
> T val; // user data
> };
>
> This is the part I cannot reason about.
>
> "hdr.elemcopy(hdr.elemsize-sizeof(uintgo), elem+1, ep);" in the actual
> implementation shows this, too.

Oh, sorry, you're right. I see why he is doing it: so that the code
can know when the value has been updated using only atomic
instructions. There is one step to reserve the slot, and another step
to indicate that the slot is available. But the extra space does seem
a bit troubling. Since there is really no need for 32 different lap
values, perhaps it could be done by using two bits per lap in the
send/receive fields.

Ian



>
> On Monday, February 3, 2014 10:40:12 PM UTC+1, Ian Lance Taylor wrote:
>>
>> On Mon, Feb 3, 2014 at 1:19 PM, Ingo Oeser <night...@googlemail.com>
>> wrote:
>> >
>> > Why is the "lap" per element needed?
>> > This at least doubles the size of each channel now.
>>
>> The "lap" is not per-element. There is one lap for sending and one
>> for receiving, so an additional 64 bits per channel. He needs it so
>> that he can use a single atomic operation to determine both where the
>> buffer pointer is and whether the buffer is full.
>>
>> Ian
>

Dmitry Vyukov

Feb 4, 2014, 2:36:32 AM
to roger peppe, golang-dev
On Mon, Feb 3, 2014 at 10:15 PM, roger peppe <rogp...@gmail.com> wrote:
> On 3 February 2014 17:41, Dmitry Vyukov <dvy...@google.com> wrote:
>> On Mon, Feb 3, 2014 at 9:26 PM, roger peppe <rogp...@gmail.com> wrote:
>>> One immediate reaction to:
>>>
>>> : Non-goals:
>>> : - make contended synchronous channel operations faster (use buffering!)
>>>
>>> It is not always possible to use buffering. For example,
>>> synchronous channels are considerably easier to reason
>>> about (buffering channels can exponentially increase the
>>> state space of a system), and sometimes it's crucial
>>> that we know that a message has arrived.
>>>
>>> There have been plenty of occasions when I would have liked to make things
>>> go faster, but where adding buffering has not been an option.
>>
>>
>> Please provide few concrete examples.
>
> Here's one simplified example that demonstrates a fairly
> common pattern:
>
> http://play.golang.org/p/qy8gFoIbF2
>
> When you manage to send a request on the requests channel,
> you know that the goroutine has received and is processing it,
> and that a reply will arrive in due course.
>
> With a buffered channel, you don't get that guarantee,
> and the code becomes harder to reason about.

Please be more specific. What is that "something else"? Is it a timeout?
In the most common case you just do:
http://play.golang.org/p/DRU0V1UufP


>> If you are using sync channels, you also have buffering, but it's
>> implicit (wait list of blocked goroutines), and it's harder to reason
>> about (and it's not FIFO).
>
> My experience of using the Spin protocol checker was that
> buffered channels add considerably to the number of reachable
> states of a program. I remember a model that, with synchronous channels only
> would be checked completely in milliseconds - adding buffering could
> cause it to take hours for only a probabilistic verification.
>
> My hunch is that that reduced state space actually corresponds pretty
> closely to ease of human understanding and increased test suite coverage.
>
> That's why, unless they're algorithmically necessary, I tend to
> treat the use of buffered channels as an optimisation,
> to be used only after due care and profiling.

Then the default SPIN abstraction for modelling sync channels is not
suitable for modelling sync Go channels. You need to write your own,
which will cause the same state space explosion. I've looked at the Go
sync chan implementation: the buffer is there, and it's unbounded and
unordered (as per the specification).

Dmitry Vyukov

Feb 4, 2014, 2:49:56 AM
to Ian Lance Taylor, Ingo Oeser, golang-dev
On Tue, Feb 4, 2014 at 2:04 AM, Ian Lance Taylor <ia...@golang.org> wrote:
> On Mon, Feb 3, 2014 at 1:53 PM, Ingo Oeser <night...@googlemail.com> wrote:
>>
>> Hchan->buf is semantically an array of size cap and of element type struct
>> Elem.
>>
>> struct Elem {
>> // current lap,
>> // the element is ready for writing on laps 0, 2, 4, ...
>> // for reading -- on laps 1, 3, 5, ...
>> uint32 lap;
>> T val; // user data
>> };
>>
>> This is the part I cannot reason about.
>>
>> "hdr.elemcopy(hdr.elemsize-sizeof(uintgo), elem+1, ep);" in the actual
>> implementation shows this, too.
>
> Oh, sorry, you're right. I see why he is doing it: so that the code
> can know when the value has been updated using only atomic
> instructions. There is one step to reserve the slot, and another step
> to indicate that the slot is available. But the extra space does seem
> a bit troubling. Since there is really no need for 32 different lap
> values, perhaps it could be done by using two bits per lap in the
> send/receive fields.

Two bits in the sendx/recvx fields won't work, because of the ABA problem
-- a thread can successfully CAS sendx/recvx which is already off by 2
laps.
Do you mean two bits in each element? This could work... probably...
however it will make the algorithm considerably more difficult to
reason about.
When I mentioned this design to Russ and Keith, the consensus was
that one additional word per element is fine because channels are not
the main means of storing data in Go programs. The trade-off is between
"better storage" and "better communication primitive".

Dmitry Vyukov

Feb 4, 2014, 3:15:46 AM
to Rob Pike, roger peppe, golang-dev
I have no intention of making sync channels worse. I've only said that
this redesign does not have the goal of making sync channels faster,
because I don't know how to do it. When we figure out how to speed up
wait queues, we can do it, and it will equally benefit sync channels,
async channels and semaphores. And I believe that the proposed select
design won't be a road blocker. I mean that we can add further
improvements on top of the proposed design (and not redesign the whole
thing from scratch again).

Regarding the particular degradations on sync channels:
you can see that they are quite moderate. Both ChanSync and
ChanProdCons0 are not very interesting, because the goroutines are not
doing any useful work; they are just hammering a single channel.

BenchmarkChanSync 127 127 +0.00%
BenchmarkChanSync-2 395 393 -0.51%
BenchmarkChanSync-4 358 340 -5.03%
BenchmarkChanSync-8 293 312 +6.48%
BenchmarkChanSync-16 353 361 +2.27%
BenchmarkChanSync-32 216 220 +1.85%

BenchmarkChanProdCons0 134 140 +4.48%
BenchmarkChanProdCons0-2 407 573 +40.79%
BenchmarkChanProdCons0-4 667 745 +11.69%
BenchmarkChanProdCons0-8 934 883 -5.46%
BenchmarkChanProdCons0-16 760 762 +0.26%
BenchmarkChanProdCons0-32 700 759 +8.43%

BenchmarkChanProdConsWork0 730 691 -5.34%
BenchmarkChanProdConsWork0-2 427 487 +14.05%
BenchmarkChanProdConsWork0-4 767 855 +11.47%
BenchmarkChanProdConsWork0-8 1096 1062 -3.10%
BenchmarkChanProdConsWork0-16 906 965 +6.51%
BenchmarkChanProdConsWork0-32 858 916 +6.76%


The degradations are solely due to the added fast path for non-blocking
cases; the overall implementation of sync channels is not changed.
These are the same fast paths that give you:

BenchmarkChanNonblocking 24 8 -66.53%
BenchmarkChanNonblocking-2 92 4 -95.57%
BenchmarkChanNonblocking-4 104 2 -97.95%
BenchmarkChanNonblocking-8 114 1 -99.02%
BenchmarkChanNonblocking-16 92 0 -99.37%
BenchmarkChanNonblocking-32 82 0 -99.46%

BenchmarkSelectNonblock 104 34 -66.92%
BenchmarkSelectNonblock-2 51 17 -66.60%
BenchmarkSelectNonblock-4 25 8 -64.56%
BenchmarkSelectNonblock-8 12 4 -63.44%
BenchmarkSelectNonblock-16 6 2 -63.22%
BenchmarkSelectNonblock-32 5 2 -56.15%

BenchmarkSelectProdCons 1180 1031 -12.63%
BenchmarkSelectProdCons-2 880 683 -22.39%
BenchmarkSelectProdCons-4 1213 433 -64.30%
BenchmarkSelectProdCons-8 1613 578 -64.17%
BenchmarkSelectProdCons-16 1298 805 -37.98%
BenchmarkSelectProdCons-32 1289 773 -40.03%

---

I am not sure whether non-blocking send/recv on sync channels is
particularly important:

select {
case sync_chan <- v:
default:
}

The only reasonable use case I see is polling of stop channel
(essentially using chan as atomic flag):

loop:
	for {
		select {
		case <-sync_chan_stop:
			break loop
		default:
		}
		// do something useful
	}

But the fast paths are definitely important for selects:

select {
case r := <-reqc:
	// handle r
case <-closec:
	...
case <-someOtherUsuallyNonReadyChan:
	...
}

I could remove the fast paths from send/recv and leave them only in
selects. But unfortunately with the current C compiler it's impossible
to do that without either significant code duplication or additional
overhead from function calls...

Andrew Gerrand

Feb 4, 2014, 3:38:05 AM
to Dmitry Vyukov, Rob Pike, roger peppe, golang-dev
My main concern is the directive to "use buffered channels!"

Buffered and unbuffered channels are typically used for different purposes, and it is rare that one can simply add a buffer to a channel and preserve the code's behavior.

I haven't thought about it too deeply, but it is possible that many of the performance-critical uses of channels use buffers, in which case this is a welcome change. Can you show the numbers for the net/http benchmarks, for example? (net/http uses a mixture of buffered and unbuffered channels, so it seems like a good test case)

But I am generally afraid this might lead to people using buffered channels (and thus making their code more complicated) for speed purposes.

FWIW, these searches show uses of both kinds of channels in the stdlib:

unbuffered: http://golang.org/search?q=make%5C%28chan%5B%5E%2C%5D%2B%5C%29
buffered: http://golang.org/search?q=make%5C%28chan%5B%5E%29%5D%2B%2C.%2B%5C%29


roger peppe

Feb 4, 2014, 4:02:44 AM
to Dmitry Vyukov, golang-dev
It might be a timeout. It might be some other event,
such as a stop request. In neither case is your replacement
code sufficient - unless the channel is full, we will wait
until all buffered messages and the message itself
have been processed. http://play.golang.org/p/u_94B3OHH-

>>> If you are using sync channels, you also have buffering, but it's
>>> implicit (wait list of blocked goroutines), and it's harder to reason
>>> about (and it's not FIFO).
>>
>> My experience of using the Spin protocol checker was that
>> buffered channels add considerably to the number of reachable
>> states of a program. I remember a model that, with synchronous channels only
>> would be checked completely in milliseconds - adding buffering could
>> cause it to take hours for only a probabilistic verification.
>>
>> My hunch is that that reduced state space actually corresponds pretty
>> closely to ease of human understanding and increased test suite coverage.
>>
>> That's why, unless they're algorithmically necessary, I tend to
>> treat the use of buffered channels as an optimisation,
>> to be used only after due care and profiling.
>
> Then the default SPIN abstraction for modelling sync channels is not
> suitable for modelling sync Go channels. You need to write your own,
> which is will cause the same state space explosion. I've looked at Go
> sync chan implementation, the buffer is there, and it's unbounded and
> unordered (as per specification).

The "buffer" you're talking about corresponds, I believe, to a set
of processes in well known states (waiting on the channel). With a
buffered channel, each process is more often free to continue running,
giving rise to a larger set of possible program states as it continues
asynchronously from the others.

Dmitry Vyukov

Feb 4, 2014, 4:04:24 AM
to Andrew Gerrand, Rob Pike, roger peppe, golang-dev
What's the concern?! There are 229 usages of buffered channels and only 228 usages of sync channels :)


Dmitry Vyukov

Feb 4, 2014, 4:05:34 AM
to Andrew Gerrand, Rob Pike, roger peppe, golang-dev
Seriously, I will just delete that remark. This design does not specifically make sync channels slower, and in fact it makes some usages of sync channels much faster.


Andrew Gerrand

Feb 4, 2014, 5:02:11 AM
to Dmitry Vyukov, Rob Pike, roger peppe, golang-dev

On 4 February 2014 10:05, Dmitry Vyukov <dvy...@google.com> wrote:
Seriously, I will just delete that remark. This design does not specifically make sync channels slower, and in fact it makes some usages of sync channels much faster.

SGTM

Ian Lance Taylor

Feb 4, 2014, 9:45:19 AM
to Dmitry Vyukov, Ingo Oeser, golang-dev
On Mon, Feb 3, 2014 at 11:49 PM, Dmitry Vyukov <dvy...@google.com> wrote:
> On Tue, Feb 4, 2014 at 2:04 AM, Ian Lance Taylor <ia...@golang.org> wrote:
>> On Mon, Feb 3, 2014 at 1:53 PM, Ingo Oeser <night...@googlemail.com> wrote:
>>>
>>> Hchan->buf is semantically an array of size cap and of element type struct
>>> Elem.
>>>
>>> struct Elem {
>>> // current lap,
>>> // the element is ready for writing on laps 0, 2, 4, ...
>>> // for reading -- on laps 1, 3, 5, ...
>>> uint32 lap;
>>> T val; // user data
>>> };
>>>
>>> This is the part I cannot reason about.
>>>
>>> "hdr.elemcopy(hdr.elemsize-sizeof(uintgo), elem+1, ep);" in the actual
>>> implementation shows this, too.
>>
>> Oh, sorry, you're right. I see why he is doing it: so that the code
>> can know when the value has been updated using only atomic
>> instructions. There is one step to reserve the slot, and another step
>> to indicate that the slot is available. But the extra space does seem
>> a bit troubling. Since there is really no need for 32 different lap
>> values, perhaps it could be done by using two bits per lap in the
>> send/receive fields.
>
> Two bits in sendx/recvx fields won't work, because of the ABA problem
> -- a thread can successfully CAS sendx/recvx which is already off by 2
> laps.

No, I was thinking that all we need to know is the state of the
element at the send index--who cares what the lap values are for the
other elements. But now I think that that doesn't work because we
don't know when the buffer is full.


> Do you mean two bits in each element? This could work... probably...
> however it will make the algorithm considerably more difficult to
> reason about.
> When I've mentioned this design to Russ and Keith, the consensus was
> that one additional word per element is fine because channels are not
> main means to store data in Go programs. The trade off is between
> "better storage" and "better communication primitive".

I suppose that is reasonable.

Ian

Kevin Gillette

Feb 5, 2014, 12:54:31 PM
to golan...@googlegroups.com, roger peppe
On Monday, February 3, 2014 10:41:35 AM UTC-7, Dmitry Vyukov wrote:
If you are using sync channels, you also have buffering, but it's
implicit (wait list of blocked goroutines), and it's harder to reason
about (and it's not FIFO).

Not remarking on buffered vs unbuffered, but just addressing a technical point: afaict, unless we had truly infinitely-buffered channels, buffered channels have the same characteristics as unbuffered channels when full on write or empty on read. In that case, if an unbuffered channel isn't FIFO with respect to the wait list of blocked goroutines, then in the worst case buffered channels are non-FIFO as well; it's rather conventional in computer science to consider the worst-case behavior to be an algorithm's primary characteristic.

Dmitry Vyukov

Feb 8, 2014, 11:16:34 AM
to Andrew Gerrand, Rob Pike, roger peppe, golang-dev
On Tue, Feb 4, 2014 at 1:05 PM, Dmitry Vyukov <dvy...@google.com> wrote:
> On Tue, Feb 4, 2014 at 1:04 PM, Dmitry Vyukov <dvy...@google.com> wrote:
>>
>> On Tue, Feb 4, 2014 at 12:38 PM, Andrew Gerrand <a...@golang.org> wrote:
>>>
>>> My main concern is the directive to "use buffered channels!"
>>>
>>> Buffered and unbuffered channels are typically used for different
>>> purposes, and it is rare that one can simply add a buffer to a channel and
>>> preserve the code's behavior.
>>>
>>> I haven't thought about it too deeply, but it is possible that many of
>>> the performance-critical uses of channels use buffers, in which case this is
>>> a welcome change. Can you show the numbers for the net/http benchmarks, for
>>> example? (net/http uses a mixture of buffered and unbuffered channels, so it
>>> seems like a good test case)

It won't affect net/http; channel operations are below the noise in the
net/http benchmarks. I can see that in the cpu profile here:
http://goperfd.appspot.com/perfdetail?commit=a2fe41181fe19de339397c8f96aa73e17b179805&commit0=bf5dc5852b542dd147f705e70457dacceca872f6&kind=benchmark&builder=windows-amd64&benchmark=http

I would expect it to be beneficial for, e.g., modelling systems or, say,
parallel computations with a need for frequent synchronization.

>>> But I am generally afraid this might lead to people using buffered
>>> channels (and thus making their code more complicated) for speed purposes.
>>>
>>> FWIW, these searches show uses of both kinds of channels in the stdlib:
>>>
>>> unbuffered:
>>> http://golang.org/search?q=make%5C%28chan%5B%5E%2C%5D%2B%5C%29
>>> buffered:
>>> http://golang.org/search?q=make%5C%28chan%5B%5E%29%5D%2B%2C.%2B%5C%29
>>
>>
>>
>> What's the concern?! There are 229 usages of buffered channels and only
>> 228 usages of sync channels :)
>>
>
> Seriously, I will just delete that remark.

Done.

Dmitry Vyukov

Feb 8, 2014, 11:35:50 AM
to roger peppe, golang-dev
There are always options for concrete cases, e.g.:
http://play.golang.org/p/clJEM1FZVM
And if you want something very complex, then you may need to pay for
it with performance.


>>>> If you are using sync channels, you also have buffering, but it's
>>>> implicit (wait list of blocked goroutines), and it's harder to reason
>>>> about (and it's not FIFO).
>>>
>>> My experience of using the Spin protocol checker was that
>>> buffered channels add considerably to the number of reachable
>>> states of a program. I remember a model that, with synchronous channels only
>>> would be checked completely in milliseconds - adding buffering could
>>> cause it to take hours for only a probabilistic verification.
>>>
>>> My hunch is that that reduced state space actually corresponds pretty
>>> closely to ease of human understanding and increased test suite coverage.
>>>
>>> That's why, unless they're algorithmically necessary, I tend to
>>> treat the use of buffered channels as an optimisation,
>>> to be used only after due care and profiling.
>>
>> Then the default SPIN abstraction for modelling sync channels is not
>> suitable for modelling sync Go channels. You need to write your own,
>> which is will cause the same state space explosion. I've looked at Go
>> sync chan implementation, the buffer is there, and it's unbounded and
>> unordered (as per specification).
>
> The "buffer" you're talking about corresponds, I believe, to a set
> of processes in well known states (waiting on the channel). With a
> buffered channel, each process is more often free to continue running,
> giving rise to a larger set of possible program states as it continues
> asynchronously from the others.


OK, I agree that sync channels have fewer states (and are simpler to
reason about) if lots of goroutines are blocked (on sync chans).
But that's exactly the simplicity that costs you performance. Fewer
runnable goroutines means less parallelism and a constant need to
block/unblock/switch goroutines. I cannot magically solve that by
changing the sync chan implementation; it's required by sync chan
semantics.

Russ Cox

Feb 12, 2014, 2:04:00 PM
to Dmitry Vyukov, roger peppe, golang-dev
This is fine to talk about but it is too late to be doing such a dramatic change for Go 1.3. This is not on the Go 1.3 planning list (golang.org/s/go13todo). Let's focus on what's there.

Oleku Konko

Feb 13, 2014, 6:47:14 AM
to golan...@googlegroups.com, Dmitry Vyukov, roger peppe
Hi Russ,

- It's not code freeze yet, and the design does not specifically make sync channels slower; in fact it makes them faster in most cases.

- The go13todo list says "This list is only aspirational; it is not a promise or a contract," so I'm not sure why you intend to limit contributions to that list.

- I believe this should be considered on its merits, and if there are any issues, we should attempt to fix them before concluding it's not feasible for go1.3
 
Thanks 

Russ Cox

Feb 13, 2014, 10:10:35 AM
to Oleku Konko, golang-dev, Dmitry Vyukov, roger peppe
On Thu, Feb 13, 2014 at 6:47 AM, Oleku Konko <oleku...@gmail.com> wrote:
Hi Russ,

- It's not code freeze yet and the design does not specifically make sync channels slower, but makes it faster in most cases.

Code freeze on March 1 does not mean "break everything the day before March 1". Channels are a major part of the runtime, and this is a major rewrite of the channel implementation. I would like more than the remaining two weeks to think about the implications of the changes and whether they are what we want. The change was not on the golang.org/s/go13todo list. The only mention I can find anywhere was that in one email Dmitriy once said something along the lines of "maybe I will think about lock-free channels".

I'm not opposed to the change per se, but we need to be winding down development. There is plenty that still needs to be done before March 1 in the runtime alone - for example this morning the garbage collector is broken in at least three different ways that we know about - not to mention the rest of the tree. We did not plan to have bandwidth for such a big change, and consequently we do not. I have no doubt something like this will be in 1.4.

Russ

Oleku Konko

Feb 13, 2014, 2:33:03 PM
to golan...@googlegroups.com, Oleku Konko, Dmitry Vyukov, roger peppe
Thanks for the clarification ... well done and keep up the good work. 

Andrew Gerrand

Feb 13, 2014, 6:03:42 PM
to Oleku Konko, golang-dev, Dmitry Vyukov, roger peppe
Another way of thinking about this: it may be too late for 1.3, but it is good and early for 1.4. We're trying to speed up the release cycle over time, so missing a release won't be such a big deal.




voidl...@gmail.com

Feb 22, 2014, 12:09:10 PM
to golan...@googlegroups.com, Dmitry Vyukov, Rob Pike, roger peppe
>>it is possible that many of the performance-critical uses of channels use buffers, in which case this is a welcome change.

This is anecdotal, but I looked through my work/professional (large, 2+ years old, multiple contributors, full-time Go) code base and my personal projects and I found this statement was indeed true.
 
>>Another way of thinking about this: it may be too late for 1.3, but it is good and early for 1.4. We're trying to speed up the release cycle over time, so missing a release won't be such a big deal.

No harm in planning ahead of time- right?

shka...@gmail.com

Jun 6, 2014, 1:14:15 PM
to golan...@googlegroups.com
Will this change land in Go 1.4?