Allocating chan buffers lazily

Dmitry Vyukov

Jun 1, 2013, 5:03:58 PM
to golang-dev
I've sent the CL:
https://codereview.appspot.com/9730043
but I think it requires a discussion here.

The CL makes chans use two allocations: one for Hchan (the chan
descriptor) and a separate allocation for the buffer.
There are several reasons:
1. Simplifies GC code: now chan is not a special case, but just a
struct with a pointer to an array.
2. Allows bitmask-based GC: currently chan is not a struct nor an
array (struct followed by an array in a single memory block), so it's
difficult to handle in a generic way.
3. Allows lazy allocation of chan buffer.

That last point I want to describe in more detail.
I've seen several mentions of "unbounded channels". People buffer
elements in unbounded lists in an intermediate goroutine, which is not
very elegant. While unbounded queues are evil, queues with very large
capacity can be useful in, e.g., (1) a publisher-subscriber system, where
slow subscribers must not block publishers, or (2) actor-based
systems where there are lots of actors and each actor needs a
potentially very large queue, but very few actors will actually have
lots of messages queued; most will have at most a few elements in
the queue.

So the idea is to allocate chan buffer lazily along the lines of:
- if chan cap <= C (say 16), allocate it eagerly as now.
- then double allocation on overflow until max capacity.

I think it will allow new interesting patterns implemented with chans.

What do you think?

Kevin Gillette

Jun 1, 2013, 8:43:40 PM
to golan...@googlegroups.com
On Saturday, June 1, 2013 3:03:58 PM UTC-6, Dmitry Vyukov wrote:
> - if chan cap <= C (say 16), allocate it eagerly as now.
> - then double allocation on overflow until max capacity.

Could you clarify what you mean by chan cap? Is it the "effective" cap of a (partially) lazily allocated buffered chan, which is <= cap(ch)? Is max capacity the channel cap specified in make, or a runtime/platform constant holding a maximum capacity for pseudo-unbounded chans?

Dmitry Vyukov

Jun 2, 2013, 4:11:52 AM
to Kevin Gillette, golang-dev
chan cap is specified in make:
c := make(chan T, chancap)

I am *not* proposing any language change, e.g. "unbounded chans".

If you write:
c := make(chan T, 16)
it works as it does now -- the whole buffer for 16 elements is preallocated.

However, if you write:
c := make(chan T, 1024)
Currently it also preallocates a buffer for 1024 elements.
Under my proposal, it allocates a buffer for, say, 4 elements, and then
grows it to 8, 16, 32, ... 1024 as needed.
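A minimal sketch of this doubling policy (the function name growCap and the initial size of 4 are illustrative assumptions, not details of the CL):

```go
package main

import "fmt"

// growCap returns the next buffer capacity after an overflow:
// start small, double each time, and clamp to the capacity
// requested in make(chan T, maxCap).
func growCap(cur, maxCap int) int {
	if cur == 0 {
		cur = 4 // small eager allocation
	} else {
		cur *= 2
	}
	if cur > maxCap {
		cur = maxCap
	}
	return cur
}

func main() {
	n := 0
	for n < 1024 {
		n = growCap(n, 1024)
		fmt.Println(n) // 4 8 16 32 64 128 256 512 1024
	}
}
```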

Maxim Khitrov

Jun 2, 2013, 12:06:23 PM
to Dmitry Vyukov, Kevin Gillette, golang-dev
On Sun, Jun 2, 2013 at 4:11 AM, Dmitry Vyukov <dvy...@google.com> wrote:
> On Sun, Jun 2, 2013 at 4:43 AM, Kevin Gillette
> <extempor...@gmail.com> wrote:
>> On Saturday, June 1, 2013 3:03:58 PM UTC-6, Dmitry Vyukov wrote:
>>>
>>> - if chan cap <= C (say 16), allocate it eagerly as now.
>>> - then double allocation on overflow until max capacity.
>>
>>
>> Could you clarify what you mean by chan cap? Is it the "effective" cap of a
>> (partially) lazily allocated buffered chan, which is <= cap(ch)? Is max
>> capacity the channel cap specified in make, or a runtime/platform constant
>> holding a maximum capacity for pseudo-unbounded chans?
>
>
> chan cap is specified in make:
> c := make(chan T, chancap)
>
> I am *not* proposing any language change, e.g. "unbounded chans".

What's the argument against unbounded channels? There are no bounds
(other than the amount of memory) on how much you can append to a
slice. This seems like it wouldn't be much different.

I really like the idea of lazy allocation for channels, but my concern
is that if I write something like c := make(chan T, 1<<16), then one
compiler might only allocate space for 4 elements, while another will
allocate the whole thing at once. Unbounded channels, which would be a
language change, solve this problem by not giving the compiler any
concrete number for how much space will be required. Writing something
like c := make(chan T, -1) would require lazy allocation and create a
channel that never blocks on a send.

Dmitry Vyukov

Jun 2, 2013, 2:19:15 PM
to Maxim Khitrov, Kevin Gillette, golang-dev
On Sun, Jun 2, 2013 at 8:06 PM, Maxim Khitrov <m...@mxcrypt.com> wrote:
> On Sun, Jun 2, 2013 at 4:11 AM, Dmitry Vyukov <dvy...@google.com> wrote:
>> On Sun, Jun 2, 2013 at 4:43 AM, Kevin Gillette
>> <extempor...@gmail.com> wrote:
>>> On Saturday, June 1, 2013 3:03:58 PM UTC-6, Dmitry Vyukov wrote:
>>>>
>>>> - if chan cap <= C (say 16), allocate it eagerly as now.
>>>> - then double allocation on overflow until max capacity.
>>>
>>>
>>> Could you clarify what you mean by chan cap? Is it the "effective" cap of a
>>> (partially) lazily allocated buffered chan, which is <= cap(ch)? Is max
>>> capacity the channel cap specified in make, or a runtime/platform constant
>>> holding a maximum capacity for pseudo-unbounded chans?
>>
>>
>> chan cap is specified in make:
>> c := make(chan T, chancap)
>>
>> I am *not* proposing any language change, e.g. "unbounded chans".
>
> What's the argument against unbounded channels?

They are always a bad idea.
While a chan operates within its expected capacity, bounded and
unbounded chans behave the same. When a chan grows beyond the expected
capacity, an unbounded chan degrades ungracefully and uncontrollably,
while a bounded chan degrades gracefully: the overload condition
propagates first to the producer (which blocks), then to the producer of
the producer, which stops reading from the socket; the socket buffer
overflows and the overload condition propagates to remote producers.
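This back-pressure can also be made explicit at the producer with a non-blocking send; the following is a sketch of the general pattern (the helper name trySend is illustrative), not code from the runtime:

```go
package main

import "fmt"

// trySend attempts a non-blocking send. A false result signals
// overload, letting the caller throttle, drop, or disconnect the
// producer instead of queueing without bound.
func trySend(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan int, 2)
	for i := 0; i < 4; i++ {
		if !trySend(ch, i) {
			// buffer full: the overload condition surfaces here
			fmt.Println("overloaded at", i)
		}
	}
}
```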


> There are no bounds
> (other than the amount of memory) on how much you can append to a
> slice. This seems like it wouldn't be much different.

It is radically different.
For slices, the size is dictated by the algorithm. For chans, the size
is dictated by the relative speeds of various asynchronous agents.
Consider the following code:

c := make(chan T, unbounded)
go func() {
	for {
		c <- compute()
	}
}()
go func() {
	for v := range c {
		writeToDisk(v)
	}
}()

What will the maximum size of the chan be? What if you change your disk?
What if you run on a different machine? What if you optimize the
compute() function?


> I really like the idea of lazy allocation for channels, but my concern
> is that if I write something like c := make(chan T, 1<<16), then one
> compiler might only allocate space for 4 elements, while another will
> allocate the whole thing at once. Unbounded channels, which would be a
> language change, solve this problem by not giving the compiler any
> concrete number for how much space will be required. Writing something
> like c := make(chan T, -1) would require lazy allocation and create a
> channel that never blocks on a send.

There will always be QoI differences. In fact, I am pretty sure that
you can have a program today that runs fine with gc but fails with
gccgo.

Maxim Khitrov

Jun 2, 2013, 3:25:27 PM
to Dmitry Vyukov, Kevin Gillette, golang-dev
What if you have an agent that is known to produce a bounded number of
items, but that bound isn't known when the channel is created? In
addition, suppose that this agent doesn't care whether there is anyone
receiving from the channel or how quickly. It has some limited amount
of work to do, the output should be sent to a channel without
blocking, and then the agent should exit. If there is no other agent
receiving, then the channel and all buffered items should be garbage
collected.

I've run into this pattern a few times and couldn't use channels as
the solution precisely because I couldn't guarantee the sender's
ability to finish its work. My IMAP package is one example. I
originally wanted to use channels for delivering command responses,
but since the user may have no interest in the responses to a
particular command (the command was executed for its side-effects, yet
the responses still have to go somewhere), I had to use a slice/append
implementation instead.

minux

Jun 2, 2013, 4:01:21 PM
to Maxim Khitrov, Dmitry Vyukov, Kevin Gillette, golang-dev
On Mon, Jun 3, 2013 at 3:25 AM, Maxim Khitrov <m...@mxcrypt.com> wrote:
> What if you have an agent that is known to produce a bounded number of
> items, but that bound isn't known when the channel is created? In
> addition, suppose that this agent doesn't care whether there is anyone
> receiving from the channel or how quickly. It has some limited amount
> of work to do, the output should be sent to a channel without
> blocking, and then the agent should exit. If there is no other agent
> receiving, then the channel and all buffered items should be garbage
> collected.
>
> I've run into this pattern a few times and couldn't use channels as
> the solution precisely because I couldn't guarantee the sender's
> ability to finish its work. My IMAP package is one example. I
> originally wanted to use channels for delivering command responses,
> but since the user may have no interest in the responses to a
> particular command (the command was executed for its side-effects, yet
> the responses still have to go somewhere), I had to use a slice/append
> implementation instead.
how about this:

ch := make(chan []Result, 1) // the exact cap is not important, as long as the channel is buffered.

in your producer() (the only sender of ch), do something like this:
   // assume ret is the Result you want to send into the channel
   select {
   case ch <- []Result{ret}: // ok
   case old := <- ch: 
      ch <- append(old, ret)
   }
i think one could guarantee that the select won't block, and it will emulate
a channel with unbounded capacity (in the one-sender case).

your consumer just needs to do another range over the value received from
the channel.
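A self-contained sketch of this pattern (the helper name sendUnbounded is illustrative; a single sender is assumed, per the caveat above):

```go
package main

import "fmt"

type Result int

// sendUnbounded never blocks: it either sends a fresh one-element
// batch, or swaps out the pending batch, appends, and puts it back.
// Correct only with a single sender.
func sendUnbounded(ch chan []Result, ret Result) {
	select {
	case ch <- []Result{ret}: // buffer was empty
	case old := <-ch: // buffer was full: merge into the pending batch
		ch <- append(old, ret)
	}
}

func main() {
	// The exact cap is not important, as long as the channel is buffered.
	ch := make(chan []Result, 1)
	for i := 0; i < 5; i++ {
		sendUnbounded(ch, Result(i))
	}
	// The consumer ranges over each batch it receives.
	for _, v := range <-ch {
		fmt.Println(v) // 0 through 4, one per line
	}
}
```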

Maxim Khitrov

Jun 2, 2013, 4:30:01 PM
to minux, Dmitry Vyukov, Kevin Gillette, golang-dev
Not very pretty, but that's a good idea :)

Kevin Gillette

Jun 2, 2013, 4:53:40 PM
to golan...@googlegroups.com, Maxim Khitrov, Dmitry Vyukov, Kevin Gillette
On Sunday, June 2, 2013 2:01:21 PM UTC-6, minux wrote:
> ch := make(chan []Result, 1) // the exact cap is not important, as long as the channel is buffered.
>
> in your producer() (the only sender of ch), do something like this:
>    // assume ret is the Result you want to send into the channel
>    select {
>    case ch <- []Result{ret}: // ok
>    case old := <- ch:
>       ch <- append(old, ret)
>    }
> i think one could guarantee that the select won't block and it will emulate
> a channel with unbounded capacity (in the one sender case).
>
> your consumer just need to do another range over the value received from
> the channel.

With Dmitry's proposed optimization, the "unbounded channels in Go" pattern that you demonstrated would be, in both memory efficiency and semantics, equivalent to:

ch := make(chan Result, MaxInt)

// in producer
ch <- ret

If ret isn't a value of a zero-sized type, you'd run out of memory before or at the time the buffer fills up, just as you would with the equivalent number of items in the buffered slice. A MaxInt buffer is effectively unbounded, and you'd likely run into the issues with unbounded chans before you filled it; without this proposal, it just happens to be impractical to declare such a buffered channel.

Since (AFAIK) the current behavior is implementation-guaranteed never to produce an out-of-memory panic on a channel send, but the change could, I'd suggest that for compatibility, currently valid calls to make preallocate the full buffer, while passing a negative cap does lazy allocation. The reason is that it's preferable for long-running systems to fail at initialization if they're to fail at all -- preallocation would continue to have that effect where availability of memory is concerned (it's helpful not to need to defensively program against another place where panics could occur in non-trivial systems).

Another question: similar to split/copying stacks, would lazily allocated channels ever be able to incrementally deallocate themselves? I'm venturing a guess that, because of the concurrent nature of channels, shrinking, unlike growth, would be non-trivial.
 

Brad Fitzpatrick

Jun 2, 2013, 5:06:16 PM
to Kevin Gillette, Maxim Khitrov, Dmitry Vyukov, golan...@googlegroups.com

The language already allocates all over the place.

The "allocations panicking at runtime breaks compatibility" argument isn't a great one.

Kevin Gillette

Jun 2, 2013, 5:07:55 PM
to golan...@googlegroups.com, Kevin Gillette, Maxim Khitrov, Dmitry Vyukov
On Sunday, June 2, 2013 3:06:16 PM UTC-6, Brad Fitzpatrick wrote:
> The language already allocates all over the place.
>
> The "allocations panicking at runtime breaks compatibility" argument isn't a great one.

Fair enough. 

Dave Cheney

Jun 2, 2013, 5:18:56 PM
to Kevin Gillette, golang-dev, Maxim Khitrov, Dmitry Vyukov
I have several concerns about this proposed addition to the
<strike>language</strike> implementation.

1. I think there are three important channel sizes, 0, 1 and lots.
This proposal would seem to favor the lots case at the cost of another
indirect load in the 1 case.

2. In the lots case (where lots is on the order of a dozen to a few
hundred), will these allocations form a linked list, or will they be
rolled up, append style?

3. I don't think we should optimize at all for the lots-beyond-a-few-hundred
case; minux and Dmitry have already shown how an 'infinite' channel
can be created without this addition.

4. Is this going to make our already fragile Allocs benchmark tests
even less reliable?

In summary, this proposal started as a suggestion to make chan structs
more regular from the POV of the GC and somehow mutated into a new
implementation feature. I don't think merging the two ideas together
is a good idea.

Dmitry Vyukov

Jun 3, 2013, 12:32:24 AM
to Maxim Khitrov, Kevin Gillette, golang-dev
What if that bound turned out to be 1e15 later?


> In
> addition, suppose that this agent doesn't care whether there is anyone
> receiving from the channel or how quickly.


It does not work that way.
If the number of messages is bounded, just create the channel with that
bound, and potentially use a non-blocking send and check that it always
succeeds.
If you can not put any bound on chan capacity, then you need to create
a bounded chan anyway and deal with overload somehow. Blocking sends
provide a good default way to deal with overload -- throttling. If a
[malicious, or misbehaving] IMAP client sends an infinite sequence of
queries without reading the results, it may be a good idea to disconnect
it if the chan overflows, rather than waiting for it to kill your
server with swapping.



> It has some limited amount
> of work to do, the output should be sent to a channel without
> blocking, and then the agent should exit. If there is no other agent
> receiving, then the channel and all buffered items should be garbage
> collected.
>
> I've run into this pattern a few times and couldn't use channels as
> the solution precisely because I couldn't guarantee the sender's
> ability to finish its work. My IMAP package is one example. I
> originally wanted to use channels for delivering command responses,
> but since the user may have no interest in the responses to a
> particular command (the command was executed for its side-effects, yet
> the responses still have to go somewhere), I had to use a slice/append
> implementation instead.

You seem to implicitly assume that there is some upper bound on chan
size anyway. OK, will it be fine if the client does not read responses
and the chan size grows to 1e6? 1e9? 1e12? 1e15? You also seem to
assume that an unbounded chan will somehow magically solve that
situation for you. It won't. It will just break badly.
You can say "but it will never ever grow beyond 1e6". OK, then just
put that bound on the chan capacity. And then you will ask yourself "and
what if a misbehaving client sends me an infinite sequence of
requests?". Now you are thinking in the right direction.

Dmitry Vyukov

Jun 3, 2013, 12:34:51 AM
to Kevin Gillette, golang-dev, Maxim Khitrov
It is trivial with the current mutex-based implementation. It may be
non-trivial with a future lock-free implementation.
But my idea is to not shrink chans; otherwise it can provoke the same
thrashing as the "split stack" problem (repeatedly shrinking and
re-growing across a boundary).

Dmitry Vyukov

Jun 3, 2013, 12:35:42 AM
to Brad Fitzpatrick, Kevin Gillette, Maxim Khitrov, golang-dev
On Mon, Jun 3, 2013 at 1:06 AM, Brad Fitzpatrick <brad...@golang.org> wrote:
> The language already allocates all over the place.
>
> The "allocations panicing at runtime breaks compatibility" argument isn't a
> great one.


+1

Does "i := 0" allocate?

Dmitry Vyukov

Jun 3, 2013, 12:44:39 AM
to Dave Cheney, Kevin Gillette, golang-dev, Maxim Khitrov
On Mon, Jun 3, 2013 at 1:18 AM, Dave Cheney <da...@cheney.net> wrote:
> I have several concerns about this proposed addition to the
> <strike>language</strike> implementation.
>
> 1. I think there are three important channel sizes, 0, 1 and lots.
> This proposal would seem to favor the lots case at the cost of another
> indirect load in the 1 case.


There is an indirect load already. You need to load the enqueue/dequeue
position from the chan first; only then can you calculate the address in
the buffer.


> 2. In the lots case (where lots is on the order of a dozen to a few
> hundred), will these allocations form a linked list, or will they be
> rolled up, append style?

No linked lists. It will be an array.


> 3. I don't think we should optimize at all for the lots > few hundred
> case, minux and dmitry have already shown how an 'infinite' channel
> can be created without this addition.

Yes, it's doable, but a bit ugly and slow now.

> 4. Is this going to make our already fragile Allocs benchmark tests
> even less reliable ?

It's possible, but they will be "true positives". We will need to update
the upper bounds on allocations in the tests.


> In summary, this proposal started as a suggestion to make chan structs
> more regular from the POV of the GC and somehow mutated into a new
> implementation feature. I don't think merging the two ideas together
> is a good idea.

Yes, probably I should not have mentioned the CL here.
Let's discuss only lazy chan buffers here.

Brad Fitzpatrick

Jun 3, 2013, 1:19:44 AM
to Dmitry Vyukov, Kevin Gillette, Maxim Khitrov, golang-dev
> Does "i := 0" allocate?

func foo() {
   i := 0
   go func() {
      _ = i
   }()
}

Keith Randall

Jun 3, 2013, 2:05:27 AM
to Brad Fitzpatrick, Dmitry Vyukov, Kevin Gillette, Maxim Khitrov, golang-dev
func fallingtree(sound chan bool) {
	sound <- true
}

func person(sound chan bool) {
	if <-sound {
		fmt.Println("sound!")
	}
}

func forest(npeople int) {
	sound := make(chan bool)
	go fallingtree(sound)
	for i := 0; i < npeople; i++ {
		go person(sound)
	}
}

func main() {
	forest(0)
}



Dmitry Vyukov

Jun 3, 2013, 2:40:54 AM
to Keith Randall, Brad Fitzpatrick, Kevin Gillette, Maxim Khitrov, golang-dev
On Mon, Jun 3, 2013 at 10:05 AM, Keith Randall <k...@google.com> wrote:
> func fallingtree(sound chan bool) {
> 	sound <- true
> }
> func person(sound chan bool) {
> 	if <-sound {
> 		fmt.Println("sound!")
> 	}
> }
> func forest(npeople int) {
> 	sound := make(chan bool)
> 	go fallingtree(sound)
> 	for i := 0; i < npeople; i++ {
> 		go person(sound)
> 	}
> }
> func main() {
> 	forest(0)
> }

?

Robin

Jun 3, 2013, 9:07:18 AM
to golan...@googlegroups.com
On 06/03/2013 08:40 AM, Dmitry Vyukov wrote:
> On Mon, Jun 3, 2013 at 10:05 AM, Keith Randall <k...@google.com> wrote:
>> func fallingtree(sound chan bool) {
>> 	sound <- true
>> }
>> func person(sound chan bool) {
>> 	if <-sound {
>> 		fmt.Println("sound!")
>> 	}
>> }
>> func forest(npeople int) {
>> 	sound := make(chan bool)
>> 	go fallingtree(sound)
>> 	for i := 0; i < npeople; i++ {
>> 		go person(sound)
>> 	}
>> }
>> func main() {
>> 	forest(0)
>> }
>
> ?

"If a tree falls in a forest and no one is around to hear it, does it
make a sound?"

ref: http://en.wikipedia.org/wiki/If_a_tree_falls_in_a_forest

Kevin Gillette

Jun 3, 2013, 3:46:18 AM
to golan...@googlegroups.com
The answer is, of course: "no, it makes a leak"

Russ Cox

Jun 3, 2013, 8:16:22 AM
to Dmitry Vyukov, golang-dev
Dave Cheney is right: implementing lazy allocation of the channel buffer amounts to a language change. It's fine to propose and discuss language changes, but it should be kept as separate as possible from implementation changes triggered by engineering decisions. Unbounded allocation in any program is unwise. It's far from clear to me that we should be encouraging people to program against arbitrarily large channel buffers.

If there are solid engineering reasons to split the channel into two allocations, with the buffer as a separate array, then that's not visible to programmers in any way. It's just an implementation detail, not a language change, and so without looking at the specifics of the CL, at least the idea is probably fine. But please do not implement lazy growth of the channel buffer.

Russ


Ian Lance Taylor

Jun 3, 2013, 9:34:32 AM
to Russ Cox, Dmitry Vyukov, golang-dev
On Mon, Jun 3, 2013 at 5:16 AM, Russ Cox <r...@golang.org> wrote:
> Dave Cheney is right: implementing lazy allocation of the channel buffer
> amounts to a language change. It's fine to propose and discuss language
> changes, but it should be kept as separate as possible from implementation
> changes triggered by engineering decisions. Unbounded allocation in any
> program is unwise. It's far from clear to me that we should be encouraging
> people to program against arbitrarily large channel buffers.

Let's not confuse lazy allocation of the channel buffer with unbounded
allocation. Dmitriy was only proposing that we grow the channel
buffer lazily up to the specified bound, rather than allocating it all
at once. That is not a language change, it is only an implementation
change. For example, we could manage the channel buffer as a linked
list of buffers, where each buffer could hold, say, 32 entries. That
would give us channel buffers that grow and shrink as needed, while
still always being constrained by the buffer size specified in the
program.

Ian
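One way to picture the linked-list-of-buffers layout Ian describes is the single-threaded sketch below. The type names, the bound check, and the fixed chunk size of 32 are illustrative assumptions; a real chan implementation would also need locking and sleeping/waking of goroutines.

```go
package main

import "fmt"

const chunkSize = 32

// chunk is one fixed-size segment of the buffer.
type chunk struct {
	elems      [chunkSize]int
	head, tail int
	next       *chunk
}

// chunkedQueue grows one chunk at a time, yet stays constrained
// by the bound specified at creation, mirroring Ian's sketch.
type chunkedQueue struct {
	first, last *chunk
	len, bound  int
}

// push appends v; false means the queue is at its bound
// (where a real bounded chan would block the sender).
func (q *chunkedQueue) push(v int) bool {
	if q.len == q.bound {
		return false
	}
	if q.last == nil || q.last.tail == chunkSize {
		c := &chunk{} // lazy growth: allocate a chunk only when needed
		if q.last == nil {
			q.first, q.last = c, c
		} else {
			q.last.next = c
			q.last = c
		}
	}
	q.last.elems[q.last.tail] = v
	q.last.tail++
	q.len++
	return true
}

// pop removes the oldest element; drained chunks are dropped,
// so the buffer also shrinks as it empties.
func (q *chunkedQueue) pop() (int, bool) {
	if q.len == 0 {
		return 0, false
	}
	v := q.first.elems[q.first.head]
	q.first.head++
	q.len--
	if q.first.head == chunkSize {
		q.first = q.first.next // release the drained chunk to the GC
		if q.first == nil {
			q.last = nil
		}
	}
	return v, true
}

func main() {
	q := &chunkedQueue{bound: 100}
	for i := 0; i < 100; i++ {
		q.push(i)
	}
	fmt.Println(q.push(100)) // false: at capacity, a chan would block here
	v, _ := q.pop()
	fmt.Println(v) // 0: FIFO order is preserved across chunks
}
```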

Jan Mercl

Jun 3, 2013, 9:54:15 AM
to Ian Lance Taylor, Russ Cox, Dmitry Vyukov, golang-dev
On Mon, Jun 3, 2013 at 3:34 PM, Ian Lance Taylor <ia...@golang.org> wrote:
> That is not a language change, it is only an implementation
> change.

I hope you're right. But Dmitry wrote earlier (in the OP): "I think it
will allow new interesting patterns implemented with chans." That
makes me a bit nervous. As a non-language change (per you, Ian), the
change should not be observable to a first approximation at all,
correct? What new patterns can it enable, then?

-j

Brad Fitzpatrick

Jun 3, 2013, 10:01:17 AM
to Jan Mercl, Ian Lance Taylor, Russ Cox, Dmitry Vyukov, golang-dev
He meant later, optionally, once we have this. That's how I read it.
 

Dmitry Vyukov

Jun 3, 2013, 10:02:49 AM
to Jan Mercl, Ian Lance Taylor, Russ Cox, golang-dev
I only meant that those patterns currently consume lots of memory (e.g.
1e6 chans with 1e3 capacity); with this change they will consume less
memory.

Russ Cox

Jun 3, 2013, 10:02:54 AM
to Ian Lance Taylor, Dmitry Vyukov, golang-dev
Today, c = make(chan int, 1<<30) is prohibitively expensive.
If the implementation changes to do what Dmitriy describes, then it is cheap.
Programs written assuming the lazy implementation will not run on the old implementation.
That's a language change.

Russ

Gustavo Niemeyer

unread,
Jun 3, 2013, 10:05:21 AM6/3/13
to Ian Lance Taylor, Russ Cox, Dmitry Vyukov, golang-dev
The difference is subtle. If this is allocated lazily:

make(chan T, math.MaxInt64)

for most practical purposes, it is unbounded allocation, and it must
necessarily be accompanied by a language change, as it will crash any
implementation of the language that does not allocate lazily.

It's also a time bomb. It seems wiser to have people think through
what happens when the producer is faster than the consumer. I've
never seen any reasonable argument for lazy allocation, or for
extremely large channels for that matter. Channels are
communication primitives, not storage.


gustavo @ http://niemeyer.net

Dmitry Vyukov

Jun 3, 2013, 10:10:24 AM
to Gustavo Niemeyer, Ian Lance Taylor, Russ Cox, golang-dev
Another side of this:
as a communication mechanism, chans also provide feedback in the form of
producer throttling. This is crucial for asynchronous message-passing
systems. Once one starts buffering somewhere else, this property of
chans is lost.

Rémy Oudompheng

Jun 3, 2013, 10:14:11 AM
to Russ Cox, Ian Lance Taylor, Dmitry Vyukov, golang-dev
That is true. However, I don't remember us having the same reaction when
the new hashmap implementation was introduced.

In Go 1.0.1, the following program uses 30MB of memory (the Go heap is
1MB large). In Go 1.1, it uses 1.26GB (the Go heap is 1.25GB large).

package main

import "time"

func main() {
	m := make(map[int64]int64, 40<<20)
	time.Sleep(time.Minute)
	_ = m
}

Rémy.

Gustavo Niemeyer

Jun 3, 2013, 10:17:23 AM
to Dmitry Vyukov, Ian Lance Taylor, Russ Cox, golang-dev
That property is always there, because consumers are not forced to
receive when that's not appropriate. One can buffer as much as
desired.


gustavo @ http://niemeyer.net

Dmitry Vyukov

Jun 3, 2013, 10:26:29 AM
to Gustavo Niemeyer, Ian Lance Taylor, Russ Cox, golang-dev
If you do manual buffering (receive eagerly and put the messages into
a slice), the property is not there until you explicitly implement
logic to handle it.