elastic channel buffers


Sameer Ajmani

Feb 2, 2012, 11:03:41 PM
to golan...@googlegroups.com
In my Go code, I generally see channel buffers in three classes:
1) unbuffered, i.e., buffer size=0: we care about the synchronization of sends and receives
2) buffer size=K (usually 1): we expect at most K sends on this channel
3) arbitrary buffer: we expect many sends on this channel; there's usually one receiver

In case (3), the buffer size is really a performance parameter, and the runtime, not the programmer, should decide which value is best.  For example, if the typical send:receive ratio is 10:1, then a buffer size of 10 works well, as 10 sender goroutines can issue their sends, then the receiver goroutine can service all 10 sends in a single scheduling pass.  But if we make the buffer too large, we waste resources.  Are there any good heuristics for choosing channel buffer size in this case?  Should Go support an "elastic" channel buffer size for users who want to delegate this decision to the runtime, e.g., make(chan *request, -1) or make(chan *request, runtime.ChooseBufferSize())?

S

Kyle Lemons

Feb 3, 2012, 3:54:54 AM
to Sameer Ajmani, golan...@googlegroups.com
I don't think it would be a bad thing to have, though I think it's far from the biggest thing the runtime could decide automatically, and I'm not sure the runtime would have the information necessary to make an informed decision any time soon.  Perhaps a more interesting exercise would be to create a "training" interface to the runtime (similar to the CPU and memory profiling) that can be used to empirically determine some of these values.  I may be wrong, but I believe creating a buffered channel with a large buffer doesn't immediately allocate that much memory, so you could theoretically do something like

ch := make(chan *request, SomeLargeValue)
runtime.TrainChannel(ch)
// ... use it

and the runtime would (when enabled) dump the maximum buffer size and the current in-use count periodically.  They could run with this for awhile under "normal" (possibly simulated) load.  We could then create tools to visualize and/or analyze this output and enable the programmer to make an informed decision about the value to set.  I imagine there could be a lot of other things that could be trained (size/capacity of a slice, timeout/deadlines, number of goroutines, who knows).

~K

Roberto Costumero Moreno

Feb 3, 2012, 6:23:39 AM
to Kyle Lemons, Sameer Ajmani, golan...@googlegroups.com
I don't think letting the runtime choose the size based on assumptions it makes about the code would be a good enough solution.

A buffer sized from user input could be less efficient and less safe, and it gives the runtime more responsibility for the buffer's memory management.

It could be great to have a runtime-growable buffer based, for example, on the load of the program, but I think the downside is that performance would be penalized by all the memory allocations and frees that would need to be done.

John Asmuth

Feb 3, 2012, 8:30:40 AM
to golan...@googlegroups.com, Kyle Lemons, Sameer Ajmani


On Friday, February 3, 2012 6:23:39 AM UTC-5, Roberto wrote:
It could be great to have a runtime-growable buffer based, for example, on the load of the program, but I think the downside is that performance would be penalized by all the memory allocations and frees that would need to be done.


You can actually be pretty efficient about this if you use a well-implemented circular buffer.

Sameer Ajmani

Feb 3, 2012, 9:11:10 AM
to Roberto Costumero Moreno, golan...@googlegroups.com, Kyle Lemons

If channel buffers are indeed allocated lazily (so I can set a buffer size of 1000 and only use that when the offered load is that high), then I can just use a large buffer size as my "arbitrary" value. We don't want an infinite buffer, because we need to be able to apply backpressure on the senders when the receiver can't keep up.

John Asmuth

Feb 3, 2012, 1:03:22 PM
to golan...@googlegroups.com, Roberto Costumero Moreno, Kyle Lemons
I don't believe they are lazy.

However, an array of 1000 pointers is not that big a deal if you allocate it at the beginning.

Marcel

Feb 3, 2012, 2:32:28 PM
to golang-nuts
You can implement your own dynamic buffered channel with a slice and
use two channels to push and pop buffer values.
Like in this example "a channel" with a dynamic buffer:
http://play.golang.org/p/AiHBsxTFpj
The example could be extended to have a maximum buffer size and
multiple receivers.
It is probably also possible to hack something together where you
extend the channel type to get a result where you can use the dynamic
buffered channel as a normal channel.

André Moraes

Feb 3, 2012, 2:37:25 PM
to golan...@googlegroups.com
Another solution:

Make a circular buffer of channels.
Expose the channels only for read, never for writing.
Expand/Contract the buffer by adding/removing new channels. Remember
to close on remove.
Send the information using the circular buffer, that way you can be
sure that nobody will write on a closed channel.


--
André Moraes
http://andredevchannel.blogspot.com/

eap...@gmail.com

Jan 9, 2014, 12:59:51 PM
to golan...@googlegroups.com, marce...@googlemail.com
Based on this idea and sample code I ended up writing an entire package that implements a bunch of related ideas in this area:

https://github.com/eapache/channels
https://godoc.org/github.com/eapache/channels

It includes channels with "infinite" buffers, channels with finite-but-resizable buffers, and a bunch of other useful types and functions.

Øyvind Teig

Jan 12, 2014, 3:46:46 PM
to golan...@googlegroups.com
This is very interesting! 

There is also a mature channel class library, JCSP: Communicating Sequential Processes for Java. It's mature with respect to offered functionality as well as having been thoroughly debugged for some 15 years. See http://www.cs.kent.ac.uk/projects/ofa/jcsp/. Although it's built on Java, which does not have any channel primitive, the interface might be of interest.

Personally I'd be thrilled to see 'xchan' also implemented (a first; up to now it has only been modeled). Have a look at http://www.teigfam.net/oyvind/pub/pub_details.html#XCHAN

(When that's been done, have a look at 'feathering', which avoids sending unnecessary messages. Go does not have output guards in select, so it's probably not possible, or not needed, there. See http://www.teigfam.net/oyvind/pub/pub_details.html#FEATHERING)

Øyvind Teig
Trondheim, Norway

Dmitry Vyukov

Jan 12, 2014, 10:58:18 PM
to Øyvind Teig, golang-nuts
On Mon, Jan 13, 2014 at 12:46 AM, Øyvind Teig <oyvin...@teigfam.net> wrote:
> This is very interesting!
>
> There also is a matured channel class library in JCSP: Communicating
> Sequential Processes for Java. It's matured with respect to offered
> functionality as well as having been thoroughly debugged for some 15 years.
> See http://www.cs.kent.ac.uk/projects/ofa/jcsp/. Although it's built on
> Java, which does not have any channel primitive, the interface might be of
> interest.
>
> Personally I'd be thrilled to see 'xchan' also implemented (as a first, up
> to now it's only been modeled). Have a look at
> http://www.teigfam.net/oyvind/pub/pub_details.html#XCHAN

Hi,

What problems will it make easier to solve with Go?

Øyvind Teig

Jan 13, 2014, 4:04:15 AM
to golan...@googlegroups.com, Øyvind Teig

kl. 04:58:18 UTC+1 mandag 13. januar 2014 skrev Dmitry Vyukov følgende:
What problems will it make easier to solve with Go?
 
Best question there is!
 
In the xchan paper's Appendix I have two Go examples (courtesy of golang-nuts). However, I think the golang-nuts group would be more qualified to answer this question than I am. I have mentioned xchan before in this group, but not really suggested it as needed for Go. But then, in the scope of this thread I thought it worth mentioning. Observe that the two papers are peer reviewed under a rather rigorous regime (CPA).
 
The xchan paper also discusses the semantic differences between xchan and (output) guards in select. Go does not have the boolean expression of guards as first-class citizen, but it can simulate them and thus effectively have a flavour of them. Therefore I infer that xchan in Go would add something different. And with the introduction of 'feathering' in the second paper the difference is even further emphasized. I am not certain if feathering could be introduced if it were not for xchan.
 
One may view channels as the "goto of communication". In many use cases they are used to build higher-level patterns, such as some kind of transactions. If these patterns are then supported by the language, which is able to run 'usage checks' on them, then we see that chan is only the beginning. The library provided for this thread addresses this, and also contains an overflowing channel type, a common way to try to make matters safe. In the occam world we made overflowing buffers to simulate this.
 
The 'xchan' is one such pattern. It joins asynchronous (buffered, non-blocking) and synchronous (blocking) thinking and practice. xchan provides a 'safe' asynchronous mechanism on a synchronized foundation. I have used several blog notes recently trying to understand the common view that blocking is perceived as 'evil' and asynchronous is all that makes sense. I think I have been able to explain it. This view is so common that I believe it is the main threat to Go, which may easily fail to convey that blocking is not evil. Even in golang-nuts I often see it shine through that chan needs to be buffered. I added buffering in xchan simply to avoid waving a red flag at most programmers, but it's not needed. I fail to see any system where internal buffering + xchan with no buffering is not best.
 
The 'feathering' is also one such pattern - where explicit non-interest saves us communications. It's an implicit type of subscription mechanism.
 
I have suggested at least one example of xchan use in the paper: a server connected to an incoming connection, where the server never blocks because it empties itself over an xchan. So the server is always able to handle a connection. And overflow, flushing, prioritization etc. are handled by the server application.
 
xchan could potentially also help move Go into the safety-critical (embedded) world, but I guess that is a long way off.
 
Øyvind

Dmitry Vyukov

Jan 13, 2014, 4:42:17 AM
to Øyvind Teig, golang-nuts
Is it Section 5.2 Local ChanSched ANSI C with Channel-Ready-Channel?

Øyvind Teig

Jan 13, 2014, 9:00:35 AM
to golan...@googlegroups.com, Øyvind Teig
Yes, but that example is tied to a small embedded system with "hand coded" XCHAN and usage of it. The paper suggests a first-class citizen XCHAN, with usage checks by the compiler (if possible). Mentioning xchan in this golang-nuts thread would be a suggestion to try it out as a channel type in package channels.

Dmitry Vyukov

Jan 13, 2014, 9:24:36 AM
to Øyvind Teig, golang-nuts
I may be missing something, but it seems to me that the use case is
already perfectly covered by non-blocking sends in Go:

for msg := range inchan {
	select {
	case outchan <- msg:
	default:
		// handle overflow (decide what value(s) to discard)
	}
}

eap...@gmail.com

Jan 13, 2014, 10:03:55 AM
to golan...@googlegroups.com
Xchan looks like a neat concept, but I don't have the time/inclination to implement it myself at the moment.

Martin Schnabel

Jan 13, 2014, 3:47:42 PM
to golan...@googlegroups.com
On 01/09/2014 06:59 PM, eap...@gmail.com wrote:
> Based on this idea and sample code I ended up writing an entire package
> that implements a bunch of related ideas in this area:
>
> https://github.com/eapache/channels
> https://godoc.org/github.com/eapache/channels
>
> It includes channels with "infinite" buffers channels with
> finite-but-resizable buffers and a bunch of other useful types and
> functions.
>

Just took a look at the package. In all channel implementations that use
a buffer you slice the buffer[1:] when sending but never readjust the
buffer. This means the buffer will grow indefinitely with every
append/receive. This is very much broken.

Evan Huus

Jan 13, 2014, 3:56:28 PM
to golan...@googlegroups.com

No it's not, due to the convenient behaviour of the append() function. As per [1], when appending to a full slice it allocates a new slice of necessary size, copies the elements, and returns the new slice. The old slice (with all the "stale" elements) can then be garbage-collected. This leads to very simple code and nice amortized run-time costs. A linked-list behaviour would provide guaranteed constant-time cost, but would kill the garbage-collector.

[1] http://blog.golang.org/slices

Kyle Lemons

Jan 13, 2014, 3:58:28 PM
to Martin Schnabel, golang-nuts
It won't grow indefinitely.  When it needs to reallocate, it will get a new buffer with only the elements in the slice (plus the additional capacity).

 
--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Martin Schnabel

Jan 13, 2014, 4:01:32 PM
to golan...@googlegroups.com
On 01/13/2014 09:58 PM, Kyle Lemons wrote:
> It won't grow indefinitely. When it needs to reallocate, it will get a
> new buffer with only the elements in the slice (plus the additional
> capacity).
>
> http://play.golang.org/p/yKiLdet-0m

Ok thanks, I stand corrected. For some reason that never occurred to me.

aro...@gmail.com

Jan 13, 2014, 4:32:27 PM
to golan...@googlegroups.com, Martin Schnabel
On Monday, January 13, 2014 12:58:28 PM UTC-8, Kyle Lemons wrote:
It won't grow indefinitely.  When it needs to reallocate, it will get a new buffer with only the elements in the slice (plus the additional capacity).


That's not the same, that's truncating the slice:
   a = a[:1]

But if you naively use the slice as a FIFO:
   el, a = a[0], a[1:] 

...then you have the memory problems, since you are continuously shifting your usage along a slice:  http://play.golang.org/p/mJJZO8iiEA
The runtime is forced to continuously reallocate and GC old slices.

mb0 is correct.

- Augusto


 

Evan Huus

Jan 13, 2014, 4:37:39 PM
to golan...@googlegroups.com, Martin Schnabel, aro...@gmail.com
On Monday, January 13, 2014 4:32:27 PM UTC-5, aro...@gmail.com wrote:


That's not the same, that's truncating the slice:
   a = a[:1]

But if you naively use the slice as a FIFO:
   el, a = a[0], a[1:] 

...then you have the memory problems, since you are continuously shifting your usage along a slice:  http://play.golang.org/p/mJJZO8iiEA
The runtime is forced to continuously reallocate and GC old slices.

This is all true, but the only alternative is to implement a linked-list-type structure whose many small allocations is actually more expensive for the GC to deal with.
 
mb0 is correct.

Not really. They were concerned that the old slices would never be GCed and so memory would leak, which is not the case.
 

simon place

Jan 13, 2014, 4:50:51 PM
to golan...@googlegroups.com
"The old slice (with all the "stale" elements) can then be garbage-collected."

AFAIK slices don't have any elements; their underlying array does, and append beyond capacity creates an EXTENDED copy. You could have all the original slices go away, and so their array GC'ed, but the new one contains a copy of all the "stale" elements, even if no slice remains that refers to any elements from before the append point.

So as long as there is a slice, any of its derived slices, or any from an append of any of them, nothing really goes away (except some duplicates).

But by doing your own special append, copying only what's needed to a new array and making the slices on that, you could do it.

aro...@gmail.com

Jan 13, 2014, 5:04:59 PM
to golan...@googlegroups.com, Martin Schnabel, aro...@gmail.com
On Monday, January 13, 2014 1:37:39 PM UTC-8, Evan Huus wrote:

This is all true, but the only alternative is to implement a linked-list-type structure whose many small allocations is actually more expensive for the GC to deal with.

I see.  Another alternative is to maintain two slices.  The active slice that you are slicing as you are removing elements, and an overflow slice that can grow as necessary.  When the active slice is empty, swap the two.  By keeping track of the original active slice, you can re-use memory efficiently but still allow infinite FIFO growing without producing garbage:


 
mb0 is correct.

Not really. They were concerned that the old slices would never be GCed and so memory would leak, which is not the case.

Thanks, I misunderstood.

Evan Huus

Jan 13, 2014, 5:06:40 PM
to golan...@googlegroups.com
On Monday, January 13, 2014 4:50:51 PM UTC-5, simon place wrote:
"The old slice (with all the "stale" elements) can then be garbage-collected."

AFAIK slices don't have any elements, their underlying array does

Yes, sorry, I misspoke.
 
and append beyond capacity creates an EXTENDED copy

No, it only creates a copy of those elements still referenced by the slice, so the copy does *not* contain the stale elements. This is implied by http://blog.golang.org/slices and can be easily tested, eg http://play.golang.org/p/QZN8ZPV9-V only ever uses a few MB of memory even when 10 million integers (~40MB) have been transferred.

(unfortunately the playground won't allow 3rd-party imports so you can't run that online, but you can copy-paste it locally to verify)

Øyvind Teig

Jan 13, 2014, 5:07:37 PM
to golan...@googlegroups.com, Øyvind Teig
kl. 15:24:36 UTC+1 mandag 13. januar 2014 skrev Dmitry Vyukov følgende:

I may be missing something, but it seems to me that the use case is
already perfectly covered by non-blocking sends in Go:

for msg := range inchan {
  select {
  case outchan <- msg:
  default:
    // handle overflow (decide what value(s) to discard)
  }
}

Full and overflow are not the same. The fact that a message is not taken by the receiver is not the same as an overflow. It simply means that the receiver is not ready, or the channel buffer is full. The sending process (the "server") can continue to hold the value it tried to send but failed to deliver. Deciding that this needs overflow handling is premature. For the non-buffered case it could also mean that scheduling of the receiver has not been done yet, and that it would be ready when scheduled, immediately enter the select, and then take the input.

If we did it like the example above, we would need to retry sending instead of just losing the message. We would have to do busy-poll sending. This is discussed in the paper. This is where the feedback channel of the xchan comes in. Xchan has three terminal points: send, receive and x-ready. The x-ready channel comes from the run-time system, not the receiver. I am therefore uncertain whether a Go channel treated as bidirectional would be able to carry this scheme - and whether xchan would be needed.

When the x-ready arrives telling that the receiver is ready (or buffer has space) the server must send something. If it has got new input in the meantime it had indeed overflowed and can decide to send the newest (plus perhaps an overflow tag). If not, it would send the original message. It must contractually send something. This is where help from the compiler would be nice, to check this pattern. In the example above, having a non-buffered chan it would theoretically be possible to lose all messages or lose none, depending on scheduling.

Handling of 'full' by the server could also mean to send a synchronizing stop message back to the external unit that sends to server, avoiding any overflow at that stage.

In Go context there may be subtleties that I've got wrong, but that's basically how xchan works (in the paper). I'd certainly be happy to see xchan discussed by some of you Go experts here. 

The similarity with Linux select is discussed in the paper. In my original url there also is a model of xchan, done by prof. Peter H. Welch at UofKent. And I have modeled it in CSPm with FDR2.

Øyvind

Evan Huus

Jan 13, 2014, 5:10:36 PM
to golan...@googlegroups.com, Martin Schnabel, aro...@gmail.com

Huh, neat idea. The only potential drawback is that they don't shrink back down when the number of buffered items shrinks. I'd be interested in seeing benchmarks of two approaches though.
 

Kyle Lemons

Jan 13, 2014, 5:50:17 PM
to Evan Huus, golang-nuts, Martin Schnabel, Augusto Roman
Apologies for the bug in my previous example.
 
The capacity of reallocated buffers definitely does decrease.  It's based on the size of the slice, not the number of elements that were originally in the underlying array.  Because of this, slices actually do work quite well as a queue, automatically handling allocation for bursts of data and shrinking the footprint back down when the spike is gone.


Notice that the capacities and the number of extra elements available for future data increase during the burst and come back down after.

simon place

Jan 13, 2014, 7:14:47 PM
to golan...@googlegroups.com
I see now: the source slice is copied, not the array, so the new array starts from the index of the start of the source slice, and you can't even get to the array from the slice.


func Append(slice []int, elements ...int) []int {
    n := len(slice)
    total := len(slice) + len(elements)
    if total > cap(slice) {
        // Reallocate. Grow to 1.5 times the new size, so we can still grow.
        newSize := total*3/2 + 1
        newSlice := make([]int, total, newSize)
        copy(newSlice, slice)
        slice = newSlice
    }
    slice = slice[:total]
    copy(slice[n:], elements)
    return slice
}


Dmitry Vyukov

Jan 14, 2014, 2:23:36 AM
to Øyvind Teig, golang-nuts
On Tue, Jan 14, 2014 at 2:07 AM, Øyvind Teig <oyvin...@teigfam.net> wrote:
> kl. 15:24:36 UTC+1 mandag 13. januar 2014 skrev Dmitry Vyukov følgende:
>
>>
>> I may be missing something, but it seems to me that the use case is
>> already perfectly covered by non-blocking sends in Go:
>>
>> for msg := range inchan {
>> select {
>> case outchan <- msg:
>> default:
>> // handle overflow (decide what value(s) to discard)
>> }
>> }
>
>
> Full and overflow are not the same. The fact that a message is not taken by
> the receiver is not the same as an overflow.

What is the difference? You just need to set buffer size so that full==overflow.

Øyvind Teig

Jan 14, 2014, 2:59:37 AM
to golan...@googlegroups.com, Øyvind Teig

kl. 08:23:36 UTC+1 tirsdag 14. januar 2014 skrev Dmitry Vyukov følgende:
On Tue, Jan 14, 2014 at 2:07 AM, Øyvind Teig <oyvin...@teigfam.net> wrote:
> kl. 15:24:36 UTC+1 mandag 13. januar 2014 skrev Dmitry Vyukov følgende:
>
>>
>> I may be missing something, but it seems to me that the use case is
>> already perfectly covered by non-blocking sends in Go:
>>
>> for msg := range inchan {
>>   select {
>>   case outchan <- msg:
>>   default:
>>     // handle overflow (decide what value(s) to discard)
>>   }
>> }
>
>
> Full and overflow are not the same. The fact that a message is not taken by
> the receiver is not the same as an overflow.

What is the difference? You just need to set buffer size so that full==overflow.
 
I did try to explain exactly that. Looking at border cases is always a good idea:
  • For a zero-buffered channel, in your case sending to a receiver that is not ready will cause overflow (0==0). That's simply wrong semantics; the sender shall block. And as we know, blocking is not harmful or evil. The else-on-failed-send condition is there simply to be able to poll to see if there is a receiver, which is nice to have for some cases. And if you don't want to lose that message you'd have to re-send by timeout or busy-poll. That's not CSP.
  • For an N-buffered channel you say that when the channel is full there is overflow (N==N). That's also wrong semantics. Overflow happens when you try to fill into a full buffer. (The fact that the buffered channel then needs to keep the old messages, since there is no "flush" on a channel (and there should not be..), points toward having full control of the buffer and just moving it inside a goroutine. So a zero-buffered chan or xchan is in my opinion the most useful channel.)

Øyvind

Øyvind Teig

Jan 14, 2014, 3:15:15 AM
to golan...@googlegroups.com
kl. 16:03:55 UTC+1 mandag 13. januar 2014 skrev Evan Huus følgende:
Xchan looks like a neat concept, but I don't have the time/inclination to implement it myself at the moment.
 
Ok. And besides, you might have to get into the Go scheduler to implement a proper ("classic") xchan... (how's that for an inclination..). (But that would just screw up Go, xchan isn't there by design).
 
But then, Peter Welch's model does it a little differently ("pre-confirmed"), where it might be possible to do it without the scheduler's immediate help. But the latter probably makes 'feathering' (not sending not-interesting messages) not possible.

Øyvind

 

Dmitry Vyukov

Jan 14, 2014, 3:40:45 AM
to Øyvind Teig, golang-nuts
I probably understand what you want to do. Correct me if I am wrong:
you want to implement arbitrarily complex overload control on top of
goroutines and channels.
But, hey, Go has channels of channels, and this gives you a natural way
to say "I want a message. Now!":

inc := make(chan *Req, 1000)
outc := make(chan chan *Req, 1000)
go func() {
	// dispatcher
	reqs := MakeHeapOrWhatever()
	for {
		select {
		case r := <-inc:
			reqs.Add(r)
		case c := <-outc:
			c <- reqs.Get()
		case <-time.After(...):
			reqs.Tick()
		}
	}
}()

for i := 0; i < 10; i++ {
	go func() {
		// consumer
		c := make(chan *Req, 1)
		for {
			outc <- c
			r := <-c
			Process(r)
		}
	}()
}

You also need to nil out outc in dispatcher select when reqs are
empty, so that you don't receive from outc when you have nothing to
send.

And another option in Go is just to protect an arbitrary data
structure with requests with a Mutex, and do arbitrary prioritization
and eviction under the mutex. Plus a ticker goroutine to timeout
requests.

Still do not see a need in XCHANs.

Øyvind Teig

Jan 14, 2014, 6:20:11 AM
to golan...@googlegroups.com, Øyvind Teig
Nice! 

Sending a channel over a channel and then picking it up to use is a nice idiom. Occam-2 did not have it and I missed it. Occam-pi does; they call it MOBILE channels, I believe. Go uses this to simulate the missing boolean expression in a guard: in order to close out other goroutines in a specific session, a session start also sends over a reply channel (or channels) over which to communicate while that session goes on.

I think this is what you are suggesting. Your code seems to be getting close to Peter Welch's model. But you have a timeout in there, which tells me there's something wrong with it. I am learning by reading code, since I fall short at writing any - so I'll have to be "clinical" about my analysis.

Provided you did code a "model" of xchan, that's fine. I could do that in any channel based language by using what I have. But in the rationale for XCHAN that's not the point. The XCHAN by itself, as a primary citizen of the language, joins asynch/nonblocking and synch/blocking and safe channels and breaks cycles to avoid deadlock in one chunk - and might save Go from the asynchronous hordes out there (..). As with any language feature, it's the basic need that would decide whether to include it. Still I don't know if xchan is a good idea for Go... but I think I would know how and when I would use it if it were in there.

Øyvind

Dmitry Vyukov

unread,
Jan 14, 2014, 6:35:38 AM1/14/14
to Øyvind Teig, golang-nuts
I've included timeouts to handle the following overload-control policy.
Assume you want to send a TIMEOUT response to requests that are not
serviced within 1 second.
If the dispatcher reacts only to input messages and requests from
workers, and there are no such messages/requests within 1 second, then
the dispatcher cannot send TIMEOUT responses (it does not get control
within that second). So in the Tick() method you can scan all
outstanding requests, find very old ones, remove them from the
container, and send TIMEOUT responses.


> Provided you did code a "model" of xchan, that's fine. I could do that in
> any channel-based language by using what I have. But in the rationale for
> XCHAN that's not the point. The XCHAN by itself, as a primary citizen of the
> language, joins asynch/nonblocking and synch/blocking and safe channels and
> breaks cycles to avoid deadlock in one chunk - and might save Go from the
> asynchronous hordes out there (..) As with any language feature, it's the basic
> need that would decide whether to include it. Still, I don't know if xchan
> is a good idea for Go... but I think I would know how and when I would use
> it if it were in there.

How and when would you use it?

"joins asynch/nonblocking and synch/blocking and safe channels and
breaks cycles to avoid deadlock in one chump" -- this sounds very
abstract to me. For now I do not see any real-world problems with no
clean ways to solve in Go w/o xchans.

Øyvind Teig

unread,
Jan 14, 2014, 12:08:18 PM1/14/14
to golan...@googlegroups.com, Øyvind Teig
You may be right.  But my paper does answer these questions in a non-abstract way.
 
In the "real-world" anything blocking is 'evil'. I infer that Go's beautiful blocking channels (even buffered channels block when they are full) are considered horrifying by a large group of programmers. They would shy off. This is the main rationale for xchan (the rest is the 'abstract' paragraph above). When I programmed alone in occam I didn't miss it. I have tried to understand the mostly asynchronous world in some of these: http://www.teigfam.net/oyvind/home/technology/.
 
Pn => S -> C ->

I should have said this before: my wish would of course be to see any example code that implements xchan semantics commented with "xchan idioms" from my paper and the figures there. There would be P(roducers) -> S(erver) -> C(onsumer) -> some hole. There would be local overflow handling in S, and C is allowed to block on some hole forever (in which case all is lost). If C always gets rid of its data, then Pn->S->C will not lose anything. The real world may be in between. There is no dispatcher, as the x-ready feedback channel is triggered by the run-time. C is allowed to be read in a selective choice with other channels, and changing the input channel in C from reading on a chan to an xchan will not change a line in C. S is always ready to input data from any Pn (or else, as specified). Timeout is a special case, not treated by xchan itself (and it should not be). The xchan between S and C may have zero buffering. And further: expanding xchan could open for feathering (but that, I admit, is harder with Go since there are no conditions in guards).
 
Thank you!
 
Øyvind 

Dmitry Vyukov

unread,
Jan 15, 2014, 5:36:22 AM1/15/14
to Øyvind Teig, golang-nuts
Blocking in Go is not bad. It is good, because it simplifies programs without sacrificing performance.
The "blocking is bad" attitude usually stems either from the "old single-thread unix world" or from the "thread-per-request" world. Those arguments do not hold for Go.

Øyvind Teig

unread,
Jan 16, 2014, 1:32:53 AM1/16/14
to golan...@googlegroups.com, Øyvind Teig

Dmitry Vyukov

unread,
Jan 16, 2014, 1:57:48 AM1/16/14
to Øyvind Teig, golang-nuts
OK

Evan Huus

unread,
Feb 28, 2014, 7:10:20 PM2/28/14
to golan...@googlegroups.com
A quick follow-up regarding the performance and memory usage of Go's slices (with append) versus other approaches for storing a queue: I just landed a patch [1] which shows a 30-40% improvement in memory usage (and, in particular, fewer garbage-collection pauses) in my benchmark.

Still just using a single slice, but doing some extra work to use it as a ring buffer instead of a straight buffer.

[1] https://github.com/eapache/channels/pull/4
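The idea behind the patch can be sketched as follows (this is a hypothetical, minimal version, not the code from the linked pull request): instead of append-and-reslice, which keeps popped elements reachable and forces reallocation, reuse one slice as a ring buffer with a head index and a count, growing only when full.

```go
package main

import "fmt"

// ring is a minimal FIFO queue backed by a single reusable slice.
type ring struct {
	buf   []int
	head  int // index of the oldest element
	count int
}

func (r *ring) push(v int) {
	if r.count == len(r.buf) { // full: copy out in order, double capacity
		newCap := 2 * len(r.buf)
		if newCap == 0 {
			newCap = 4
		}
		n := make([]int, newCap)
		for i := 0; i < r.count; i++ {
			n[i] = r.buf[(r.head+i)%len(r.buf)]
		}
		r.buf, r.head = n, 0
	}
	r.buf[(r.head+r.count)%len(r.buf)] = v
	r.count++
}

func (r *ring) pop() int {
	v := r.buf[r.head]
	r.head = (r.head + 1) % len(r.buf)
	r.count--
	return v
}

func main() {
	var q ring
	for i := 1; i <= 6; i++ {
		q.push(i)
	}
	fmt.Println(q.pop(), q.pop(), q.pop()) // → 1 2 3
}
```

Because pop only advances `head`, slots are reused on the next push instead of being left for the garbage collector to trace.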