It could be great to have a runtime-growable buffer sized, for example, to the load on the program, but I think the downside is that performance would be penalized by all the memory allocations and frees that would need to be done.
If channel buffers are indeed allocated lazily (so I can set a buffer size of 1000 and only use that when the offered load is that high), then I can just use a large buffer size as my "arbitrary" value. We don't want an infinite buffer, because we need to be able to apply backpressure on the senders when the receiver can't keep up.
Make a circular buffer of channels.
Expose the channels only for reading, never for writing.
Expand/contract the buffer by adding/removing channels. Remember
to close a channel on removal.
Send the information through the circular buffer; that way you can be
sure that nobody will write to a closed channel.
--
André Moraes
http://andredevchannel.blogspot.com/
--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
On Mon, Jan 13, 2014 at 12:47 PM, Martin Schnabel <m...@mb0.org> wrote:
> On 01/09/2014 06:59 PM, eap...@gmail.com wrote:
>> Based on this idea and sample code I ended up writing an entire package
>> that implements a bunch of related ideas in this area:
>> https://github.com/eapache/channels
>> https://godoc.org/github.com/eapache/channels
>> It includes channels with "infinite" buffers, channels with
>> finite-but-resizable buffers, and a bunch of other useful types and
>> functions.
>
> just took a look at the package. in all channel implementations that use a buffer you slice the buffer[1:] when sending but never readjust the buffer. this means the buffer will grow indefinitely with every append/receive. this is very much broken.

It won't grow indefinitely. When it needs to reallocate, it will get a new buffer with only the elements in the slice (plus the additional capacity).
On Monday, January 13, 2014 12:58:28 PM UTC-8, Kyle Lemons wrote:

> It won't grow indefinitely. When it needs to reallocate, it will get a new buffer with only the elements in the slice (plus the additional capacity).

That's not the same; that's truncating the slice:

    a = a[:1]

But if you naively use the slice as a FIFO:

    el, a = a[0], a[1:]

...then you have the memory problems, since you are continuously shifting your usage along a slice: http://play.golang.org/p/mJJZO8iiEA The runtime is forced to continuously reallocate and GC old slices.

mb0 is correct.
On Monday, January 13, 2014 4:32:27 PM UTC-5, aro...@gmail.com wrote:

> This is all true, but the only alternative is to implement a linked-list-type structure, whose many small allocations are actually more expensive for the GC to deal with.

Not really. They were concerned that the old slices would never be GCed and so memory would leak, which is not the case.
"The old slice (with all the "stale" elements) can then be garbage-collected."AFAIK slices don't have any elements, their underlying array does
and append beyond capacity creates an EXTENDED copy
I may be missing something, but it seems to me that the use case is
already perfectly covered by non-blocking sends in Go:
for msg := range inchan {
    select {
    case outchan <- msg:
    default:
        // handle overflow (decide what value(s) to discard)
    }
}
func Append(slice []int, elements ...int) []int {
    n := len(slice)
    total := len(slice) + len(elements)
    if total > cap(slice) {
        // Reallocate. Grow to 1.5 times the new size, so we can still grow.
        newSize := total*3/2 + 1
        newSlice := make([]int, total, newSize)
        copy(newSlice, slice)
        slice = newSlice
    }
    slice = slice[:total]
    copy(slice[n:], elements)
    return slice
}
On Tue, Jan 14, 2014 at 2:07 AM, Øyvind Teig <oyvin...@teigfam.net> wrote:
> On Monday, January 13, 2014 at 15:24:36 UTC+1, Dmitry Vyukov wrote:
>
>>
>> I may be missing something, but it seems to me that the use case is
>> already perfectly covered by non-blocking sends in Go:
>>
>> for msg := range inchan {
>> select {
>> case outchan <- msg:
>> default:
>> // handle overflow (decide what value(s) to discard)
>> }
>> }
>
>
> Full and overflow are not the same. The fact that a message is not taken by
> the receiver is not the same as an overflow.
What is the difference? You just need to set buffer size so that full==overflow.
Øyvind
Xchan looks like a neat concept, but I don't have the time/inclination to implement it myself at the moment.
Øyvind
They would shy off. This is the main rationale for xchan (the rest is the 'abstract' paragraph above). When I programmed alone in occam I didn't miss it. I have tried to understand the mostly asynchronous world in some of these: http://www.teigfam.net/oyvind/home/technology/

I should have said this before: my wish would of course be to see any example code that does xchan semantics commented with "xchan idioms" from my paper and the figures there. There would be P(roducers) -> S(erver) -> C(onsumer) -> some hole (Pn => S -> C). There would be local overflow handling in S, and C is allowed to block on some hole forever (in which case all is lost). If C always gets rid of its data then Pn -> S -> C will not lose anything. The real world may be in between. There is no dispatcher, as the x-ready feedback channel is triggered by the run-time. C is allowed to be read in a selective choice with other channels, and changing the input channel in C from reading on a chan to an xchan will not change a line in C. S is always ready to input data from any Pn (or else as specified). Timeout is a special case, not treated by xchan itself (and it should not be). The xchan between S and C may have zero buffering. And further: expanding xchan could open for feathering (but that, I admit, is harder with Go since there are no conditions in guards).

Thank you!
Øyvind