Channels in C++


Aaron R Robinson

Oct 11, 2016, 6:19:30 PM
to ISO C++ Standard - Future Proposals
Hello,

Has there been a proposal for a channels class in C++?

I have written a basic implementation and proposal and am looking for feedback on its usefulness. It does have one glaring missing feature, which is the 'select' or 'alt' blocks.


Thanks,
arr

mihailn...@gmail.com

Oct 12, 2016, 3:16:46 AM
to ISO C++ Standard - Future Proposals
How does this compare to Boost Fiber channels and John Bandela's Channels (CppCon '16)?

Domen Vrankar

Oct 12, 2016, 6:18:10 AM
to std-pr...@isocpp.org
2016-10-12 0:19 GMT+02:00 Aaron R Robinson <linux...@gmail.com>:
I have written a basic implementation and proposal and am looking for feedback on its usefulness. It does have one glaring missing feature, which is the 'select' or 'alt' blocks.



I don't have a lot of mileage with multi-threaded applications, but there's something obvious that I'm missing in the proposal:
- Channel is bidirectional only. I would expect an additional producer channel that would have a make_consumer member function enforcing single producer multiple consumers pattern (not certain if multiple producers single consumer would also be useful).

Since a channel is a really basic building block, a nice-to-have feature would be:
- The channel uses a single communication type, so for a bidirectional channel a custom structure would have to be of a single type even if the request and response differ, unless variant or any were used. It would be a nice-to-have feature to support this internally, but since this functionality can be added as a derived class it is not as important.

A future/promise pair can forward a result or an exception. Was that considered for channels as well?

Regards,
Domen


Aaron R Robinson

Oct 13, 2016, 1:53:40 AM
to ISO C++ Standard - Future Proposals
>> How does this compare to Boost Fiber channels and John Bandela's Channels (CppCon '16)?

Great question. I have looked briefly at both and there are at least two very clear distinctions.

For the Boost Fiber Channels (BFC), the proposed implementation is more in line with implementations of channels as they are known outside of C++. For instance, BFC's implementation of channels is _not_ interchangeable: one can use either an unbounded (i.e. non-blocking) channel or a bounded (i.e. blocking) channel, but they are distinct ideas and that decision must be made at design time. This really isn't appropriate for the general sense and intent of channels - I am sure there are cases where an explicit decision should be made, but not generally. The BFC API surface also exposes calls that imply the underlying implementation (i.e. push()/pop()), which isn't technically bad, but can potentially cause confusion for people who have used channels in the more formal/traditional implementations of Go, Limbo, and occam.

John Bandela's talk was quite convincing, but it intermingled the Promise/Future methodology with that of channels - and these are two very distinct approaches to concurrency. Promise/Future is more related to an async solution, whereas channels are more specific to data-sharing synchronization. I think channels can be leveraged to make async approaches more natural, but channels are a more primitive concept that doesn't imply async at all - merely synchronization.

>> - Channel is bidirectional only. I would expect an additional producer channel that would have a make_consumer member function enforcing single producer multiple consumers pattern (not certain if multiple producers single consumer would also be useful).

I don't quite understand this statement. I can say that channels do not and should not have any concept of directionality - they are always bidirectional. A thread given a channel can either read from it or write to it and that is by-design. I can definitely see the potential benefit from a wrapper class that enforces a single direction, but that would be an extension to a fully bidirectional channel.

>> - The channel uses a single communication type, so for a bidirectional channel a custom structure would have to be of a single type even if the request and response differ, unless variant or any were used. It would be a nice-to-have feature to support this internally, but since this functionality can be added as a derived class it is not as important.

The appropriate approach here would be to have one channel the producer writes to and another it reads from, since all the channel is doing is synchronizing the transmission of data/state. Basically, if the following were done:

// Shared state
channel<int> ch;

// Producer
ch << 10;

int r;
ch >> r;

// Consumer
int r;
ch >> r;

There is a high probability that the consumer would never get 10; instead the producer would extract the very value it inserted. This is not the intent of channels, as they are more in line with the concept of a stream that is merely consumed, as opposed to a TCP/IP-like connection where a confirmation occurs. That being said, your point about a derived or composed class that enables this kind of functionality may definitely find use.
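
For clarity, here is a minimal sketch of the two-channel arrangement described above. Chan stands in for the proposal's channel<int>; the function name and the doubling of the value are purely illustrative.

#include <thread>

// Sketch only: Chan is assumed to be a channel-like type with a blocking
// operator<< for sending and operator>> for receiving, as in the example above.
template <typename Chan>
void request_response(Chan& requests, Chan& responses) {
    // Producer: sends the request on one channel and reads the reply from the other,
    // so it can never extract the value it just inserted.
    std::thread producer([&] {
        requests << 10;
        int reply = 0;
        responses >> reply;
    });

    // Consumer: receives from the request channel and replies on the response channel.
    std::thread consumer([&] {
        int request = 0;
        requests >> request;
        responses << request * 2;
    });

    producer.join();
    consumer.join();
}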

>> A future/promise pair can forward a result or an exception. Was that considered for channels as well?
I mentioned this above with respect to John Bandela's talk. Promise/Future approaches are solving a different problem, whereas channels are merely a synchronization primitive.

arr

Oliver Kowalke

Oct 13, 2016, 4:39:26 PM
to std-pr...@isocpp.org
2016-10-13 7:53 GMT+02:00 Aaron R Robinson <linux...@gmail.com>:
>> How does this compare to Boost Fiber channels and John Bandela's Channels (CppCon '16)?
 
For the Boost Fiber Channels (BFC), the proposed implementation is more in line with implementations of channels as they are known outside of C++.

the channels of boost.fiber are not channels in the sense of Go channels - both types are in fact queues.
unbounded_queue: stores an unlimited number of items
bounded_queue: stores items only up to a maximum; any subsequent push operation will block until some items have been removed

Arthur O'Dwyer

Oct 13, 2016, 11:16:23 PM
to ISO C++ Standard - Future Proposals
also implements merely a bounded queue, AFAICT, and not what Go means by "channels".

I don't know whether Go-style channels are implementable in present-day C++; you might need to get more coroutine-awareness in the language first somehow (and in fact I would be pleasantly shocked if Gor's coroutines proposal was anywhere close to aware enough).

I would think that a "C++-ish" channel should follow the same outline as std::future/std::promise — it should be uni-directional as requested by Domen in this thread. Now, just like with future/promise, it could happen that a single thread "owns" both the producing (promise) end of the channel and the consuming (future) end of the channel... but it should also be possible to have the producing end and consuming end located in different threads, and this is in fact the expected case. Actually, I take back the first part of that sentence — if someone proposed a C++ channel whose two ends must be in different threads or else deadlock, I wouldn't see anything odd about that.

Now for the important thing that differentiates "channels" (in my vague understanding) from "queues".  The point of a "channel" is that it never (well, hardly ever) contains more than a single item — because, as soon as the producing thread hits the epilogue of its call to "chan << x", we suspend that thread and go over to the consuming thread, which is almost certainly blocked in the prologue of "chan >> x". We can then let the consumer run until it returns to "chan >> x", at which point (the channel now being empty) we suspend the consumer and switch back to the producer... and so on.
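
As a thought experiment, the blocking half of those semantics can be sketched today with nothing but std::mutex and std::condition_variable. The class below is a made-up name and assumes a single producer and a single consumer; it models the hand-off (send() returns only after receive() has taken the value), not the coroutine switching described above.

#include <condition_variable>
#include <mutex>
#include <optional>

// Hypothetical sketch, SPSC only: a capacity-1 "rendezvous" channel.
template <typename T>
class rendezvous_channel {
public:
    // Blocks until the receiver has taken the value (hand-off semantics).
    void send(T value) {
        std::unique_lock<std::mutex> lock(m_);
        // Wait for the slot to be free (immediate with a single producer unless a
        // previous hand-off is still in progress).
        slot_free_.wait(lock, [this] { return !slot_.has_value(); });
        slot_ = std::move(value);
        item_ready_.notify_one();
        // Block here until the receiver has emptied the slot again.
        slot_free_.wait(lock, [this] { return !slot_.has_value(); });
    }

    // Blocks until a value is available, takes it, and releases the sender.
    T receive() {
        std::unique_lock<std::mutex> lock(m_);
        item_ready_.wait(lock, [this] { return slot_.has_value(); });
        T value = std::move(*slot_);
        slot_.reset();
        slot_free_.notify_one();
        return value;
    }

private:
    std::mutex m_;
    std::condition_variable item_ready_;
    std::condition_variable slot_free_;
    std::optional<T> slot_;
};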

I have very little experience with this stuff, but I know that Intel TBB's "pipelines" and "flow graphs" are supposed to work this way — alternately pushing and pulling data — somehow. Maybe a channel primitive could be constructed out of a very pared-down TBB flow-graph?

my $.02,
–Arthur

Oliver Kowalke

Oct 14, 2016, 1:20:17 AM
to std-pr...@isocpp.org
2016-10-14 5:16 GMT+02:00 Arthur O'Dwyer <arthur....@gmail.com>:

I don't know whether Go-style channels are implementable in present-day C++; you might need to get more coroutine-awareness in the language first somehow (and in fact I would be pleasantly shocked if Gor's coroutines proposal was anywhere close to aware enough).

it is possible with boost.fiber without blocking the entire thread (waiting on a channel)

Jonathan Beard

Oct 17, 2016, 1:02:50 PM
to std-pr...@isocpp.org
Just now getting a chance to respond, so sorry for the late reply. This seems quite close to what we've been doing with RaftLib (raftlib.io, also CPPNow16). It constructs a compute graph of sequential communicating actors. Each actor is connected via streams/channels. Each channel is uni-directional and it is presumed that each actor executes on a separate thread or fiber so that blocking on the channels can occur. 

>If someone proposed a C++ channel whose two ends must be in different threads or else deadlock, I wouldn't see anything odd about that.

Not sure if this needs to be a requirement or not. If each actor shares state only through FIFO channels then it'd be fairly simple in the run-time implementation to set callbacks for FIFO calls that would block, so that deadlock wouldn't occur. Essentially this would be implemented like fibers (user-space threads; Sandia's Qthreads is a good place to look). It is likely a bad assumption to allow only a single item: given noise in most systems (OS/hardware), the rates of each actor are often unmatched, so allowance for some buffering is usually necessary for performance, though not needed for correctness.

Going to have to read over the rest of the proposal, will follow up in a bit. 

-Jonathan



Sean Middleditch

Oct 19, 2016, 12:27:26 PM
to ISO C++ Standard - Future Proposals
On Wednesday, October 12, 2016 at 10:53:40 PM UTC-7, Aaron R Robinson wrote:
>> - Channel is bidirectional only. I would expect an additional producer channel that would have a make_consumer member function enforcing single producer multiple consumers pattern (not certain if multiple producers single consumer would also be useful).

I don't quite understand this statement. I can say that channels do not and should not have any concept of directionality - they are always bidirectional. A thread given a channel can either read from it or write to it and that is by-design. I can definitely see the potential benefit from a wrapper class that enforces a single direction, but that would be an extension to a fully bidirectional channel.

But _why_ is it by design?

Bidirectionality is trivially implemented (efficiently) using two unidirectional channels, but a unidirectional channel built over a bidirectional channel is nothing more than a limitation imposed with no memory usage improvements. It seems to me that unidirectional channels are the simplest and least error-prone building block.

Code I work on at least is rather littered with unidirectional producer-consumer queues - very similar in concept to a unidirectional channel - and adding additional memory or complexity to allow bidirectional communication that I'd then have to write additional wrapper code to disable is a pure _loss_ in usability and efficiency.

It's worth noting perhaps that Rust and Haskell both use unidirectional channels by default and that Go has first-class support for unidirectional channels.

>>> - The channel uses a single communication type, so for a bidirectional channel a custom structure would have to be of a single type even if the request and response differ, unless variant or any were used. It would be a nice-to-have feature to support this internally, but since this functionality can be added as a derived class it is not as important.

> The appropriate approach here would be to have one channel the producer writes to and another it reads from, since all the channel is doing is synchronizing the transmission of data/state.

That's a problem with bidirectional channels only. Unidirectional channels would not allow a producer to read from its end of the channel, hence guaranteeing it can never pop its own value.

This again points out that unidirectional channels are safer building blocks. They're far harder to use incorrectly, especially if you leverage the type system appropriately in your interface, e.g. creating a unidirectional channel should provide two separate objects - a producer end and a consumer end - while a bidirectional channel is just two objects that each contain a producer and a consumer end (from two separate channels).

This brings up the comparison again to futures/promises - separate the producer from the consumer.
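
A quick sketch of what that split could look like, with hypothetical names throughout (channel_producer, channel_consumer, make_channel, shared_state); the shared state here is an unbounded mutex-guarded queue purely to keep the example short.

#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>
#include <utility>

template <typename T>
struct shared_state {           // hidden shared state, analogous to a promise/future's
    std::mutex m;
    std::condition_variable cv;
    std::queue<T> items;
};

template <typename T>
class channel_producer {        // write end: can only push
public:
    explicit channel_producer(std::shared_ptr<shared_state<T>> s) : state_(std::move(s)) {}
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(state_->m);
            state_->items.push(std::move(value));
        }
        state_->cv.notify_one();
    }
private:
    std::shared_ptr<shared_state<T>> state_;
};

template <typename T>
class channel_consumer {        // read end: can only pop
public:
    explicit channel_consumer(std::shared_ptr<shared_state<T>> s) : state_(std::move(s)) {}
    T pop() {
        std::unique_lock<std::mutex> lock(state_->m);
        state_->cv.wait(lock, [this] { return !state_->items.empty(); });
        T value = std::move(state_->items.front());
        state_->items.pop();
        return value;
    }
private:
    std::shared_ptr<shared_state<T>> state_;
};

// Factory: the only way to obtain the two ends, mirroring promise/future.
template <typename T>
std::pair<channel_producer<T>, channel_consumer<T>> make_channel() {
    auto state = std::make_shared<shared_state<T>>();
    return {channel_producer<T>(state), channel_consumer<T>(state)};
}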

Ronan Keryell

Oct 20, 2016, 3:12:54 PM
to std-pr...@isocpp.org
On 10/19/2016 05:27 PM, Sean Middleditch wrote:
> On Wednesday, October 12, 2016 at 10:53:40 PM UTC-7, Aaron R Robinson wrote:
>
> >> - Channel is bidirectional only. I would expect an additional
> producer channel that would have a make_consumer member function
> enforcing single producer multiple consumers pattern (not certain if
> multiple producers single consumer would also be useful).
>
> I don't quite understand this statement. I can say that channels do
> not and should not have any concept of directionality - they are
> always bidirectional. A thread given a channel can either read from
> it or write to it and that is by-design. I can definitely see the
> potential benefit from a wrapper class that enforces a single
> direction, but that would be an extension to a fully bidirectional
> channel.
>
>
> But _why_ is it by design?
>
> Bidirectionality is trivially implemented (efficiently) using two
> unidirectional channels, but a unidirectional channel built over a
> bidirectional channel is nothing more than a limitation imposed with no
> memory usage improvements. It seems to me that unidirectional channels
> are the simplest and least error-prone building block.

I also think it is better to start from unidirectional channels because
that is what the hardware can do best at the wire level.

In C++ we just pay for what we need. Why pay for a bidirectional
channel when only one way is required?
It is important for SG14.
In OpenCL C++ 2.2 the pipe<> objects are unidirectional too.
It really makes sense when using C++ outside of the plain old CPU +
memory world, using it to program FPGAs or make ASICs.
--
Ronan KERYELL, Xilinx Research Labs / Ireland.

Arthur O'Dwyer

Oct 21, 2016, 3:37:36 PM
to ISO C++ Standard - Future Proposals
On Wed, Oct 19, 2016 at 9:27 AM, Sean Middleditch <sean.mid...@gmail.com> wrote:
On Wednesday, October 12, 2016 at 10:53:40 PM UTC-7, Aaron R Robinson wrote:
>> - Channel is bidirectional only. I would expect an additional producer channel that would have a make_consumer member function enforcing single producer multiple consumers pattern (not certain if multiple producers single consumer would also be useful).

I don't quite understand this statement. I can say that channels do not and should not have any concept of directionality - they are always bidirectional. A thread given a channel can either read from it or write to it and that is by-design. I can definitely see the potential benefit from a wrapper class that enforces a single direction, but that would be an extension to a fully bidirectional channel.

But _why_ is it by design?

Please remember that the concept Aaron is describing is not a "channel" as implemented by Golang, etc.; it's a "concurrent bounded queue", as implemented by TBB, Lawrence Crowl's P0260, etc.  Let me explain what I'm thinking...

Any kind of queue is fundamentally "bidirectional", in that the queue object itself has a push method and a pop method; it's unthinkable to have a "queue" without push and pop methods.  So we don't usually use the word "bidirectional" for a queue; we just say that it's a queue, and everybody knows what we mean.  The special thing about a concurrent queue is that different threads can call push and/or pop "simultaneously" without introducing data races.  (The meaning of "and/or" is explored more below.)

Notice that if your data structure exposes "fullness" to the user (by throwing, returning false, or undefined behavior) then what you have is not exactly a "queue" anymore but more like a vector or ringbuffer.  But once you add concurrency, you have another option: instead of exposing fullness, push() on a full queue can simply block until the queue becomes non-full through the action of another thread.  If your data structure has a potentially blocking push(), we can call it "bounded"; if your push() cannot fail, we call your data structure "unbounded".

When we say "different threads can call push and/or pop without data races": if this is literally true, we have a multi-producer multi-consumer queue (MPMC).  But sometimes we can make the implementation more efficient if we assume there will never be multiple people calling push() simultaneously (but still might be multiple people calling pop()): that's single-producer multi-consumer (SPMC). Or alternatively MPSC, or the simplest and thus hopefully most efficient case, SPSC.

Any language (but especially C++) has basically two ways of dealing with concurrency. Either you design your interface so that different threads never need to manipulate the same object simultaneously (see: promise/future) or else you design your interface so that operations that might need to happen simultaneously on the same object are guaranteed thread-safe (see: atomic<int>).

Concurrent queues in C++ (Boost.Fiber, TBB, P0260) generally take the latter approach.  If you applied the former approach to a concurrent queue, you'd split its logical interface into a "push/producer end" and a "pop/consumer end". Then it's no longer required for the producer thread and the consumer thread to manipulate the same queue object; the producer thread simply manipulates the "producer end", and the consumer thread simply manipulates the "consumer end", and any synchronization between the two ends becomes a detail of the hidden implementation.  ...At least, this is true if you're designing a SPSC queue. If you want a MPSC queue, then you'd have to provide a way for each producer thread to get its own "producer end" object; and so on.

If you have a SPSC concurrent queue whose "ends" have been split like this, you could call it a "channel"; although I personally would reserve that term for concurrent bounded queues with capacity 1 whose ends are aware of each other's owning coroutines so that producing into a channel implicitly context-switches to the coroutine in charge of consumption. In the paragraphs that follow, I will call a "split" queue "unidirectional" and a TBB-style queue "bidirectional".

What Rust calls a "channel" is a unidirectional SPSC concurrent unbounded queue, in that reads and writes are guaranteed not to block — but the producer and consumer "ends" are logically separated.
Rust also supports "sync channels", a.k.a. unidirectional SPSC concurrent bounded queues of any specified capacity.

What Golang calls a "channel" is a unidirectional SPSC concurrent bounded queue with capacity 1, in that reads and writes are guaranteed to block — and again the producer and consumer "ends" are logically separated.  (I intuitively believe that these channels implement the coroutine switching I described above, but I admit I don't know for sure.)
Golang also supports "buffered channels", a.k.a. unidirectional SPSC concurrent bounded queues of any specified capacity.

Boost.Fiber supports "unbounded_channel", a.k.a. bidirectional MPMC(?) concurrent unbounded queues.
Boost.Fiber also supports "bounded_channel", a.k.a. bidirectional MPMC(?) concurrent bounded queues of any specified capacity; with the additional feature of a "low water mark" such that once the queue has become full, every subsequent push() will block until the number of items in the queue has dropped below the low water mark. That is, push-blocking-ness can be specified to be a bit "sticky" in this implementation.

Haskell's Chan and TChan types, which they colloquially call "broadcast channels", seem almost like pubsubs, in that you can "duplicate" a Chan to subscribe to all future updates even if those future updates are also being consumed by someone else. This seems to me like just as egregious a misuse of the term "channel" as Boost.Fiber's. (But I don't know Haskell conventions so I might be misunderstanding what they're doing.)


Sean wrote:
Bidirectionality is trivially implemented (efficiently) using two unidirectional channels, but a unidirectional channel built over a bidirectional channel is nothing more than a limitation imposed with no memory usage improvements. It seems to me that unidirectional channels are the simplest and least error-prone building block.

Notice that you cannot build a bidirectional MPMC concurrent queue out of unidirectional SPSC concurrent queues.
 
[...] This again points out that unidirectional channels are safer building blocks. They're far harder to use incorrectly, especially if you leverage the type system appropriately in your interface, e.g. creating a unidirectional channel should provide two separate objects - a producer end and a consumer end - while a bidirectional channel is just two objects that each contain a producer and a consumer end (from two separate channels).

I agree that unidirectional "ends", promise/future-style are more modern-C++ish than thread-safe concurrent_queue-style building blocks. However, there might be room for both, ideally within some nice rational framework.

–Arthur

Oliver Kowalke

Oct 22, 2016, 6:08:17 AM
to std-pr...@isocpp.org
2016-10-21 21:37 GMT+02:00 Arthur O'Dwyer <arthur....@gmail.com>:
What Golang calls a "channel" is a unidirectional SPSC concurrent bounded queue with capacity 1, in that reads and writes are guaranteed to block — and again the producer and consumer "ends" are logically separated.  (I intuitively believe that these channels implement the coroutine switching I described above, but I admit I don't know for sure.)
Golang also supports "buffered channels", a.k.a. unidirectional SPSC concurrent bounded queues of any specified capacity.

I'm wondering because my understanding was that Go channels are per default unbuffered with capacity 0. A push operation is blocked (goroutine gets suspended) till another goroutine consumes the pushed data; a pop operation will block (suspending the goroutine) if no producer is waiting or immediately return otherwise.
Buffered Go channels are created by specifying a capacity (at least 1) at creation.
Go channels are per default bidirectional (push and pop operations can be invoked on the channel by the same goroutine) but can be created as unidirectional: 'c chan<- int' == only push operations on channel c allowed, 'c <-chan int' == only pop operations allowed.

What am I missing?
 
a misuse of the term "channel" as Boost.Fiber's.

in which sense?

Arthur O'Dwyer

Oct 22, 2016, 2:16:06 PM
to ISO C++ Standard - Future Proposals
On Sat, Oct 22, 2016 at 3:07 AM, Oliver Kowalke <oliver....@gmail.com> wrote:
2016-10-21 21:37 GMT+02:00 Arthur O'Dwyer <arthur....@gmail.com>:
What Golang calls a "channel" is a unidirectional SPSC concurrent bounded queue with capacity 1, in that reads and writes are guaranteed to block — and again the producer and consumer "ends" are logically separated.  (I intuitively believe that these channels implement the coroutine switching I described above, but I admit I don't know for sure.)
Golang also supports "buffered channels", a.k.a. unidirectional SPSC concurrent bounded queues of any specified capacity.

I'm wondering because my understanding was that Go channels are per default unbuffered with capacity 0.

You're correct that Golang's own documentation describes its channels as "capacity 0"; but in the terminology I was using, that would mean you could never put anything into the channel because it would always be full! Whether you think of it as "0 capacity, 1 element allowed in-flight as a special case" or "1 capacity, 0 elements allowed in-flight as a special case" is basically just an implementation detail of the theory AFAIC. :)
(But in terms of specifying the runtime semantics and in terms of implementation, it does probably make things simpler to treat that "in-flight" element specially. I don't know for sure.)

A push operation is blocked (goroutine gets suspended) till another goroutine consumes the pushed data; a pop operation will block (suspending the goroutine) if no producer is waiting or immediately return otherwise.
Buffered Go channels are created by specifying a capacity (at least 1) at creation.

Agreed.
 
Go channels are per default bidirectional (push and pop operations can be invoked on the channel by the same goroutine) but can be created as unidirectional: 'c chan<- int' == only push operations on channel c allowed, 'c <-chan int' == only pop operations allowed.

I wasn't aware of this; thanks!
Notice that invoking push or pop on an "unbuffered" channel confined to a single thread will always deadlock; but Go does allow you to write code like "ch <- 1; fmt.Println(<-ch)" within a single thread as long as "ch" is buffered.

I now think I was also wrong about their SPSC-ness. Not only can you have multiple "handles" to a single conceptual channel, but you can actually access a single handle simultaneously via pointers from different threads — which I think is not true of Rust and certainly shouldn't be true of C++. (The official docs indicate that you can reliably access a single "chan" object from multiple goroutines simultaneously, even though the same is not true of "map".)
So, revised:

What Golang calls a "channel" is a bidirectional MPMC concurrent bounded queue with capacity 1, in that reads and writes are guaranteed to block — and again the producer and consumer "ends" are logically separated.  (I intuitively believe that these channels implement the coroutine switching I described above, but I admit I don't know for sure.)
Golang also supports "buffered channels", a.k.a. bidirectional MPMC concurrent bounded queues of any specified capacity.


What am I missing?
 
a misuse of the term "channel" as Boost.Fiber's.

in which sense?

I don't believe that the term "channel" should be used to refer to a "concurrent queue" object.  The point of a channel, in my mind, is that it is SPSC and preferably capacity-1 (or capacity-0 in Go's terms); this is what lets you make it fast.  If you add features until you've got a bidirectional MPMC concurrent bounded queue — or in Boost.Fiber's case a bidirectional MPMC concurrent unbounded queue — then not only do you lose a lot of performance, but you're just duplicating work that's already been done by queue implementations such as TBB.

At the very least, if you're designing a data structure whose public API is exactly the public API of std::queue, please call it some variation on "queue" or "concurrent_queue" so that people can find and understand it!  If you call it "channel", I for one am going to assume that it's got semantics that are somehow different from a queue's.

–Arthur

Oliver Kowalke

Oct 22, 2016, 4:13:41 PM
to std-pr...@isocpp.org
2016-10-22 20:16 GMT+02:00 Arthur O'Dwyer <arthur....@gmail.com>:
I don't believe that the term "channel" should be used to refer to a "concurrent queue" object.  The point of a channel, in my mind, is that it is SPSC and preferably capacity-1 (or capacity-0 in Go's terms); this is what lets you make it fast.  If you add features until you've got a bidirectional MPMC concurrent bounded queue — or in Boost.Fiber's case a bidirectional MPMC concurrent unbounded queue — then not only do you lose a lot of performance,

why must a channel be SPSC?
the implementation of a capacity-0 channel need not be much different from that of a capacity-N channel (see 'Go channels on steroids' by Dmitry Vyukov)
 
but you're just duplicating work that's already been done by queue implementations such as TBB.

TBB works very differently from boost.fiber (boost.fiber does stack switching, TBB doesn't) - the TBB queues are completely unaware of the features a fiber provides,
so I think your statement is not valid
 
At the very least, if you're designing a data structure whose public API is exactly the public API of std::queue, please call it some variation on "queue" or "concurrent_queue" so that people can find and understand it!  If you call it "channel", I for one am going to assume that it's got semantics that are somehow different from a queue's.

I assume you refer to push()/try_push() ... I believe that this API is not exclusive to queues.

Oliver Kowalke

Oct 22, 2016, 4:53:33 PM
to std-pr...@isocpp.org
2016-10-22 20:16 GMT+02:00 Arthur O'Dwyer <arthur....@gmail.com>:
You're correct that Golang's own documentation describes its channels as "capacity 0"; but in the terminology I was using, that would mean you could never put anything into the channel because it would always be full! Whether you think of it as "0 capacity, 1 element allowed in-flight as a special case" or "1 capacity, 0 elements allowed in-flight as a special case" is basically just an implementation detail of the theory AFAIC. :)

what about the following:
a producer using a channel cares whether the data has been consumed, e.g. the producer is blocked till a consumer has dequeued the producer's item.
a producer using a queue doesn't care whether the item was consumed or not; the producer will only be blocked if the queue is bounded and full.
maybe this might work to distinguish queues and channels

Sean Middleditch

Oct 22, 2016, 6:18:29 PM
to ISO C++ Standard - Future Proposals
On Friday, October 21, 2016 at 12:37:36 PM UTC-7, Arthur O'Dwyer wrote:
Please remember that the concept Aaron is describing is not a "channel" as implemented by Golang, etc.; it's a "concurrent bounded queue", as implemented by TBB, Lawrence Crowl's P0260, etc.  Let me explain what I'm thinking...

Please also remember that we're in C++ and have a type system. The original problems described are solved via types.

This is exactly why future is split into a future (consumer) and a promise (producer). It's less error-prone and no less expressive.

There is no reason why a channel/queue/whatever should be different. The original problems discussed are then solved. A function that produces values cannot accidentally consume because it has only a `channel_producer<T>` object that only offers `operator<<`.

If a function needs both ends, it can take both as parameters, or use a wrapper object, such as a `bichannel<T, U>` that offers `operator<<(T const&)` and `operator>>(U&)`.

The factory for a channel returns both a producer and a consumer. The factory for a bichannel returns both a `bichannel<T, U>` and a `bichannel<U, T>`.

Safe expression of simple concepts that leverages the type system to avoid errors.
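
Under the same assumptions, a bichannel can be nothing more than a pair of ends taken from two separate unidirectional channels. The sketch below reuses the hypothetical channel_producer/channel_consumer/make_channel names from earlier in the thread; none of them exist in any library.

#include <utility>

// A bichannel<T, U> sends T and receives U; its peer is a bichannel<U, T>.
template <typename T, typename U>
class bichannel {
public:
    bichannel(channel_producer<T> out, channel_consumer<U> in)
        : out_(std::move(out)), in_(std::move(in)) {}

    bichannel& operator<<(T value) { out_.push(std::move(value)); return *this; }
    bichannel& operator>>(U& value) { value = in_.pop(); return *this; }

private:
    channel_producer<T> out_;   // the direction we send on
    channel_consumer<U> in_;    // the direction we receive on
};

// Factory: wires up two unidirectional channels and gives each side one end of each.
template <typename T, typename U>
std::pair<bichannel<T, U>, bichannel<U, T>> make_bichannel() {
    auto [t_send, t_recv] = make_channel<T>();   // T flows one way
    auto [u_send, u_recv] = make_channel<U>();   // U flows the other way
    return {bichannel<T, U>(std::move(t_send), std::move(u_recv)),
            bichannel<U, T>(std::move(u_send), std::move(t_recv))};
}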

Oliver Kowalke

Oct 23, 2016, 3:18:37 AM
to std-pr...@isocpp.org
2016-10-23 0:18 GMT+02:00 Sean Middleditch <sean.mid...@gmail.com>:
If a function needs both ends, it can take both as parameters, or use a wrapper object, such as a `bichannel<T, U>` that offers `operator<<(T const&)` and `operator>>(U&)`.

so operator<<() == push() and operator>>() == pop(), which means no try_push() and try_pop().
are operator<<() and operator>>() blocking in general (for instance, the producer waits till the item has been consumed) or do they block only on the slow path?

Sean Middleditch

Oct 24, 2016, 5:15:27 PM
to std-pr...@isocpp.org
Just a note, that is a question more to other parts of the thread. I'd personally prefer push/try_push; I was just using the stream interface proposed earlier to illustrate an unrelated point. I personally think the overloaded operators here are bonkers.
 



Oliver Kowalke

Oct 25, 2016, 2:37:14 AM
to std-pr...@isocpp.org
2016-10-24 23:15 GMT+02:00 Sean Middleditch <se...@middleditch.us>:
On Sun, Oct 23, 2016 at 12:18 AM, Oliver Kowalke <oliver....@gmail.com> wrote:
2016-10-23 0:18 GMT+02:00 Sean Middleditch <sean.mid...@gmail.com>:
If a function needs both ends, it can take both as parameters, or use a wrapper object, such as a `bichannel<T, U>` that offers `operator<<(T const&)` and `operator>>(U&)`.

so operator<<() == push() and operator>>() == pop(), which means no try_push() and try_pop().
are operator<<() and operator>>() blocking in general (for instance, the producer waits till the item has been consumed) or do they block only on the slow path?

Just a note, that is a question more to other parts of the thread. I'd personally prefer push/try_push; I was just using the stream interface proposed earlier to illustrate an unrelated point. I personally think the overloaded operators here are bonkers.

Despite this, the question is how queues are distinguished from channels, or whether they are synonyms for the same object.
A channel is used for synchronization via message passing, but a (concurrent) queue serves this too.
What about the following:
- a concurrent_queue provides asynchronous message passing
- a (C++) channel provides synchronous message passing, e.g. acts as a rendezvous point
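
To make that distinction concrete, here is a small usage sketch. AsyncQueue and SyncChannel are hypothetical types assumed to have push(T), and send(T)/receive() respectively, with the blocking behaviour described in the two bullets above.

#include <chrono>
#include <thread>

template <typename AsyncQueue, typename SyncChannel>
void illustrate(AsyncQueue& queue, SyncChannel& channel) {
    // Asynchronous message passing: the producer is done as soon as the item is stored
    // (it would block only if the queue were bounded and full).
    queue.push(1);

    // Synchronous message passing: producer and consumer meet at the channel.
    std::thread consumer([&] {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        int v = channel.receive();   // the send() below cannot complete before this runs
        (void)v;
    });
    channel.send(2);                 // blocks until the consumer has received the value
    consumer.join();
}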

Aaron R Robinson

Oct 26, 2016, 12:37:12 PM
to ISO C++ Standard - Future Proposals
There have been a lot of comments about Promises/Futures in this proposal. I can't stress enough that Promises/Futures are solving a different problem and are not related to channels, although they can be used to create more complex solutions. Channels guarantee that they will block when data is requested but the channel has no data - this is the intent. Returning a Promise breaks that fundamental model. However, there is nothing saying you can't have a channel of Promises/Futures - that is definitely possible - but it goes against the intent of CSP.

Go does _not_ have first-class support for unidirectional channels - that is a mechanism built on top of the type system, not channels. All channels in Go are bidirectional; it is the type system that allows the channel to be limited to one direction or another. Following that logic, the base channel implementation is bidirectional and subclasses can be created to limit the direction. I am working on restructuring some of the types to make this possible - I have pushed another branch with an example of a read or write channel.
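
For illustration only (this is a guess at the shape, not the code in the branch mentioned above): such direction-limited views could be thin wrappers over a fully bidirectional channel, with the names here being hypothetical.

// Assumes Channel is a bidirectional channel<T>-like type with operator<< / operator>>.
template <typename Channel, typename T>
class write_channel {               // view that can only send
public:
    explicit write_channel(Channel& ch) : ch_(ch) {}
    write_channel& operator<<(const T& value) { ch_ << value; return *this; }
private:
    Channel& ch_;
};

template <typename Channel, typename T>
class read_channel {                // view that can only receive
public:
    explicit read_channel(Channel& ch) : ch_(ch) {}
    read_channel& operator>>(T& value) { ch_ >> value; return *this; }
private:
    Channel& ch_;
};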

Based on previous proposals and reading a lot of the minutes from the committee, implementation isn't what gets accepted/rejected, but API/functionality. The example implementation does use a queue, but that isn't really relevant to the proposal; the concept being proposed is a channel type which has very specific semantics to a lot of people outside C++.

This proposal is very much intended to be a way to allow current non-C++ developers to feel comfortable with moving to C++, since it would now have a channel primitive in the library. It is/was _not_ my intent to redefine the basic channel concept for C++; that concept is preserved in the current proposal.

All that being said, based on this conversation I am seeing a few things emerging:

1) The C++ community views the channel concept a bit differently 
2) Channel behavior - as defined in CSP - may not be a good fit for C++ since there are other solutions
3) There may be interest in a channel-esque solution, but it may need to be less abstract and more explicit
4) My core understanding of how the broader community views channels is wrong - I must accept this as a possibility; however, I have been doing work with channels for several years now and implemented them several times in various higher level languages. Therefore, I don't think that is the case, but I could have been doing this wrong for years :)

arr

Aaron R Robinson

Oct 26, 2016, 2:43:30 PM
to ISO C++ Standard - Future Proposals
On Monday, October 24, 2016 at 11:37:14 PM UTC-7, Oliver Kowalke wrote:
Despite this, the question is how queues are distinguished from channels, or whether they are synonyms for the same object.
A channel is used for synchronization via message passing, but a (concurrent) queue serves this too.
What about the following:
- a concurrent_queue provides asynchronous message passing
- a (C++) channel provides synchronous message passing, e.g. acts as a rendezvous point

This is how channels have always worked in my mind. I usually refer to them as a 'synchronization point' but 'rendezvous point' is also a great description. This is why they are so fundamentally different from Promises/Futures. When a thread reads from a channel it only returns if there is data or a 'message'. Sending through a channel can be the same, but internally it can __optionally__ buffer; by default - see the reference implementation - it is blocking.

arr

Domen Vrankar

Oct 26, 2016, 2:52:24 PM
to std-pr...@isocpp.org
2016-10-26 18:37 GMT+02:00 Aaron R Robinson <linux...@gmail.com>:
There have been a lot of comments about Promises/Futures in this proposal. I can't stress enough that Promises/Futures are solving a different problem and are not related to channels, although they can be used to create more complex solutions. Channels guarantee that they will block when data is requested but the channel has no data - this is the intent. Returning a Promise breaks that fundamental model. However, there is nothing saying you can't have a channel of Promises/Futures - that is definitely possible - but it goes against the intent of CSP.

I'm still trying to understand the rationale behind the channel concept that you are proposing (in the meantime I've also had some time to look at John Bandela's talk from CppCon).

From my perspective, the future/promise combo (let's ignore the implementation in C++ and just consider the concept) is that you request a task to be done (you don't care where) and expect the result. If the task executor is a different thread, then it's possible that by the time I request the result, the task will already be finished and even block-until-task-is-finished (wait) on the future doesn't block. If it's run on the same thread, it has time to execute in its entirety if I call wait, or in part and then return control back to me (the owner of the future) until I call wait-for-a-while (wait_for, wait_until) on the future again. I know that this isn't how C++ future/promise are implemented, but when I was forced to write a single-threaded library (long story) I used the future/promise API as the interface, with the hope that some day I'd be able to persuade the no-threads believer into switching to C++ future/promise and using multiple threads without changes to the rest of the code (except a recompile).

The point being that moving data from point A to point B and task delegation can be viewed as one and the same (you give data somewhere -> you're feeding the task; you expect data from somewhere -> you are waiting for the task of retrieving data to complete), and I see no benefit in enforcing that the one waiting for the data must block (they may block, or check from time to time and do some other stuff in the meantime - it depends on the situation and the task that we're trying to solve).

So when you are talking about a blocking bidirectional channel, in my head it sounds like "a nice interface for a callback function or coroutines", but since your code uses the >> and << operators and not the start of a function, I'm guessing that a callback is out of the question. And since the code must somehow reach both operators, I'm guessing either multiple threads (where I don't understand why you'd force blocking - blocking by choice is OK, but why force it?) or coroutines (where I don't exactly understand what benefit the channel brings - but it's true that I've never used coroutines).

And at this point I get to the problem that I still don't understand what the difference between your proposed channel and future/promise is (they both can - and in the case of the channel must - block while waiting for data/task completion). What would be fundamentally different from future/promise would be a guarantee to block the sender until the receiver reads the data (in future/promise you block the receiver until the sender sends the data), but with my current understanding that isn't the case - I might be mistaken and you are talking about exactly that.

As far as my mental model goes, I see channels as future/promise combinations where you can receive data from the future as often as you'd like and put data into the promise as often as you like (where it blocks if the data buffer is full), and nothing more. For me the API of future/promise is good enough (minus the missing "then" and other extensions), but since I've heard quite a lot about performance issues there might be something wrong with that API that I don't know about (though I'm guessing it's more related to implementation restrictions than the API itself - but I might be wrong).

So the bottom line is that I still can't imagine how the proposed channel would bring me some benefits and make my life easier when thinking about and implementing solutions to certain problem types.

Regards,
Domen

Aaron R Robinson

Oct 31, 2016, 9:35:13 PM
to ISO C++ Standard - Future Proposals
Thanks for the feedback Domen.


On Wednesday, October 26, 2016 at 11:52:24 AM UTC-7, Domen Vrankar wrote:
So the bottom line is that I still can't imagine how the proposed channel would bring me some benefits and make my life easier when thinking about and implementing solutions to certain problem types.

I think the above statement is really important and probably where a lot of the confusion is coming from. The slew of use cases you provide all have one thing in common: the threading model is always implied. Futures/Promises/coroutines all have an implied concurrency concept that in most cases is non-trivial to break out of. The channel type is much more akin to a mutex than it is to a Future or Promise, as it is merely a synchronization primitive that allows multiple threads/coroutines/processes/etc. to coordinate in a unified manner. In fact, in the CSP model the term processes is used, not threads. In Limbo you could communicate through a channel with other processes or spawned threads. They were all treated equally, and this allowed a large amount of flexibility since it was irrelevant what was on the other end of the channel - just that when you read from a channel you either waited or were given data/a message.

I think having a channel of Futures is just fine. I don't think it is the best approach, but it could work if you spun up a bunch of lightweight coroutines - John Bandela's talk - and waited for one of them. I don't see a fundamental reason why that is bad, other than the complexity of determining "I have too many futures now and I need another consumer to handle them". But honestly that isn't really the point. The channel can be used for the currently popular Promise/Future approach, or adapted to other concurrency models like those of Go or Limbo.
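
A tiny usage sketch of that "channel of futures" idea, assuming the proposal's channel type with blocking operator<< / operator>> (whether operator<< accepts rvalues such as the result of std::async is also an assumption):

#include <future>
#include <thread>

// Sketch only: Chan is assumed to behave like channel<std::future<int>>.
template <typename Chan>
void channel_of_futures(Chan& ch) {
    // Producer: launches async work and hands the resulting futures through the channel.
    std::thread producer([&] {
        for (int i = 0; i < 3; ++i) {
            ch << std::async(std::launch::async, [i] { return i * i; });
        }
    });

    // Consumer: receives each future and waits on it only when the result is needed.
    std::thread consumer([&] {
        for (int i = 0; i < 3; ++i) {
            std::future<int> f;
            ch >> f;
            int value = f.get();    // blocks only until this particular task finishes
            (void)value;
        }
    });

    producer.join();
    consumer.join();
}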

arr

Domen Vrankar

Nov 1, 2016, 9:43:44 AM
to std-pr...@isocpp.org
2016-11-01 2:35 GMT+01:00 Aaron R Robinson <linux...@gmail.com>:
Thanks for the feedback Domen.

On Wednesday, October 26, 2016 at 11:52:24 AM UTC-7, Domen Vrankar wrote:
So the bottom line is that I still can't imagine how the proposed channel would bring me some benefits and make my life easier when thinking about and implementing solutions to certain problem types.

I think the above statement is really important and probably where a lot of the confusion is coming from.

Yes, that's true. It might be just me, but I would appreciate it if there were some more motivating examples and a comparison of why this is better than the alternative (future/promise) API.
 
The slew of use cases you provide all have one thing in common: the threading model is always implied. Futures/Promises/coroutines all have an implied concurrency concept that in most cases is non-trivial to break out of. The channel type is much more akin to a mutex than it is to a Future or Promise, as it is merely a synchronization primitive that allows multiple threads/coroutines/processes/etc. to coordinate in a unified manner. In fact, in the CSP model the term processes is used, not threads. In Limbo you could communicate through a channel with other processes or spawned threads. They were all treated equally, and this allowed a large amount of flexibility since it was irrelevant what was on the other end of the channel - just that when you read from a channel you either waited or were given data/a message.

I don't agree with that - your channel API proposal is just a subset of the future/promise API (the wait functionality). I don't have a problem with the proposed idea, but rather with the proposed API.

What I'm talking about is keeping the future/promise API but with different internal implementations:

1. (current future/promise implementation) You have two threads - you give a promise to the second thread; on the first thread you either block by calling wait, or keep doing other things and call wait_for from time to time. The promise can only be set once and the future can only be read once.

The rest of the implementations are not in the standard. (I'm also giving examples with callbacks as I have experience with those, but I'm assuming that they could be replaced by coroutines without a problem.)

2. You have a single thread. You create a future with a callback function. When you need the value you call the wait function, or you keep doing other things and call wait_for from time to time, allowing the callback to work for a little while until it is done (time sharing). In this case the future must be able to execute a callback into the promise holder, and coincidentally that callback is also the one that calculates the value that should be put into the promise.

3. You create a future that spawns a process, and that process returns a value on finish. Once again you can either wait for the process or do something else - it is once again the future owner's decision.

4. You extend the future/promise pair with a shared state that holds e.g. a std::queue of variants (holding either a return value or an exception) and give the promise to:
a) a different thread,
b) a callback function (the function produces results in a loop),
c) a different process (the process returns multiple results in a loop via a stdout pipe).
They only block when the shared state's queue limit of results is reached (in this case you would need to extend the promise API to additionally have try_set_value and try_set_exception, which don't block and instead allow the promise holder to keep the value in its own storage and try again later if the shared state queue is full). A sketch of this shape follows after the list.
And on the other side the future doesn't care where the values came from (a different thread, a callback, a different process) - that info is needed only during future/promise pair construction and in the internal implementation. The future API also has to be extended with a function that tells it that the promise was destroyed, as exceptions might only be forwarded for a certain error event that doesn't necessarily indicate that we can't get the next value from that promise without an exception (of course a different error reporting mechanism that doesn't involve exceptions could be used - e.g. has_error_value/get_error_value functions).

5. Now that you have a queue of multiple values/exceptions, you can create multiple futures and/or promises that are distributed to different places and can process the data concurrently, without knowing that others can also get data from or put data into the queue (it's not a problem to add an additional producer or consumer - one on a different thread, one in a different process and one on a different PC altogether - as none of the existing futures/promises need to know about that new connection).
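
Here is a rough sketch of what item 4 above might look like; every name in it (multi_shared_state, multi_promise, multi_future, try_set_value) is invented for illustration, and the fixed capacity is arbitrary.

#include <condition_variable>
#include <cstddef>
#include <exception>
#include <memory>
#include <mutex>
#include <queue>
#include <utility>
#include <variant>

template <typename T>
struct multi_shared_state {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::variant<T, std::exception_ptr>> items;  // values or forwarded exceptions
    std::size_t capacity = 16;                               // arbitrary bound for the sketch
};

template <typename T>
class multi_promise {
public:
    explicit multi_promise(std::shared_ptr<multi_shared_state<T>> s) : state_(std::move(s)) {}

    // Blocking producer side: waits while the shared queue is full.
    void set_value(T value) {
        std::unique_lock<std::mutex> lock(state_->m);
        state_->cv.wait(lock, [this] { return state_->items.size() < state_->capacity; });
        state_->items.emplace(std::in_place_index<0>, std::move(value));
        state_->cv.notify_all();
    }

    // Non-blocking variant: returns false when the queue is full so the caller can
    // keep the value in its own storage and try again later (takes a copy for that reason).
    bool try_set_value(const T& value) {
        std::lock_guard<std::mutex> lock(state_->m);
        if (state_->items.size() >= state_->capacity) return false;
        state_->items.emplace(std::in_place_index<0>, value);
        state_->cv.notify_all();
        return true;
    }

    // Exceptions travel through the same queue (capacity check skipped for brevity).
    void set_exception(std::exception_ptr e) {
        std::lock_guard<std::mutex> lock(state_->m);
        state_->items.emplace(std::in_place_index<1>, std::move(e));
        state_->cv.notify_all();
    }

private:
    std::shared_ptr<multi_shared_state<T>> state_;
};

template <typename T>
class multi_future {
public:
    explicit multi_future(std::shared_ptr<multi_shared_state<T>> s) : state_(std::move(s)) {}

    // get() can be called repeatedly; it blocks until a value or exception is available
    // and rethrows forwarded exceptions, like std::future::get().
    T get() {
        std::unique_lock<std::mutex> lock(state_->m);
        state_->cv.wait(lock, [this] { return !state_->items.empty(); });
        auto item = std::move(state_->items.front());
        state_->items.pop();
        state_->cv.notify_all();   // wake a producer waiting for space
        if (item.index() == 1) std::rethrow_exception(std::get<1>(item));
        return std::get<0>(std::move(item));
    }

private:
    std::shared_ptr<multi_shared_state<T>> state_;
};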

For me a channel is a communication tunnel to a different location or locations which I, as the producer or consumer, don't want to have to care about (a different function can establish that connection, and it is its responsibility to know where the connection goes), but I want to be able to send data through it multiple times (unlike the current future/promise implementation in the standard), read it in multiple locations, and definitely not be told whether to block or not. Your proposed API contains >> and <<, which for me are just operators that can be used instead of future/promise wait/get and set_value, and are therefore only a convenience for a narrow use case of channel communication.

As I already said, I like the proposal of channels but see them as a change to the future/promise internal implementation and not as something completely different.

Now I'd really be interested in how your API and mental model differ from a slightly extended future/promise API (with most of the changes in the internal implementation) and which cases would be simpler/more optimal with your proposed API.

Thanks,
Domen
