[boost] Boost.Fiber review January 6-15


Nat Goodspeed

Jan 6, 2014, 8:07:04 AM
to bo...@lists.boost.org, boost...@lists.boost.org, boost-a...@lists.boost.org
Hi all,

The review of Boost.Fiber by Oliver Kowalke begins today, Monday January
6th, and closes Wednesday January 15th.

-----------------------------------------------------

About the library:

Boost.Fiber provides a framework for micro-/userland-threads (fibers)
scheduled cooperatively. The API contains classes and functions to manage
and synchronize fibers similar to Boost.Thread. Each fiber has its own
stack.

A fiber can save the current execution state, including all registers and
CPU flags, the instruction pointer, and the stack pointer and later restore
this state. The idea is to have multiple execution paths running on a
single thread using a sort of cooperative scheduling (versus threads, which
are preemptively scheduled). The running fiber decides explicitly when it
should yield to allow another fiber to run (context switching). Boost.Fiber
internally uses coroutines from Boost.Coroutine; the classes in this
library manage, schedule and, when needed, synchronize those coroutines. A
context switch between threads usually costs thousands of CPU cycles on
x86, whereas a fiber switch costs only a few hundred cycles. A fiber can
only run on a single thread at any point in time.

docs: http://olk.github.io/libs/fiber/doc/html/
git: https://github.com/olk/boost-fiber
src: http://ok73.ok.funpic.de/boost.fiber.zip

The documentation has been moved to another site; see the link above.
If you have already downloaded the source, please refresh it; Oliver has
added some new material.

---------------------------------------------------

Please always state in your review whether you think the library should be
accepted as a Boost library!

Additionally please consider giving feedback on the following general
topics:

- What is your evaluation of the design?
- What is your evaluation of the implementation?
- What is your evaluation of the documentation?
- What is your evaluation of the potential usefulness of the library?
- Did you try to use the library? With what compiler? Did you have any
problems?
- How much effort did you put into your evaluation? A glance? A quick
reading? In-depth study?
- Are you knowledgeable about the problem domain?

Nat Goodspeed
Boost.Fiber Review Manager

David Sankel

Jan 6, 2014, 11:31:32 PM
to bo...@lists.boost.org
On Mon, Jan 6, 2014 at 6:07 AM, Nat Goodspeed <n...@lindenlab.com> wrote:

> Boost.Fiber provides a framework for micro-/userland-threads (fibers)
> scheduled cooperatively.


A few questions and a few comments...

What are the most typical use cases for a fiber library?

Were there any alternatives to the following behavior? If there were, what
were the benefit/drawback tradeoffs that led to this decision?

{
    boost::fibers::fiber f( some_fn);
} // std::terminate() will be called

What happens operationally to a detached fiber? Will it ever continue
execution or is it for all practical purposes destroyed?

Did you consider making algorithm-specific fiber members, such as
'thread_affinity' and 'priority', controllable via template arguments for
threads? If I wanted to create a new scheduler algorithm that required
per-fiber information, how would I implement that with this library?

Did you consider giving some more explanation or code for the
publish-subscribe application? It was a bit difficult to follow that
example without knowing what reg_ and cond_ were.

I love the functionality provided by 'fiber_group'.

I like the convenience of the heap allocated default scheduler as an
alternative to a defaulted template parameter (like std::vector's
allocator).

Best Regards,

David Sankel

Oliver Kowalke

Jan 7, 2014, 2:19:15 AM
to boost
2014/1/7 David Sankel <cam...@gmail.com>

> What are the most typical use cases for a fiber library?
>

I would say that most use cases are task-related applications (as with
boost.thread).
The interface and classes of boost.fiber are similar to boost.thread (this
was intended, so you can use
patterns well known from multi-threaded programming).
The difference between the two libraries is that a waiting thread (for
instance on a condition-variable)
is blocked, while a fiber (waiting on a condition_variable) will be
suspended while the thread running
the fiber is not (e.g. other code can be executed in the meantime).

For instance, in the context of network applications which have to serve
many clients at the same time
(known as the C10K problem - see
http://olk.github.io/libs/fiber/doc/html/fiber/asio.html), fibers prevent
overloading the operating system with too many threads while the code stays
easy to read/understand
(no scattering of the code with callbacks etc. - see the
publisher-subscriber example in directory examples/asio).

boost.fiber uses coroutines (from boost.coroutine) internally - but
boost.coroutine does not provide classes
to synchronize coroutines. On the developer list such synchronization
primitives were requested several times; they are now available with
boost.fiber.

> Were there any alternatives to the following behavior? If there were, what
> were the benefit/drawback tradeoffs that led to this decision?
>

In the context of async I/O (boost.asio) you could use callbacks (asio's
previous strategy), but then you
scatter your code with many callbacks, which makes the code hard to read
and to debug.

You could use fibers in a thread pool too - with the specialized
fiber-scheduler (already provided by boost.fiber)
you can implement work-stealing/work-sharing easily.


> {
> boost::fibers::fiber f( some_fn);
> } // std::terminate() will be called
>
> What happens operationally to a detached fiber? Will it ever continue
> execution or is it for all practical purposes destroyed?
>

same as for std::thread - the fiber instance need not be joined; the
detached fiber continues executing inside the fiber-scheduler.

> Did you consider making algorithm specific fiber members, such as
> 'thread_affinity' and 'priority', controllable via. template arguments for
> threads?


sorry, I don't understand your question. thread_affinity() and priority()
are already
member functions of class fiber - both are controlled at runtime.
I don't know which template you are referring to, or what the purpose of
making both
attributes a template argument would be.


> If I wanted to create a new scheduler algorithm that required
> per-fiber information, how would I implement that with this library?
>

derive from the interface class algorithm and install your scheduler at the
start of the thread function:

class my_scheduler : public boost::fibers::algorithm {
    ...
};

void thread_fn() {
    my_scheduler ms;
    boost::fibers::set_scheduling_algorithm( & ms);
    ...
}


> Did you consider giving some more explanation or code for the
> publish-subscribe application? It was a bit difficult to follow that
> example without knowing what reg_ and cond_ were.
>

You mean more comments? Yes, I will add some!
The code is similar to what you would write for threads - the difference is
that the example
runs in one thread (the main thread).


> I like the convenience of the heap allocated default scheduler as an
> alternative to a defaulted template parameter (like std::vector's
> allocator).
>

the lib allocates (via the new-operator) a default scheduler - if you don't
want this you
can simply call set_scheduling_algorithm():

void thread_fn() {
    boost::fibers::round_robin rr;  // allocated on the thread's stack
    boost::fibers::set_scheduling_algorithm( & rr); // prevents allocating
                                                    // round_robin on the heap
    ...
}
Niall Douglas

Jan 7, 2014, 6:37:32 AM
to bo...@lists.boost.org
On 6 Jan 2014 at 8:07, Nat Goodspeed wrote:

> Please always state in your review whether you think the library should be
> accepted as a Boost library!

Currently I am undecided. My biggest issue is in the (lack of)
documentation. Without very significant improvements in the docs I'd
have to recommend no right now.

> Additionally please consider giving feedback on the following general
> topics:
>
> - What is your evaluation of the design?

The design overall is fine. I agree with the decision to replicate
std::thread closely, including all the support classes.

My only qualm with the design really is I wish there were combined
fibre/thread primitives i.e. say a future implementation which copes
with both fibers and threads. I appreciate it probably wouldn't be
particularly performant, but it sure would ease partially converting
existing threaded code over to fibers, which is probably a very large
majority use case.

I accept this can be labelled as a future feature, and shouldn't
impede entering Boost now.

> - What is your evaluation of the implementation?

The quality of the implementation is generally excellent. My only
issue is that the Boost.Thread mirror classes do not fully mirror
those in Boost.Thread. For example, where is
future::get_exception_ptr()? I think mirroring Boost.Thread rather
than std::thread is wise for better porting and future proofing.

I'd recommend finishing matching Boost.Thread before entering Boost.

Another suggestion is that the spinlock implementation add memory
transactions support. You can find a suitable spinlock implementation
in AFIO at
https://github.com/BoostGSoC/boost.afio/blob/master/boost/afio/detail/MemoryTransactions.hpp

> - What is your evaluation of the documentation?

My biggest concerns with admitting this library now are with the
documentation:

1. There is no formal reference section. What bits of reference
section there are do not contain reference documentation for all the
useful classes (e.g. the ASIO support).

2. I see no real world benchmarks. How are people supposed to know if
adopting fibers is worthwhile without some real world benchmarks? I
particularly want to know about time and space scalability as the
number of execution contexts rises to 10,000 and beyond.

3. I see no listing of compatible architectures, compilers, etc. I
want to see a list of test targets, ones regularly verified as
definitely working. I also want to see a list of minimum necessary
compiler versions.

4. I deeply dislike the documentation simply stating "Synchronization
between a fiber running on one thread and a fiber running on a
different thread is an advanced topic.". No it is not advanced, it's
*exactly* what I need to do. So tell me a great deal more about it!

5. Can I transport fibers across schedulers/threads? swap() suggests
I might be able to, but isn't clear. If not, why not?

6. What is the thread safety of fiber threading primitives? e.g. if a
fibre waits on a fibre future on thread A, can thread B signal that
fibre future safely? If not, what will it take to make this work,
because again this is something I will badly need in AFIO.

7. I want to see worked tutorial examples in the documentation e.g.
step me through implementing a fibre pool, and then show me how to
implement a M:N threading solution. That sort of thing, because this
is another majority use case.

8. There are quite a few useful code examples in the distribution in
the examples directory not mentioned in the formal docs (which is
bad, because people don't realise they are there), but they don't
explain to me why and how they work. I find the ASIO examples
particularly confusing, and I don't understand without delving into
the implementation code why they work which is a documentation
failure. This is bad, and it needs fixing.

9. It would help a lot to understand missing features if there were a
Design Rationale page explaining how and why the library looks the
way it does.

> - What is your evaluation of the potential usefulness of the library?

It's very useful, and I intend to add fiber support to proposed
Boost.AFIO.

> - Did you try to use the library? With what compiler? Did you have any
> problems?

Not yet. AFIO needs to be modified first (targeted thread_source
support).

> - How much effort did you put into your evaluation? A glance? A quick
> reading? In-depth study?

A few hours as I decided exactly how AFIO will add Fiber support. I
reviewed
both the source distribution and the docs.

> - Are you knowledgeable about the problem domain?

Yes. I wrote one of these myself in assembler in the 1990s.

Niall

--
Currently unemployed and looking for work.
Work Portfolio: http://careers.stackoverflow.com/nialldouglas/



Antony Polukhin

Jan 7, 2014, 6:40:05 AM
to boost@lists.boost.org List
2014/1/6 Nat Goodspeed <n...@lindenlab.com>

> The review of Boost.Fiber by Oliver Kowalke begins today, Monday January
> 6th, and closes Wednesday January 15th.
>

I've got some questions about the library:

* Are the mutexes of Boost.Fiber thread-safe? If not, what purpose do they
serve (as I understand it, two fibers cannot access the same resource
simultaneously anyway)?
* Can fibers migrate between threads?
* Can Boost.Asio use multithreading and fibers (I wish to use fibers to
have a clear/readable code and multithreading to use all the CPUs)? Is
there an example of such program?

--
Best regards,
Antony Polukhin

Oliver Kowalke

Jan 7, 2014, 7:21:54 AM
to boost
2014/1/7 Niall Douglas <s_sour...@nedprod.com>

> My only qualm with the design really is I wish there were combined
> fibre/thread primitives i.e. say a future implementation which copes
> with both fibers and threads.


my intention was that another library combines fibers and threads
(some kind of thread-pool with worker-threads using fibers).


> I appreciate it probably wouldn't be
> particularly performant, but it sure would ease partially converting
> existing threaded code over to fibers, which is probably a very large
> majority use case.
>

The problem is that a lock on a mutex from boost.thread would block the
entire
thread, while a mutex from boost.fiber keeps the thread running, e.g.
other fibers are executed/resumed while the current fiber is suspended until
the lock on the mutex is released.

You could say that boost.fiber prevents the thread (running the fibers)
from being blocked while calling wait functions on some sync primitives.

> I accept this can be labelled as a future feature, and shouldn't
> impede entering Boost now.
>

OK


> My only
> issue is that the Boost.Thread mirror classes do not fully mirror
> those in Boost.Thread.
>

Because I decided that the interface of std::thread should be the
blueprint.
Boost.thread has (at least for my taste) too many extensions, conditional
compilation, etc.


> Another suggestion is that the spinlock implementation add memory
> transactions support. You can find a suitable spinlock implementation
> in AFIO at
> https://github.com/BoostGSoC/boost.afio/blob/master/boost/afio/detail/MemoryTransactions.hpp
>

OK - I'll take a look at it.


> 1. There is no formal reference section. What bits of reference
> section there is does not contain reference documentation for all the
> useful classes (e.g. the ASIO support).
>

OK - I followed the style of other boost libraries (like boost.thread) -
it seems there is no 'standard' regarding this topic.


> 2. I see no real world benchmarks. How are people supposed to know if
> adopting fibers is worthwhile without some real world benchmarks? I
> particularly want to know about time and space scalability as the
> number of execution contexts rises to 10,000 and beyond.
>

boost.fiber uses boost.coroutine (which itself uses boost.context), and
those libraries provide
some performance tests for context switching.
Of course I could provide a benchmark for a simple task running a
certain number of fibers - but what would be a good example of such a task?


> 3. I see no listing of compatible architectures, compilers, etc. I
> want to see a list of test targets, ones regularly verified as
> definitely working. I also want to see a list of minimum necessary
> compiler versions.
>

boost.fiber itself contains simple C++03 code - the only restriction on
architectures comes from
boost.context (because it uses assembler).


> 4. I deeply dislike the documentation simply stating "Synchronization
> between a fiber running on one thread and a fiber running on a
> different thread is an advanced topic.". No it is not advanced, it's
> *exactly* what I need to do. So tell me a great deal more about it!
>

Because fibers do not use the underlying frameworks (like pthreads) used by
boost.thread.
Usually you run dependent fibers in the same thread concurrently. If your
code requires that
a fiber in thread A waits on a fiber running in thread B, that is supported
by boost.fiber using
atomics (the sync primitives use atomics internally).


> 5. Can I transport fibers across schedulers/threads? swap() suggests
> I might be able to, but isn't clear. If not, why not?
>

not swap() - you have to move the fiber from the fiber-scheduler scheduling
fibers in thread A
to the fiber-scheduler running in thread B.
For this purpose boost.fiber provides the class round_robin_ws, which has
member functions steal_from() and migrate_to() to
move a fiber between threads.
Of course you are free to implement and use your own fiber-scheduler.


> 6. What is the thread safety of fiber threading primitives? e.g. if a
> fibre waits on a fibre future on thread A, can thread B signal that
> fibre future safely?
>

it's supported


> 7. I want to see worked tutorial examples in the documentation e.g.
> step me through implementing a fibre pool, and then show me how to
> implement a M:N threading solution. That sort of thing, because this
> is another majority use case.
>

this is an advanced topic and should not be part of this lib. I'm currently
working
on a library implementing a thread-pool which internally uses fibers.
It maps to an M:N solution.
boost.fiber itself should provide only the low-level functionality (just as
boost.context does
for boost.coroutine, and boost.coroutine does for boost.fiber).
The lib contains a unit test (test_migration) which shows how a fiber can
be migrated
between two threads (this code shows how work-stealing/work-sharing can be
implemented in a thread-pool).


> 8. There are quite a few useful code examples in the distribution in
> the examples directory not mentioned in the formal docs (which is
> bad, because people don't realise they are there), but they don't
> explain to me why and how they work. I find the ASIO examples
> particularly confusing, and I don't understand without delving into
> the implementation code why they work which is a documentation
> failure. This is bad, and it needs fixing.
>

I've added comments to the publisher-subscriber example.
Do the comments in the code explain the ideas better?


> 9. It would help a lot to understand missing features if there were a
> Design Rationale page explaining how and why the library looks the
> way it does.
>

OK

Oliver Kowalke

Jan 7, 2014, 7:27:08 AM
to boost
2014/1/7 Antony Polukhin <anto...@gmail.com>

> * Are mutexes of Boost.Fiber thread safe?
>

you can use the mutexes from boost.fiber in a multi-threaded environment
(but you have to use fibers in your code).


> * Can fibers migrate between threads?
>

yes - unit-test test_migration shows how it is done


> * Can Boost.Asio use multithreading and fibers (I wish to use fibers to
> have a clear/readable code and multithreading to use all the CPUs)? Is
> there an example of such program?
>

fibers run concurrently in one thread - if you want to use more CPUs you
have multiple choices:

1.) you assign one io_service to each CPU (fiber migration could be done
via a specific fiber-scheduler)
2.) you have one io_service and each CPU executes io_service::run() - you
have to execute the fibers in one strand
per CPU (not tested yet)

Niall Douglas

Jan 7, 2014, 11:50:48 AM
to bo...@lists.boost.org
On 7 Jan 2014 at 13:21, Oliver Kowalke wrote:

> > My only qualm with the design really is I wish there were combined
> > fibre/thread primitives i.e. say a future implementation which copes
> > with both fibers and threads.
>
> my intention was that another library combines fibers and threads
> (some kind of thread-pool with worker-threads using fibers).

Oh great! If you had a Design Rationale page which specifically says
such a feature is out of scope for Boost.Fiber because it's a more
complex additional layer which happens to be provided in another
library X (preferably with link to it), then I would be very pleased.

> > My only
> > issue is that the Boost.Thread mirror classes do not fully mirror
> > those in Boost.Thread.
> >
>
> Because I decided that the interface of std::thread should be the
> blue-print.
> Boost.thread has (at least for my taste) too many extensions/conditional
> compilations etc.

Well ... I agree that thread cancellation support is probably a step
too far, but I think there is also a reasonable happy medium between
C++11 and Boost.Thread.

Put it another way: what out of Boost.Thread's additions would be
extremely likely to appear in C++17? future::get_exception_ptr()
is a very good example: it saves a lot of overhead when you're
transferring exception state from one future to another.

> > 1. There is no formal reference section. What bits of reference
> > section there is does not contain reference documentation for all the
> > useful classes (e.g. the ASIO support).
>
> OK - I followed the style of other boost libraries (like boost.thread) -
> it seams there is no 'standard' regarding to this topic.

I do think you need reference docs for the ASIO support classes. I
don't understand what they do, and I think I ought to.

> > 2. I see no real world benchmarks. How are people supposed to know if
> > adopting fibers is worthwhile without some real world benchmarks? I
> > particularly want to know about time and space scalability as the
> > number of execution contexts rises to 10,000 and beyond.
>
> boost.fiber uses boost.coroutine (which itself uses boost.context) and
> provides
> some performance tests for context switching.

Sure. But it's a "why should I use this library?" sort of thing. If I
see real-world benchmarks for a library, it tells me the author has
probably done some performance tuning. That's a big tick for me when
considering whether to use a library.

It also suggests to me whether refactoring my code is worth it. If
Boost.Fiber provides only a 10x linear scaling improvement, that's
very different from a log(N) scaling improvement. A graph making it
very obvious what the win is on both CPU time and memory footprint
makes decision making regarding Fiber support much easier.

Put it another way: if I am asking my management for time to
prototype adopting Fibers in the company's core software, a scaling
graph makes me getting that time a cinch. Without that graph, I have
to either make my own graph in my spare time, or hope that management
understands the difference between fibers and threads (unlikely).

> Of course I could provide a benchmark for a simple task running a
> certain amount of fibers - but what would be a good example for such a task?

You don't need much: a throughput test of null operations (i.e. a
pure test of context switching) for total threads 1...100,000 on some
reasonably specified 64 bit CPU e.g. an Intel Core 2 Quad or better.
I generally would display in CPU cycles to eliminate clock speed
differences.

Extra bonus points for the same thing on some ARMv7 CPU.

> > 3. I see no listing of compatible architectures, compilers, etc. I
> > want to see a list of test targets, ones regularly verified as
> > definitely working. I also want to see a list of minimum necessary
> > compiler versions.
>
> boost.fiber itself contains simple C++03 code - the only restriction to
> architectures is defined by
> boost.context (because it's using assembler).

Eh, well then I guess you need a link to the correct page in
boost.context where it lists the architectures it works on. Certainly
a big question for anyone considering Fiber is surely "will it work
on my CPU"?

> > 4. I deeply dislike the documentation simply stating "Synchronization
> > between a fiber running on one thread and a fiber running on a
> > different thread is an advanced topic.". No it is not advanced, it's
> > *exactly* what I need to do. So tell me a great deal more about it!
>
> because fibers do not use the underlying frameworks (like pthread) used by
> boost.thread.
> usually you run dependend fibers in the same thread concurrently. if your
> code requires that
> a fiber in thread A waits on a fiber running in thread B it is supported by
> boost.fiber using
> atomics (sync. primitves use atomics internally).

Sure, but I definitely won't be using Fibers that way. And I saw
Antony had the same observation too. I think you need some more docs
on this: just tell us what is possible, what works and how it works
and we'll figure out the rest. I definitely need the ability to
signal a fibre future belonging to thread A from some arbitrary
thread B. I'll also need to boost::asio::io_service::post() to an
ASIO io_service running fibers in thread A from some arbitrary thread
B.

> > 5. Can I transport fibers across schedulers/threads? swap() suggests
> > I might be able to, but isn't clear. If not, why not?
>
> not swap() - you have to move the fiber from fiber-scheduler scheduling
> fibers in thread A
> to fiber-scheduler ruinning in thread B.
> for this purpose boost.fiber proivides the class round_robin_ws which has
> member-functions steal_from() and migrate_to() to
> move a fiber between threads.

That's great news, I'll be needing that too.

> > 6. What is the thread safety of fiber threading primitives? e.g. if a
> > fibre waits on a fibre future on thread A, can thread B signal that
> > fibre future safely?
>
> it's supported

The docs need to explicitly say so then, and indeed thread safety for
*every* API in the library.

Complexity guarantees and exception safety statements for each API
would also be nice. I know they're a real pain to do, but it hugely
helps getting the library into a C++ standard later.

> > 7. I want to see worked tutorial examples in the documentation e.g.
> > step me through implementing a fibre pool, and then show me how to
> > implement a M:N threading solution. That sort of thing, because this
> > is another majority use case.
>
> this an advanced topic and should not be part of this lib. I'm working
> currently
> on a library implementing a thread-pool which uses internally fibers.
> It maps to a M:N solution.
> boost.fiber itself should provide only the low-level functionality (same as
> boost.context does
> for boost.coroutine and boost.coroutine acts for boost.fiber).
> the lib contains a unit-tests (test_migration) which shows how a fiber can
> be migrated
> between two threads (this code shows how work-stealing/work-sharing can be
> implemented in a thread-pool).

If you have a section in a design rationale page saying this, I would
be happy.

> > 8. There are quite a few useful code examples in the distribution in
> > the examples directory not mentioned in the formal docs (which is
> > bad, because people don't realise they are there), but they don't
> > explain to me why and how they work. I find the ASIO examples
> > particularly confusing, and I don't understand without delving into
> > the implementation code why they work which is a documentation
> > failure. This is bad, and it needs fixing.
>
> I've added comments to the publisher-subscriber example.
> Does the comments in the code explain the ideas better?

I'll get back to you on your comments later.

I think, as a minimum, all the examples need to appear verbatim in
the docs (quickbook makes this exceptionally easy, in fact it's
almost trivial). This is because Boost docs tend to appear on Google
searches far quicker than code in some git repo.

Antony Polukhin

Jan 7, 2014, 1:19:47 PM
to boost@lists.boost.org List
2014/1/7 Oliver Kowalke <oliver....@gmail.com>

> 2014/1/7 Antony Polukhin <anto...@gmail.com>
>
> > * Are mutexes of Boost.Fiber thread safe?
> >
>
> you can use the mutexes from bosot.fiber in a multi-threaded env
> (but you have to use fibers in your code).
>

Thanks, already trying them.
What will happen on fiber::mutex.lock() call if all the fibers on current
thread are suspended? Will the mutex call `this_thread::yield()`?


> > * Can fibers migrate between threads?
> >
>
> yes - unit-test test_migration shows how it is done
>

Another implementation question:
Why are there so many atomics in fiber_base? It looks like a fiber is
usually
used in a single thread, and in situations when a fiber is moved from one
thread to another a memory barrier would be sufficient.

--
Best regards,
Antony Polukhin

Oliver Kowalke

Jan 7, 2014, 1:35:27 PM
to boost
2014/1/7 Niall Douglas <s_sour...@nedprod.com>

> Oh great! If you had a Design Rationale page which specifically says
> such a feature is out of scope for Boost.Fiber because it's a more
> complex additional layer which happens to be provided in another
> library X (preferably with link to it), then I would be very pleased.
>

I'll add a rationale section to the documentation - the library is
pre-alpha (e.g.
I'm experimenting with some ideas - but if you are interested I can discuss
it
with you in a private email).

> Well ... I agree that thread cancellation support is probably a step
> too far, but I think there is also a reasonable happy medium between
> C++11 and Boost.Thread.
>

I don't say that I'm against this - I've added fiber interruption (I
followed the suggestion
from Herb Sutter that a thread/fiber should be cooperatively cancelable ==
interruptible).
I try to keep the interface as small as possible - of course we can discuss
which functions
the interface should contain.


> Put it another way: what out of Boost.Thread's additions would look
> extremely likely to appear to C++17? The future::get_exception_ptr()
> is a very good example, it saves a lot of overhead when you're
> transferring exception state from one future to another.
>

This is one of the items which could be discussed. For instance, I have
concerns about adding
future::get_exception_ptr() because I believe it is not really required; it
is only a convenience. The exception thrown by the fiber-function (the
function which
is executed by the fiber) is re-thrown by future<>::get().
If you don't want the exception re-thrown, you can simply add a
try-catch statement
at the top-most level of your fiber-function and assign the caught
exception to an exception_ptr
passed to the fiber-function.

void my_fiber_fn( boost::exception_ptr & ep) {
    try {
        ...
    } catch ( my_exception const& ex) {
        ep = boost::current_exception();
    }
}

boost::exception_ptr ep;
boost::fibers::fiber( boost::bind( my_fiber_fn, boost::ref( ep) ) ).join();
if ( ep) {
    boost::rethrow_exception( ep);
}

> I do think you need reference docs for the ASIO support classes.


I'll do it


> I don't understand what they do, and I think I ought to.
>

It is pretty simple - the fiber-scheduler contains a reference to asio's
io_service
and uses it as an event-queue/event-dispatcher, e.g. it posts fibers that
are ready to run
(newly created fibers, yielded fibers, or signaled fibers that were
suspended by a wait operation
on sync primitives).


> Sure. But it's a "why should I use this library?" sort of thing? If I
> see real world benchmarks for a library, it tells me the author has
> probably done some performance tuning. That's a big tick for me in
> considering to use a library.
>

well, I did no performance tuning - my main focus was to get the library
to work correctly
(optimizations will be done later).


> It also suggests to me if refactoring my code is worth it. If
> Boost.Fiber provides only a 10x linear scaling improvement, that's
> very different from a log(N) scaling improvement. A graph making it
> very obvious what the win is on both CPU time and memory footprint
> makes decision making regarding Fiber support much easier.
>

I don't know the factor of fibers scaling - I would consider boost.fiber a
way to simplify the code, e.g. it prevents scattering the code with
callbacks (for instance in the context of boost.asio). my starting point
was to solve problems like the C10K problem (Dan Kegel explains it in
more detail on his webpage - I'm referring to it in boost.fiber's
documentation - http://olk.github.io/libs/fiber/doc/html/fiber/asio.html).


> Put it another way: if I am asking my management for time to
> prototype adopting Fibers in the company's core software, a scaling
> graph makes me getting that time a cinch. Without that graph, I have
> to either make my own graph in my spare time, or hope that management
> understands the difference between fibers and threads (unlikely).
>

well, those are the problems we are all faced with :) I've not done
performance tests, sorry


> > Of course I could provide a benchmark for a simple task running a
> > certain amount of fibers - but what would be a good example for such a
> task?
>
> You don't need much: a throughput test of null operations (i.e. a
> pure test of context switching) for total threads 1...100,000 on some
> reasonably specified 64 bit CPU e.g. an Intel Core 2 Quad or better.
> I generally would display in CPU cycles to eliminate clock speed
> differences.
>

CPU cycles for context switching are provided for boost.context and
boost.coroutine - and adapting the code for boost.fiber isn't an issue - I
could add it.

> Extra bonus points for the same thing on some ARMv7 CPU.
>

counting CPU cycles isn't that easy on ARM (at least as far as I could
tell).


> Eh, well then I guess you need a link to the correct page in
> boost.context where it lists the architectures it works on. Certainly
> a big question for anyone considering Fiber is surely "will it work
> on my CPU"?
>

the docu of boost.context is missing this info - I'll add it (could you
add a bug-report for boost.context please).


> Sure, but I definitely won't be using Fibers that way. And I saw
> Antony had the same observation too. I think you need some more docs
> on this: just tell us what is possible, what works and how it works
> and we'll figure out the rest. I definitely need the ability to
> signal a fibre future belonging to thread A from some arbitrary
> thread B. I'll also need to boost::asio::io_service::post() to an
> ASIO io_service running fibers in thread A from some arbitrary thread
> B.
>

yes, as I explained, it is supported. Maybe because I've written the code
myself I don't know why the documentation is not enough for you.

what works is that you create a fibers::packaged_task<> and execute it in a
fiber on thread A, and wait on the fibers::future<> returned by
packaged_task<>::get_future() in thread B.
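For readers more familiar with the standard library, the same shape can be shown with std::packaged_task and std::thread standing in for the fiber variants. This is a sketch of the pattern only, not Boost.Fiber code.

```cpp
#include <cassert>
#include <future>
#include <thread>
#include <utility>

// Sketch of the cross-thread pattern described above: the task runs on
// "thread A" while "thread B" (the caller) waits on the future. With
// Boost.Fiber, fibers::packaged_task<> and fibers::future<> would take
// the places of the std:: types, and the task would run inside a fiber.
int demo() {
    std::packaged_task<int()> task([] { return 42; });
    std::future<int> result = task.get_future();

    std::thread worker(std::move(task));  // "thread A" executes the task
    int value = result.get();             // "thread B" blocks until the value is set
    worker.join();
    return value;
}
```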


> The docs need to explicitly say so then, and indeed thread safety for
> *every* API in the library.
>

OK


> Complexity guarantees and exception safety statements for each API
> would also be nice. I know they're a real pain to do, but it hugely
> helps getting the library into a C++ standard later.
>

OK


> > between two threads (this code shows how work-stealing/work-sharing can
> be
> > implemented in a thread-pool).
>
> If you have a section in a design rationale page saying this, I would
> be happy.
>

OK

> I think, as a minimum, all the examples need to appear verbatim in
> the docs
>

the complete code? I would prefer only code snippets - the complete code
can be read in the example directory. otherwise the documentation would be
bloated (at least I would skip pages of code).

Oliver Kowalke

unread,
Jan 7, 2014, 1:41:34 PM1/7/14
to boost
2014/1/7 Antony Polukhin <anto...@gmail.com>

> Thanks, already trying them.
> What will happen on fiber::mutex.lock() call if all the fibers on current
> thread are suspended? Will the mutex call `this_thread::yield()`?
>

yes.
btw, you could take a look at unit-test test_mutex_mt.cpp


> Another implementation question:
> Why there are so many atomics in fiber_base? Looks like fiber is usually
> used in a single thread, and in situations when fiber is moved from one
> thread to another memory barrier would be sufficient.
>

yes, synchronization is done via atomics.
In a single-threaded environment the lib wouldn't require atomics because
all fibers run in one thread.
A multithreaded environment requires a memory barrier - boost.fiber uses a
spinlock which yields the fiber if the lock is already taken.
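A minimal sketch of such a yielding spinlock follows. This is an illustration under assumptions, not Boost.Fiber's actual implementation: the library's spinlock yields the *fiber*, while here std::this_thread::yield() stands in for that fiber-level yield.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Sketch of a spinlock that yields instead of busy-waiting. In the fiber
// library the yield would hand control to another fiber on the same
// thread; std::this_thread::yield() is only a stand-in here.
class yielding_spinlock {
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
public:
    void lock() {
        while (flag_.test_and_set(std::memory_order_acquire))
            std::this_thread::yield();  // lock is held: let someone else run
    }
    void unlock() {
        flag_.clear(std::memory_order_release);
    }
};
```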

Paul A. Bristow

unread,
Jan 7, 2014, 1:49:53 PM1/7/14
to bo...@lists.boost.org


> -----Original Message-----
> From: Boost [mailto:boost-...@lists.boost.org] On Behalf Of Oliver Kowalke
> Sent: Tuesday, January 07, 2014 6:35 PM
> To: boost
> Subject: Re: [boost] Boost.Fiber review January 6-15
>
> I think, as a minimum, all the examples need to appear verbatim in
> > the docs
>
> the complete code? I would prefer only code snippets - the complete code be read in the example
> directory. otherwise the documentation would be bloated (at least I would skip pages of code).

It's easy to use Quickbook snippets for the key bits in the text

*and* then a link to the complete code in /example.

[@../../example/my_example.cpp]

You can often have more than one snippet from the source code example.

//[my_library_example_1
code snippet 1
...
//] [my_library_example_1] // This ends the 1st snippet.

//[my_library_example_2
more code snippet 2
...
//] [my_library_example_2] // This ends the 2nd snippet.

Providing sample output from the example is also sometimes useful.

I have done this by pasting (part?) output into a comment at the end of the example .cpp

/*
//[my_library_example_output

//`[* Output from running my_library_example.cpp is:]

my_library_example.vcxproj -> J:\Cpp\my_library_example\Debug\my_library_example.exe
Hello World!

//] [my_library_example_output] // End of output snippet.

*/

HTH

Paul

---
Paul A. Bristow,
Prizet Farmhouse, Kendal LA8 8AB UK
+44 1539 561830 07714330204
pbri...@hetp.u-net.com

Niall Douglas

unread,
Jan 7, 2014, 4:32:13 PM1/7/14
to bo...@lists.boost.org
On 7 Jan 2014 at 19:35, Oliver Kowalke wrote:

> I'll add a rationale section to the documentation - the library is
> pre-alpha (e.g. I'm experimenting with some ideas - but if you are
> interested I can discuss it with you in a private email).

I know the feeling. Bjorn (Reese) has been doing a sort of private
peer review of AFIO with me recently, and I keep telling him things
are weird in AFIO because it's all very alpha and I want to keep my
future design options open.

Thing is, Bjorn is generally right, and bits of AFIO suck and need
fixing. I've been trying my best to fix things, but it ain't easy
compiling and debugging Boost on an Intel Atom 220 (all I have
available right now).

> This is one item which could be discussed. For instance, I have concerns
> about adding future::get_exception_ptr() because I believe it is not really
> required; it is only for convenience.

Ehhh not really ... it lets you avoid a catch(), which is one of the
few places in C++ without a worst case complexity guarantee. Catching
types with RTTI (anything deriving from std::exception) in a large
application can be unpleasantly slow (and unpredictably slow), and if
you're bouncing exception state across say five separate futures,
that's five separate unnecessary try...catch() invocations.
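The cost described here is easy to see in standard-library terms: without a get_exception_ptr() accessor, moving exception state from one future to the next requires a rethrow and a catch per hop. The sketch below uses std:: types and a hypothetical helper name, not Fiber's API.

```cpp
#include <cassert>
#include <future>
#include <stdexcept>

// Sketch: transferring exception state between futures without a
// get_exception_ptr() accessor forces one throw/catch round trip per hop.
// `forward_one_hop` is a hypothetical name for illustration.
std::future<int> forward_one_hop(std::future<int> in) {
    std::promise<int> out;
    std::future<int> result = out.get_future();
    try {
        out.set_value(in.get());  // get() rethrows the stored exception...
    } catch (...) {
        out.set_exception(std::current_exception());  // ...only to capture it again
    }
    return result;
}
```

With a get_exception_ptr() member, the exception_ptr could be copied across directly, skipping the throw/catch machinery entirely.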

BTW, older compilers also tend to serialise parts of try...catch()
processing across threads. If you fire off a thousand threads doing
nothing but throwing and catching exceptions you'll see some fun on
older compilers (e.g. pre-VS2012).

> > I don't understand what they do, and I think I ought to.
>
> It is pretty simple - the fiber-scheduler contains a reference to asio's
> io_service and uses it as an event-queue/event-dispatcher, e.g. it pushes
> fibers that are ready to run (e.g. newly created fibers, yielded fibers, or
> fibers signaled after being suspended by a wait-operation on sync. primitives).

Oh I see ... just like AFIO does with ASIO's io_service in fact.

If it's that easy, adding Fiber support should be a real cinch.

> I don't know the factor of fibers scaling - I would consider boost.fiber
> a way to simplify the code, e.g. it prevents scattering the code by
> callbacks (for instance in the context of boost.asio). my starting point
> was to solve problems like the C10K-problem (Dan Kegel explains it in
> more detail on his webpage - I'm referring to it in boost.fiber's
> documentation -
> http://olk.github.io/libs/fiber/doc/html/fiber/asio.html).

Thing is ... the C10K problem *is* a performance problem. If you're
going to suggest that Boost.Fiber helps to solve or does solve the
C10K problem, I think you need to demonstrate at least a 100K socket
processing capacity seeing as some useful work also needs to be done
with the C10K problem. Otherwise best not mention the C10K problem,
unless you're saying that you hope in the near future to be able to
address that problem in which case the wording needs to be clarified.

> well, those are the problems we are all faced with :) I've not done
> performance tests, sorry

I appreciate and understand this. However, I must then ask this: is
your library ready to enter Boost if you have not done any
performance testing or tuning?

> > Eh, well then I guess you need a link to the correct page in
> > boost.context where it lists the architectures it works on. Certainly
> > a big question for anyone considering Fiber is surely "will it work
> > on my CPU"?
>
> the docu of boost.context is missing this info - I'll add it (could you
> add a bug-report for boost.context please).

https://svn.boost.org/trac/boost/ticket/9551

> > and we'll figure out the rest. I definitely need the ability to
> > signal a fibre future belonging to thread A from some arbitrary
> > thread B. I'll also need to boost::asio::io_service::post() to an
> > ASIO io_service running fibers in thread A from some arbitrary thread
> > B.
>
> yes, as I explained, it is supported. Maybe because I've written the code
> myself I don't know why the documentation is not enough for you.

It wasn't clear from the docs that your Fiber support primitives
library is also threadsafe. I had assumed (like Antony seemed to as
well) that for speed you wouldn't have done that, but it certainly is
a lot easier if it's all threadsafe too.

> what works is that you create fibers::packaged_task<> and execute it in
> a fiber on thread A and wait on fibers::future<> returned by
> packaged_task<>::get_future() in thread B.

Yes I understand now. fibers::packaged_task<> is a strict superset of
std::packaged_task<>, not an incommensurate alternative. That works
for me, but others may have issue with it.

> > The docs need to explicitly say so then, and indeed thread safety for
> > *every* API in the library.
>
> OK

BTW, I did this in AFIO by creating a macro with the appropriate
boilerplate text saying "this function is threadsafe and exception
safe", and then doing a large scale regex find and replace :) I only
wish I could do the same for complexity guarantees, but that requires
studying each API's possible code paths individually.

> > I think, as a minimum, all the examples need to appear verbatim in
> > the docs
>
> the complete code? I would prefer only code snippets - the complete code
> can be read in the example directory. otherwise the documentation would be
> bloated (at least I would skip pages of code).

You can stick them into an Appendix section at the bottom. They're
purely there for Googlebot to find, no one is expecting humans to
really go there.

That said, if you want to do an annotated commentary on a broken up
set of snippets from the examples, do feel free :). But I think
inline source comments is usually plenty enough.

You asked about the comments you added to
https://github.com/olk/boost-fiber/blob/master/examples/asio/publish_subscribe/server.cpp.
They do help, but I am still struggling somewhat.

How about this? Can you do a side-by-side code example where on the
left side is an old style ASIO callback based implementation, and on
the right is an ASIO Fiber based implementation? Something like
https://ci.nedprod.com/job/Boost.AFIO%20Build%20Documentation/Boost.AFIO_Documentation/doc/html/afio/quickstart/async_file_io/hello_world.html.

It doesn't have to be anything more than a trivial Hello World style
toy thing. I just need to map in my head what yield[ec] means, and
how that interplays with boost::fibers::asio::spawn and
io_service::post().

Also one last point: your fiber condvar seems to suffer from no
spurious wakeups like pthread condvars? You're certainly not
protecting it from spurious wakeups in the example code. If spurious
wakeups can't happen, you *definitely* need to mention that in the
docs as that is a much tighter guarantee over standard condvars.

Nat Goodspeed

unread,
Jan 7, 2014, 5:05:31 PM1/7/14
to bo...@lists.boost.org
On Tue, Jan 7, 2014 at 4:32 PM, Niall Douglas <s_sour...@nedprod.com> wrote:

> On 7 Jan 2014 at 19:35, Oliver Kowalke wrote:

> > I've not done
> > performance tests, sorry

> I appreciate and understand this. However, I must then ask this: is
> your library ready to enter Boost if you have not done any
> performance testing or tuning?

I don't have a C10K problem, but I do have a code-organization
problem. Much essential processing in a large old client app is
structured (if that's the word I want) as chains of callbacks from
asynchronous network I/O. Given the latency of the network requests,
fibers would have to have ridiculous, shocking overhead before it
would start to bother me.

I think that's a valid class of use cases. I don't buy the argument
that adoption of Fiber requires performance tuning first.

Oliver Kowalke

unread,
Jan 7, 2014, 5:07:12 PM1/7/14
to boost
2014/1/7 Niall Douglas <s_sour...@nedprod.com>

> > I don't know the factor of fibers scaling - I would consider boost.fiber
> > a way to simplify the code, e.g. it prevents scattering the code by
> > callbacks (for instance in the context of boost.asio). my starting point
> > was to solve problems like the C10K-problem (Dan Kegel explains it in
> > more detail on his webpage - I'm referring to it in boost.fiber's
> > documentation -
> > http://olk.github.io/libs/fiber/doc/html/fiber/asio.html).
>
> Thing is ... the C10K problem *is* a performance problem. If you're
> going to suggest that Boost.Fiber helps to solve or does solve the
> C10K problem, I think you need to demonstrate at least a 100K socket
> processing capacity seeing as some useful work also needs to be done
> with the C10K problem. Otherwise best not mention the C10K problem,
> unless you're saying that you hope in the near future to be able to
> address that problem in which case the wording needs to be clarified.
>

my focus was to address the one-thread-per-client pattern used for C10K.
the pattern makes code much more readable/reduces the complexity but
doesn't scale. if you create too many threads on your system, your overall
performance will shrink because at a certain number of threads the overhead
of the kernel scheduler starts to swamp the available cores.



> I appreciate and understand this. However, I must then ask this: is
> your library ready to enter Boost if you have not done any
> performance testing or tuning?
>

is performance testing and tuning a precondition for a lib to be accepted?
I did some tuning (for instance spinlock implementation) but it is not
finished.


> How about this? Can you do a side-by-side code example where on the
> left side is an old style ASIO callback based implementation, and on
> the right is an ASIO Fiber based implementation? Something like
> https://ci.nedprod.com/job/Boost.AFIO%20Build%20Documentation/Boost.AFIO_Documentation/doc/html/afio/quickstart/async_file_io/hello_world.html.
>

OK


> It doesn't have to be anything more than a trivial Hello World style
> toy thing. I just need to map in my head what yield[ec] means, and
> how that interplays with boost::fibers::asio::spawn and
> io_service::post().
>

- io_service::post() pushes a callable onto io_service's internal queue
(executed by io_service::run() and related functions)
- fibers::asio::spawn() creates a new fiber and adds it to the
fiber-scheduler (specialized to use asio's io_service, hence asio's
async-result feature)
- yield is an instance of boost::fibers::asio::yield_context which
represents the fiber running this code; it is used by asio's async-result
feature
- yield[ec] is passed to an async-operation in order to suspend the current
fiber and pass an error (if one happened during execution of the async-op)
back to the calling code, for instance EOF if the socket was closed


> Also one last point: your fiber condvar seems to suffer from no
> spurious wakeups like pthread condvars? You're certainly not
> protecting it from spurious wakeups in the example code. If spurious
> wakeups can't happen, you *definitely* need to mention that in the
> docs as that is a much tighter guarantee over standard condvars.
>

OK

Hartmut Kaiser

unread,
Jan 8, 2014, 9:02:42 AM1/8/14
to bo...@lists.boost.org
One of the main questions arising for me when looking through the code is
why the fiber class doesn't expose the same API as std::thread (or
boost::thread for that matter). This would make fibers so much more
usable, even more so as the rest of the library is aligned with the C++11
standard library.

In fact, in my book a fiber _is_ a thread-like construct and having it
expose a new interface is just confusing and unnecessary.

Regards Hartmut
---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu

Nat Goodspeed

unread,
Jan 8, 2014, 10:20:02 AM1/8/14
to bo...@lists.boost.org
On Wed, Jan 8, 2014 at 9:02 AM, Hartmut Kaiser <hartmut...@gmail.com> wrote:

> One of the main questions arising for me when looking through the code is
> why doesn't the fiber class expose the same API as std::thread (or
> boost::thread for that matter)? This would make using fibers so much more
> usable, even more as the rest of the library was aligned with the C++11
> standard library.

To the best of my knowledge, Oliver is indeed trying to mimic the
std::thread API. It would be very helpful if you would point out the
deltas that you find distressing.

Eugene Yakubovich

unread,
Jan 8, 2014, 5:37:19 PM1/8/14
to bo...@lists.boost.org
On Mon, Jan 6, 2014 at 7:07 AM, Nat Goodspeed <n...@lindenlab.com> wrote:
>
> Please always state in your review whether you think the library should be
> accepted as a Boost library!
I think the library should be accepted but would prefer some changes made as
outlined below.

>
> Additionally please consider giving feedback on the following general
> topics:
>
> - What is your evaluation of the design?
The overall design is good. I like the parallel between thread interface
and fiber interface.

Some suggestions:
- Maybe this is something that should be handled in a separate fiber
pool library but I'd like to be able to specify multiple threads for
affinity. This is useful for when a fiber should be bound to any one
of threads local to a NUMA domain or physical CPU (for cache sharing).

- I dislike fiber_group. I understand that it parallels thread_group.
I don't know the whole story behind thread_group but I think it
predates move support, and a quick search through the mailing list
archives shows there were objections about its design as well. I
specifically don't like having to new up a fiber object to pass
ownership to add_fiber. A fiber object is just a handle (it holds a
pointer) so it's a great candidate for move semantics. It may be best
to take out fiber_group altogether. What's a use case for it?

> - What is your evaluation of the implementation?
- Can I suggest replacing the use of std::auto_ptr with
boost::scoped_ptr? It leads to deprecation warnings on GCC.

- While I understand that the scheduling algorithm is more internal than
the rest of the library, I still don't like the detail namespace leaking
out. Perhaps these classes should be moved out of the detail
namespace.

- algorithm interface seems to do too much. I think a scheduling
algorithm is something that just manages the run queue -- selects
which fiber to run next (e.g. Linux kernel scheduler interface works
like this). As a result, implemented scheduling algorithms have much
overlap. Indeed, round_robin and round_robin_ws are almost identical
code.
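The reduced interface suggested here -- a scheduling algorithm that only manages the run queue -- might look like the sketch below. The names (`fiber_handle`, `ready_queue_round_robin`, `awakened`, `pick_next`) are hypothetical, not the library's actual `algorithm` class.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Sketch: a scheduling "algorithm" reduced to run-queue management.
// A surrounding manager would own waiting fibers, synchronization, and
// the context switching; the algorithm only decides who runs next.
struct fiber_handle { int id; };

class ready_queue_round_robin {
    std::deque<fiber_handle> ready_;
public:
    void awakened(fiber_handle f) {      // a fiber became ready to run
        ready_.push_back(f);
    }
    bool pick_next(fiber_handle& out) {  // choose the next fiber, FIFO order
        if (ready_.empty()) return false;
        out = ready_.front();
        ready_.pop_front();
        return true;
    }
    std::size_t ready_count() const { return ready_.size(); }
};
```

With this split, round_robin and a work-stealing variant would differ only in queue policy, which is exactly the overlap complained about above.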

> - What is your evaluation of the documentation?
Ok but, like others have said, could be improved. ASIO integration is
poorly documented.

> - What is your evaluation of the potential usefulness of the library?
Very useful. I've used Python's greenlets (via gevent) and they were very
useful for doing concurrent I/O. Would love to see equivalent functionality
in C++.

> - Did you try to use the library? With what compiler? Did you have any
> problems?
Yes, I used it but very little. Used g++ 4.8.

> - How much effort did you put into your evaluation? A glance? A quick
> reading? In-depth study?
Spent about 3 hours studying the code and writing toy examples.

> - Are you knowledgeable about the problem domain?
Not really but I've spent many years writing server code with
completion routines so I know the value of not having to do that.

Vicente J. Botet Escriba

unread,
Jan 8, 2014, 5:48:31 PM1/8/14
to bo...@lists.boost.org
On 06/01/14 14:07, Nat Goodspeed wrote:
> Hi all,
>
> The review of Boost.Fiber by Oliver Kowalke begins today, Monday January
> 6th, and closes Wednesday January 15th.
>
>
>
> - What is your evaluation of the design?

Hi Oliver,

glad to see that you Fibers library is under review.

I have some question related to the design.

The interface must at least follow the interface of the standard thread
library, and if there are some limitations, they must be explicitly
documented.
Any difference with respect to Boost.Thread must also be documented and
the rationale explained.

std::thread is not copyable by design, that is, there is only one owner.
Why is boost::fibers::fiber copyable?

Why are the exceptions thrown by the function given to a fiber consumed by
the framework instead of terminating the program, as std::thread does?

Which exception is thrown when the error conditions
resource_deadlock_would_occur and invalid_argument are signaled?

Why priority and thread_affinity are not part of the fiber attributes?

The interface lets me think that the affinity can be changed by both the
owner of the thread and the thread itself. Is this by design?

Please don't document the get and set functions for thread_affinity
together as a single entry.

The safe_bool idiom should be replaced by an explicit operator bool.

Why is the scheduling algorithm global? Could it be thread-specific?
BTW, is there an example showing the thread_specific_ptr trick mentioned
in the documentation?

Why the time related function are limited to a specific clock?

The interface of fiber_group, based on the old and deprecated
thread_group, is not based on move semantics. Have you taken a look at
the proposal

N3711
<http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3711.pdf>
Task Groups As a Lower Level C++ Library Solution To Fork-Join Parallelism


Maybe it is worth adapting it to fibers.

Boost.Thread has deprecated the use of the nested type scoped_lock as it
introduces unnecessary dependencies. Do you think it is worth maintaining it?

I made some adaptations to boost::barrier that could also make sense
for fibers. I don't know if a single class could be defined that takes
care of both contexts for high-level classes such as the barrier?

Boost.Thread would deliver two synchronized bounded and unbounded queue
soon based on
N3533
<http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3533.html>
C++ Concurrent Queues


Have you tried to follow the same interface?

Best,
Vicente

Agustín K-ballo Bergé

unread,
Jan 8, 2014, 9:20:56 PM1/8/14
to bo...@lists.boost.org
On 08/01/2014 12:20 p.m., Nat Goodspeed wrote:
> On Wed, Jan 8, 2014 at 9:02 AM, Hartmut Kaiser <hartmut...@gmail.com> wrote:
>
>> One of the main questions arising for me when looking through the code is
>> why doesn't the fiber class expose the same API as std::thread (or
>> boost::thread for that matter)? This would make using fibers so much more
>> usable, even more as the rest of the library was aligned with the C++11
>> standard library.
>
> To the best of my knowledge, Oliver is indeed trying to mimic the
> std::thread API. It would be very helpful if you would point out the
> deltas that you find distressing.
>

Just from looking at the documentation:

- The constructor takes a nullary function, instead of a callable and an
arbitrary number of parameters. It is not clear whether Fn has to be a
function object or if it can be a callable as with std::thread. The
order of parameters of the overload taking attributes does not match
that of boost::thread.

- No notion of native_handle (this may not make sense for fibers, I
haven't looked at the implementation).

- There is no notion of explicit operator bool in neither boost::thread
nor std::thread.

- There is an operator < for fibers and none for id. There should be no
relational operators for fiber, and the full set for fiber::id as well
as hash support and ostream insertion.

- Several functions take a fixed time_point type instead of a chrono one.

- There is no indication whether the futures support void (I assume they
do) and R& (I assume they don't). The return type for shared_future::get
is wrong. Again, there are additional explicit operator bools.

- The documentation for promise doesn't seem to support void, it is
unclear whether they support references. Another explicit operator bool.

- I saw mentions of async in the documentation, but I couldn't find the
actual documentation for it. It's not clear whether deferred futures are
supported, at least they appear not to be from future's reference.

Regards,
--
Agustín K-ballo Bergé.-
http://talesofcpp.fusionfenix.com

Oliver Kowalke

unread,
Jan 9, 2014, 1:59:36 AM1/9/14
to boost
2014/1/8 Hartmut Kaiser <hartmut...@gmail.com>

> One of the main questions arising for me when looking through the code is
> why doesn't the fiber class expose the same API as std::thread (or
> boost::thread for that matter)?


I thought it does expose the same interface (with some small additions) as
std::thread does.
Could you point out what you are missing or what differences you are not
comfortable with?


> This would make using fibers so much more
> usable, even more as the rest of the library was aligned with the C++11
> standard library.
>

it was my intention to make boost.fiber as usable as std::thread/boost::thread


> In fact, in my book a fiber _is_ a thread-like construct


agreed


> and having it expose a new interface is just confusing and unnecessary.
>

can you be more specific? I thought boost.fiber has a similar interface to
std::thread/boost::thread

Oliver Kowalke

unread,
Jan 9, 2014, 2:11:32 AM1/9/14
to boost
2014/1/8 Eugene Yakubovich <eyaku...@gmail.com>

> Some suggestions:
> - Maybe this is something that should be handled in a separate fiber
> pool library but I'd like to be able to specify multiple threads for
> affinity. This is useful for when a fiber should be bound to any one
> of threads local to a NUMA domain or physical CPU (for cache sharing).
>

yes, that's what I implement in another library (for instance
thread-pinning; used in the performance tests of boost.context)


> - I dislike fiber_group. I understand that it parallels thread_group.
> I don't know the whole story behind thread_group but I think it
> predates move support and quick search through the mailing list
> archives shows there were objections about its design as well. I
> specifically don't like having to new up fiber object to pass
> ownership to add_fiber. fiber object is just a handle (holds a
> pointer) so it's a great candidate for move semantics. It maybe best
> to take out fiber_group altogether. What's a use case for it?
>

I've added fiber_group only to mimic the boost.thread API - I've never used
thread_group, so I have no problem removing fiber_group from the API (if
desired).


> > - What is your evaluation of the implementation?
> - Can I suggest replacing the use of std::auto_ptr with
> boost::scoped_ptr? It leads to deprecation warnings on GCC.
>

OK


> - While I understand that scheduling algorithm is more internal than
> the rest of the library, I still don't like detail namespace leaking
> out. Perhaps these classes should be moved out of the detail
> namespace.
>

hmm - the algorithm interface is already in namespace fibers. could you
tell me which class you are referring to?


> - algorithm interface seems to do too much. I think a scheduling
> algorithm is something that just manages the run queue -- selects
> which fiber to run next (e.g. Linux kernel scheduler interface works
> like this). As a result, implemented scheduling algorithms have much
> overlap.


maybe manager would be a better wording?
the implementations of algorithm (schedulers in the docs) own the fibers'
internal data structures. the fibers are stored while waiting, or put in a
ready-queue if ready to be resumed.
what would you suggest?


> Indeed, round_robin and round_robin_ws are almost identical
> code.
>

the difference is that round_robin_ws enables fiber-stealing and owns two
additional member-functions for this purpose. the internal ready-queue is
made thread-safe (concurrent access by different threads is required for
fiber-stealing).


> > - What is your evaluation of the documentation?
> Ok but, like others have said, could be improved. ASIO integration is
> poorly documented.
>

OK

Oliver Kowalke

unread,
Jan 9, 2014, 2:42:20 AM1/9/14
to boost
2014/1/8 Vicente J. Botet Escriba <vicent...@wanadoo.fr>

> Hi Oliver,
>

hello Vicente


> The interface must at least follows the interface of the standard thread
> library and if there are some limitations, they must be explicitly
> documeted.
> Any difference respect Boost.Thread must also documented and the rational
> explained.
>

OK - section rational


> std::thread is not copyable by design, that is only one owner. WHy
> boost::fibers::fiber is copyable?
>

boost::fibers::fiber should be movable only - it is derived from
boost::noncopyable and uses BOOST_MOVABLE_BUT_NOT_COPYABLE


> Why the exceptions throw by the function given to a fiber is consumed by
> the framework instead of terminate the program as std::thread?
>

the trampoline-function used for the context does the following:
- in a try-catch block, execute the fiber-code (the fiber-function given by
the user)
- catch the exception forced_unwind from boost.coroutine -> release the
fiber and continue unwinding the stack
- catch fiber_interrupted -> store it inside a boost::exception_ptr (might
be re-thrown in fiber::join() )
- catch all other exceptions and call std::terminate()

I thought this would lead to behaviour equivalent to std::thread
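That dispatch could be sketched as follows. Names such as `fiber_interrupted` and `trampoline` are stand-ins for illustration, not the library's internals, and the forced_unwind case is omitted since it belongs to boost.coroutine's stack unwinding.

```cpp
#include <cassert>
#include <exception>
#include <functional>

struct fiber_interrupted {};  // stand-in for the library's interruption exception

// Sketch of the trampoline's catch policy: interruption is captured for a
// later rethrow (e.g. in fiber::join()); any other escaping exception
// terminates the program, matching std::thread behaviour.
void trampoline(const std::function<void()>& fiber_fn, std::exception_ptr& ep) {
    try {
        fiber_fn();                     // run the user-supplied fiber-function
    } catch (const fiber_interrupted&) {
        ep = std::current_exception();  // stored; might be re-thrown in join()
    } catch (...) {
        std::terminate();               // as with std::thread: escaping exception is fatal
    }
}
```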


> Which exception is thrown when the Error Conditions:resource_deadlock_would_occurand
> invalid_argument are signaled?
>

I use BOOST_ASSERT instead of exceptions


> Why priority and thread_affinity are not part of the fiber attributes?
>

you refer to the class attributes passed to fiber's ctor? this class is a
vehicle for passing special parameters (for instance stack-size) to
boost::coroutine - if you use the segmented-stack feature you usually
don't need it.
priority() and thread_affinity() are member-functions of
boost::fibers::fiber to make modifications of those parameters for an
instance more explicit


> The interface lets me think that the affinity can be changed by both the
> owner of the thread and the thread itself. Is this by design?
>

thread_affinity() expresses whether the fiber is bound to its thread -
this is required for fiber-stealing, e.g. a fiber which is bound to its
running thread will not be selected as a candidate for fiber-stealing.


> Please don't document get/set functions as thread_affinity altogether.
>

OK


> The safe_bool idiom should be replaced by an explicit operator bool.
>

hmm - I thought using 'operator bool' is dangerous (I remember some
discussion of this issue by Scott Meyers).
do you have other information?
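For reference, a sketch of the C++03 safe-bool idiom being discussed here;
`fiber_like` is a made-up illustration, not the actual Boost.Fiber class:

```cpp
// Safe-bool idiom: convert to a pointer-to-member-function instead of to
// bool. The object is usable in boolean contexts (if, while, ?:), but two
// unrelated classes using the idiom cannot accidentally be compared with
// ==, which a plain implicit 'operator bool' would allow.
class fiber_like {
public:
    explicit fiber_like(bool valid) : valid_(valid) {}

    typedef void (fiber_like::*safe_bool)() const;
    operator safe_bool() const {
        return valid_ ? &fiber_like::true_fn : 0;
    }

private:
    void true_fn() const {}
    bool valid_;
};
```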


> Why is the scheduling algorithm global? Could it be threads specific?


It is thread specific (using boost::thread_specific_ptr)


> BTW is there an example showing the thread_specific_ptr trick mentioned
> in the documentation?
>

which trick? maybe you are referring to:

algorithm *
scheduler::instance()
{
    if ( ! instance_.get() )
    {
        default_algo_.reset( new round_robin() );
        instance_.reset( default_algo_.get() );
    }
    return instance_.get();
}


> Why are the time-related functions limited to a specific clock?
>

it is a typedef to steady_clock (if available) or system_clock - one of the
main reasons is that otherwise the schedulers would have to be templates.


> The interface of fiber_group, based on the old and deprecated
> thread_group, is not based on move semantics. Have you taken a look at
> proposal N3711
> <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3711.pdf>
> Task Groups As a Lower Level C++ Library Solution To Fork-Join Parallelism?
>

no - but I have no problem removing it from the library


> Maybe it is worth adapting it to fibers.
>
> Boost.Thread has deprecated the use of the nested type scoped_lock as it
> introduces unnecessary dependencies. Do you think it is worth maintaining it?
>

oh - I wasn't aware of this issue - I have no preference for scoped_lock
(which is a typedef to unique_lock, AFAIK)


> I made some adaptations to boost::barrier that could also make sense for
> fibers.


OK - what are those adaptations?


> I don't know if a single class could be defined that takes care of both
> contexts for high level classes as the barrier?
>

a problem is raised by the mutex implementations - a thread's mutexes block
the thread, while a fiber's mutexes only suspend the current fiber and keep
the thread running (so other fibers are able to run instead)

I was thinking of a combination of sync. primitives for threads and fibers
too, but it is not that easy to implement (with a clean interface)


> Boost.Thread will soon deliver two synchronized queues, bounded and
> unbounded, based on
> N3533 <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3533.html>
> C++ Concurrent Queues
>

OK


> Have you tried to follow the same interface?
>

I did look at the C++ Concurrent Queues proposal - but I didn't adopt the
complete interface.
for instance, Element queue::value_pop(); became queue_op_status
queue::pop( Element &);
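To make the difference concrete, here is a deliberately simplified,
single-threaded sketch of the two pop styles (`toy_queue` is made up; no
real blocking or synchronization is modelled, and an empty queue is simply
treated as closed):

```cpp
#include <deque>
#include <stdexcept>

enum queue_op_status { success, closed };

// Toy queue contrasting the two interface styles; not the Boost.Fiber class.
template <typename Element>
class toy_queue {
    std::deque<Element> items_;
public:
    void push(Element e) { items_.push_back(e); }

    // N3533 style: works for non-default-constructible Element types,
    // but must throw (or similar) when nothing can be returned.
    Element value_pop() {
        if (items_.empty()) throw std::runtime_error("queue closed");
        Element e = items_.front();
        items_.pop_front();
        return e;
    }

    // Status-returning style: symmetric with the other operations, but the
    // caller must supply an already-constructed (assignable) Element.
    queue_op_status pop(Element& out) {
        if (items_.empty()) return closed;
        out = items_.front();
        items_.pop_front();
        return success;
    }
};
```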

Oliver Kowalke

unread,
Jan 9, 2014, 3:06:41 AM1/9/14
to boost
2014/1/9 Agustín K-ballo Bergé <kaba...@hotmail.com>

> - The constructor takes a nullary function, instead of a callable and an
> arbitrary number of parameters.


this would collide with the additional parameters required by fibers that
are not part of boost::thread/std::thread.
for an arbitrary number of parameters, bind() could be used

> It is not clear whether Fn has to be a function object or if it can be a
> callable as with std::thread.


it can be any callable


> The order of parameters of the overload taking attributes does not match
> that of boost::thread.
>

the attributes control things like stack-size which is not required by
boost::thread


>
> - No notion of native_handle (this may not make sense for fibers, I
> haven't looked at the implementation).
>

native_handle in the context of threads refers to handles of the underlying
framework (for instance pthreads);
for fibers it is not applicable


> - There is no notion of explicit operator bool in neither boost::thread
> nor std::thread.
>

added for convenience, to test whether a fiber instance is valid or not
(== check for not-a-fiber)


> - There is an operator < for fibers and none for id. There should be no
> relational operators for fiber, and the full set for fiber::id as well as
> hash support and ostream insertion.
>

id has 'operator<' etc.


> - Several functions take a fixed time_point type instead of a chrono one.
>

it is a typedef of chrono::steady_clock/system_clock -> otherwise all
scheduler classes would have to be templates


> - There is no indication whether the futures support void (I assume they
> do) and R& (I assume they don't).


future supports future< R >, future< R & >, future< void > - the problem
was how to express this in a comfortable way in the documentation


> The return type for shared_future::get is wrong.
>

OK - this is a copy-and-paste error from future<>::get. I'll fix it!


> - The documentation for promise doesn't seem to support void, it is
> unclear whether they support references. Another explicit operator bool.
>

promise supports promise< R >, promise< R& >, promise< void > - any
suggestions how to write the documentation without repeating the interface
for the specializations?


> - I saw mentions of async in the documentation, but I couldn't find the
> actual documentation for it.


the documentation about futures has a short reference to async() but I'll
add an explicit section for async() too.


> It's not clear whether deferred futures are supported, at least they
> appear not to be from future's reference.
>

not supported

Agustín K-ballo Bergé

unread,
Jan 9, 2014, 10:06:16 AM1/9/14
to bo...@lists.boost.org
On 09/01/2014 05:06 a.m., Oliver Kowalke wrote:
> 2014/1/9 Agustín K-ballo Bergé <kaba...@hotmail.com>
>
>> - The constructor takes a nullary function, instead of a callable and an
>> arbitrary number of parameters.
>
>
> will collide with the additional parameters required by fibers and not part
> of boost::thread/std::thread
> for arbitrary number of parameters bind() could be used
>
>> The order of parameters of the overload taking attributes does not match
>> that of boost::thread.
>>
>
> the attributes control things like stack-size which is not required by
> boost::thread
>

boost::thread has a constructor taking attributes too, this is the first
argument so there would be no collision. This should be adjusted not
only to be coherent with boost::thread, but also to implement the
general constructor since matching the same semantics otherwise is
tricky. Consider:

boost::fiber fib{
    [f = std::move(f), a0 = std::move(a0), ...]() mutable
    {
        std::move(f)(std::move(a0), ...);
    }
};

against the standard conformant constructor with the same semantics:

boost::fiber fib{std::move(f), std::move(a0), ...};

>> - There is no notion of explicit operator bool in neither boost::thread
>> nor std::thread.
>>
>
> added for convenience, to test whether a fiber instance is valid or not
> (== check for not-a-fiber)
>

I understand the perceived convenience of semantic sugar, but to me it is
an unnecessary divergence from std/boost::thread. However, I can always
avoid its use completely.

>> - There is an operator < for fibers and none for id. There should be no
>> relational operators for fiber, and the full set for fiber::id as well as
>> hash support and ostream insertion.
>>
>
> id has 'operator<' etc.
>

Please add reference documentation for fiber::id.

>> - There is no indication whether the futures support void (I assume they
>> do) and R& (I assume they don't).
>
>
> future supports future< R >, future< R & >, future< void > - the problem
> was how to express it in a comfortable way in the docu
>
>> - The documentation for promise doesn't seem to support void, it is
>> unclear whether they support references. Another explicit operator bool.
>>
>
> promise supports promise< R >, promise< R& >, promise< void > - suggestions
> how to write the docu without repreating the interface for the
> specializations?
>

Refer to the standard for a concise definition of future/promise. It
basically defines the specializations only when they differ.

Regards,
--
Agustín K-ballo Bergé.-
http://talesofcpp.fusionfenix.com

Oliver Kowalke

unread,
Jan 9, 2014, 11:15:33 AM1/9/14
to boost
2014/1/9 Agustín K-ballo Bergé <kaba...@hotmail.com>

> boost::thread has a constructor taking attributes too, this is the first
> argument so there would be no collision.

This should be adjusted not only to be coherent with boost::thread, but
> also to implement the general constructor since matching the same semantics
> otherwise is tricky. Consider:
>
> boost::fiber fib{
>     [f = std::move(f), a0 = std::move(a0), ...]() mutable
>     {
>         std::move(f)(std::move(a0), ...);
>     }
> };
>
> against the standard conformant constructor with the same semantics:
>
> boost::fiber fib{std::move(f), std::move(a0), ...};


OK - agreed, it should be too complicated to add it


> I understand the perceived convenience of semantic sugar, to me it is an
> unnecessary divergence from std/boost::thread.

However, I can always avoid their use completely.


it could be removed, but I found it useful in the case of fiber-stealing,
e.g.

boost::fibers::fiber f( other_rr->steal_from() );
if ( f) {
    // migrate stolen fiber to scheduler running in this thread
    rr.migrate_to( f);
    ...
}


of course, I could let steal_from() return a bool

> Please add reference documentation for fiber::id.

OK


>
> - There is no indication whether the futures support void (I assume they
>>> do) and R& (I assume they don't).
>>>
>>
>>
>> future supports future< R >, future< R & >, future< void > - the problem
>> was how to express it in a comfortable way in the docu
>>
>> - The documentation for promise doesn't seem to support void, it is
>>> unclear whether they support references. Another explicit operator bool.
>>>
>>>
>> promise supports promise< R >, promise< R& >, promise< void > -
>> suggestions
>> how to write the docu without repreating the interface for the
>> specializations?
>>
>>
> Refer to the standard for a concise definition of future/promise. It
> basically defines the specializations only when they differ.


I had tried something like http://en.cppreference.com/w/cpp/thread/future
but I wasn't satisfied with the result of the quickbook-generated html.
I'll follow your advice.

Oliver Kowalke

unread,
Jan 9, 2014, 11:16:54 AM1/9/14
to boost
2014/1/9 Oliver Kowalke <oliver....@gmail.com>
>
> OK - agreed, it should be too complicated to add it
>

s/should be/shouldn't be/ - sorry

Vicente J. Botet Escriba

unread,
Jan 9, 2014, 1:54:37 PM1/9/14
to bo...@lists.boost.org
Le 09/01/14 08:42, Oliver Kowalke a écrit :
> 2014/1/8 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>
>> Hi Oliver,
>>
> hello Vicente
>
>
>> The interface must at least follows the interface of the standard thread
>> library and if there are some limitations, they must be explicitly
>> documeted.
>> Any difference respect Boost.Thread must also documented and the rational
>> explained.
>>
> OK - section rational
>
>
>> std::thread is not copyable by design, that is only one owner. WHy
>> boost::fibers::fiber is copyable?
>>
> boost::fibers::fiber should be movable only - it is derived from
> boost::noncopyable and uses BOOST_MOVABLE_BUT_NOT_COPYABLE
Sorry, I can't find now where I read that fiber is copyable. Maybe
I was tired :(
>
>
>> Why the exceptions throw by the function given to a fiber is consumed by
>> the framework instead of terminate the program as std::thread?
>>
> the trampoline-function used for the context does following:
> - in a try-cactch-block execute the fiber-code (fiber-function given by the
> user)
> - catch exception forced_unwind from boost.coroutine -> release the fiber
> and continue unwinding the stack
> - catch fiber_interrupted -> stored inside boost::exception_ptr (might be
> re-thrown in fiber::join() )
> - catch all other exceptions and call std::terminate()
>
> I though this would let to an equivalent behaviour as std::thread
>
Then you should be more explicit in this paragraph
"Exceptions thrown by the function or callable object passed to the
|fiber|
<http://olk.github.io/libs/fiber/doc/html/fiber/fiber_mgmt/fiber.html#class_fiber>
constructor are consumed by the framework. If you need to know which
exception was thrown, use |future<>|
<http://olk.github.io/libs/fiber/doc/html/fiber/synchronization/futures/future.html#class_future>
and |packaged_task<>|
<http://olk.github.io/libs/fiber/doc/html/fiber/synchronization/futures/packaged_task.html#class_packaged_task>.
"
>> Which exception is thrown when the Error Conditions:resource_deadlock_would_occurand
>> invalid_argument are signaled?
>>
> I use BOOST_ASSERT instead of exception
Where is this documented?
>
>> Why priority and thread_affinity are not part of the fiber attributes?
>>
> you referre to class attributes passed to fiber's ctor? this class is a
> vehicle for passing special parameter (for instance stack-size) to
> boost::coroutine - if you use
> the segmented-stack feature you usually don't need it.
> priority() and thread_affinity() are member-functions of
> boost::fibers::fiber to make the modifications those parameters for an
> instance more explicit
What do you think about adding them to the class atributes also?
>
>> The interface let me think that the affinity can be changed by the owned
>> of the thread and the thred itself. Is this by design?
>>
> thread_affinity() expresses if the fiber is bound to the thread - this is
> required for fiber-stealing, e.g. a fiber which is bound to its running
> thread
> will not be selected as candidate for fiber-stealing.
This doesn't answer my question. Why does thread_affinity need to be
changeable both by the thread itself and by the fiber owner?
>
>
>> Please don't document get/set functions as thread_affinity altogether.
>>
> OK
>
>
>> The safe_bool idiom should be replaced by a explict operator bool.
>>
> hmm - I thought using 'operator bool' is dangerous (I remind on some
> discussion of this issue by Scott Meyers).
Could you explain why it is dangerous and how it is more dangerous than
using the safe bool idiom?
> do you have other infos?
>
>
>> Why is the scheduling algorithm global? Could it be threads specific?
>
> It is thread specific (using boost::thread_specific_ptr)
How the user selects the thread to which the scheduling algorithm is
applied? the current thread?
If yes, what about adding the function on a this_thread namespace?

boost::fibers::this_thread::set_scheduling_algorithm( & mfs);


>
>
>> BTW i sthere an exmple showing the |thread_specific_ptr trick mentioned
>> on the documentation. |
>>
> which trick? maybe you referring to
"Note:

|set_scheduling_algorithm()| does /not/ take ownership of the passed
|algorithm*|: *Boost.Fiber* does not claim responsibility for the
lifespan of the referenced |scheduler| object. The caller must
eventually destroy the passed |scheduler|, just as it must allocate
it in the first place. (Storing the pointer in a
|boost::thread_specific_ptr| is one way to ensure that the instance
is destroyed on thread termination.)"

> algorithm *
> scheduler::instance()
> {
> if ( ! instance_.get() )
> {
> default_algo_.reset( new round_robin() );
> instance_.reset( default_algo_.get() );
> }
> return instance_.get();
> }
>
>
>> Why the time related function are limited to a specific clock?
>>
> it is a typedef to steady_clock (if avaliable) or system_clock - one of the
> main reasons is that you would have made the schedulers be templates.

Maybe the schedulers need to take care of a single time_point, but the
other classes should provide a time-related interface usable with any clock.
>
>
>> The interface of fiber_group based on old and deprecated thread_group is
>> not based on move semantics. Have you take a look at the proposal
>> N3711 <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3711.pdf>
>> Task Groups As a Lower Level C++ Library Solution To Fork-Join Parallelism
>>
> no - but I've no problems to remove it form the library

>
>
>> Maybe it is worth adapting it to fibers.
>> **
>> Boost.Thread has deprecated the use of the nested type scoped_lock as it
>> introduce unnecessary dependencies. DO you think it is worth maintaining it?
>>
> oh - I wasn't aware of this issue - I've no preferrence to scoped_lock
> (which is a typedef to unique_lock, AFAIK)
>
>
>> I made some adaptations to boost::barrier that could also have a sens for
>> fibers.
>
> OK - what are those adaptations?
See
http://www.boost.org/doc/libs/1_55_0/doc/html/thread/synchronization.html#thread.synchronization.barriers
and
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3817.html#barrier_operations
>
>
>> I don't know if a single class could be defined that takes care of both
>> contexts for high level classes as the barrier?
>>
> a problem is raised by the mutex implementations - thread's mutexes are
> blocking the thread while fiber's mutexes do
> only suspend the current fiber while keep the thread running (so other
> fibers are able to run instead)
>
> I was thinking on a combination of sync. primitives for threads and fibers
> too , but it is not that easy to implement (with a clean interface)
Ok. Glad to see that you have tried it.
>
>> Boost.Thread would deliver two synchronized bounded and unbounded queue
>> soon based on
>> N3533 <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3533.html>
>> C++ Concurrent Queues
>>
> OK
>
>
>> Have you tried to follow the same interface?
>>
> I did look at the proposal of C++ Concurrent Queues - but I didn't adapt
> the complete interface.
> for instance Element queue::value_pop(); -> queue_op_status queue::pop(
> Element &);
>
Could you explain the rationale?

Element queue::value_pop();

can be used with no default constructible types

while

queue_op_status queue::pop(Element &);

not.

Best,
Vicente

Oliver Kowalke

unread,
Jan 9, 2014, 2:28:57 PM1/9/14
to boost
2014/1/9 Vicente J. Botet Escriba <vicent...@wanadoo.fr>

> Then you should be more explicit in this paragraph
> "Exceptions thrown by the function or callable object passed to the
> |fiber| <http://olk.github.io/libs/fiber/doc/html/fiber/fiber_
> mgmt/fiber.html#class_fiber> constructor are consumed by the framework.
> If you need to know which exception was thrown, use |future<>| <
> http://olk.github.io/libs/fiber/doc/html/fiber/synchronization/futures/
> future.html#class_future> and |packaged_task<>| <
> http://olk.github.io/libs/fiber/doc/html/fiber/synchronization/futures/
> packaged_task.html#class_packaged_task>. "


OK


> Which exception is thrown when the error conditions
> resource_deadlock_would_occur and invalid_argument are signaled?
>
>> I use BOOST_ASSERT instead of exceptions
>
> Where is this documented?


it's not documented - I'll add some notes


> What do you think about adding them to the class atributes also?


it would be possible - I had not thought of this variant


> This doesn't respond to my question. Why the change of thread_affinity
> need to be changed by the thread itself and by the fiber owner?


- the fiber owner might decide at some point that a specific fiber is safe
to be migrated to another thread
- it is the fiber itself (code running in the fiber) which can modify its
thread-affinity

-> the user (code) can decide when a fiber is safe to be selected for
migration between threads


> Could you explain why it is dangerous and how it is more dangerous than
> using the safe bool idiom?


struct X {
    operator bool() { return true; }
};

struct Y {
    operator bool() { return true; }
};

X x; Y y;

if ( x == y) // does compile - both implicitly convert to bool


> How the user selects the thread to which the scheduling algorithm is
> applied? the current thread?
> If yes, what about adding the function on a this_thread namespace?
>
> boost::fibers::this_thread::set_scheduling_algorithm( & mfs);


maybe - I would not add too many nested namespaces


> "Note:
>
> |set_scheduling_algorithm()| does /not/ take ownership of the passed
> |algorithm*|: *Boost.Fiber* does not claim responsibility for the
> lifespan of the referenced |scheduler| object. The caller must
> eventually destroy the passed |scheduler|, just as it must allocate
> it in the first place. (Storing the pointer in a
> |boost::thread_specific_ptr| is one way to ensure that the instance
> is destroyed on thread termination.)"


there is no special trick - it is the code below which installs the default
scheduler if the user
does not call set_scheduling_algorithm().


> algorithm *
> scheduler::instance()
> {
>     if ( ! instance_.get() )
>     {
>         default_algo_.reset( new round_robin() );
>         instance_.reset( default_algo_.get() );
>     }
>     return instance_.get();
> }
>


> Maybe schedulers need to take care of a single time_point, but the other
> classes should provide a time-related interface using any clock.


OK - boost.chrono is your domain. can functions accepting a
time_duration/time_point (from boost.chrono) always be mapped/applied to
steady_clock/system_clock?

>
> I made some adaptations to boost::barrier that could also have a sens for
>> fibers.
>>
>
> OK - what are those adaptations?
>
See
> http://www.boost.org/doc/libs/1_55_0/doc/html/thread/
> synchronization.html#thread.synchronization.barriers and
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/
> n3817.html#barrier_operations


OK - that's new to me, but I don't know the use case of a completion
function etc. - do you have some hints?


> Ok. Glad to see that you have tried it.


maybe in another library the combination of those two kinds of sync.
primitives will succeed


> Could you explain the rationale?
>
> Element queue::value_pop();
>
> can be used with no default constructible types
>
> while
>
> queue_op_status queue::pop(Element &);
>
> not.
>

yes - you are right. I was more focused on a symmetry of the interface
(returning queue_op_status)

Nat Goodspeed

unread,
Jan 9, 2014, 2:52:24 PM1/9/14
to bo...@lists.boost.org
On Thu, Jan 9, 2014 at 2:28 PM, Oliver Kowalke <oliver....@gmail.com> wrote:

> 2014/1/9 Vicente J. Botet Escriba <vicent...@wanadoo.fr>

> > Could you explain the rationale?
> > Element queue::value_pop();
> > can be used with no default constructible types
> > while
> > queue_op_status queue::pop(Element &);
> > not.

> yes - you are right. I was more focused on a symmetry of the interface
> (returning queue_op_status)

There's the question of what should happen to a caller blocked in
value_pop() when the producer calls close(). Returning queue_op_status
seems safer.

Vicente J. Botet Escriba

unread,
Jan 9, 2014, 5:10:54 PM1/9/14
to bo...@lists.boost.org
Le 09/01/14 20:28, Oliver Kowalke a écrit :
I don't see the problem with explicit conversion

struct X {
    explicit operator bool() { return true; }
};

struct Y {
    explicit operator bool() { return true; }
};
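A sketch of what the C++11 explicit conversion buys (the helper `demo` is
made up for illustration): contextual conversions keep working, but the
accidental cross-type comparison from the earlier example no longer
compiles.

```cpp
struct X { explicit operator bool() const { return true; } };
struct Y { explicit operator bool() const { return false; } };

bool demo() {
    X x; Y y;
    bool in_cond = x ? true : false; // OK: contextual conversion to bool
    bool negated = !y;               // OK: ! also triggers the conversion
    // bool oops = (x == y);         // error: no implicit conversion to bool
    return in_cond && negated;
}
```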


>> How the user selects the thread to which the scheduling algorithm is
>> applied? the current thread?
>> If yes, what about adding the function on a this_thread namespace?
>>
>> boost::fibers::this_thread::set_scheduling_algorithm( & mfs);
>
> maybe - I would not add too many nested namespaces
It has the advantage of being clear. As you can see, I was confused about
what the function was applying to.
>
>
>> "Note:
>>
>> |set_scheduling_algorithm()| does /not/ take ownership of the passed
>> |algorithm*|: *Boost.Fiber* does not claim responsibility for the
>> lifespan of the referenced |scheduler| object. The caller must
>> eventually destroy the passed |scheduler|, just as it must allocate
>> it in the first place. (Storing the pointer in a
>> |boost::thread_specific_ptr| is one way to ensure that the instance
>> is destroyed on thread termination.)"
>
> there is no special trick - it is the code below which installs the default
> scheduler if the user
> does not call set_scheduling_algorithm().
>
>
>> algorithm *
>> scheduler::instance()
>> {
>> if ( ! instance_.get() )
>> {
>> default_algo_.reset( new round_robin() );
>> instance_.reset( default_algo_.get() );
>> }
>> return instance_.get();
>> }
>>
Is there an example showing this on the repository?
Can the scheduler be shared between threads?
>
> Maybe schedulers need to take care of a single time_point, but the other
>> classes should provide an time related interface using any clock.
>
> OK - boost.chrono is your domain. functions accepting a
> time_duration/time_point (from boost.chrono) can always be mapped/applied
> to steady_clock/system_clock?
No mapping can be done between clocks, and there is no need to do this
mapping. This is how the standard chrono library was designed. Please take
a look at the following implementation:

template <class Clock, class Duration>
bool try_lock_until(const chrono::time_point<Clock, Duration>& t)
{
    using namespace chrono;
    system_clock::time_point s_now = system_clock::now();
    typename Clock::time_point c_now = Clock::now();
    return try_lock_until(s_now + ceil<nanoseconds>(t - c_now));
}

template <class Duration>
bool try_lock_until(const
        chrono::time_point<chrono::system_clock, Duration>& t)
{
    using namespace chrono;
    typedef time_point<system_clock, nanoseconds> nano_sys_tmpt;
    return try_lock_until(nano_sys_tmpt(ceil<nanoseconds>(t.time_since_epoch())));
}

Note that only the try_lock_until(const
chrono::time_point<chrono::system_clock, nanoseconds>& ) function needs
to be virtual.
>
>> I made some adaptations to boost::barrier that could also have a sens for
>>> fibers.
>>>
>> OK - what are those adaptations?
>>
> See
>> http://www.boost.org/doc/libs/1_55_0/doc/html/thread/
>> synchronization.html#thread.synchronization.barriers and
>> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/
>> n3817.html#barrier_operations
>
> OK - that's new to me, but I don't know the use case of completion function
> etc. - do you have some hints?
From the proposal referred to:
"


A Note on Completion Functions and Templates

The proposed barrier takes an optional completion function, which may
either return void or size_t. A barrier may thus do one of three things
after all threads have called |count_down_and_wait()|:

* Reset itself automatically (if given no completion function.)
* Invoke the completion function and then reset itself automatically
(if given a function returning void).
* Invoke the completion function and use the return value to reset
itself (if given a function returning size_t)."

As you can see this parameter is most of the time related to how to
reset the barrier counter.
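As a concrete (single-threaded, non-blocking) illustration of that
resetting behaviour; `barrier_sketch` and the retire-one-worker completion
function below are made up for this example and are not the N3817 class:

```cpp
#include <cstddef>
#include <functional>

// Toy model of N3817's completion-function idea: when the count reaches
// zero, the completion function runs and its return value becomes the
// participant count for the next round. No blocking is modelled here.
class barrier_sketch {
    std::size_t count_, remaining_;
    std::function<std::size_t(std::size_t)> completion_;
public:
    barrier_sketch(std::size_t n, std::function<std::size_t(std::size_t)> fn)
        : count_(n), remaining_(n), completion_(fn) {}

    // Returns true for the call that completed the current round.
    bool count_down_and_wait() {
        if (--remaining_ == 0) {
            count_ = remaining_ = completion_(count_);
            return true;
        }
        return false;
    }
};
```

A typical use case would be a fork-join loop in which one worker retires
after each round, so the completion function returns n - 1.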

>
>
>> Ok. Glad to see that you have tried it.
>
> maybe in another library the combination of those two kinds of sync.
> primitives will succeed
>
>
>> Could you explain the rationale?
>>
>> Element queue::value_pop();
>>
>> can be used with no default constructible types
>>
>> while
>>
>> queue_op_status queue::pop(Element &);
>>
>> not.
>>
> yes - you are right. I was more focused on a symmetry of the interface
> (returning queue_op_status)
>
Great.

Vcente

Vicente J. Botet Escriba

unread,
Jan 9, 2014, 5:11:41 PM1/9/14
to bo...@lists.boost.org
Le 09/01/14 20:52, Nat Goodspeed a écrit :
> On Thu, Jan 9, 2014 at 2:28 PM, Oliver Kowalke <oliver....@gmail.com> wrote:
>
>> 2014/1/9 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>>> Could you explain the rationale?
>>> Element queue::value_pop();
>>> can be used with no default constructible types
>>> while
>>> queue_op_status queue::pop(Element &);
>>> not.
>> yes - you are right. I was more focused on a symmetry of the interface
>> (returning queue_op_status)
> There's the question of what should happen to a caller blocked in
> value_pop() when the producer calls close(). Returning queue_op_status
> seems safer.
>
>
The proposal throws an exception.

Vicente

Nat Goodspeed

unread,
Jan 9, 2014, 5:25:16 PM1/9/14
to bo...@lists.boost.org
On Thu, Jan 9, 2014 at 5:10 PM, Vicente J. Botet Escriba
<vicent...@wanadoo.fr> wrote:

> Le 09/01/14 20:28, Oliver Kowalke a écrit :

>> 2014/1/9 Vicente J. Botet Escriba <vicent...@wanadoo.fr>

> I don't see the problem with explicit conversion
>
> struct X {
> explicit operator bool() {}
> };
>
> struct Y {
> explicit operator bool() {}
> };

Isn't that new in C++11? I'm pretty sure Oliver is trying hard to
retain C++03 compatibility.

Oliver Kowalke

unread,
Jan 9, 2014, 5:29:23 PM1/9/14
to boost
2014/1/9 Vicente J. Botet Escriba <vicent...@wanadoo.fr>

> struct X {
>> operator bool() {}
>> };
>>
>> struct Y {
>> operator bool() {}
>> };
>>
>> X x; Y y;
>>
>> if ( x == y) // does compile
>>
>> I don't see the problem with explicit conversion
>
>>
the problem is that you can compare an X with a Y

Is there an example showing this on the repository?
>

no - it is an implementation inside the framework, e.g. class scheduler,
which holds the user-defined or the default implementation of the
scheduler (default is round_robin)


> Can the scheduler be shared between threads?


no - not the schedulers (== classes derived from the interface algorithm,
e.g. round_robin and round_robin_ws). they might be share-able between
threads, but I doubt that it would make sense - a scheduler derived from
algorithm schedules the fibers running in the current thread


> Maybe schedulers need to take care of a single time_point, but the other
>>
>>> classes should provide an time related interface using any clock.
>>>
>>
>> OK - boost.chrono is your domain. functions accepting a
>> time_duration/time_point (from boost.chrono) can always be mapped/applied
>> to steady_clock/system_clock?
>>
> No mapping can be done between clocks and there is no need to do this
> mapping. This is how standard chrono libray was designed. Please take a
> look at the following implementation
>
> template <class Clock, class Duration>
> bool try_lock_until(const chrono::time_point<Clock, Duration>& t)
> {
> using namespace chrono;
> system_clock::time_point s_now = system_clock::now();
> typename Clock::time_point c_now = Clock::now();
> return try_lock_until(s_now + ceil<nanoseconds>(t - c_now));
> }
> template <class Duration>
> bool try_lock_until(const chrono::time_point<chrono::system_clock,
> Duration>& t)
> {
> using namespace chrono;
> typedef time_point<system_clock, nanoseconds> nano_sys_tmpt;
> return try_lock_until(nano_sys_tmpt(ceil<nanoseconds>(t.time_
> since_epoch())));
> }
>
> Note that only the try_lock_until(const chrono::time_point<chrono::system_clock,
> nanoseconds>& ) function needs to be virtual.


it seems to me at first look that it would be complicated in the case of
boost.fiber because:
- the scheduler (derived from algorithm, for instance round_robin)
internally uses a specific clock type (steady_clock)
- member functions of sync classes like condition_variable::wait_for()
take a time_duration, or condition_variable::wait_until() a time_point

can the time_duration + time_point be of an arbitrary clock-type from
boost.chrono?
can I simply add a time_duration from a clock other than steady_clock to
steady_clock::now()?


>
>
>> I made some adaptations to boost::barrier that could also have a sens for
>>>
>>>> fibers.
>>>>
>>>> OK - what are those adaptations?
>>>
>>> See
>>
>>> http://www.boost.org/doc/libs/1_55_0/doc/html/thread/
>>> synchronization.html#thread.synchronization.barriers and
>>> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/
>>> n3817.html#barrier_operations
>>>
>>
>> OK - that's new to me, but I don't know the use case of completion
>> function
>> etc. - do you have some hints?
>>
> From the pointed proposal
> "
>
>
> A Note on Completion Functions and Templates
>
> The proposed barrier takes an optional completion function, which may
> either return void or size_t. A barrier may thus do one of three things
> after all threads have called |count_down_and_wait()|:
>
> * Reset itself automatically (if given no completion function.)
> * Invoke the completion function and then reset itself automatically
> (if given a function returning void).
> * Invoke the completion function and use the return value to reset
> itself (if given a function returning size_t)."
>
> As you can see this parameter is most of the time related to how to reset
> the barrier counter.
>

I read this too, but do you have example code demonstrating a use case?
sorry, I have no idea which use cases would require a completion function

Vicente J. Botet Escriba

unread,
Jan 9, 2014, 5:37:25 PM1/9/14
to bo...@lists.boost.org
Le 08/01/14 23:48, Vicente J. Botet Escriba a écrit :
> Le 06/01/14 14:07, Nat Goodspeed a écrit :
>> Hi all,
>>
>> The review of Boost.Fiber by Oliver Kowalke begins today, Monday January
>> 6th, and closes Wednesday January 15th.
>>
>>
>>
>> - What is your evaluation of the design?
>
More questions/remarks:

The algorithm class should be called scheduler. Either its definition is
hidden from the user, or you don't use detail and document all the needed
types.

Please replace _ws by work_stealing.

Can the user define a specific scheduler that provides work stealing?

Could you show an example of a specific algorithm?

How portable is the priority if it is specific to the scheduler?

I'm sure I'm missing the role of the scheduler. What is the purpose of
the virtual functions? I have the impression that it implements a lot of
things, none of them related to scheduling. It seems to me that it is the
class that is calling the scheduler that schedules. What am I missing?

I'll continue once I understand what the scheduler algorithm's purpose is.

Vicente J. Botet Escriba

Jan 9, 2014, 5:40:34 PM1/9/14
to bo...@lists.boost.org
Le 09/01/14 23:25, Nat Goodspeed a écrit :
> On Thu, Jan 9, 2014 at 5:10 PM, Vicente J. Botet Escriba
> <vicent...@wanadoo.fr> wrote:
>
>> Le 09/01/14 20:28, Oliver Kowalke a écrit :
>>> 2014/1/9 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>> I don't see the problem with explicit conversion
>>
>> struct X {
>> explicit operator bool() {}
>> };
>>
>> struct Y {
>> explicit operator bool() {}
>> };
> Isn't that new in C++11? I'm pretty sure Oliver is trying hard to
> retain C++03 compatibility.
>
Then include both and let the user choose whether he wants an
application portable to C++11 compilers only or to C++98 as well.

Vicente

Vicente J. Botet Escriba

Jan 9, 2014, 5:49:23 PM1/9/14
to bo...@lists.boost.org
Le 09/01/14 23:29, Oliver Kowalke a écrit :
> 2014/1/9 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>
>> struct X {
>>> operator bool() {}
>>> };
>>>
>>> struct Y {
>>> operator bool() {}
>>> };
>>>
>>> X x; Y y;
>>>
>>> if ( x == y) // does compile
>>>
>>> I don't see the problem with explicit conversion
> you can compare X and Y
You are surely right, but I don't see how. Could you show it?
>
> Is there an example showing this on the repository?
> no, it is an implementation inside the framework, e.g. class scheduler
> which holds the user-defined or the default
> implementation of the scheduler-impl (default is round_robin)
Could you post here an example at the user level?
>
>
>> Can the scheduler be shared between threads?
>
> no - not the schedulers (== classes derived from interface algorithm, e.g.
> round_robin and round_robin_ws)
> which might be shareable between threads, but I doubt that it would make
> sense - a scheduler derived from algorithm
> schedules the fibers running in the current thread
Why does the user need to own the scheduler?
I gave you an example using system_clock. The opposite is of course possible.
> - member-functions of sync classes like condition_variable::wait_for()
> takes a time_duration or condition_variable::wait_until() a time_point
And? Take a look at the thread implementation of e.g. libc++, which is
for me the best one.
> time_duration + time_point can be of arbitrary clock-type from boost.chrono?
No.
> can I simply add a time_duration different from steady_clock to
> steady_clock::now()?
There is no time_duration type. Just duration. duration can be added to
any system_clock::time_point, steady_clock::time_point or whatever
time_point<Clock, Duration>.
The completion function can be used to reset the barrier to whatever new
value you want, for example.

Vicente

Oliver Kowalke

Jan 10, 2014, 2:19:53 AM1/10/14
to boost
2014/1/9 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>
> you can compare X and Y
>
You are surely right, but I don't see how. Could you show it?


implicit conversion to bool through operator bool()


> Is there an example showing this on the repository?
>> no, it is an implementation inside the framework, e.g. class scheduler
>> which holds the user-defined or the default
>> implementation of the scheduler-impl (default is round_robin)
>>
> Could you post here an example at the user level?


sorry - I don't understand your question.

boost.fiber already contains two implementations of fiber-schedulers
(derived from algorithm) -
class round_robin and class round_robin_ws.

an instance of class round_robin will be installed by the framework if the
user does not supply his own
scheduler (via set_scheduling_algorithm()).

how the fiber-schedulers are managed inside the framework is out of scope
for the user (because it is an implementation detail) -
the user does not have to deal with it.


> Why do the user needs to own the scheduler?


in contrast to threads - where the operating system owns the scheduler -
there is no instance other than the user
responsible to own/manage the fiber-scheduler (this includes installing
the default fiber-scheduler)

detailed: you can describe a fiber as a thin wrapper around a coroutine -
the fiber contains some additional data structures like
internal state (READY, WAITING, TERMINATED etc.) and a list of joining
fibers (waiting on that fiber to terminate).

A fiber-scheduler (e.g. a class derived from algorithm) is required to
manage the fibers running in one thread - therefore the scheduler instance
is stored in a thread_specific_ptr (this is an implementation detail),
hence the fiber-scheduler instance is global for this thread, but in other
threads
you have other fiber-schedulers running.
The fiber-scheduler itself has two queues (this is true for round_robin and
round_robin_ws from the lib) - a queue for suspended fibers waiting on some
event
(to be signaled from a sync. primitive, or sleeping for a certain time ->
internal state is WAITING) and a queue holding ready-to-run fibers
(internal state is READY).
the scheduler dequeues a fiber from the ready-queue and resumes it.

If the user is not satisfied with the features provided by round_robin he
can implement his own fiber-scheduler (providing priority ordering) by
deriving from algorithm and calling set_scheduling_algorithm().


> There is no time_duration time. Just duration.
>

yes - I see you know what I mean ;)


> duration can be added to any system_clock::time_point,
> steady_clock::time_point or whatever timepoint<Clock, Duration
>

OK - but the interface of the lib already accepts arbitrary duration types
from boost.chrono.

Oliver Kowalke

Jan 10, 2014, 2:26:47 AM1/10/14
to boost
2014/1/9 Vicente J. Botet Escriba <vicent...@wanadoo.fr>

> The algorithm class should be called scheduler.


or manager?


> Either his definition is hidden to the user or you don't use detail and
> document all the needed types.
>

usually the user doesn't have to care, but it allows him to implement his
own scheduler


> Please replace _ws by work_stealing.
>

OK


> Can the user define a specific scheduler that provide the work stealing?
>

yes, as round_robin_ws already does, and the class might be a blueprint


> Could you show an example of a specific algorithm?
>

class round_robin in the library


> How portable is the priority if it is specific of the scheduler?
>

sorry - I don't get it. inside the thread in which the fibers are running,
the semantics of a priority are the same (because you have only one
scheduler dealing with the priorities).
if you refer to migrating a fiber between threads (e.g. between
fiber-schedulers), the interpretation of priority depends on the semantics
of priority in the fiber-scheduler
the fiber was migrated to.


> I'm sure I'm missing the role of the scheduler. what are the propose of
> the virtual functions.


deriving/overriding


> I have the impression that it implements a lot of things none of them
> related to scheduler. It seems to me that it is the class that is calling
> to the scheduler that schedules. What am I missing?
> I'll continue once I understand what the scheduler algorithm purpose is
> for.
>

please look at my previous email - it contains a rough explanation of
schedulers

Vicente J. Botet Escriba

Jan 10, 2014, 2:43:41 AM1/10/14
to bo...@lists.boost.org
Le 10/01/14 08:19, Oliver Kowalke a écrit :
> 2014/1/9 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>> you can compare X and Y
>>
> You are surely right, but I don't see how. Could you show it?
>
>
> implicit conversion to bool through operator bool()
>
>
>> Is there an example showing this on the repository?
>>> no, it is an implementation inside the framework, e.g. class scheduler
>>> which holds the user-defined or the default
>>> implementation of the scheduler-impl (default is round_robin)
>>>
>> Could you post here an example at the user level?
>
> sorry - I don't understand your question.
I will come later with this point.
>
> boost.fiber contains already two implementations of fiber-schedulers
> (derived from algorithm) -
> class round_robin and class round_robin_ws.
>
> an instance of class round_robin will be installed by the framework if the
> user does not apply its own
> scheduler (via set_scheduling_algorithm()).
>
> how the fiber-schedulers are managed inside the framework is out of scope
> for the user (because it is an implementation detail) -
> the user does not have to deal with it.
>
I understand now why I was lost by the algorithm function description. The
fact that you showed the algorithm interface made me think that this
is an extension point of the library. You must remove this and just say
that there are two schedulers.
Hmm. So it is an extension point, but for advanced users that know how
to do it by looking at the code.
You mustn't document the interface of the algorithm class even in this
case.
>> There is no time_duration type. Just duration.
>>
> yes - I see you know what I mean ;)
>
>
>> duration can be added to any system_clock::time_point,
>> steady_clock::time_point or whatever timepoint<Clock, Duration
>>
> OK - but the interface of the lib already accepts arbitrary duration types
> from boost.chrono.
>
Don't forget that my point was related to time_points.

Vicente

Antony Polukhin

Jan 10, 2014, 2:50:53 AM1/10/14
to boost@lists.boost.org List
>
> Please always state in your review whether you think the library should be
> accepted as a Boost library!
>

The library should be accepted, but some modifications must be applied (see
below).


> Additionally please consider giving feedback on the following general
> topics:
>
> - What is your evaluation of the design?
>

Part that mimics the Boost.Thread interface is good. Missing functions and
minor updates can be easily added even after the review.

The scheduler does not look good. Class "algorithm" looks overcomplicated:
too many methods, classes from the detail namespace are used in function
signatures, and some methods look like they duplicate each other (yield,
wait).

I'd recommend hiding "algorithm" from the user in the detail namespace for
the nearest releases, refactoring it, and only after that showing it to the
user. Maybe an additional mini-review will be required for algorithm.


> - What is your evaluation of the implementation?
>

I'm slightly confused by the fact that fiber_base uses many dynamic memory
allocations (it contains std::vector, std::map) and virtual functions.
There must be some way to optimize that away, otherwise fibers may be much
slower than threads.

Heap allocations also exist in mutex class. I dislike the `waiting_.
push_back( n);` inside the spinlocked sections. Some intrusive container
must be used instead of std::deque.

There is a copy-pasted code in mutex.cpp. `try_lock` and `lock` member
functions have very different implementations, that's suspicious.

Back to the fiber_base. There are too many atomics and locks in it. As I
understand, the default use-case does not include thread migrations, so
there is no need to put all the synchronization inside the fiber_base.
round_robin_ws and mutexes work with multithreading; ideologically they are
the more correct place for synchronization. Let fiber_base be thread
unsafe, and let schedulers and mutexes take care of serializing access to
the fiber_base. Otherwise single-threaded users will lose performance.


> - What is your evaluation of the documentation?
>

Please add "Examples" section. Paul Bristow already mentioned a QuickBook's
ability to import source files using [@../../example/my_example.cpp] syntax.


> - What is your evaluation of the potential usefulness of the library?
>

I have a few complicated libraries that will benefit from fibers. However,
short real-life use-case examples would help get the idea of fibers across
to new library users.


> - Did you try to use the library? With what compiler? Did you have any
> problems?
>

There were no problems with GCC-4.7 and clang. Many warnings were raised
during compilation, but most of them belong to the DateTime
library.


> - How much effort did you put into your evaluation? A glance? A quick
> reading? In-depth study?
>

I've tried a few examples and made a bunch of experiments with asio. Spent
a few hours looking through the source code.


> - Are you knowledgeable about the problem domain?
>

Not a pro with fibers, but have good experience with threads.

--
Best regards,
Antony Polukhin

Oliver Kowalke

Jan 10, 2014, 2:52:29 AM1/10/14
to boost
2014/1/10 Vicente J. Botet Escriba <vicent...@wanadoo.fr>

> I understand now why I was lost by the algorithm function description. The
> fact that you showed the algorithm interface made me think that this is
> an extension point of the library. You must remove this and just say that
> there are two schedulers.


OK


> Hrr. So its is an extension point, but for advanced users that know how to
> do it looking at the code.
> You mustn't document the interface of the algorithm class even in this
> case.


I should remove the description of algorithm from the docu?


> Don't forget that my point was related to time_points.


in the case of time_points it is a little bit complicated. algorithm,
fiber-schedulers and the sync. primitives use steady_clock::time_point.
I don't see how I could make this flexible so that it would work with all
kinds of clocks from boost.chrono.
the only possibility would be to make the member-functions and the classes
(for instance condition_variable) templates (clock-type as template
parameter),
but this would make the complete code templated and uncomfortable.

I thought that using one clock (steady_clock would be preferable) is OK.

Oliver Kowalke

Jan 10, 2014, 3:17:04 AM1/10/14
to boost
2014/1/10 Antony Polukhin <anto...@gmail.com>

> Scheduler does not look good. Class "algorithm" looks overcomplicated: too
> many methods, classes from detail namespace are used in function
> signatures, looks like some methods duplicate each other (yield, wait).
>

which one should be removed, and why?
algorithm::wait() and algorithm::yield() have different effects on the fiber.

algorithm::wait() sets the internal state of a fiber to WAITING, suspends
it and stores it
in an internal queue (waiting-queue). the fiber must be signaled ready in
order to be
moved from the waiting- to the ready-queue so it can be resumed later.
in contrast to this, algorithm::yield() suspends the fiber, leaves the
internal state READY and appends it at the end of
the ready-queue, so that it does not have to be signaled etc.


> I'm slightly confused by the fact that fiber_base use many dynamic memory
> allocations (it contains std::vector, std::map) and virtual functions.
> There must be some way to optimize away that, otherwise fibers may be much
> slower than threads.
>

hmm - boost.thread uses stl containers too (for instance thread_data_base).
the fiber must hold some data by itself, for instance a list of fibers
waiting on it to terminate (join).
otherwise fiber migration between threads would be hard to implement, if
not
impossible.


> Heap allocations also exist in mutex class. I dislike the `waiting_.
> push_back( n);` inside the spinlocked sections. Some intrusive container
> must be used instead of std::deque.
>


OK - the kind of container can be optimized, but the mutex must know which
fibers are
waiting to lock it.
otherwise fiber migration between threads would be hard to implement, if
not
impossible.


> There is a copy-pasted code in mutex.cpp. `try_lock` and `lock` member
> functions have very different implementations, that's suspicious.
>

why is it suspicious? mutex::try_lock() will never suspend the fiber if the
mutex is already locked,
while mutex::lock() will.


> Back to the fiber_base. There's too many atomics and locks in it. As I
> understand, the default use-case does not include thread migrations so
> there is no need to put all the synchronizations inside the fiber_base.
> round_robin_ws and mutexes work with multithreading, ideologically it's the
> more correct place for synchronizations. Let fiber_base be thread unsafe,
> and schedulers and mutexes take care about the serializing access to the
> fiber_base. Otherwise singlethreaded users will loose performance.
>

the only way to do this would be that every fiber_base holds a pointer to
its fiber-scheduler (algorithm *).
in the case of fibers joining the current fiber (e.g. stored in the
internal list of fiber_base), the fiber
has to signal the scheduler of each joining fiber:

BOOST_FOREACH( fiber_base::ptr_t f, joining_fibers) {
    f->set_ready(); // fiber_base::set_ready() { scheduler->set_ready( this); }
}

this pattern must be applied to mutex and condition_variable too.
if a fiber is migrated, its pointer to the scheduler must be swapped.

Antony Polukhin

Jan 10, 2014, 5:02:23 AM1/10/14
to boost@lists.boost.org List
2014/1/10 Oliver Kowalke <oliver....@gmail.com>

> 2014/1/10 Antony Polukhin <anto...@gmail.com>
>
> > Scheduler does not look good. Class "algorithm" looks overcomplicated:
> too
> > many methods, classes from detail namespace are used in function
> > signatures, looks like some methods duplicate each other (yield, wait).
> >
>
> which one should be removed, and why?
>

Do not remove schedulers. Just remove "algorithm" from docs and hide it in
namespace detail.



> algorithm::wait() and algorithm::yield() have different effects on the fiber.
>
> algorithm::wait() sets the internal state of a fiber to WAITING, suspends
> it and stores it
> in an internal queue (waiting-queue). the fiber must be signaled ready in
> order to be
> moved from the waiting- to the ready-queue so it can be resumed later.
> in contrast to this, algorithm::yield() suspends the fiber, leaves the
> internal state READY and appends it at the end of
> the ready-queue, so that it does not have to be signaled etc.
>

Oh, now I see.


> > I'm slightly confused by the fact that fiber_base use many dynamic memory
> > allocations (it contains std::vector, std::map) and virtual functions.
> > There must be some way to optimize away that, otherwise fibers may be
> much
> > slower than threads.
> >
>
> hmm - boost.thread uses stl containers too (for instance thread_data_base).
> the fiber must hold some data by itself, for instance a list of fibers
> waiting on it to terminate (join).
> otherwise fiber migration between threads would be hard to implement, if not
> impossible.
>

My concern is that fiber migration between threads is a rare case. So first
of all, fibers must be optimized for single-threaded usage. This means
that all the thread-migration mechanics must be put in the scheduler,
leaving the fiber light and thread unsafe.

What if the round_robin scheduler were a thread-local variable (just like
now, but without any thread sync), while for cases when fiber migration is
required a global round_robin_ws scheduler is created for all the threads?
That way we'd get high performance in single-threaded applications and
hide all the thread migrations and sync inside the thread-safe
round_robin_ws scheduler.


> > Back to the fiber_base. There's too many atomics and locks in it. As I
> > understand, the default use-case does not include thread migrations so
> > there is no need to put all the synchronizations inside the fiber_base.
> > round_robin_ws and mutexes work with multithreading, ideologically it's
> the
> > more correct place for synchronizations. Let fiber_base be thread unsafe,
> > and schedulers and mutexes take care about the serializing access to the
> > fiber_base. Otherwise single-threaded users will lose performance.
> >
>
> the only way to do this would be that every fiber_base holds a pointer to
> its fiber-scheduler (algorithm *).
> in the case of fibers joining the current fiber (e.g. stored in the
> internal list of fiber_base) the fiber
> has to signal the schduler of the joining fiber.
>
> BOOST_FOREACH( fiber_base::ptr_t f, joining_fibers) {
>     f->set_ready(); // fiber_base::set_ready() { scheduler->set_ready( this); }
> }
>
> this pattern must be applied to mutex and condition_variable too.
> if a fiber is migrated, its pointer to the scheduler must be swapped.


Is that easy to implement? Does it scale well with the proposal of a
thread-unsafe round_robin and a thread-safe round_robin_ws?


--
Best regards,
Antony Polukhin

Paul A. Bristow

Jan 10, 2014, 5:15:26 AM1/10/14
to bo...@lists.boost.org
> -----Original Message-----
> From: Boost [mailto:boost-...@lists.boost.org] On Behalf Of Antony Polukhin
> Sent: Friday, January 10, 2014 10:02 AM
> To: bo...@lists.boost.org List
> Subject: Re: [boost] Boost.Fiber review January 6-15
>
> 2014/1/10 Oliver Kowalke <oliver....@gmail.com>
>
> > 2014/1/10 Antony Polukhin <anto...@gmail.com>
> >
> > > Scheduler does not look good. Class "algorithm" looks overcomplicated:
> > too
> > > many methods, classes from detail namespace are used in function
> > > signatures, looks like some methods duplicate each other (yield, wait).
> > >
> >
> > which one should be removed an why?
> >
>
> Do not remove schedulers. Just remove "algorithm" from docs and hide it in namespace detail.

And document the schedulers in a separate 'Implementation' section.

Paul

---
Paul A. Bristow,
Prizet Farmhouse, Kendal LA8 8AB UK
+44 1539 561830 07714330204
pbri...@hetp.u-net.com

Oliver Kowalke

Jan 10, 2014, 5:19:25 AM1/10/14
to boost
2014/1/10 Antony Polukhin <anto...@gmail.com>

> Do not remove schedulers. Just remove "algorithm" from docs and hide it in
> namespace detail.
>

OK


> My concern is that fiber migration between threads is a rare case.
>

yes/maybe - I've introduced this in order to support work-stealing in
thread-pools


> So first of all fibers must be optimized for a single threaded usage. This
> means
> that all the thread migration mechanics must be put in scheduler, leaving
> fiber light and thread unsafe.
>

that's already the reality.
suppose you want to migrate a fiber from thread A to thread B using
round_robin_ws as
the fiber-scheduler (each thread runs an instance of this class).
Thread B accesses the fiber-scheduler of thread A by calling
round_robin_ws::migrate_from().
this function selects a fiber (ready to run) from the ready-queue and adds
this fiber
to its own internal ready-queue.
the access to the ready-queue must be thread-safe in round_robin_ws.

In contrast to this, class round_robin does not support fiber-stealing and
thus does not
protect its internal ready-queue against concurrent access from different
threads.


> What if round_robin scheduler be a thread local variable (just like now,
> but without any thread sync), but for cases when fiber migration is
> required a global round_robin_ws scheduler is created for all the threads?
> In that way we'll get high performance in single threaded applications and
> hide all the thread migrations and sync inside the thread safe
> round_robin_ws scheduler.
>

see above - the current implementation already separates a 'thread-safe'
scheduler from an unsafe one.

mutex and condition_variable must be thread-safe because they could be
executed concurrently in different threads.

the only class which could be optimized would be fiber_base itself, e.g.
the access to the internal state must be thread-safe
(currently done via an atomic). of course this atomic is not required if
the fiber runs in a fiber-scheduler which
does not support fiber-migration. therefore, as you already mentioned, it
would be an optimization to move the management of
the fiber's internal state into the scheduler itself (for instance internal
class schedulable). The scheduler determines if accessing
the internal state is thread-safe or not.


> Is that easy to implement?


at the first look, yes


> Does it scale well with proposal of thread
> unsafe round_robin and thread safe round_robin_ws?
>

it doesn't matter if you update the state in the fiber_base or in the
scheduler - you have the same number of function calls.
the benefit is that you don't update atomics in the thread-unsafe
round_robin class.

Oliver Kowalke

Jan 10, 2014, 5:20:01 AM1/10/14
to boost
2014/1/10 Paul A. Bristow <pbri...@hetp.u-net.com>

> And document the schedulers in a separate 'Implementation' section.
>

OK

Nat Goodspeed

Jan 10, 2014, 9:34:11 AM1/10/14
to bo...@lists.boost.org
On Jan 10, 2014, at 5:20 AM, Oliver Kowalke <oliver....@gmail.com> wrote:

> 2014/1/10 Paul A. Bristow <pbri...@hetp.u-net.com>
>
>> And document the schedulers in a separate 'Implementation' section.
>>
>
> OK

May I suggest "library extension" or "customization" instead? I think it's important to allow user-supplied schedulers - e.g. the complaint about the containers used by the default scheduler implementation. Such users could, if desired, supply a scheduler with containers more to their liking.

(I must admit I was a bit floored by the remark about "slower than threads," since that completely disregards the cost of preemptive kernel context switching.)

Niall Douglas

Jan 10, 2014, 11:08:23 AM1/10/14
to bo...@lists.boost.org
On 7 Jan 2014 at 17:05, Nat Goodspeed wrote:

> > I appreciate and understand this. However, I must then ask this: is
> > your library ready to enter Boost if you have not done any
> > performance testing or tuning?
>
> I don't have a C10K problem, but I do have a code-organization
> problem. Much essential processing in a large old client app is
> structured (if that's the word I want) as chains of callbacks from
> asynchronous network I/O. Given the latency of the network requests,
> fibers would have to have ridiculous, shocking overhead before it
> would start to bother me.
>
> I think that's a valid class of use cases. I don't buy the argument
> that adoption of Fiber requires performance tuning first.

I agree one doesn't need to performance tune to enter Boost.

I disagree with not at least _measuring_ performance before entering
Boost. One should at least have some minimal idea as to performance.
I would really like to see at least one graph of scaling with the
number of active contexts, both for CPU and RAM footprint.

Niall

--
Currently unemployed and looking for work.
Work Portfolio: http://careers.stackoverflow.com/nialldouglas/



Niall Douglas

Jan 10, 2014, 11:36:57 AM1/10/14
to bo...@lists.boost.org
On 7 Jan 2014 at 23:07, Oliver Kowalke wrote:

> my focus was to address the one-thread-per-client pattern used for C10K.
> the pattern makes code much more readable/reduces the complexity, but
> doesn't scale. if you create too many threads on your system, your
> overall performance will shrink, because at a certain number of threads
> the overhead of the kernel scheduler starts to swamp the available cores.

If you replace "doesn't scale" with "doesn't scale linearly" then I
agree. The linearity is important.

> > I appreciate and understand this. However, I must then ask this: is
> > your library ready to enter Boost if you have not done any
> > performance testing or tuning?
>
> is performance testing and tuning a precondition for a lib to be accepted?

No. I do think the question ought to be asked though, even if the
answer is no. It helps people decide the relative merit of a new
library.

> I did some tuning (for instance spinlock implementation) but it is not
> finished.

Even my own spinlock based on Intel TSX is unfinished - I had no TSX
hardware to test it on, and had to rely on the Intel simulator which
is dog slow.

I mentioned this in another reply, but I really think you ought to at
least measure performance and supply some results in the docs. You
don't need to performance tune, but it sure is handy to know some
idea of performance.

> - io_service::post() pushes a callable to io_service's internal queue
> (executed by io_service::run() and related functions)
> - fibers::asio::spawn() creates a new fiber and adds it to the
> fiber-scheduler
> (specialized to use asio's io_service, hence asio's async-result feature)
> - yield is an instance of boost::fibers::asio::yield_context which
> represents the fiber
> running this code; it is used by asio's async-result feature
> - yield[ec] is passed to an async-operation in order to suspend the
> current fiber and pass an
> error (if it happened during execution of the async-op) back to the
> calling code,
> for instance EOF if the socket was closed

Ok, let me try rewriting this description into my own words.

To supply fibers to an io_service, one creates N fibers, each of which
calls io_service::run() as if it were a thread. If there is no work
available, the fiber executing run() gives up context to the next
fiber. If some fiber doing some work does an io_service::post(), that
selects some fiber currently paused waiting inside run() for new
work, and at the next context switch that fiber will be activated to do
the work.

This part I think I understand okay. The hard part for me is how
yield() fits in. I get that one can call yield ordinarily like in
Python

def foo():
    a = 0
    while 1:
        a = a + 1
        yield a

... and every time you advance the generator returned by foo() you get
back an increasing number.

Where I am finding things harder is what yield means to ASIO which
takes a callback function of specification void (*callable)(const
boost::system::error_code& error, size_t bytes_transferred) so fiber
yield must supply a prototype matching that specification. You said
above this:

> - yield is an instance of boost::fibers::asio::yield_context which
> represents the fiber running this code; it is used by asio's async result
> feature
>
> - yield[ec] is passed to an async-operation in order to suspend the current
> fiber and pass an error (if it happened during execution of the async-op)
> back to
> the calling code, for instance EOF if the socket was closed

So I figure that yield will store somewhere a new bit of context, do
a std::bind() encapsulating that context and hand off the functor to
ASIO. ASIO then schedules the async i/o. When that completes, ASIO
will do an io_service::post() with the bound functor, and so one of
the fibers currently executing run() will get woken up and will
invoke the bound functor.

So far so good. But here is where I fall down: the fibre receives
an error state and bytes transferred, and obviously transmits that
back to the fiber which called ASIO async_read() by switching back to
the original context.
suspended the calling fiber, but it surely cannot because yield()
will get executed before ASIO async_read() does as it's a parameter
to async_read(). So I therefore must be missing something very
important, and this is why I am confused. Is it possible that yield()
takes a *copy* of the context like setjmp()?

It's this kind of stuff which documentation is for, and why at least
one person, i.e. me, needs hand-holding through mentally grokking how
the ASIO support in Fiber works. Sorry for being a bit stupid.

Vicente J. Botet Escriba

Jan 10, 2014, 12:14:16 PM1/10/14
to bo...@lists.boost.org
Le 10/01/14 08:52, Oliver Kowalke a écrit :
> 2014/1/10 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>
>
>
>> Don't forget that my point was related to time_points.
>
> in the case of time_points it is a little bit complicated. algorithm,
> fiber-schedulers and the sync. primitives use steady_clock::time_point.
> I don't see how I could make this flexible so that it would work with all
> kinds of clocks from boost.chrono.
> the only possibility would be to make the member-functions and the classes
> (for instance condition_variable) templates (clock-type as template
> parameter),
> but this would make the complete code templated and uncomfortable.

maybe it is not simple, but it is possible.
> I thought that using one clock (steady_clock would be preferable) is OK.
>
>
Well, it is your library, so you decide.

Best,
Vicente

Oliver Kowalke

unread,
Jan 10, 2014, 12:46:22 PM1/10/14
to boost
2014/1/10 Niall Douglas <s_sour...@nedprod.com>
you have to distinguish between two 'yield' symbols. In the context of fibers
without ASIO, e.g. calling boost::this_fiber::yield(), the current fiber is
suspended and added at the end of
the ready-queue of the fiber-scheduler (the fiber will be resumed when
dequeued from this list).

the other case you are referring to is the async-feature of boost.asio.
Chris has already
implemented the async-feature for coroutines (from boost.coroutine), so the
best source
on the internal workings is the docu of boost.asio.
I've added support for asio's async-feature for fibers, i.e. provided
some classes required
to use fibers in the context of asio and its async-operations (for instance
async_read(); see the examples).
Usually this code would belong in boost.asio instead of boost.fiber - anyway,
because boost.fiber
is new I've added it to my lib.

the yield instance you use with the async-ops of asio (async_read() for
instance) is a global of
type yield_t. this type holds an error_code as a member, and
operator[error_code &] lets you
retrieve the error_code from the async-op.

boost::fibers::asio::spawn() is the function which starts a new fiber
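A rough sketch of how such a yield_t-like completion token could work may help. The names and layout below are illustrative guesses based on the description above, not Boost.Fiber's actual definitions; `report()` stands in for whatever the async-op's completion path does with the token.

```cpp
#include <cassert>
#include <system_error>

// Hypothetical sketch of a yield_t-like completion token.
struct yield_t {
    std::error_code * ec_ = nullptr; // where the async-op's error lands

    // yield[ec] returns a copy of the token bound to the caller's
    // error_code, so the async-op reports errors there instead of throwing.
    yield_t operator[](std::error_code & ec) const {
        yield_t tmp(*this);
        tmp.ec_ = &ec;
        return tmp;
    }
};

// Stand-in for the library-provided global token.
static const yield_t yield{};

// What an async-op's completion path would do with the token (sketch):
// returns true if the error was delivered to the caller's error_code,
// false if the real library would instead throw a system_error.
inline bool report(const yield_t & t, const std::error_code & ec) {
    if (t.ec_) { *t.ec_ = ec; return true; }
    return false;
}
```

The key point is that `yield[ec]` does not suspend anything by itself; it only builds a token carrying a place to deliver the error, and the suspension happens inside the async-op machinery.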

Eugene Yakubovich

unread,
Jan 10, 2014, 1:11:43 PM1/10/14
to bo...@lists.boost.org
On Thu, Jan 9, 2014 at 1:11 AM, Oliver Kowalke <oliver....@gmail.com> wrote:

>> - While I understand that scheduling algorithm is more internal than
>> the rest of the library, I still don't like detail namespace leaking
>> out. Perhaps these classes should be moved out of the detail
>> namespace.
>>
>
> hmm - interface algorithm is already in namespace fibers. could you tell me
> to
> which class you referring to?

algorithm is in namespace fibers but the arguments and return types
are often in detail namespace.
For example, spawn() takes detail::fiber_base::ptr_t and
get_main_notifier() returns detail::notify::ptr_t.

>
>> - algorithm interface seems to do too much. I think a scheduling
>> algorithm is something that just manages the run queue -- selects
>> which fiber to run next (e.g. Linux kernel scheduler interface works
>> like this). As a result, implemented scheduling algorithms have much
>> overlap.
>
>
> maybe manager would be a better wording?
> the implementations of algorithm (schedulers in the docu) own the fibers'
> internal
> data structures. the fibers are stored while waiting, or put in a
> ready-queue if ready to be resumed.
> what would you suggest?
>
I think there are a couple of things going on inside current
schedulers. One is selecting the next fiber to run (that's the round
robin portion). Another is managing the suspension, resumption, and
waiting of fibers -- that would be a manager portion. I forked the
repo and I'm trying to see if it's possible to separate it. That would
allow someone to write a priority based scheduler and then use it with
both "regular" fibers and asio managed ones. I probably won't be done
with it until next week.
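The split Eugene describes could be sketched roughly as follows. This is a hypothetical interface, not the library's actual classes; plain ints stand in for fiber handles. The "algorithm" only decides scheduling order, while the "manager" owns the plumbing shared by all policies.

```cpp
#include <cassert>
#include <deque>

// Stand-in for a fiber handle; the real library would use something
// like detail::fiber_base::ptr_t.
using fiber_id = int;

// The "algorithm" part: only decides scheduling order.
struct sched_algorithm {
    virtual ~sched_algorithm() {}
    virtual void awakened(fiber_id f) = 0;    // a fiber became ready
    virtual bool pick_next(fiber_id & f) = 0; // choose the next to run
};

// Round-robin policy: a plain FIFO ready-queue.
struct round_robin : sched_algorithm {
    std::deque<fiber_id> ready_;
    void awakened(fiber_id f) override { ready_.push_back(f); }
    bool pick_next(fiber_id & f) override {
        if (ready_.empty()) return false;
        f = ready_.front();
        ready_.pop_front();
        return true;
    }
};

// The "manager" part: suspension/resumption plumbing, written once,
// independent of the policy plugged into it.
struct fiber_manager {
    sched_algorithm & algo_;
    explicit fiber_manager(sched_algorithm & a) : algo_(a) {}
    void make_ready(fiber_id f) { algo_.awakened(f); }
    fiber_id run_one() { // returns -1 when nothing is ready
        fiber_id f;
        return algo_.pick_next(f) ? f : -1;
    }
};
```

With this split, a priority scheduler would only replace `round_robin`; the manager (and any asio glue layered on it) stays untouched.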


>> Indeed, round_robin and round_robin_ws is almost identical
>> code.
>>
>
> the difference is that round_robin_ws enables fiber-stealing and owns two
> additional
> member-functions for this purpose. the internal ready-queue is made
> thread-safe (concurrent
> access by different thread required for fiber-stealing).

I agree, but my point was that b/c the class does too much, there's a
bunch of copy/pasted code (not related to the run queue).

Eugene Yakubovich

unread,
Jan 10, 2014, 1:32:51 PM1/10/14
to bo...@lists.boost.org
On Fri, Jan 10, 2014 at 1:50 AM, Antony Polukhin <anto...@gmail.com> wrote:
> I'm slightly confused by the fact that fiber_base use many dynamic memory
> allocations (it contains std::vector, std::map) and virtual functions.
> There must be some way to optimize away that, otherwise fibers may be much
> slower than threads.
>
> Heap allocations also exist in mutex class. I dislike the `waiting_.
> push_back( n);` inside the spinlocked sections. Some intrusive container
> must be used instead of std::deque.
>

That's a good point. Since a fiber can only be in one waiting list at
a time, an intrusive list node can be placed right into the fiber_base
object. Alternatively, a node object can be allocated right on the
stack. It won't get destructed as the fiber suspends itself right
after putting itself into a wait queue (that's the way Linux kernel
does it). The only thing is I'm not sure if boost::intrusive supports
such usage.
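The stack-allocated wait-node technique can be sketched with a hypothetical minimal intrusive queue (this is not boost::intrusive itself, just an illustration of why no heap allocation is needed):

```cpp
#include <cassert>

// The node lives on the waiter's stack (or inside fiber_base itself);
// enqueueing is just pointer wiring, no allocation.
struct wait_node {
    wait_node * next = nullptr;
    int fiber_id = 0; // stand-in for a pointer to the suspended fiber
};

struct wait_queue {
    wait_node * head = nullptr;
    wait_node ** tail = &head;
    void push(wait_node * n) { *tail = n; tail = &n->next; }
    wait_node * pop() {
        wait_node * n = head;
        if (n) {
            head = n->next;
            if (!head) tail = &head;
            n->next = nullptr;
        }
        return n;
    }
};

// A waiter would do roughly: declare a wait_node on its own stack,
// push it, then suspend. The node stays valid because the fiber's
// stack is preserved until it is resumed and the function returns.
```

This mirrors the Linux-kernel wait-queue pattern mentioned above: the enqueued node's lifetime is tied to the suspended frame, so nothing ever needs a `std::deque` or heap node.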

Oliver Kowalke

unread,
Jan 10, 2014, 2:52:37 PM1/10/14
to boost
2014/1/10 Eugene Yakubovich <eyaku...@gmail.com>

> That's a good point. Since a fiber can only be in one waiting list at
> a time, an intrusive list node can be placed right into the fiber_base
> object. Alternatively, a node object can be allocated right on the
> stack. It won't get destructed as the fiber suspends itself right
> after putting itself into a wait queue (that's the way Linux kernel
> does it). The only thing is I'm not sure if boost::intrusive supports
> such usage.
>

I hadn't noticed boost.intrusive until now - I think it could be possible to
add the required intrusive interface to fiber_base (it already adds support
for intrusive_ptr).

I'll try to implement this stuff.

Evgeny Panasyuk

unread,
Jan 10, 2014, 5:19:27 PM1/10/14
to bo...@lists.boost.org
07.01.2014 20:50, Niall Douglas:
>> boost.fiber uses boost.coroutine (which itself uses boost.context) and
>> >provides
>> >some performance tests for context switching.
> Sure. But it's a "why should I use this library?" sort of thing? If I
> see real world benchmarks for a library, it tells me the author has
> probably done some performance tuning. That's a big tick for me in
> considering to use a library.

I made a micro-benchmark of Boost.Coroutine some time ago. I was able to
run 400k coroutines in one thread, each with 10KiB of stack (~3.8GiB was
consumed by stacks alone). I used a simple allocator for stacks:
return last_free += allocation_size;
Environment was: Windows 7 x64, MSVC2010SP1 x64, Boost 1.53. System had
6GiB of RAM, part of which was consumed by OS and background programs. I
think with proper tuning it is possible to launch more coroutines on
that system.
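The one-line allocator above might look like the following sketch when written out (hypothetical; a bounds check is added that the benchmark version presumably omitted):

```cpp
#include <cassert>
#include <cstddef>

// Bump allocator: stacks are carved sequentially out of one big
// reserved region. No per-stack heap allocation, no deallocation
// (which is fine for a launch-everything benchmark).
class bump_stack_allocator {
    char * last_free_;
    char * end_;
public:
    bump_stack_allocator(char * base, std::size_t total)
        : last_free_(base), end_(base + total) {}

    // returns nullptr when the region is exhausted
    void * allocate(std::size_t stack_size) {
        if (end_ - last_free_ < static_cast<std::ptrdiff_t>(stack_size))
            return nullptr;
        char * p = last_free_;
        last_free_ += stack_size;
        return p;
    }
};
```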

--
Evgeny Panasyuk

Niall Douglas

unread,
Jan 11, 2014, 6:13:07 AM1/11/14
to bo...@lists.boost.org
On 10 Jan 2014 at 18:46, Oliver Kowalke wrote:

> > It's this kind of stuff which documentation is for, and why at least
> > one person i.e. me needs hand holding through mentally groking how
> > the ASIO support in Fiber works. Sorry for being a bit stupid.
>
> the other case you are referring to is the async-feature of boost.asio.
> Chris has already implemented the async-feature for coroutines (from
> boost.coroutine), so the best source on the internal workings is the docu
> of boost.asio. I've added support for asio's async-feature for fibers,
> i.e. provided some classes required to use fibers in the context of
> asio and its async-operations (for instance async_read(); see examples).
> Usually this code would belong in boost.asio instead of boost.fiber -
> anyway, because boost.fiber is new I've added it to my lib.

Aha! You're reusing ASIO's coroutines support as-is. For some reason
my old brain didn't grok this, now it does. Thank you Oliver, and for
putting up with my stupid questions.


Ok I think I am now ready to give my final peer review:

Acceptance: I recommend yes, provided considerable improvements are
done on the following areas:

* It needs to be explicitly documented per API which of the support
classes are thread safe (I discovered above that's almost all of
them).

* More identical replication of Boost.Thread's specific APIs. Others
have gone into more detail on this than I, but I would add that if an
extension API is trivial to add, I'd say it's very likely to appear
in C++17 anyway.

* Basically condense this thread of discussion into docs on the ASIO
support. Say explicitly you're reusing ASIO's coroutines support
as-is, and link to ASIO's coroutines docs pages.

* Need to see performance scaling results as N fibers of execution
rise. Need to see CPU cycles and memory consumption in those results.

* Need a link to Coroutine's list of supported architectures.

* C++11 support needs improving. Others have mentioned more on this
than I.

* Include the code examples into the docs so Googlebot can find them.
Like ASIO does at
http://www.boost.org/doc/libs/1_55_0/doc/html/boost_asio/examples.html

* You definitely absolutely need a Design Rationale page explaining
why what's in Fiber is there, and why what isn't is not. Do link to
external libraries extending Fiber with "missing" features if
appropriate.


Nice to have:

* Intel TSX support to avoid locks.

* Complexity guarantees and exception safety statements per API.


Lastly:

Oliver: you did a great job on Fiber and Coroutine. Please accept my
personal thank you for all your hard work and making it available to
the Boost community.

Nat Goodspeed

unread,
Jan 11, 2014, 10:21:35 AM1/11/14
to bo...@lists.boost.org
On Sat, Jan 11, 2014 at 6:13 AM, Niall Douglas
<s_sour...@nedprod.com> wrote:

> * C++11 support needs improving. Others have mentioned more on this
> than I.

For my purposes in collating results, though, I'd ask you for a bit
more detail here. As far as I can tell, you might be alluding to
'explicit operator bool' rather than the C++03 'operator safe_bool'
trick. If the review might end up requesting more work from Oliver,
it's only fair to be as specific as we can about what work is
required. Otherwise it's sort of a "too many notes" level of critique
-- not really actionable.

Nat Goodspeed

unread,
Jan 11, 2014, 11:01:15 AM1/11/14
to bo...@lists.boost.org
On Fri, Jan 10, 2014 at 2:19 AM, Oliver Kowalke
<oliver....@gmail.com> wrote:

> 2014/1/9 Vicente J. Botet Escriba <vicent...@wanadoo.fr>

>> you can compare X and Y

> You are surely right, but I don't see how? Could you show it?

> implicit conversion to bool through operator bool()

Vicente, the bad case is when class X and class Y provide operator
bool. A coder inadvertently writes:

if (myXinstance == myYinstance) ...

(only of course with names in the problem domain rather than names
that emphasize their respective classes). This should produce a
compile error; you don't want X and Y to be comparable; the comparison
is meaningless. Yet the compiler accepts it, converting each of
myXinstance and myYinstance to bool and then comparing those bool
values. The code runs. The product ships. Then some customer gets
badly whacked...

Rob Stewart

unread,
Jan 11, 2014, 11:18:35 AM1/11/14
to bo...@lists.boost.org
On Jan 11, 2014, at 11:01 AM, Nat Goodspeed <n...@lindenlab.com> wrote:

> On Fri, Jan 10, 2014 at 2:19 AM, Oliver Kowalke
> <oliver....@gmail.com> wrote:
>
>> 2014/1/9 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>
>>> you can compare X and Y
>
>> You are surely right, but I don't see how? Could you show it?
>
>> implicit conversion to bool through operator bool()
>
> Vicente, the bad case is when class X and class Y provide operator bool. A coder inadvertently writes:
>
> if (myXinstance == myYinstance) ...
>
> (only of course with names in the problem domain rather than names that emphasize their respective classes). This should produce a compile error; you don't want X and Y to be comparable; the comparison
> is meaningless. Yet the compiler accepts it, converting each of
> myXinstance and myYinstance to bool and then comparing those bool values. The code runs. The product ships. Then some customer gets
> badly whacked...

That isn't an issue when the conversion operator is explicit, which Vicente suggested.
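The difference between the two idioms can be demonstrated mechanically with a small detection-idiom check (hypothetical types; requires C++17 for `std::void_t`):

```cpp
#include <cassert>
#include <type_traits>
#include <utility>

// One type with the unsafe implicit conversion, one with the
// explicit conversion Vicente suggests.
struct ImplicitBool { operator bool() const { return true; } };
struct ExplicitBool { explicit operator bool() const { return true; } };
struct OtherImplicit { operator bool() const { return false; } };
struct OtherExplicit { explicit operator bool() const { return false; } };

// Detection idiom: is `a == b` well-formed for these two types?
template <class A, class B, class = void>
struct comparable : std::false_type {};
template <class A, class B>
struct comparable<A, B,
    std::void_t<decltype(std::declval<A>() == std::declval<B>())>>
    : std::true_type {};

// With implicit operator bool, the meaningless cross-type comparison
// compiles (both sides convert to bool); with explicit it is rejected,
// while `if (x)` still works via contextual conversion.
static_assert(comparable<ImplicitBool, OtherImplicit>::value,
              "implicit: accidental == compiles");
static_assert(!comparable<ExplicitBool, OtherExplicit>::value,
              "explicit: accidental == is ill-formed");
```

The safe_bool idiom achieves the same rejection on C++03 compilers, which is why either choice closes the hole Nat describes; plain `operator bool()` is the only unsafe option.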

Oliver Kowalke

unread,
Jan 11, 2014, 11:25:47 AM1/11/14
to boost
2014/1/11 Rob Stewart <robert...@comcast.net>

> That isn't an issue when the conversion operator is explicit, which
> Vicente suggested.
>

many other boost libraries use the safe_bool idiom - I believe it's not bad
to follow this 'common' coding practice

Oliver Kowalke

unread,
Jan 11, 2014, 11:29:03 AM1/11/14
to boost
2014/1/11 Niall Douglas <s_sour...@nedprod.com>

> Aha! You're reusing ASIO's coroutines support as-is. For some reason
> my old brain didn't grok this, now it does. Thank you Oliver, and for
> putting up with my stupid questions.
>

there's no such thing as a stupid question ;^)

* More identical replication of Boost.Thread's specific APIs. Others
> have gone into more detail on this than I, but I would add that if an
> extension API is trivial to add, I'd say it's very likely to appear
> in C++17 anyway.
>

but I'm uncertain which ones (I have at least already added support for
interruption) - and what
about the others (many of them are conditionally compiled)?

<snip>

Nice to have:
>
> * Intel TSX support to avoid locks.
>

may I contact you for some info regarding this issue (I'd like to
benefit from your knowledge)?


> Oliver: you did a great job on Fiber and Coroutine. Please accept my
> personal thank you for all your hard work and making it available to
> the Boost community.
>

thank you too

Nat Goodspeed

unread,
Jan 11, 2014, 11:33:08 AM1/11/14
to bo...@lists.boost.org
On Sat, Jan 11, 2014 at 11:18 AM, Rob Stewart <robert...@comcast.net> wrote:

> On Jan 11, 2014, at 11:01 AM, Nat Goodspeed <n...@lindenlab.com> wrote:
>
> > the bad case is when class X and class Y provide operator bool. A coder inadvertently writes:
> >
> > if (myXinstance == myYinstance) ...

> That isn't an issue when the conversion operator is explicit, which Vicente suggested.

http://permalink.gmane.org/gmane.comp.lib.boost.devel/248167

Vicente J. Botet Escriba

unread,
Jan 11, 2014, 12:12:16 PM1/11/14
to bo...@lists.boost.org
On 06/01/14 14:07, Nat Goodspeed wrote:
> Hi all,
>
> The review of Boost.Fiber by Oliver Kowalke begins today, Monday January
> 6th, and closes Wednesday January 15th.
>

More question about stealing.

What would be the advantages of using work-stealing at the fiber level
instead of using it at the task level?

I wonder if the steal and migrate functions shouldn't be an internal
detail of the library, and whether the library should provide a fiber_pool.
I'm wondering also if the algorithm shouldn't be replaced by an enum.

What do you think?

Vicente

Vicente J. Botet Escriba

unread,
Jan 11, 2014, 12:14:11 PM1/11/14
to bo...@lists.boost.org
On 11/01/14 17:29, Oliver Kowalke wrote:
> 2014/1/11 Niall Douglas <s_sour...@nedprod.com>
>
>> Aha! You're reusing ASIO's coroutines support as-is. For some reason
>> my old brain didn't grok this, now it does. Thank you Oliver, and for
>> putting up with my stupid questions.
>>
> there's no such thing as a stupid question ;^)
>
> * More identical replication of Boost.Thread's specific APIs. Others
>> have gone into more detail on this than I, but I would add that if an
>> extension API is trivial to add, I'd say it's very likely to appear
>> in C++17 anyway.
>>
> but I'm uncertain which ones (I have at least already added support for
> interruption) - and what
> about the others (many of them are conditionally compiled)?
Could you elaborate? What is the problem you find?

Vicente

Nat Goodspeed

unread,
Jan 11, 2014, 12:19:29 PM1/11/14
to bo...@lists.boost.org
On Fri, Jan 10, 2014 at 2:50 AM, Antony Polukhin <anto...@gmail.com> wrote:

> Back to the fiber_base. There's too many atomics and locks in it. As I
> understand, the default use-case does not include thread migrations so
> there is no need to put all the synchronizations inside the fiber_base.
> round_robin_ws and mutexes work with multithreading; ideologically that's the
> more correct place for synchronization. Let fiber_base be thread-unsafe,
> and let schedulers and mutexes take care of serializing access to the
> fiber_base. Otherwise single-threaded users will lose performance.

Niall Douglas said:

> I definitely need the ability to signal a fibre future belonging to thread A
> from some arbitrary thread B.

In other words, the case in which you migrate fibers from one thread
to another is not the only case in which fibers must be thread-safe.
If you have even one additional thread (A and B), and some fiber AF on
thread A makes a request of thread B that might later cause fiber AF
to change state, all fiber state changes must be thread-safe.

An application running many fibers on exactly one thread (the
process's original thread) is an interesting case, and one worth
optimizing. But I'm not sure how the library could safely know that.
If you depend on the coder to set a library parameter, some other
coder will later come along and add a new thread without even knowing
about the fiber library initialization. If there were some way for the
library to count the threads in the process, a new thread might be
launched just after the fiber library has made its decision.

Either way, the complaint would be: "This library is too buggy to use!"

Better to be thread-safe by default.

Vicente J. Botet Escriba

unread,
Jan 11, 2014, 12:25:48 PM1/11/14
to bo...@lists.boost.org
On 11/01/14 18:12, Vicente J. Botet Escriba wrote:
> On 06/01/14 14:07, Nat Goodspeed wrote:
>> Hi all,
>>
>> The review of Boost.Fiber by Oliver Kowalke begins today, Monday January
>> 6th, and closes Wednesday January 15th.
>>
>
> More question about stealing.
>
> What would be the advantages of using work-stealing at the fiber level
> instead of using it at the task level?
>
> I wonder if the steal and migrate functions shouldn't be an internal
> detail of the library, and whether the library should provide a fiber_pool.
> I'm wondering also if the algorithm shouldn't be replaced by an enum.
>
> What do you think?

BTW what is the state of a fiber after a steal_from() operation?

Vicente J. Botet Escriba

unread,
Jan 11, 2014, 12:33:01 PM1/11/14
to bo...@lists.boost.org
On 06/01/14 14:07, Nat Goodspeed wrote:
> Hi all,
>
> The review of Boost.Fiber by Oliver Kowalke begins today, Monday January
> 6th, and closes Wednesday January 15th.
>
>
Why do Boost.Fiber need to use Boost.Coroutine instead of using directly
Boost.Context?
It seems to me that the implementation would be more efficient if it
uses Boost.Context directly as a fiber is not a coroutine, isn't it?

Vicente J. Botet Escriba

unread,
Jan 11, 2014, 12:50:05 PM1/11/14
to bo...@lists.boost.org
On 06/01/14 14:07, Nat Goodspeed wrote:
> Hi all,
>
> The review of Boost.Fiber by Oliver Kowalke begins today, Monday January
> 6th, and closes Wednesday January 15th.
>
> -----------------------------------------------------
Boost.Thread interruption feature adds some overhead to all the
synchronization functions that are interruption_points.
It is too late for Boost.Thread, but what do you think about having a
simple fiber class and an interruptible::fiber class?

james

unread,
Jan 11, 2014, 1:20:51 PM1/11/14
to bo...@lists.boost.org
On 11/01/2014 17:19, Nat Goodspeed wrote:
> In other words, the case in which you migrate fibers from one thread
> to another is not the only case in which fibers must be thread-safe.
> If you have even one additional thread (A and B), and some fiber AF on
> thread A makes a request of thread B that might later cause fiber AF
> to change state, all fiber state changes must be thread-safe.
I would think it almost certain that a real-world program will have more
threads.

Given that a blocking system call is poisonous to throughput of all the
fibres
on a thread (especially if a work stealing scheduler isn't running) then
I would
expect most programs to delegate blocking calls to a 'real' thread pool and
use a future to deliver the result.

You need at least one thread-safe way to deliver data to a fibre, and I
suspect
that in practice both 'enqueue message' and 'complete future' are needed
for convenience. If you want just one thing, then the equivalent of a
Haskell
MVar is probably the most effective.
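A minimal sketch of such an MVar in standard C++ (illustrative only, not a proposed Fiber API; it blocks at the thread level with a condition variable, where a fiber version would suspend the fiber instead):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <utility>

// Haskell-style MVar: a one-slot, thread-safe box.
// take() blocks until a value is present; put() blocks until empty.
template <class T>
class mvar {
    std::mutex m_;
    std::condition_variable cv_;
    bool full_ = false;
    T value_{};
public:
    void put(T v) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !full_; });
        value_ = std::move(v);
        full_ = true;
        cv_.notify_all();
    }
    T take() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return full_; });
        full_ = false;
        cv_.notify_all();
        return std::move(value_);
    }
};
```

As james notes, one primitive like this can express both "enqueue message" (a loop of put/take) and "complete future" (a single put observed by one take).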

Oliver Kowalke

unread,
Jan 11, 2014, 1:38:34 PM1/11/14
to boost
2014/1/11 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>
> Could you elaborate? What is the problem you find?
>

for instance future.hpp from boost.thread has many conditional compilations

Nat Goodspeed

unread,
Jan 11, 2014, 1:40:30 PM1/11/14
to bo...@lists.boost.org
On Sat, Jan 11, 2014 at 12:50 PM, Vicente J. Botet Escriba
<vicent...@wanadoo.fr> wrote:

> Boost.Thread interruption feature adds some overhead to all the
> synchronization functions that are interruption_points.
> It is too late for Boost.Thread, but what do you think about having a simple
> fiber class and an interruptible::fiber class?

That's particularly interesting in light of recent remarks about the
cost of thread-safe fiber state management.

Going out on a limb...

Could we identify an underlying interruption-support operation and
tease it out into a policy class? Maybe "policy" is the wrong word
here: given the number of Fiber classes that would engage it, adding a
template parameter to each of them -- and requiring them all to be
identical for a given process -- feels like the wrong API. As with the
scheduling algorithm, what about replacing a library-default object?
Is the interruption-support overhead sufficiently large as to dwarf a
pointer indirection that could bypass it?

Similarly for fiber state management thread safety:

Could we identify a small set of low-level internal synchronization
operations and consolidate them into a policy class? Maybe in that
case it actually could be a template parameter to either the scheduler
or the fiber class itself. I'd still be interested in the possibility
of a runtime decision; but given a policy template parameter, I assume
it should be straightforward to provide a particular policy class that
would delegate to runtime.

Then again, too many parameters/options might make the library
confusing to use. Just a thought.

Oliver Kowalke

unread,
Jan 11, 2014, 1:40:46 PM1/11/14
to boost
2014/1/11 Vicente J. Botet Escriba <vicent...@wanadoo.fr>

> Boost.Thread interruption feature adds some overhead to all the
> synchronization functions that are interruption_points.
> It is too late for Boost.Thread, but what do you think about having a
> simple fiber class and an interruptible::fiber class?
>

boost.fiber already supports interruption
(boost::fibers::fiber::interrupt()) - in contrast to boost.thread it is
simply a flag in an atomic variable.
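A sketch of what such flag-based interruption amounts to (illustrative names; a std::runtime_error stands in for the library's fiber-interrupted exception type):

```cpp
#include <atomic>
#include <cassert>
#include <stdexcept>

// interrupt() just sets an atomic flag; the fiber notices it at the
// next interruption point (join, wait, sleep, yield, ...).
struct fiber_interrupt_state {
    std::atomic<bool> requested{false};

    void interrupt() { requested.store(true, std::memory_order_relaxed); }

    // called from inside blocking operations
    void interruption_point() const {
        if (requested.load(std::memory_order_relaxed))
            throw std::runtime_error("fiber interrupted"); // stand-in
    }
};

// Helper for demonstration: did the interruption point fire?
inline bool was_interrupted(fiber_interrupt_state & st) {
    try { st.interruption_point(); return false; }
    catch (const std::runtime_error &) { return true; }
}
```

The per-check cost is one relaxed atomic load, which is the crux of the discussion: cheap, but not zero, and paid at every interruption point.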

Oliver Kowalke

unread,
Jan 11, 2014, 1:45:47 PM1/11/14
to boost
2014/1/11 Vicente J. Botet Escriba <vicent...@wanadoo.fr>

> What would be the advantages of using work-stealing at the fiber level
> instead of using it at the task level?
>

it is very simple because you migrate a 'first-class' object, i.e. the fiber
already is like a continuation.


> I wonder if the steel and migrate functions shouldn't be an internal
> detail of the library and that the library should provide a fiber_pool.
>

fiber-stealing is not required in all cases and it has to be provided by
the fiber-scheduler hence it has to be part of the scheduler.


> I'm wondering also if the algorithm shouldn't be replaced by an enum.
>

sorry - I don't get it.

Oliver Kowalke

unread,
Jan 11, 2014, 1:47:32 PM1/11/14
to boost
2014/1/11 Vicente J. Botet Escriba <vicent...@wanadoo.fr>

> BTW what is the state of a fiber after a steal_from() operation?
>
because this is a feature of a certain fiber-scheduler it depends on its
implementation - at least it is never a fiber which is running (state RUNNING).
in the context of scheduler round_robin_ws only fibers in the ready-queue
can be stolen.

Oliver Kowalke

unread,
Jan 11, 2014, 1:48:41 PM1/11/14
to boost
2014/1/11 Vicente J. Botet Escriba <vicent...@wanadoo.fr>

> Why do Boost.Fiber need to use Boost.Coroutine instead of using directly
> Boost.Context?
>

re-using code (which is already tested etc.)


> It seems to me that the implementation would be more efficient if it uses
> Boost.Context directly as a fiber is not a coroutine, isn't it?
>

not really, because you would have to re-implement all the stuff boost.coroutine already does

Nat Goodspeed

unread,
Jan 11, 2014, 1:50:27 PM1/11/14
to bo...@lists.boost.org
On Sat, Jan 11, 2014 at 12:33 PM, Vicente J. Botet Escriba
<vicent...@wanadoo.fr> wrote:

> Why do Boost.Fiber need to use Boost.Coroutine instead of using directly
> Boost.Context?
> It seems to me that the implementation would be more efficient if it uses
> Boost.Context directly as a fiber is not a coroutine, isn't it?

Correct, a fiber is not a coroutine.

Oliver is also bringing a proposal to the ISO C++ concurrency study
group to introduce coroutines in the standard. Interestingly, he is
not bringing a context-library proposal: the lowest-level standard API
he is proposing is the coroutine API. But is the coroutine API
low-level enough, and general enough, to serve as a foundation for
higher-level abstractions such as fibers? You might regard the present
fiber implementation as a proof-of-concept.

Oliver asserts that using the Coroutine API rather than directly
engaging the Context API has only trivial effect on performance.

Oliver Kowalke

unread,
Jan 11, 2014, 1:51:08 PM1/11/14
to boost
2014/1/11 james <ja...@mansionfamily.plus.com>

> Given that a blocking system call is poisonous to throughput of all the
> fibres
> on a thread
>

I would try to avoid blocking system calls (at least set the NONBLOCK
option) and use
boost.asio instead (if possible).

Vicente J. Botet Escriba

unread,
Jan 11, 2014, 3:39:02 PM1/11/14
to bo...@lists.boost.org
On 11/01/14 19:50, Nat Goodspeed wrote:
> On Sat, Jan 11, 2014 at 12:33 PM, Vicente J. Botet Escriba
> <vicent...@wanadoo.fr> wrote:
>
>> Why do Boost.Fiber need to use Boost.Coroutine instead of using directly
>> Boost.Context?
>> It seems to me that the implementation would be more efficient if it uses
>> Boost.Context directly as a fiber is not a coroutine, isn't it?
> Correct, a fiber is not a coroutine.
>
> Oliver is also bringing a proposal to the ISO C++ concurrency study
> group to introduce coroutines in the standard. Interestingly, he is
> not bringing a context-library proposal: the lowest-level standard API
> he is proposing is the coroutine API. But is the coroutine API
> low-level enough, and general enough, to serve as a foundation for
> higher-level abstractions such as fibers? You might regard the present
> fiber implementation as a proof-of-concept.
>
> Oliver asserts that using the Coroutine API rather than directly
> engaging the Context API has only trivial effect on performance.
>

I tend not to believe performance assertions "sur parole" (on someone's word).
The previous version of Boost.Fiber, IIRC, used Boost.Context. Maybe it
is worth comparing the performance :)

Vicente

Vicente J. Botet Escriba

unread,
Jan 11, 2014, 3:43:35 PM1/11/14
to bo...@lists.boost.org
On 11/01/14 19:48, Oliver Kowalke wrote:
> 2014/1/11 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>
>> Why do Boost.Fiber need to use Boost.Coroutine instead of using directly
>> Boost.Context?
>>
> re-using code (which is already tested etc.)
Didn't the previous version of Boost.Fiber use Boost.Context directly?
>
>
>> It seems to me that the implementation would be more efficient if it uses
>> Boost.Context directly as a fiber is not a coroutine, isn't it?
>>
> not really because you have to do all the stuff like boost.coroutine
>
I have not looked directly at the current implementation so I cannot
argue on this, but IIRC, Boost.Context was born as the minimal
interface that allows building coroutines, fibers, ... on top of it.

Anyway, this is an implementation detail and only some performance
figures could guide.

Best,
Vicente

Vicente J. Botet Escriba

unread,
Jan 11, 2014, 3:48:23 PM1/11/14
to bo...@lists.boost.org
On 11/01/14 19:45, Oliver Kowalke wrote:
> 2014/1/11 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>
>> What would be the advantages of using work-stealing at the fiber level
>> instead of using it at the task level?
>>
> it is very simple because you migate a 'first-class' object, e.g. the fiber
> already is like a continuation.
yes, but what are the advantages? Does it perform better? Is it easier to
write them?
>
>
>> I wonder if the steal and migrate functions shouldn't be an internal
>> detail of the library and that the library should provide a fiber_pool.
>>
> fiber-stealing is not required in all cases and it has to be provided by
> the fiber-scheduler hence it has to be part of the scheduler.
>
What is the cost of a scheduler supporting stealing with respect to one that
doesn't support it? The performance measures should show this also.
>> I'm wondering also if the algorithm shouldn't be replaced by an enum.
>>
> sorry - I don't get it.
>
I mean that if the algorithm interface is not used by the user, it is
enough to have an enum to distinguish between several possible scheduler
algorithms.

Vicente J. Botet Escriba

unread,
Jan 11, 2014, 3:52:00 PM1/11/14
to bo...@lists.boost.org
On 11/01/14 19:40, Oliver Kowalke wrote:
> 2014/1/11 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>
>> Boost.Thread interruption feature adds some overhead to all the
>> synchronization functions that are interruption_points.
>> It is too late for Boost.Thread, but what do you think about having a
>> simple fiber class and an interruptible::fiber class?
>>
> boost.fiber already support interruption
> (boost::fibers::fiber::interrupt()) - in contrast to boost.thread it is
> simply a flag in an atomic variable.
>
Do you mean that the cost of managing interruptible fibers is null?
Don't forget that you need to manage the waiting condition
variables so that you can interrupt the fiber, which has some cost
independently of whether you use an atomic or a mutex.

The same argument you are using to support several schedulers should
apply here, as it is evident to me that the interruption management has
a performance cost.

Best,
Vicente

Vicente J. Botet Escriba

unread,
Jan 11, 2014, 3:55:43 PM1/11/14
to bo...@lists.boost.org
On 11/01/14 19:38, Oliver Kowalke wrote:
> 2014/1/11 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>> Could you elaborate? What is the problem you find?
>>
> for instance future.hpp from boost.thread has many conditional compilations
>
>
boost::future was proposed to Boost before std::future was approved.
For compatibility reasons we need to provide the original interface and
the standard one.
You don't have this problem, as your library has not been approved yet.

Providing support for C++11 compilers seems to me a good point for a
Boost library.

Best,
Vicente

Vicente J. Botet Escriba

unread,
Jan 11, 2014, 3:59:19 PM1/11/14
to bo...@lists.boost.org
On 11/01/14 21:52, Vicente J. Botet Escriba wrote:
> On 11/01/14 19:40, Oliver Kowalke wrote:
>> 2014/1/11 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>>
>>> Boost.Thread interruption feature adds some overhead to all the
>>> synchronization functions that are interruption_points.
>>> It is too late for Boost.Thread, but what do you think about having a
>>> simple fiber class and an interruptible::fiber class?
>>>
>> boost.fiber already supports interruption
>> (boost::fibers::fiber::interrupt()) - in contrast to boost.thread it is
>> simply a flag in an atomic variable.
>>
> Do you mean that the cost of the managing interruptible threads is null?
> Don't forget that you need to manage with the waiting condition
> variables so that you can interrupt the fiber, which has some cost
> independently of whether you use an atomic or a mutex.
>
Ah, I forgot. You need to access the fiber-specific storage of the fiber
for each condition_variable::wait operation, which is far from being a
cost-free operation.

Thomas Heller

unread,
Jan 11, 2014, 4:04:45 PM1/11/14
to bo...@lists.boost.org
On Saturday, January 11, 2014 21:48:23 Vicente J. Botet Escriba wrote:
> On 11/01/14 19:45, Oliver Kowalke wrote:
> > 2014/1/11 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
> >
> >> What would be the advantages of using work-stealing at the fiber level
> >> instead of using it at the task level?
> >
> > it is very simple because you migrate a 'first-class' object, e.g. the
> > fiber
> > already is like a continuation.
>
> yes, but what are the advantages? Does it perform better? Is it easier to
> write them?
>
> >> I wonder if the steal and migrate functions shouldn't be an internal
> >> detail of the library, and whether the library should provide a fiber_pool.
> >
> > fiber-stealing is not required in all cases and it has to be provided by
> > the fiber-scheduler, hence it has to be part of the scheduler.
>
> What is the cost of a scheduler supporting stealing with respect to one that
> doesn't support it? The performance measures should show this as well.

This question cannot be easily answered, as it is purely application
specific. Non-work-stealing schedulers incur less overhead when scheduling the
different threads, while work-stealing schedulers are able to mitigate certain
load imbalances, i.e. differing execution times of the running tasks.

Vicente J. Botet Escriba

unread,
Jan 11, 2014, 4:12:47 PM1/11/14
to bo...@lists.boost.org
On 11/01/14 19:40, Nat Goodspeed wrote:
> On Sat, Jan 11, 2014 at 12:50 PM, Vicente J. Botet Escriba
> <vicent...@wanadoo.fr> wrote:
>
>> Boost.Thread interruption feature adds some overhead to all the
>> synchronization functions that are interruption_points.
>> It is too late for Boost.Thread, but what do you think about having a simple
>> fiber class and an interruptible::fiber class?
> That's particularly interesting in light of recent remarks about the
> cost of thread-safe fiber state management.
>
> Going out on a limb...
>
> Could we identify an underlying interruption-support operation and
> tease it out into a policy class? Maybe "policy" is the wrong word
> here: given the number of Fiber classes that would engage it, adding a
> template parameter to each of them -- and requiring them all to be
> identical for a given process -- feels like the wrong API. As with the
> scheduling algorithm, what about replacing a library-default object?
> Is the interruption-support overhead sufficiently large as to dwarf a
> pointer indirection that could bypass it?

The major cost is not in the fiber class but in the
condition_variable::wait operation. What follows is the implementation
found in C++ Concurrency in Action (CCiA):

void interruptible_wait(std::condition_variable& cv,
                        std::unique_lock<std::mutex>& lk)
{
    interruption_point();
    this_thread_interrupt_flag.set_condition_variable(cv);
    cv.wait(lk);
    this_thread_interrupt_flag.clear_condition_variable();
    interruption_point();
}

If a template parameter should be used, I would vote for a boolean.
Whether the class fiber is parameterized or we have two classes, fiber
and interruptible_fiber, could be discussed.
> Similarly for fiber state management thread safety:
>
> Could we identify a small set of low-level internal synchronization
> operations and consolidate them into a policy class? Maybe in that
> case it actually could be a template parameter to either the scheduler
> or the fiber class itself. I'd still be interested in the possibility
> of a runtime decision; but given a policy template parameter, I assume
> it should be straightforward to provide a particular policy class that
> would delegate to runtime.
It is clear that a fiber that can communicate only with fibers on the
same thread would avoid any thread-synchronization problems and perform
better. I'm sure that there are applications that fit this more
restrictive design.

Here again we could have a template parameter to state whether the fiber
is intra-thread or inter-thread.
> Then again, too many parameters/options might make the library
> confusing to use. Just a thought.
>
Right. But for the time being we have only identified two parameters,
which is not too many.


Vicente

Rob Stewart

unread,
Jan 11, 2014, 9:45:55 PM1/11/14
to bo...@lists.boost.org
On Jan 11, 2014, at 11:33 AM, Nat Goodspeed <n...@lindenlab.com> wrote:

> On Sat, Jan 11, 2014 at 11:18 AM, Rob Stewart <robert...@comcast.net> wrote:
>
>> On Jan 11, 2014, at 11:01 AM, Nat Goodspeed <n...@lindenlab.com> wrote:
>>
>>> the bad case is when class X and class Y provide operator bool. A coder inadvertently writes:
>>>
>>> if (myXinstance == myYinstance) ...
>
>> That isn't an issue when the conversion operator is explicit, which Vicente suggested.
>
> http://permalink.gmane.org/gmane.comp.lib.boost.devel/248167

I understand that Oliver was trying for C++03 compatibility, but Vicente suggested an explicit conversion operator, and Oliver apparently didn't understand that. Furthermore, I think Vicente suggested conditional compilation to use the C++11 feature when available.

___
Rob

(Sent from my portable computation engine)

Rob Stewart

unread,
Jan 11, 2014, 9:48:54 PM1/11/14
to bo...@lists.boost.org
On Jan 11, 2014, at 11:25 AM, Oliver Kowalke <oliver....@gmail.com> wrote:

> 2014/1/11 Rob Stewart <robert...@comcast.net>
>
>> That isn't an issue when the conversion operator is explicit, which Vicente suggested.
>
> many other boost libraries use the safe_bool idiom - I believe it's not bad to follow this 'common' coding practice

Those libraries were written when there was no better choice, so they are not good examples for a new library.

james

unread,
Jan 12, 2014, 4:13:34 AM1/12/14
to bo...@lists.boost.org
That's not in any realistic way feasible. It might do for home-grown
code that works on sockets directly, but you will be out of luck with
almost any third-party database library or anything else that wraps up
communication or persistence in any way. It also impacts code that uses
libraries that have data structures protected by locks that might
sometimes be held for extended periods. Most of these will be more
important to a project than fibres, or indeed boost.

Oliver Kowalke

unread,
Jan 12, 2014, 4:52:13 AM1/12/14
to boost
2014/1/12 james <ja...@mansionfamily.plus.com>

> That's not in any realistic way feasible. It might do for home-grown code
> that
> works on sockets directly, but you will be out of luck with almost any
> third
> party database library or anything else that wraps up communication or
> persistence in any way. It also impacts code that uses libraries that have
> data structures protected by locks that might be held for extended periods
> sometimes. Most of these will be more important to a project than fibres,
> or
> indeed boost.
>

the fiber lib is not a on-size-fits-all too

Oliver Kowalke

unread,
Jan 12, 2014, 4:53:04 AM1/12/14
to boost
s/too/tool/


2014/1/12 Oliver Kowalke <oliver....@gmail.com>

Niall Douglas

unread,
Jan 12, 2014, 11:38:56 AM1/12/14
to bo...@lists.boost.org
On 11 Jan 2014 at 10:21, Nat Goodspeed wrote:

> > * C++11 support needs improving. Others have mentioned more on this
> > than I.
>
> For my purposes in collating results, though, I'd ask you for a bit
> more detail here.

Sorry, that was sloppy wording on my part. I meant to say something
more like this:

* Direct support for C++11 needs improving.

And by that I mean three main items:

(i) Improved conformance with C++11 idioms.

(ii) Improved conformance with C++11 std::thread patterns.

(iii) Explicit #ifdef support with code for C++11 features. I didn't
look closely enough to see if these are already in there, but by this
I would mean move construction, initialiser lists, rvalue this
overloads, deleting operators where appropriate etc - the usual
stuff.

> As far as I can tell, you might be alluding to
> 'explicit operator bool' rather than the C++03 'operator safe_bool'
> trick. If the review might end up requesting more work from Oliver,
> it's only fair to be as specific as we can about what work is
> required. Otherwise it's sort of a "too many notes" level of critique
> -- not really actionable.

I understand entirely. I didn't go into detail because I didn't think
I could improve on what others have said, and I don't have the time
to contribute much more detail (I have maths coursework due next
week). Hopefully the above clarifies my position.

Niall

--
Currently unemployed and looking for work.
Work Portfolio: http://careers.stackoverflow.com/nialldouglas/



Niall Douglas

unread,
Jan 12, 2014, 11:43:03 AM1/12/14
to bo...@lists.boost.org
On 11 Jan 2014 at 17:29, Oliver Kowalke wrote:

> but I'm uncertain which one (at least if already added support for
> interruption) - but what
> about the others (many of them are conditionally compiled)?

Me personally I am very cool about (existing) interruption support. I
think thread interruption really ought to be part of an improved,
more generic C++ exception handling mechanism, and not bolted on top
by user code.

In other words, thread interruption in my opinion really ought to be
supported directly by the C++ runtime. But that's another discussion
you can read about in the ISO study groups (if you're really bored).

> > * Intel TSX support to avoid locks.
>
> may I contact you for some info regarding to this issue (I'd like to
> benefit from your knowledge)?

Of course! Though I fear I have little knowledge to share, except
that of how ignorant I am.

Oliver Kowalke

unread,
Jan 12, 2014, 1:39:20 PM1/12/14
to boost
2014/1/11 Vicente J. Botet Escriba <vicent...@wanadoo.fr>

> On 11/01/14 19:45, Oliver Kowalke wrote:
>
> 2014/1/11 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>>
>> What would be the advantages of using work-stealing at the fiber level
>>> instead of using it at the task level?
>>>
>>> it is very simple because you migrate a 'first-class' object, e.g. the
>> fiber
>> already is like a continuation.
>>
> yes, but what are the advantages? Does it perform better? Is it easier to
> write them?
>

because creating dependent tasks would not block your thread-pool.
Suppose you have a thread-pool of M threads and you create (without
fiber support) many tasks N.
Some of the tasks create other tasks, executed in the pool, and wait on
the results.
If you have enough tasks, N >> M, all the worker-threads of the pool
would be blocked.

something like:

void tsk() {
    ...
    for ( int i = 0; i < X; ++i) {
        ...
        packaged_task<> p( some_other_tsk);
        future<> f = p.get_future();
        spawn( p);
        f.get(); // blocks worker-thread
        ...
    }
    ...
}

With fibers the code above (using packaged_task<> and future<> from
boost.fiber) does not
block the worker-thread.

Vicente J. Botet Escriba

unread,
Jan 12, 2014, 5:50:12 PM1/12/14
to bo...@lists.boost.org
On 12/01/14 19:39, Oliver Kowalke wrote:
This is the kind of information, motivation, and examples the user needs
in the documentation ;-)

Maybe you should add how it could be done without fibers so that the
thread doesn't block (you remember, we added it together in your
thread_pool library more than 3 years ago).

Vicente

Antony Polukhin

unread,
Jan 13, 2014, 12:09:36 AM1/13/14
to boost@lists.boost.org List
2014/1/11 Oliver Kowalke <oliver....@gmail.com>

> 2014/1/11 Rob Stewart <robert...@comcast.net>
>
> > That isn't an issue when the conversion operator is explicit, which
> > Vicente suggested.
> >
>
> many other boost libraries use the safe_bool idiom - I believe it's not bad
> to follow this 'common' coding practice
>

I'll try to stop the 'explicit bool operator problem' discussion with this
link:
http://www.boost.org/doc/libs/1_55_0/libs/utility/doc/html/explicit_operator_bool.html

Just use the BOOST_EXPLICIT_OPERATOR_BOOL macro and let the library
maintainer cope with explicit-bool-operator/safe_bool problem :)

--
Best regards,
Antony Polukhin

Oliver Kowalke

unread,
Jan 13, 2014, 2:36:39 AM1/13/14
to boost
2014/1/13 Antony Polukhin <anto...@gmail.com>

> 2014/1/11 Oliver Kowalke <oliver....@gmail.com>
>
> > 2014/1/11 Rob Stewart <robert...@comcast.net>
> >
> > > That isn't an issue when the conversion operator is explicit, which
> > > Vicente suggested.
> > >
> >
> > many other boost libraries use the safe_bool idiom - I believe it's not
> bad
> > to follow this 'common' coding practice
> >
>
> I'll try to stop the 'explicit bool operator problem' discussion by this
> link:
>
> http://www.boost.org/doc/libs/1_55_0/libs/utility/doc/html/explicit_operator_bool.html
>
> Just use the BOOST_EXPLICIT_OPERATOR_BOOL macro and let the library
> maintainer cope with explicit-bool-operator/safe_bool problem :)
>

that's fine - thank you, I'll change the code accordingly.

Nat Goodspeed

unread,
Jan 13, 2014, 9:32:58 AM1/13/14
to bo...@lists.boost.org, boost...@lists.boost.org
On Mon, Jan 6, 2014 at 8:07 AM, Nat Goodspeed <n...@lindenlab.com> wrote:

> The review of Boost.Fiber by Oliver Kowalke begins today, Monday January
> 6th, and closes Wednesday January 15th.
>
> ---------------------------------------------------
>
> Please always state in your review whether you think the library should be
> accepted as a Boost library!

I'm heartened by the level of interest I've seen so far. I hope those
of you who have been participating in the Fiber library discussions
will submit actual reviews by Wednesday. (Thank you, Niall, I
acknowledge your review.)

I'd like to make one more request. I've seen some questions and
concerns raised. To help me properly collate results, let me quote
from [1]:

"If you identify problems along the way, please note if they are
minor, serious, or showstoppers."

I'm sorry, I should have stated that in my initial review
announcement. It seems only fair to help Oliver prioritize.

Carry on!

[1] http://www.boost.org/community/reviews.html#Comments

Vicente J. Botet Escriba

unread,
Jan 13, 2014, 1:20:18 PM1/13/14
to bo...@lists.boost.org
On 06/01/14 14:07, Nat Goodspeed wrote:
> Hi all,
>
> The review of Boost.Fiber by Oliver Kowalke begins today, Monday January
> 6th, and closes Wednesday January 15th.
>
> -----------------------------------------------------
Hi, here is my review.

I would like to have more time to review the implementation, but as
there are already serious concerns with respect to the design and
documentation, I will do it if there is a new review.
> Please always state in your review whether you think the library should be
> accepted as a Boost library!
IMO the library needs some serious modifications before being included
into Boost. So, for the time being, my vote is no, the library is not
yet ready. I'm sure that Oliver can take care of most of the major
points and that the library would be accepted after taking these points
into account.
> Additionally please consider giving feedback on the following general
> topics:
>
> - What is your evaluation of the design?
The design is sound but has some details that must be fixed before
acceptance:

* (Showstopper) The interface must at least follow the interface of the
standard thread library (C++11), and if there are some limitations, they
must be explicitly documented.
* (Serious) Any difference with respect to Boost.Thread must also be
documented and the rationale explained.
* (Minor) priority and thread_affinity should be part of the fiber
attributes, as well as the stack and allocator. This will help keep
the interface similar to the Boost.Thread one.
* (Minor) The

void thread_affinity( bool req) noexcept;

could be named set_thread_affinity to follow the standard way.

* (Minor) I suggest hiding the algorithm functionality and introducing it once you have a fiber_pool proposal that uses work stealing.

* If you insist on providing it, I suggest:
* (Minor) set_scheduling_algorithm should return the old scheduling algorithm:

algorithm * set_scheduling_algorithm( algorithm *);

and the function should be in the this_thread namespace.

* The migration interface
* (Serious) The steal_from and migrate_to functions could be grouped
into an exchange function that would do the steal and the migration at
once. This allows intrusive implementations that could avoid
delete/new. If the algorithm doesn't support fiber migration, an
exception could be thrown.
* (Minor) An alternative could be to use an enum for the algorithm
class and let the library deal with the scheduler internally.

* (Minor) The safe_bool idiom should be replaced by an explicit operator
bool on C++11 compilers.
* (Showstopper) The time-related functions should not be limited to a
specific clock.
* (Minor) The fiber_group should be removed, as its design is based on
the new C++11 move-semantics feature. Maybe it could be replaced by one
based on
http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3711.pdf
* (Serious) Element queue::value_pop(); must be added to the queues
and the interface should follow
http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3533.html.
* (Minor) Barrier could include the completion function, as in Boost.Thread
and http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3817.html.
* (Serious) Interruptible and non-interruptible fibers must be separated.
* (Serious) Intra-thread fibers should be provided (fibers that
synchronize only with fibers on the same thread).

* (Serious) future<>::then() and all the associated features must be added. http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3784.pdf


> - What is your evaluation of the implementation?
I have taken a quick look at it.
I would need much more time for a reasonable evaluation, which I
don't have :(

Like others, I would like to see performance tests on the following
variation points:

* Single-/multi-threaded fibers
* Interruptible/non-interruptible fibers
* Stealing/non-stealing schedulers


- What is your evaluation of the documentation?

(Serious) The documentation is minimal and should be improved.

* (Minor) The link to Boost.Coroutine fails, as well as the link to
Boost.Thread.
* (Serious) It must be clarified that the exceptions thrown by a fiber
function are caught by the fiber library and the program is terminated.
* (Minor) Some examples showing how the affinity can be changed by the
owner of the thread and by the thread itself would help.

* (Minor) Please don't document the thread_affinity get and set functions together.
* (Serious) The documentation must clarify the scope of the scheduler (thread specific).
* (Serious) The documentation must clarify how portable is the priority if it is specific of the scheduler.
* (Serious) You mustn't document the interface of the algorithm class that the user cannot use.
* (Showstopper) async() documentation must be added and be compatible with the C++ standard.

* (Serious) A section on how to install and how to run the tests and examples would help a lot for users who want to try the library.
It is not clear that the user must install fiber inside a Boost repository.
It is not clear in the documentation that the library is not header-only and that the user needs to build it and link with it.

* (Serious) The boost/fiber/asio files are not documented. Are these an implementation detail?

- What is your evaluation of the potential usefulness of the library?

Very useful, if the promised performance (the C10k problem) is there. A
performance benchmark must be added.
> - Did you try to use the library? With what compiler? Did you have any
> problems?

(Serious) Yes, and a section on how to install the library and how to run the tests would have helped me.

On MacOS, with the following compilers, in c++98 and c++11 modes:

darwin-4.7.1,clang-3.1,clang-3.2,darwin-4.7.2,darwin-4.8.0,darwin-4.8.1


I've run the tests.

* auto_ptr should be replaced.
* I got this compile error:
clang-darwin.compile.c++
../../../bin.v2/libs/fiber/test/test_futures_mt.test/clang-darwin-3.1xl/debug/link-static/threading-multi/test_futures_mt.o
In file included from test_futures_mt.cpp:13:
In file included from ../../../boost/fiber/all.hpp:17:
../../../boost/fiber/bounded_queue.hpp:570:29: error: void function
'push' should not return a value [-Wreturn-type]
if ( is_closed_() ) return queue_op_status::closed;
^ ~~~~~~~~~~~~~~~~~~~~~~~
../../../boost/fiber/bounded_queue.hpp:580:9: error: void function
'push' should not return a value [-Wreturn-type]
return queue_op_status::success;
^ ~~~~~~~~~~~~~~~~~~~~~~~~
2 errors generated.

The tests test_then and test_wait_for don't compile.

> - How much effort did you put into your evaluation? A glance? A quick
> reading? In-depth study?

In-depth study of the documentation.

> - Are you knowledgeable about the problem domain?
>
>
Yes.

Best,
Vicente

Gavin Lambert

unread,
Jan 13, 2014, 9:51:21 PM1/13/14
to bo...@lists.boost.org
On 10/01/2014 20:26, Quoth Oliver Kowalke:
> 2014/1/9 Vicente J. Botet Escriba <vicent...@wanadoo.fr>
>
>> The algorithm class should be called scheduler.
>
>
> or manager?

"Manager" is almost as bad as "algorithm" -- it's too generic and vague.

Whenever you've described what "algorithm" actually does you've called
it a scheduler, so isn't that the better name?