[Boost-users] Signals2 benchmark


Joren Heit

Feb 5, 2015, 10:20:47 PM
to boost...@lists.boost.org
Hi all,

New to this list, so I hope I'm at the right place, and not asking too much of you guys. Please inform me if either of those is true! ;-)

I have been working on a little toy project to implement a template-based Signal/Slot library, and now that I have a usable end-product, I want to test it against boost::signals2 to get an idea of how well it performs. Therefore, my first question is whether boost::signals2 is generally considered a high-performance library.

Second, I'm a little reluctant to build a benchmark suite myself, because I want to be fair to both myself and the other party. I don't know the signals2 library at all, let alone what its virtues and shortcomings are. Some tips on what to test exactly would therefore be great, although I'm afraid this request might qualify for "asking too much". Initial tests show that my own library can be up to 10 times faster than the boost-implementation, which makes me even more suspicious that I might not be testing the right things (but at the same time prematurely proud).

Thanks in advance and cheers!

Joren

Vicente J. Botet Escriba

Feb 6, 2015, 1:20:05 AM
to boost...@lists.boost.org
On 06/02/15 at 00:15, Joren Heit wrote:
Hi,

just a first question: is your implementation thread-safe?

Best,
Vicente
_______________________________________________
Boost-users mailing list
Boost...@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-users

Dominique Devienne

Feb 6, 2015, 2:49:54 AM
to boost-users
On Fri, Feb 6, 2015 at 12:15 AM, Joren Heit <jore...@gmail.com> wrote:
I have been working on [...] a template-based Signal/Slot library [...], I want to test it against boost::signals2 to get an idea of how well it performs.
[...]. Initial tests show that my own library can be up to 10 times faster than the boost-implementation, [...]

This benchmark [1] might be of interest. --DD

PS: In particular, note how the ones asterisked with "Library aims to be thread safe" are at the bottom.

[1] https://github.com/NoAvailableAlias/nano-signal-slot/tree/master/benchmark#performance

Joren Heit

Feb 6, 2015, 11:57:13 AM
to boost...@lists.boost.org

Ah, that's a nice one. I wonder why I haven't come across that benchmark myself.

I haven't yet made it thread-safe, but I have been wondering about this. Does it have to perform the slot calls (which can themselves be thread-unsafe) in a safe manner to be considered thread-safe? Or does it only have to be safe with respect to its own data?

Cheers,
Joren

Tim Janus

Feb 6, 2015, 11:57:34 AM
to boost...@lists.boost.org
Great reference!
But comparing thread-safe implementations with implementations that are not thread-safe seems a bit unfair to me.
A benchmark that uses the dummy_mutex policy [1] of signals2 would be very interesting.

Greetings
Tim

[1] http://www.boost.org/doc/libs/1_57_0/doc/html/signals2/rationale.html#idp430226096

Gottlob Frege

Feb 6, 2015, 12:31:24 PM
to boost...@lists.boost.org
On Fri, Feb 6, 2015 at 3:15 AM, Joren Heit <jore...@gmail.com> wrote:
> Ah, that's a nice one. I wonder why I haven't come across that benchmark
> myself.
>
> I haven't yet made it thread safe, but I have been wondering about this.
> Does it have to implement the slot-calls (which can itself be thread-unsafe)
> in a safe manner to be considered thread-safe? Or does it only have to be
> safe with respect to its own data?
>

For thread safety, the tricky part (but the part you want) is two-fold:

1. You do not want to hold a lock _while_ the slot is being called.
In general, you never want to hold a lock while calling unknown code
(ie virtual function, pointer to function, std::function,...), because
you can't be sure whether that unknown code might grab a lock itself,
call you back (ie disconnect from within the slot, etc), or whatever,
which easily leads to hard to track deadlocks.
2. If I disconnect a slot, I don't want that slot to fire after the
disconnection. Sounds simple, but with threads, what does "after"
really mean? Particularly if one thread is calling disconnect while
another thread is firing the signal? For our purposes, "after" means
that the caller, using global flags or whatever, can't tell that the
slot was called after the call to disconnect returned.

So 1 means no locks or synchronization, and 2 means you need a lock or
synchronization.

The general way to handle this is by breaking #1, but breaking it as
narrowly as possible: you still hold a lock while the slot is being
called, but it is not a lock protecting the whole list of slots, only
one protecting the currently called slot. So you can still add/remove
slots on the signal; but removing the currently running slot must take
it out of the list and then, *if the remover is not the thread running
the slot*, block until that slot is finished (otherwise the slot could
see itself being called after disconnect).
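The scheme just described can be sketched roughly as follows. This is a minimal illustration with made-up names, not boost::signals2's internals; a real implementation also has to detect disconnect-from-within-the-slot to avoid self-deadlock:

```cpp
#include <algorithm>
#include <functional>
#include <memory>
#include <mutex>
#include <vector>

struct Slot {
    std::function<void()> fn;
    std::mutex running;          // held only while this slot is executing
    bool disconnected = false;
};

class Signal {
    std::mutex listMutex_;       // protects only the list of slots
    std::vector<std::shared_ptr<Slot>> slots_;
public:
    std::shared_ptr<Slot> connect(std::function<void()> f) {
        auto s = std::make_shared<Slot>();
        s->fn = std::move(f);
        std::lock_guard<std::mutex> lk(listMutex_);
        slots_.push_back(s);
        return s;
    }
    void disconnect(const std::shared_ptr<Slot>& s) {
        {
            std::lock_guard<std::mutex> lk(listMutex_);
            slots_.erase(std::remove(slots_.begin(), slots_.end(), s),
                         slots_.end());
        }
        // Block until the slot is not running, so it cannot be observed
        // firing after disconnect() returns. (This deadlocks if called
        // from inside the slot itself -- a real implementation must
        // check for that case.)
        std::lock_guard<std::mutex> lk(s->running);
        s->disconnected = true;
    }
    void emit() {
        std::vector<std::shared_ptr<Slot>> copy;
        {
            std::lock_guard<std::mutex> lk(listMutex_);
            copy = slots_;       // snapshot under the list lock
        }
        for (auto& s : copy) {
            std::lock_guard<std::mutex> lk(s->running);  // per-slot lock only
            if (!s->disconnected)
                s->fn();
        }
    }
};
```

Note that connecting or disconnecting *other* slots never waits on a running slot; only disconnecting the currently running slot blocks.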

And there are other complications (IIRC) like not using one mutex per
slot (wasteful), etc.

Tony

Joren Heit

Feb 6, 2015, 12:45:36 PM
to boost...@lists.boost.org
Thanks Tony for the thorough reply. I had just made sure that signals can be (dis)connected by multiple threads in a thread-safe manner, but you are right that there is much more to it. This will be hard to implement I think, but I'll do my best!

One thing I don't quite grasp yet, is the following. Suppose one thread disconnects a slot while another fires a signal connected to that slot. You say that the implementation must make sure that the signal is not fired after the disconnect-call returns. But won't this be undefined behaviour, as there is no way of knowing which will grab the lock first?

Joren



Bjorn Reese

Feb 6, 2015, 1:06:48 PM
to boost...@lists.boost.org
On 02/06/2015 06:45 PM, Joren Heit wrote:

> One thing I don't quite grasp yet, is the following. Suppose one thread
> disconnects a slot while another fires a signal connected to that slot.
> You say that the implementation must make sure that the signal is not
> fired after the disconnect-call returns. But won't this be undefined
> behaviour, as there is no way of knowing which will grab the lock first?

The easiest solution would be to store the slot as a shared_ptr, and
make a copy of it (pin it) each time you call the slot.
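A sketch of that pinning idea, assuming the slot list lives behind a mutex (illustrative names, not Boost code):

```cpp
#include <functional>
#include <memory>
#include <mutex>
#include <vector>

class PinnedSignal {
    std::mutex m_;
    std::vector<std::shared_ptr<std::function<void()>>> slots_;
public:
    std::shared_ptr<std::function<void()>> connect(std::function<void()> f) {
        auto p = std::make_shared<std::function<void()>>(std::move(f));
        std::lock_guard<std::mutex> lk(m_);
        slots_.push_back(p);
        return p;
    }
    void disconnect(const std::shared_ptr<std::function<void()>>& p) {
        std::lock_guard<std::mutex> lk(m_);
        for (auto it = slots_.begin(); it != slots_.end(); ++it)
            if (*it == p) { slots_.erase(it); break; }
    }
    void emit() {
        std::vector<std::shared_ptr<std::function<void()>>> pinned;
        {
            std::lock_guard<std::mutex> lk(m_);
            pinned = slots_;    // copying the shared_ptrs pins the slots
        }
        for (auto& p : pinned)
            (*p)();             // slot object guaranteed alive here
    }
};
```

Note this only keeps the slot *object* alive during the call; by itself it does not guarantee the slot won't still run after disconnect() has returned.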

Gottlob Frege

Feb 6, 2015, 1:38:38 PM
to boost...@lists.boost.org
On Fri, Feb 6, 2015 at 12:45 PM, Joren Heit <jore...@gmail.com> wrote:
> Thanks Tony for the thorough reply. I had just made sure that signals can be
> (dis)connected by multiple threads in a thread-safe manner, but you are
> right that there is much more to it. This will be hard to implement I think,
> but I'll do my best!
>
> One thing I don't quite grasp yet, is the following. Suppose one thread
> disconnects a slot while another fires a signal connected to that slot. You
> say that the implementation must make sure that the signal is not fired
> after the disconnect-call returns. But won't this be undefined behaviour, as
> there is no way of knowing which will grab the lock first?
>
> Joren
>

Yes, either can grab the lock first, and you need to do the right
thing in either case.
If the disconnect grabs the lock first, the signal-firing, once it
obtains the lock, can't call the slot: either the slot is gone or it
has been marked null/disconnected/something.
If the signal-firing grabs the lock first, no problem.

Gottlob Frege

Feb 6, 2015, 1:41:21 PM
to boost...@lists.boost.org
On Fri, Feb 6, 2015 at 1:06 PM, Bjorn Reese <bre...@mail1.stofanet.dk> wrote:
> On 02/06/2015 06:45 PM, Joren Heit wrote:
>
>> One thing I don't quite grasp yet, is the following. Suppose one thread
>> disconnects a slot while another fires a signal connected to that slot.
>> You say that the implementation must make sure that the signal is not
>> fired after the disconnect-call returns. But won't this be undefined
>> behaviour, as there is no way of knowing which will grab the lock first?
>
>
> The easiest solution would be to store the slot as a shared_ptr, and
> make a copy of it (pin it) each time you call the slot.
>

So

Thread 1: call signal,... copy slot shared_ptr,...
Thread 2: disconnect, clean up resources, set global ptrUsedBySlot =
null (since it is no longer used, right?)
Thread 1: call slot
Thread 1: crash on null ptrUsedBySlot

Niall Douglas

Feb 6, 2015, 2:18:25 PM
to boost...@lists.boost.org
On 6 Feb 2015 at 18:45, Joren Heit wrote:

> Thanks Tony for the thorough reply. I had just made sure that signals can
> be (dis)connected by multiple threads in a thread-safe manner, but you are
> right that there is much more to it. This will be hard to implement I
> think, but I'll do my best!
>
> One thing I don't quite grasp yet, is the following. Suppose one thread
> disconnects a slot while another fires a signal connected to that slot. You
> say that the implementation must make sure that the signal is not fired
> after the disconnect-call returns. But won't this be undefined behaviour,
> as there is no way of knowing which will grab the lock first?

An atomic increment/decrement use count maybe? Slow when contended,
but possibly faster than the alternatives.

BTW you can fold a lock into a pointer by using its bottom bit as the
lock bit. I have an implementation in Boost.Spinlock if you want it.
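A rough sketch of that bottom-bit trick (illustrative only, not the Boost.Spinlock source; it relies on pointees being at least 2-byte aligned, so bit 0 of a valid pointer is always zero):

```cpp
#include <atomic>
#include <cstdint>

template <typename T>
class LockedPtr {
    std::atomic<std::uintptr_t> bits_{0};
public:
    // Publish a pointer; only safe before sharing, or while holding the lock.
    void store(T* p) { bits_.store(reinterpret_cast<std::uintptr_t>(p)); }

    // Spin until bit 0 is clear, then set it; returns the clean pointer.
    T* lock() {
        for (;;) {
            std::uintptr_t v = bits_.load(std::memory_order_relaxed);
            if ((v & 1) == 0 &&
                bits_.compare_exchange_weak(v, v | 1,
                                            std::memory_order_acquire))
                return reinterpret_cast<T*>(v);
        }
    }

    // Clear the lock bit, leaving the pointer bits intact.
    void unlock() {
        bits_.fetch_and(~std::uintptr_t(1), std::memory_order_release);
    }
};
```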

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/


Niall Douglas

Feb 6, 2015, 2:22:51 PM
to boost...@lists.boost.org
On 6 Feb 2015 at 13:41, Gottlob Frege wrote:

> > The easiest solution would be to store the slot as a shared_ptr, and
> > make a copy of it (pin it) each time you call the slot.
>
> So
>
> Thread 1: call signal,... copy slot shared_ptr,...
> Thread 2: disconnect, clean up resources, set global ptrUsedBySlot =
> null (since it is no longer used, right?)
> Thread 1: call slot
> Thread 1: crash on null ptrUsedBySlot

shared_ptr provides some atomic_exchange overloads. You could
atomically swap a slot being deleted with an empty but valid slot
type which calls nothing.
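For example, with the C++11 shared_ptr atomic access functions (a sketch of the idea with made-up names and a single global slot; a real signal would hold many):

```cpp
#include <functional>
#include <memory>

using Slot = std::function<void()>;

std::shared_ptr<Slot> g_slot =
    std::make_shared<Slot>([]{ /* real work */ });

void fire() {
    // Pin the current slot; safe even if disconnect() runs concurrently.
    std::shared_ptr<Slot> s = std::atomic_load(&g_slot);
    (*s)();   // never null: disconnect swaps in a no-op, not nullptr
}

void disconnect() {
    // Atomically replace the slot with an empty-but-valid one that
    // calls nothing; the old slot is freed when the last pin drops.
    auto noop = std::make_shared<Slot>([]{});
    std::atomic_exchange(&g_slot, noop);
}
```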

You'd see some fragmentation over time, but to be honest if the slots
array fits into an L1 cache you actually don't care. My
concurrent_unordered_map keeps a linear array of hashes and pointers
to items and simply scans them. Deletion means poking a zero into the
pointer, insertion tries to fill a zero slot before bothering with
extension. It's very quick, within 20% of a std::unordered_map.

Gottlob Frege

Feb 6, 2015, 3:34:19 PM
to boost...@lists.boost.org
On Fri, Feb 6, 2015 at 2:23 PM, Niall Douglas <s_sour...@nedprod.com> wrote:
> On 6 Feb 2015 at 13:41, Gottlob Frege wrote:
>
>> > The easiest solution would be to store the slot as a shared_ptr, and
>> > make a copy of it (pin it) each time you call the slot.
>>
>> So
>>
>> Thread 1: call signal,... copy slot shared_ptr,...
>> Thread 2: disconnect, clean up resources, set global ptrUsedBySlot =
>> null (since it is no longer used, right?)
>> Thread 1: call slot
>> Thread 1: crash on null ptrUsedBySlot
>
> shared_ptr provides some atomic_exchange overloads. You could
> atomically swap a slot being deleted with an empty but valid slot
> type which calls nothing.
>

That would work to track the "winner". But if the disconnect thread
comes in second, it still needs to wait for the slot thread to finish.
So you still need a waitable synchronization object of some kind.

Joren Heit

Feb 6, 2015, 8:11:36 PM
to boost...@lists.boost.org

Thanks for the many responses and suggestions. However, I'm not sure if everything applies to my particular application. If it does, I don't think I fully understand...

My implementation provides an Emitter class (template), rather than a signal class. Signals are types which act as template parameters to the Emitter. For example:

using Event1 = Signal<void()>;
using Event2 = Signal<void(int)>;

Emitter<Event1, Event2> em; // normally, you'd probably derive from this
em.connect<Event1>(someFunction);
em.connect<Event2>(otherFunction);
em.emit<Event1>();
em.emit<Event2>(42);

Each Signal-type has its own corresponding vector of slots defined within the Emitter. Calling connect/disconnect adds/removes a slot to/from this vector and calling emit() results in iterating over it and calling all its slots (this is a read-only operation).

I can see trouble arising when thread 1 is iterating the vector while thread 2 is modifying it. Would it be an idea to have the emitting thread
1. lock the vector,
2. make a local copy,
3. unlock, and
4. iterate the copy?
This way, the modifying thread only needs to wait for the copy being made instead of every slot being called and executed. Does that make sense at all?
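Those four steps can be sketched as follows (a stand-alone illustrative class, not the actual Emitter):

```cpp
#include <functional>
#include <mutex>
#include <vector>

class CopyOnEmit {
    std::mutex m_;
    std::vector<std::function<void(int)>> slots_;
public:
    void connect(std::function<void(int)> f) {
        std::lock_guard<std::mutex> lk(m_);
        slots_.push_back(std::move(f));
    }
    void emit(int arg) {
        std::vector<std::function<void(int)>> copy;
        {
            std::lock_guard<std::mutex> lk(m_);   // 1. lock the vector
            copy = slots_;                        // 2. make a local copy
        }                                         // 3. unlock
        for (auto& f : copy)                      // 4. iterate the copy
            f(arg);
    }
};
```

The lock is held only for the duration of the copy, so a modifying thread waits for the copy, not for the slots to run.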

Thanks again for all the help. Scott Meyers was right in his lectures about this being a helpful community! Oh, and if my formatting is screwed up, I'm truly sorry, but I'm writing this on my phone.

Cheers,
Joren

Gottlob Frege

Feb 7, 2015, 1:07:29 AM
to boost...@lists.boost.org
Yeah, you are on the right track. Making a copy of data while holding
a lock then processing the copy without the lock is often a good
strategy in threading.
However, in this case...

int * somePtr = nullptr;

void someFunction()
{
    // can somePtr change after the check but before the set?
    if (somePtr)
        *somePtr = 17;
}

void cleanupPtr()
{
    // this looks safe, but compilers and CPUs can reorder this code:
    int * tmp = somePtr;
    somePtr = nullptr;
    delete tmp;
}

void thread1()
{
    em.emit<Event1>();
}

void thread2()
{
    em.remove<Event1>(someFunction);
    // now safe to clean up (?)
    cleanupPtr();
}

Now let's say the emit and the remove are happening "at the same time", and
- Thread1: emit gets to the lock first, makes a copy, and unlocks
- Thread2: remove comes in, gets the lock, removes someFunction, returns
- Thread1: calls someFunction as part of iterating over copy of list
- Thread1: someFunction checks somePtr, sees it is non-null, great! (?)
- Thread2: after returning from remove, calls cleanupPtr
- Thread1: either writes to deleted memory, or writes to null

Threading is fun!

> Thanks again for all the help. Scott Meyers was right in his lectures about
> this being a helpful community! Oh, and if my formatting is screwed up, I'm
> truly sorry, but I'm writing this on my phone.

You are actually borderline not (or no longer) talking about boost,
but your own code/implementation. Some might call that questionable
for a boost list.
But threading is fun!

P.S. why write your own - why not use boost? Because of performance?
Was that the original point?
P.P.S. correctness is typically better than performance. I've written
the world's (second) fastest square-root. It returns 17. Not very
accurate, but very fast.

Tony

Joren Heit

Feb 7, 2015, 2:20:56 AM
to boost...@lists.boost.org

Hm I hate reasoning about edge-cases... they make life so much harder. Guess I'll have to think about this some more.

My original question was about how to benchmark my own version (fairly) against boost, hence this mailing list. As far as I was concerned, this discussion could have been over after the link to all the benchmarks. I never intended this discussion to be about thread safety. However, I can't stress enough that I'm very thankful for all the help! ;-)

The reason I wrote it in the first place was just for fun, I guess. After having used the idiom in Qt, it seemed like a nice challenge. Now that it's almost finished, I'd like to see how it holds up against the big guys. Also, it's just a little different from what I've seen of the others, with what I think is a nice syntax.

Cheers,
Joren

P.S. Is that implementation of your sqrt() available on github? Seems interesting! ;-)

Eric Prud'hommeaux

Feb 7, 2015, 3:12:48 AM
to boost...@lists.boost.org
* Tim Janus <timj...@frozen-code.de> [2015-02-06 10:21+0100]
> On 06.02.2015 at 08:49, Dominique Devienne wrote:
> >On Fri, Feb 6, 2015 at 12:15 AM, Joren Heit <jore...@gmail.com
> ><mailto:jore...@gmail.com>> wrote:
> >
> > I have been working on [...] a template-based Signal/Slot library
> > [...],I want to test it against boost::signals2 to get an idea of
> > how well it performs.
> >
> > [...]. Initial tests show that my own library can be up to 10
> > times faster than the boost-implementation, [...]
> >
> >
> >This benchmark [1] might be of interest. --DD
> >
> >PS: In particular, note how the ones asterisked with "Library aims
> >to be thread safe" are at the bottom.
> >
> >https://github.com/NoAvailableAlias/nano-signal-slot/tree/master/benchmark#performance
> >
> >
> Great reference!
> But comparing thread-safe implementations with implementations that
> are not thread-safe seems a bit unfair to me.

Is it realistic that folks would want a variant of signals that's not
thread-safe, trading some callback restrictions for performance?


> A benchmark that uses the dummy_mutex policy [1] of signals2 would
> be very interesting.
>
> Greetings
> Tim
>
> [1] http://www.boost.org/doc/libs/1_57_0/doc/html/signals2/rationale.html#idp430226096



--
-ericP

office: +1.617.599.3509
mobile: +33.6.80.80.35.59

(er...@w3.org)
Feel free to forward this message to any list for any purpose other than
email address distribution.

There are subtle nuances encoded in font variation and clever layout
which can only be seen by printing this message on high-clay paper.

Tim Janus

Feb 7, 2015, 5:05:16 AM
to boost...@lists.boost.org
On 07.02.2015 at 09:12, Eric Prud'hommeaux wrote:
> * Tim Janus <timj...@frozen-code.de> [2015-02-06 10:21+0100]
>> On 06.02.2015 at 08:49, Dominique Devienne wrote:
>>> On Fri, Feb 6, 2015 at 12:15 AM, Joren Heit <jore...@gmail.com
>>> <mailto:jore...@gmail.com>> wrote:
>>>
>>> I have been working on [...] a template-based Signal/Slot library
>>> [...],I want to test it against boost::signals2 to get an idea of
>>> how well it performs.
>>>
>>> [...]. Initial tests show that my own library can be up to 10
>>> times faster than the boost-implementation, [...]
>>>
>>>
>>> This benchmark [1] might be of interest. --DD
>>>
>>> PS: In particular, note how the ones asterisked with "Library aims
>>> to be thread safe" are at the bottom.
>>>
>>> https://github.com/NoAvailableAlias/nano-signal-slot/tree/master/benchmark#performance
>>>
>>>
>> Great reference!
>> But comparing thread-safe implementations with implementations that
>> are not thread-safe seems a bit unfair to me.
> Is it realistic that folks would want a variant of signals that's not
> thread-safe, trading some callback restrictions for performance?

Well, I think the majority of the audience won't care. But I know at
least two groups of developers who are always like: "I need the
fastest solution and can live with a lot of restrictions" - game
developers and developers of realtime programs.

>> A benchmark that uses the dummy_mutex policy [1] of signals2 would
>> be very interesting.
>>
>> Greetings
>> Tim
>>
>> [1] http://www.boost.org/doc/libs/1_57_0/doc/html/signals2/rationale.html#idp430226096
>



Niall Douglas

Feb 7, 2015, 11:19:49 AM
to boost...@lists.boost.org
On 7 Feb 2015 at 3:12, Eric Prud'hommeaux wrote:

> > But comparing thread-safe implementations with implementations that
> > are not thread-safe seems a bit unfair to me.
>
> Is it realistic that folks would want a variant of signals that's not
> thread-safe, trading some callback restrictions for performance?

I think a version which is compile-time switchable is the ideal.
However, a non-thread-safe implementation isn't particularly
interesting. It's easy to achieve high performance if you have a
giant mutex wrapping everything.

Niall Douglas

Feb 7, 2015, 11:20:10 AM
to boost...@lists.boost.org
On 7 Feb 2015 at 8:20, Joren Heit wrote:

> My original question was about how to benchmark my own version (fairly)
> against boost, hence this mailing list. As far as I was concerned, this
> discussion could have been over after the link to all the benchmarks. I
> never intended this discussion to be about thread safety. However, I can't
> stress enough that I'm very thankful for all the help! ;-)

I think we ignored your original question because the correctness of
its thread safety is more important. When, and only when, it is
demonstrated correct by unit tests is it worth thinking about
benchmarking. You'll find that to gain correctness/reliability there
are a ton of unfortunate tradeoffs that have to be made.

I think the reason we're being so helpful is because we all suspect
that signals and slots is one of those patterns which sits on the
cusp of being just simple enough to implement lock free vs just
complex enough it isn't worth implementing lock free. I suspect none
of us know the answer to that, but it's the kind of thing a certain
type of Boost programmer lies awake at night wondering.

I think we'd all also like to know a definitive answer. Perhaps,
Tony, you might consider a lock-free signals-and-slots presentation
for a future BoostCon, eh?

Klaim - Joël Lamotte

Feb 7, 2015, 3:12:09 PM
to Boost users list

This might be a slightly off-topic question, not sure, but I think it's related:
Does it seem useful to people using signals2 or a similar library to consider another kind of signal type
which would take an Executor [1] concept instance on construction (the system's executor/thread pool
by default) and would then use that executor for dispatching calls (one task pushed per listener)?
That is, when a signal is called, it pushes as many tasks into the executor as there are listeners and returns.
It makes the dispatching either "in order" if the executor is a strand, or not if it isn't.
This signal type would not make any guarantee about which thread would call the listening objects.

It seems to me that in some cases I have encountered (highly concurrent task-based systems with a lot of message dispatching
between "actor"-like objects),
this kind of model might be more interesting performance- and scalability-wise than the current mutex-based dispatching of signals2.
I didn't try to compare and measure performance though, and I might be totally wrong.
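A rough sketch of such a signal, treating an "executor" as anything callable with a task (a made-up minimal interface, not the actual Executor concept from [1]):

```cpp
#include <functional>
#include <mutex>
#include <vector>

using Task = std::function<void()>;
using Executor = std::function<void(Task)>;   // stand-in for the real concept

class ExecutorSignal {
    Executor exec_;
    std::mutex m_;
    std::vector<std::function<void(int)>> listeners_;
public:
    explicit ExecutorSignal(Executor e) : exec_(std::move(e)) {}
    void connect(std::function<void(int)> f) {
        std::lock_guard<std::mutex> lk(m_);
        listeners_.push_back(std::move(f));
    }
    // Push one task per listener into the executor, then return.
    void operator()(int arg) {
        std::vector<std::function<void(int)>> copy;
        {
            std::lock_guard<std::mutex> lk(m_);
            copy = listeners_;            // snapshot under the lock
        }
        for (auto& l : copy)
            exec_([l, arg]{ l(arg); });   // dispatch outside the lock
    }
};
```

An inline executor such as `[](Task t){ t(); }` degenerates to a synchronous signal; a strand or thread-pool executor gives the deferred, possibly unordered behaviour described.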


Michael Powell

Feb 7, 2015, 3:46:18 PM
to boost...@lists.boost.org
On Sat, Feb 7, 2015 at 2:12 PM, Klaim - Joël Lamotte <mjk...@gmail.com> wrote:
>
> This might be a slightly off-topic question, not sure, but it's related I
> think:

I could be wrong, but isn't an executor exactly what a signal is
already, i.e. when you specify signal-of-function, and connect
to/disconnect from it? The signal/event source makes the callbacks,
which I assume are stored in FIFO order.

I can't speak to callback ordering, per se. It seems like FIFO would
be a natural assumption. It would be equally interesting to specify
that strategy, allowing for LIFO, for example, or even prioritized
callbacks.

> Does it seem useful to people using signals2 or similar library to consider
> another kind of signal type
> which would take an Executor [1] concept instance on construction, system's
> executor (thread-pool)
> by default, and would then use that executor for dispatching calls? (one
> task pushed per listener).
> That is, when a signal is called, it pushes as many tasks in the executor as
> there are listeners and returns.
> It makes the dispatching either "in order" if the executor is a strand or
> not if it's not.
> This signal type would not make any guarantee about which thread would
> call the listening objects.

I've considered parallelism, concurrency, threading, and reentrancy
to be the job of the callee. At the source of the signal/event, these
concerns are assumed, and the callee must handle such potential
collisions.

> It seems to me that in some cases I have encountered (highly concurrent
> task-based systems with a lot of message dispatching
> between "actors"-like objects),
> this kind of model might be more interesting performance-scalability-wise
> than the current mutex-based dispatching of signals2.
> I didn't try to compare and measure performance though and I might be
> totally wrong.

I haven't measured for performance myself, but it at least 'works' in
a multi-threaded situation, i.e. where I may be passing messages to a
pub/sub endpoint; they are subsequently packaged for delivery to
subscribers. This works pretty well, guarding the boundaries with
appropriate mutex-based or other locking.

> [1] https://github.com/chriskohlhoff/executors

Ben Pope

Feb 8, 2015, 1:21:32 AM
to boost...@lists.boost.org
On Sunday, February 08, 2015 04:12 AM, Klaim - Joël Lamotte wrote:
>
> This might be a slightly off-topic question, not sure, but it's related
> I think:
> Does it seem useful to people using signals2 or similar library to
> consider another kind of signal type
> which would take an Executor [1] concept instance on construction,
> system's executor (thread-pool)
> by default, and would then use that executor for dispatching calls? (one
> task pushed by listener).

I think it would be nice to have an executor on the slot, too.

> That is, when a signal is called, it pushes as many tasks in the
> executor as there are listeners and returns.
> It makes the dispatching either "in order" if the executor is a strand
> or not if it's not.
> This signal type would not make any guarantee about which thread
> would call the listening objects.

An executor in the slot lets the subscriber decide which thread calls
into its code.

If the slots are primarily synchronous, you might prefer an asynchronous
signal; if they are primarily asynchronous, a synchronous signal may
suffice.

This presents you with a fun game for return values.

I think the default for both should be synchronous though, otherwise the
benchmarks look worse :P

This then starts to look a lot more like observe_on and subscribe_on
from rxcpp:
https://github.com/Reactive-Extensions/RxCpp/
http://www.introtorx.com/Content/v1.0.10621.0/15_SchedulingAndThreading.html

Ben

Klaim - Joël Lamotte

Feb 8, 2015, 9:03:01 AM
to Boost users list
On Sat, Feb 7, 2015 at 9:46 PM, Michael Powell <mwpow...@gmail.com> wrote:
On Sat, Feb 7, 2015 at 2:12 PM, Klaim - Joël Lamotte <mjk...@gmail.com> wrote:
>
> This might be a slightly off-topic question, not sure, but it's related I
> think:

I could be wrong, but isn't an executor exactly what a signal is
already, i.e. when you specify signal-of-function, and connect
to/disconnect from it? The signal/event source makes the callback,
which I assume are stored in a FIFO order. 
I can't speak to callback ordering, per se. It seems like FIFO would
be a natural assumption. It would be equally interesting to specify
that strategy, allowing for LIFO, for example, or even prioritized
callbacks.



An executor takes arbitrary tasks as input and only
guarantees that these tasks will be executed, under some constraints
defined by the specific executor type.
A signal dispatches its call and parameters to a set of observers.

Basically, when the signal calls the observers, it's triggering the execution of work/tasks.
I am then suggesting to dissociate the work done by the signal, that is, triggering the
dispatching, from the actual way the observers would be called, which would depend on the
executor kind.

If you will, it's as if the current signals2 had a hard-coded executor inside, and
some would want to pay for the flexibility of being able to specify the executor
on construction.
> Does it seem useful to people using signals2 or similar library to consider
> another kind of signal type
> which would take an Executor [1] concept instance on construction, system's
> executor (thread-pool)
> by default, and would then use that executor for dispatching calls? (one
> task pushed per listener).
> That is, when a signal is called, it pushes as many tasks in the executor as
> there are listeners and returns.
> It makes the dispatching either "in order" if the executor is a strand or
> not if it's not.
> This signal type would not make any guarantee about which thread would
> call the listening objects.

I've considered that parallelism, concurrency, threading, reentrancy
to be the job of the callee. At the source of the signal/event, these
concerns are assumed, and the callee must handle such potential
collisions.


I agree. I think it would mostly affect the scalability of the calls, not
the callees (which still have to be protected against concurrent calls, as you point out).
That is, if a signal with a lot of observers is called very often, it would dispatch a lot of
observer-call tasks which, if pushed into a thread pool, would be called ASAP without
blocking/locking the signal's call function (much).

Michael Powell

Feb 8, 2015, 10:42:36 AM
to boost...@lists.boost.org
On Sun, Feb 8, 2015 at 8:02 AM, Klaim - Joël Lamotte <mjk...@gmail.com> wrote:
>
>
> On Sat, Feb 7, 2015 at 9:46 PM, Michael Powell <mwpow...@gmail.com>
> wrote:
>>
>> On Sat, Feb 7, 2015 at 2:12 PM, Klaim - Joël Lamotte <mjk...@gmail.com>
>> wrote:
>> >
>> > This might be a slightly off-topic question, not sure, but it's related
>> > I
>> > think:
>>
>> I could be wrong, but isn't an executor exactly what a signal is
>> already, i.e. when you specify signal-of-function, and connect
>> to/disconnect from it? The signal/event source makes the callback,
>> which I assume are stored in a FIFO order.
>>
>> I can't speak to callback ordering, per se. It seems like FIFO would
>> be a natural assumption. It would be equally interesting to specify
>> that strategy, allowing for LIFO, for example, or even prioritized
>> callbacks.
>>
>
>
> An executor takes arbitrary tasks as input and only
> guarantees that these tasks will be executed, under some constraints
> defined by the specific executor type.
> A signal dispatches its call and parameters to a set of observers.

Using Boost.Signals2, for example, it is easy AFAIK to type-define
a signal and re-use that type anywhere that signal is required. So a
listener could receive a signal and subsequently do whatever it
wanted to with that message; dispatch it again, process it,
whatever...

> Basically, when the signal calls the observers, it's triggering the
> execution of work/tasks.
> I am then suggesting to dissociate the work done by the signal, that is,
> triggering the dispatching, from the actual way the observers would be
> called, which would depend on the executor kind.
>
> If you will, it's as if the current signals2 had a hard-coded executor
> inside, and some would want to pay for the flexibility of being able to
> specify the executor on construction.

I haven't looked at the signals2 code recently, apart from recent
experience using Boost.Signals2. It would be interesting to inject a
Dispatcher handler. The default might be a synchronous call; one option
might be to provide an async/futures-based dispatcher, for example.

See above; my two cents, re: potentially injecting an asynchronous dispatcher.

Klaim - Joël Lamotte

Feb 8, 2015, 12:49:12 PM
to Boost users list
On Sun, Feb 8, 2015 at 4:42 PM, Michael Powell <mwpow...@gmail.com> wrote:
On Sun, Feb 8, 2015 at 8:02 AM, Klaim - Joël Lamotte <mjk...@gmail.com> wrote:
>
>
> An executor takes arbitrary tasks as input and only
> guarantees that these tasks will be executed, under some constraints
> defined by the specific executor type.
> A signal dispatches its call and parameters to a set of observers.

Using the Boost.Signals2 for example, it is easy AFAIK to type-define
a signal, and re-use that type anywhere that signal is required. So a
listener could receive a signal, and subsequently do whatever it
wanted to with that message; dispatch it again, process it,
whatever...


Yes, but the executor would specify how the observers are notified, not how each observer's work is executed (which indeed depends on the observer's implementation).
There is an important nuance here.

Michael Powell

Feb 8, 2015, 1:13:42 PM
to boost...@lists.boost.org

Yes, I understand. See prior note concerning Dispatcher interfaces.
Not sure that there is such an animal in the current Boost.Signals2
version, per se. A default Dispatcher might be synchronous, whereas an
asynchronous Dispatcher could be provided. I have at best intermediate
knowledge where async, futures, and promises are concerned, or whether
something like that is even possible with Signals2.

Nevin Liber

Feb 8, 2015, 8:26:20 PM
to boost...@lists.boost.org
On 7 February 2015 at 10:20, Niall Douglas <s_sour...@nedprod.com> wrote:
On 7 Feb 2015 at 3:12, Eric Prud'hommeaux wrote:

> > But comparing thread-safe implementations with implementations that
> > are not thread-safe seems a bit unfair to me.
>
> Is it realistic that folks would want a variant of signals that's not
> thread-safe, trading some callback restrictions for performance?

I think a version which is compile time switchable is the ideal.

-1.

Unless you can always build everything from source (as opposed to linking against libraries built by others), this becomes a nightmare when trying to avoid ODR violations.
--
 Nevin ":-)" Liber <mailto:ne...@eviloverlord.com> (847) 691-1404

Dominique Devienne

Feb 9, 2015, 6:00:15 AM
to boost-users
On Fri, Feb 6, 2015 at 8:18 PM, Niall Douglas <s_sour...@nedprod.com> wrote:
An atomic increment/decrement use count maybe? Slow when contended,
but possibly faster than alternatives.

BTW you can fold a lock into a pointer by using its bottom bit as the
lock bit. I have an implementation in Boost.Spinlock if you want it.
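The pointer-folded lock Niall mentions could look something like the sketch below. This is a hedged illustration of the trick, not Boost.Spinlock's actual code: it relies on object pointers being at least 2-byte aligned, so bit 0 of a valid pointer is always zero and can serve as the lock flag.

```cpp
#include <atomic>
#include <cstdint>

// Sketch: a pointer and a spinlock sharing one word, with the pointer's
// bottom bit used as the lock bit (valid because allocations are at
// least 2-byte aligned, so bit 0 of a real pointer is always 0).
template <typename T>
class locked_ptr {
    std::atomic<std::uintptr_t> bits_;

public:
    explicit locked_ptr(T* p = nullptr)
        : bits_(reinterpret_cast<std::uintptr_t>(p)) {}

    // Spin until the lock bit is acquired; returns the stored pointer.
    T* lock() {
        for (;;) {
            std::uintptr_t v = bits_.load(std::memory_order_relaxed);
            if ((v & 1) == 0 &&
                bits_.compare_exchange_weak(v, v | 1,
                                            std::memory_order_acquire,
                                            std::memory_order_relaxed))
                return reinterpret_cast<T*>(v);
        }
    }

    // Clear the lock bit, publishing any writes made under the lock.
    void unlock() {
        bits_.fetch_and(~static_cast<std::uintptr_t>(1),
                        std::memory_order_release);
    }
};
```

Contended spinning is the cost, as noted above; the gain is that the lock adds zero space overhead to the pointer it guards.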

Niall Douglas

Feb 9, 2015, 8:00:49 PM
to boost...@lists.boost.org
On 8 Feb 2015 at 19:25, Nevin Liber wrote:

> > > Is it realistic that folks would want a variant of signals that's not
> > > thread-safe, trading some callback restrictions for performance?
> >
> > I think a version which is compile time switchable is the ideal.
> >
>
> -1.
>
> Unless you can always build everything from source (as opposed to linking
> against libraries built by others), this becomes a nightmare when trying to
> avoid ODR violations.

It would be a very poor implementation that had that problem, Nevin.
You'd almost certainly use namespaces or template parameters to make
the symbols of the two implementations unique.

Nevin Liber

Feb 10, 2015, 2:08:38 AM
to boost...@lists.boost.org
On 9 February 2015 at 19:00, Niall Douglas <s_sour...@nedprod.com> wrote:
On 8 Feb 2015 at 19:25, Nevin Liber wrote:

> > > Is it realistic that folks would want a variant of signals that's not
> > > thread-safe, trading some callback restrictions for performance?
> >
> > I think a version which is compile time switchable is the ideal.
> >
>
> -1.
>
> Unless you can always build everything from source (as opposed to linking
> against libraries built by others), this becomes a nightmare when trying to
> avoid ODR violations.

It would be a very poor implementation that had that problem, Nevin.

Parts of Boost already have that problem.

Suppose I'm using Boost.Fusion, and I need a 50 element Fusion vector.  If I want to change that, the documentation <http://www.boost.org/doc/libs/1_57_0/libs/fusion/doc/html/fusion/container/vector.html> says:

You may define the preprocessor constant FUSION_MAX_VECTOR_SIZE before including any Fusion header to change the default. Example:

#define FUSION_MAX_VECTOR_SIZE 20


So, without rebuilding the entire world, how do I increase the maximum size of a Fusion vector without an ODR violation??

This is a huge problem with global flags.

[Note:  I am not criticizing Boost.Fusion here, as they really didn't have a choice in a pre-variadic template world.]
 
You'd almost certainly use namespaces or template parameters to make
the symbols of the two implementations unique.

Again, how do you do this if you are using compile time switches?  Please post some sample code showing the technique you envision for users.

Michael Powell

Feb 10, 2015, 7:49:27 AM
to boost...@lists.boost.org
On Tue, Feb 10, 2015 at 1:07 AM, Nevin Liber <ne...@eviloverlord.com> wrote:
> On 9 February 2015 at 19:00, Niall Douglas <s_sour...@nedprod.com>
> wrote:
>>
>> On 8 Feb 2015 at 19:25, Nevin Liber wrote:
>>
>> > > > Is it realistic that folks would want a variant of signals that's
>> > > > not
>> > > > thread-safe, trading some callback restrictions for performance?
>> > >
>> > > I think a version which is compile time switchable is the ideal.
>> > >
>> >
>> > -1.
>> >
>> > Unless you can always build everything from source (as opposed to
>> > linking
>> > against libraries built by others), this becomes a nightmare when trying
>> > to
>> > avoid ODR violations.
>>
>> It would be a very poor implementation that had that problem, Nevin.
>
>
> Parts of Boost already have that problem.
>
> Suppose I'm using Boost.Fusion, and I need a 50 element Fusion vector. If I
> want to change that, the documentation
> <http://www.boost.org/doc/libs/1_57_0/libs/fusion/doc/html/fusion/container/vector.html>
> says:
>
> You may define the preprocessor constant FUSION_MAX_VECTOR_SIZE before
> including any Fusion header to change the default. Example:
>
> #define FUSION_MAX_VECTOR_SIZE 20

I could be wrong, but isn't Fusion (or Spirit, etc) header-only?

> So, without rebuilding the entire world, how do I increase the maximum size
> of a Fusion vector without an ODR violation??

I'm not sure what you mean, ODR violation? To appreciate where you are
coming from, I assume you have at least built boost and have worked
with it to some extent?

From experience, the Boost parts that are 'static' in nature (whether
the actual libs themselves are static or dynamic) are built once.
Anything else is usually header only, calling upon those static parts
(i.e. for threading).

Also from experience, the Boost maintainers do a pretty good job
keeping a sane compile-vs-header only boundary.

> This is a huge problem with global flags.

I don't think it's the mountain you think it is.

Once you build the Boost libs in an iteration, or application life
cycle, unless there's been a Boost patch or upgrade, you generally
never need to touch it again.

Everything else is header-only. Yes, you sometimes declare fully or
partially specialized templates (or at least, I do), but this is a
'trivial' part of using template header-only resources.

> [Note: I am not criticizing Boost.Fusion here, as they really didn't have a
> choice in a pre-variadic template world.]
>
>>
>> You'd almost certainly use namespaces or template parameters to make
>> the symbols of the two implementations unique.
>
> Again, how do you do this if you are using compile time switches? Please
> post some sample code showing the technique you envision for users.

AFAIK, compile time switches are injected via whatever project headers
you are using. In a Microsoft Visual C++ world, for example, that is
the *vcsproj* file, to my knowledge, usually via project settings. But
once these are established, your focus can (and should) be on the
problem solution / application at hand.

> --
> Nevin ":-)" Liber <mailto:ne...@eviloverlord.com> (847) 691-1404
>

Niall Douglas

Feb 10, 2015, 8:23:49 AM
to boost...@lists.boost.org
On 10 Feb 2015 at 1:07, Nevin Liber wrote:

> > > Unless you can always build everything from source (as opposed to linking
> > > against libraries built by others), this becomes a nightmare when trying
> > to
> > > avoid ODR violations.
> >
> > It would be a very poor implementation that had that problem, Nevin.
> >
>
> Parts of Boost *already* have that problem.
>
> Suppose I'm using Boost.Fusion, and I need a 50 element Fusion vector. If
> I want to change that, the documentation <
> http://www.boost.org/doc/libs/1_57_0/libs/fusion/doc/html/fusion/container/vector.html>
> says:
>
> You may define the preprocessor constant FUSION_MAX_VECTOR_SIZE before
> including any Fusion header to change the default. Example:
>
> #define FUSION_MAX_VECTOR_SIZE 20
>
>
>
> So, without rebuilding the entire world, how do I increase the maximum size
> of a Fusion vector without an ODR violation??
>
> This is a huge problem with global flags.
>
> [Note: I am not criticizing Boost.Fusion here, as they really didn't have
> a choice in a pre-variadic template world.]
>
>
> > You'd almost certainly use namespaces or template parameters to make
> > the symbols of the two implementations unique.
> >
>
> Again, how do you do this if you are using compile time switches? Please
> post some sample code showing the technique you envision for users.

Assuming you actually want an answer here and are not baiting for the
sake of it as usual ...

Your example above requires much more than a useful ABI break as was
originally being discussed. A compile time selectable option for
whether thread safety is there or not can be easily encoded into a
boolean at the end of every template parameter list, or via a compile time
thunk to two internal implementation namespaces.

If you have C++ 11, you can even let code change
FUSION_MAX_VECTOR_SIZE during compilation and get multiple Fusions
in the same translation unit if Fusion were built on top of my
BindLib framework. Each Fusion would not work with any other however
without additional bridge code. If you want a taste of the macro
programming involved for multi-ABI-in-the-same-TU use of BindLib,
check out
https://github.com/BoostGSoC13/boost.afio/blob/clang_reformat_test/include/boost/afio/config.hpp.

However what you ask for is the ability to have the compiler
regenerate code on the basis of a compile time option change. One
approach is to type erase code which needs to be regenerated such
that it can pass through precompiled parts, and thunk out via a
virtual function back into just in time compiler assembled
functionality. It's a lot of hassle, but std::function and std::bind
make it work.

Past that though, sans C++ Modules or performing magic via dynamic
JIT recompilation of clang ASTs, I don't believe it is
straightforward, no. That's a language limitation of statically
compiled languages with the evolution of toolsets we currently have.
You might find my 2014 C++ Now paper of interest here; I envisaged
exactly such a future C++ toolset which could do as you want.

Nevin Liber

Feb 10, 2015, 11:16:04 AM
to boost...@lists.boost.org
On 10 February 2015 at 06:49, Michael Powell <mwpow...@gmail.com> wrote:
>
> #define FUSION_MAX_VECTOR_SIZE 20

I could be wrong, but isn't Fusion (or Spirit, etc) header-only?

Yes, but I'm not sure what that has to do with solving the ODR violation problem.  Quite the opposite; it contributes to the problem.
 
> So, without rebuilding the entire world, how do I increase the maximum size
> of a Fusion vector without an ODR violation??

I'm not sure what you mean, ODR violation?

ODR, or One Definition Rule, basically says that you cannot redefine a function or class within a program.  See <http://en.cppreference.com/w/cpp/language/definition> for a more detailed explanation, or N4296 section [basic.def.odr] to read the specifics of what the standard says about it.  It's fundamental to C++.
 
To appreciate where you are
coming from, I assume you have at least built boost and have worked
with it to some extent?

Yes.
 
> This is a huge problem with global flags.

I don't think it's the mountain you think it is.

Oh?
 
Once you build the Boost libs in an iteration, or application life
cycle, unless there's been a Boost patch or upgrade, you generally
never need to touch it again.

Everything else is header-only. Yes, you sometimes declare fully or
partially specialized templates (or at least, I do), but this is a
'trivial' part of using template header-only resources.

> [Note:  I am not criticizing Boost.Fusion here, as they really didn't have a
> choice in a pre-variadic template world.]
>
>>
>> You'd almost certainly use namespaces or template parameters to make
>> the symbols of the two implementations unique.
>
> Again, how do you do this if you are using compile time switches?  Please
> post some sample code showing the technique you envision for users.

AFAIK, compile time switches are injected via whatever project headers
you are using. In a Microsoft Visual C++ world, for example, that is
the *vcsproj* file, to my knowledge, usually via project settings. But
once these are established, your focus can (and should) be on the
problem solution / application at hand.

I don't see how any of that addresses the problem I mentioned.  If I use a third party library that

#define FUSION_MAX_VECTOR_SIZE 20

and I need to

#define FUSION_MAX_VECTOR_SIZE 50

how does that not lead to an ODR violation?  This is still an issue even if that third party library only used Boost.Fusion in its implementation.

Michael Powell

Feb 10, 2015, 11:53:54 AM
to boost...@lists.boost.org
On Tue, Feb 10, 2015 at 10:15 AM, Nevin Liber <ne...@eviloverlord.com> wrote:
> On 10 February 2015 at 06:49, Michael Powell <mwpow...@gmail.com> wrote:
>>
>> >
>> > #define FUSION_MAX_VECTOR_SIZE 20
>>
>> I could be wrong, but isn't Fusion (or Spirit, etc) header-only?
>
>
> Yes, but I'm not sure what that has to do with solving the ODR violation
> problem. Quite the opposite; it contributes to the problem.
>
>>
>> > So, without rebuilding the entire world, how do I increase the maximum
>> > size
>> > of a Fusion vector without an ODR violation??
>>
>> I'm not sure what you mean, ODR violation?
>
> ODR, or One Definition Rule, basically says that you cannot redefine a
> function or class within a program. See
> <http://en.cppreference.com/w/cpp/language/definition> for a more detailed
> explanation, or N4296 section [basic.def.odr] to read the specifics of what
> the standard says about it. It's fundamental to C++.

ODR itself I understand. To my knowledge, you can't apply it to
template, header only libraries.

Abandon your notion of ODR. Where Fusion, Spirit, etc, are concerned,
AFAIK, it's irrelevant. These are C++ template-based, which means by
definition, their 'definition' may be different from instance to
instance, but the underlying pattern is consistent, interface driven,
etc. That's a fundamental SOLID principle.
It sounds to me as though there are secondary, or even tertiary+,
dependencies involved. You would need cooperation from your vendor, or
access to the source, in order to do what you want it seems.

HTH

> --
> Nevin ":-)" Liber <mailto:ne...@eviloverlord.com> (847) 691-1404
>

Nevin Liber

Feb 10, 2015, 1:54:15 PM
to boost...@lists.boost.org
On 10 February 2015 at 07:23, Niall Douglas <s_sour...@nedprod.com> wrote:
Your example above requires much more than a useful ABI break as was
originally being discussed. A compile time selectable option for
whether thread safety is there or not can be easily encoded into a
boolean at the end of every template parameter list,

Ah, okay.

My fault; I wasn't thinking "different type" when you said compile time selectable.

s/-1/+1

 Nevin :-)

Gottlob Frege

Feb 10, 2015, 5:09:37 PM
to boost...@lists.boost.org
On Sat, Feb 7, 2015 at 11:18 AM, Niall Douglas
<s_sour...@nedprod.com> wrote:
>
> that signals and slots is one of those patterns which sits on the
> cusp of being just simple enough to implement lock free vs just
> complex enough it isn't worth implementing lock free. I suspect none
> of us know the answer to that, but it's the kind of thing a certain
> type of Boost programmer lies awake at night wondering.
>
> I think we'd all also like to know a definitive answer. Perhaps Tony
> you might consider a lock free signals and slots presentation for a
> future BoostCon eh?
>

I like that idea (and I've been looking for topics...). I've seen
this problem tackled too many times (with ad hoc solutions); it was
already on my list of "things to write once and for all".
But I hadn't considered it for BoostCon - that might actually be a
reason that would get me to finally write it.

It would, of course, not be completely lock-free - as mentioned, some
waiting is required in certain situations. But I think it could be
mostly lock-free (similar to non-allocating promise/future - actually
extremely similar, I think; i.e. my gut instinct without digging into
it).

> Niall
>

Thanks!

Gottlob Frege

Feb 10, 2015, 5:22:25 PM
to boost...@lists.boost.org
On Tue, Feb 10, 2015 at 11:53 AM, Michael Powell <mwpow...@gmail.com> wrote:
>
> ODR itself I understand. To my knowledge, you can't apply it to
> template, header only libraries.
>
> Abandon your notion of ODR. Where Fusion, Spirit, etc, are concerned,
> AFAIK, it's irrelevant. These are C++ template-based, which means by
> definition, their 'definition' may be different from instance to
> instance, but the underlying pattern is consistent, interface driven,
> etc. That's a fundamental SOLID principle.
>

1. I assure you Nevin understands ODR (as much as anyone understands
C++ - no one is an expert). Let's give him that much.
2. Fusion/Spirit/templates/... maybe, but in general "header only"
doesn't mean no ODR issues. The answer really is "it depends".

ie

template <typename T> struct SboContainer
{
    T sbo[SBO_SMALL_AMOUNT];
    T* more;
    ...
};

Imagine the above is a template for a container that uses a "small
buffer optimization" - for a small number of elements, they are local.
For more elements, it allocates.
Now imagine that SBO_SMALL_AMOUNT is a #define.
Now imagine that foo.cpp and bar.cpp both use SboContainer<int>, but
foo.cpp #defines SBO_SMALL_AMOUNT differently than bar.cpp (before
#including SboContainer.hpp).
Now imagine that foo.cpp passes a SboContainer<int> & to bar.cpp...
Bad things happen. ODR violation, undefined behaviour, cats getting
pregnant, etc.

Conversely, if we instead do

template <typename T, size_t BuffSize = SBO_SMALL_AMOUNT> struct SboContainer
{
    T sbo[BuffSize];
    T* more;
    ...
};

now SboContainer<int> from foo.cpp and SboContainer<int> from bar.cpp
may actually be different *types*, and instead of a runtime crash we
get a compile/link time error. Yay.

Parts of Boost have these ODR issues if you are not careful. Other
parts don't. So you need to "do it right" when designing/implementing
the API.
It sounds like Nigel has convinced Nevin that for a hypothetical
thread-safe + non-thread-safe "Signals3", we could/would do it the
right way.


Tony

Niall Douglas

Feb 10, 2015, 9:51:59 PM
to boost...@lists.boost.org
On 10 Feb 2015 at 17:22, Gottlob Frege wrote:

> Parts of Boost have these ODR issues if you are not careful. Other
> parts don't. So you need to "do it right" when designing/implementing
> the API.

GCC 5.0 is going to start optimising on the basis of ODR, as in code
which breaks ODR gets optimised in undefined behaviour ways. Thinking
about my own code where I routinely have violated ODR in source files
for convenience under the assumption that source files have limited
interactions with other source files, this could turn into a real
nest of vipers akin to aliasing bugs.

Indeed only last night I violated ODR! FreeBSD's sysctl headers which
represent the kernel API most unfortunately define a struct thread.
This definition causes ambiguity with std::thread and boost::thread,
so to work around it I wrapped the BSD sysctl headers with:

#define thread freebsd_thread
#include <sys/types.h>
#include <sys/sysctl.h>
#include <sys/user.h>
#undef thread

Probably safe, but still an ODR violation :(. That's how easily a
large mature C++ code base can become riddled with ODR violations.

> It sounds like Nigel has convinced Nevin that for a hypothetical
> thread-safe + non-thread-safe "Signals3", we could/would do it the
> right way.

Nigel? Did we or did we not work together in the same office and team
for ten and a half months??? :)

Michael Powell

Feb 11, 2015, 9:34:41 AM
to boost...@lists.boost.org
On Tue, Feb 10, 2015 at 8:51 PM, Niall Douglas
<s_sour...@nedprod.com> wrote:
> On 10 Feb 2015 at 17:22, Gottlob Frege wrote:
>
>> Parts of Boost have these ODR issues if you are not careful. Other
>> parts don't. So you need to "do it right" when designing/implementing
>> the API.
>
> GCC 5.0 is going to start optimising on the basis of ODR, as in code
> which breaks ODR gets optimised in undefined behaviour ways. Thinking
> about my own code where I routinely have violated ODR in source files
> for convenience under the assumption that source files have limited
> interactions with other source files, this could turn into a real
> nest of vipers akin to aliasing bugs.
>
> Indeed only last night I violated ODR! FreeBSD's sysctl headers which
> represent the kernel API most unfortunately define a struct thread.
> This definition causes ambiguity with std::thread and boost::thread,
> so to work around it I wrapped the BSD sysctl headers with:
>
> #define thread freebsd_thread
> #include <sys/types.h>
> #include <sys/sysctl.h>
> #include <sys/user.h>
> #undef thread

Here it's clear what we're talking about.

Apologies to the list for my confusion.

> Probably safe, but still an ODR violation :(. That's how easily a
> large mature C++ code base can become riddled with ODR violations.
>
>> It sounds like Nigel has convinced Nevin that for a hypothetical
>> thread-safe + non-thread-safe "Signals3", we could/would do it the
>> right way.
>
> Nigel? Did we or did we not work together in the same office and team
> for ten and a half months??? :)
>
> Niall
>
> --
> ned Productions Limited Consulting
> http://www.nedproductions.biz/
> http://ie.linkedin.com/in/nialldouglas/
>
>
>

Gottlob Frege

Feb 11, 2015, 12:17:33 PM
to boost...@lists.boost.org
Wow, I wish I could say it was auto-correct. Funny thing is that I
reread that sentence a couple of times before sending it, because
something didn't seem right, but I thought it was the grammar. :-( I
can only say I've been getting about the same amount of sleep as you
lately, and haven't been thinking straight. Sorry!

Timothy,
I mean Tony.

Gavin Lambert

Feb 22, 2015, 6:41:12 PM
to boost...@lists.boost.org
On 11/02/2015 05:15, Nevin Liber wrote:
> I don't see how any of that addresses the problem I mentioned. If I use
> a third party library that
>
> #define FUSION_MAX_VECTOR_SIZE 20
>
> and I need to
>
> #define FUSION_MAX_VECTOR_SIZE 50
>
> how does that not lead to an ODR violation? This is still an issue even
> if that third party library only used Boost.Fusion in its implementation.

Yes and no. If a third party used Boost.Fusion in its implementation,
*and* Boost.Fusion is a header-only library, then such usage is safe
provided that the third party library is consumed only as a DLL / shared
object file, not as a static library.

But change any one of those things:
1. linking statically rather than dynamically
2. having both parties link dynamically to a Boost.Fusion DLL/SO that
has the same name but was compiled with different settings
3. having the library expose Boost.Fusion in its own API
and you're setting yourself up for an ODR mess.

(Of course, linking the third-party library dynamically with a C++ API
has its own can of worms regarding compiler and CRT, as discussed in
another thread.)

Gavin Lambert

Feb 22, 2015, 6:49:26 PM
to boost...@lists.boost.org
On 11/02/2015 11:22, Gottlob Frege wrote:
> It sounds like Nigel has convinced Nevin that for a hypothetical
> thread-safe + non-thread-safe "Signals3", we could/would do it the
> right way.

FWIW, existing Signals2 already does it the "right way", if I'm
following the conversation properly.

By default Signals2 gives you thread-safe signals, but if you want
non-thread-safe signals you merely need to supply a dummy mutex template
parameter. Thus both implementations can co-exist without any ODR
violations or magic #defines. (And you can use some alternate mutex
type if you wish, which I have found useful on occasion.)
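The mechanism Gavin describes can be sketched with a self-contained minimal signal (my own toy code, not Signals2's internals; in real Signals2 the knob is the `signals2::keywords::mutex_type` parameter of `signal_type`, with `signals2::dummy_mutex` as the no-op option):

```cpp
#include <functional>
#include <mutex>
#include <utility>
#include <vector>

// No-op Lockable: substituting it compiles the locking away entirely.
struct dummy_mutex {
    void lock() {}
    void unlock() {}
};

// Minimal signal with the mutex as a template parameter. The thread-safe
// and non-thread-safe variants are distinct types, so both can coexist
// in one program without ODR trouble or magic #defines.
template <typename Signature, typename Mutex = std::mutex>
class signal;

template <typename R, typename... Args, typename Mutex>
class signal<R(Args...), Mutex> {
public:
    void connect(std::function<R(Args...)> slot) {
        std::lock_guard<Mutex> guard(mutex_);
        slots_.push_back(std::move(slot));
    }
    void operator()(Args... args) {
        std::lock_guard<Mutex> guard(mutex_);
        for (auto& s : slots_) s(args...);
    }

private:
    Mutex mutex_;
    std::vector<std::function<R(Args...)>> slots_;
};
```

Here `signal<void(int)>` locks a real mutex on every connect and invocation, while `signal<void(int), dummy_mutex>` is the single-threaded flavour at zero locking cost.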

It's possible that this limits some further performance optimisations
that could be done if you know that the mutex is a no-op -- but if
someone cared to make those optimisations it could be done easily enough
as a template specialisation without affecting anything else.

(Having said that, it could be an interesting exercise to make a
Signals3 that uses non-blocking atomic operations instead of mutexes
where feasible. I'm not sure how much benefit there would be in
real-world software though.)