I have been working on [...] a template-based Signal/Slot library [...], and I want to test it against boost::signals2 to get an idea of how well it performs.
[...]. Initial tests show that my own library can be up to 10 times faster than the Boost implementation, [...]
Ah, that's a nice one. I wonder why I haven't come across that benchmark myself.
I haven't yet made it thread safe, but I have been wondering about this. Does it have to execute the slot calls (which may themselves be thread-unsafe) in a safe manner to be considered thread-safe? Or does it only have to be safe with respect to its own data?
Cheers,
Joren
Thanks for the many responses and suggestions. However, I'm not sure if everything applies to my particular application. If it does, I don't think I fully understand...
My implementation provides an Emitter class (template), rather than a signal class. Signals are types which act as template parameters to the Emitter. For example:
using Event1 = Signal<void()>;
using Event2 = Signal<void(int)>;
Emitter<Event1, Event2> em; // normally, you'd probably derive from this
em.connect<Event1>(someFunction);
em.connect<Event2>(otherFunction);
em.emit<Event1>();
em.emit<Event2>(42);
Each Signal-type has its own corresponding vector of slots defined within the Emitter. Calling connect/disconnect adds/removes a slot to/from this vector and calling emit() results in iterating over it and calling all its slots (this is a read-only operation).
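A minimal sketch of what such an Emitter could look like internally (this is illustrative only, not the actual library; names are hypothetical, and it assumes all Signal types in the parameter pack are distinct):

```cpp
#include <functional>
#include <tuple>
#include <utility>
#include <vector>

// Hypothetical Signal tag type; the real library's definition may differ.
template <typename Fn>
struct Signal {
    using slot_type = std::function<Fn>;
};

// Minimal Emitter sketch: one vector of slots per Signal type.
template <typename... Signals>
class Emitter {
public:
    template <typename S, typename F>
    void connect(F&& f) {
        std::get<std::vector<typename S::slot_type>>(slots_)
            .emplace_back(std::forward<F>(f));
    }

    template <typename S, typename... Args>
    void emit(Args&&... args) {
        // Read-only iteration over the slots registered for S.
        for (auto& slot : std::get<std::vector<typename S::slot_type>>(slots_))
            slot(args...);
    }

private:
    std::tuple<std::vector<typename Signals::slot_type>...> slots_;
};
```

The per-type lookup here relies on `std::get<T>` over the tuple, which is why each Signal type may only appear once in the pack.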
I can see trouble arising when thread 1 is iterating the vector while thread 2 is modifying it. Would it be an idea to have the emitting thread
1. lock the vector,
2. make a local copy,
3. unlock, and
4. iterate the copy?
This way, the modifying thread only needs to wait for the copy to be made instead of for every slot to be called and executed. Does that make sense at all?
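In code, the copy-then-iterate idea above might look something like this (a minimal sketch with illustrative names, not the library's actual API):

```cpp
#include <functional>
#include <mutex>
#include <vector>

// Sketch of the copy-then-iterate emit strategy:
// lock, copy, unlock, then call the slots from the local copy.
class SafeEmitter {
public:
    void connect(std::function<void(int)> slot) {
        std::lock_guard<std::mutex> lock(mutex_);
        slots_.push_back(std::move(slot));
    }

    void emit(int value) {
        // Steps 1-3: the lock is held only while the copy is made.
        std::vector<std::function<void(int)>> local;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            local = slots_;
        }
        // Step 4: iterate the copy; a modifying thread only ever
        // waits for the copy above, never for the slot calls.
        for (auto& slot : local)
            slot(value);
    }

private:
    std::mutex mutex_;
    std::vector<std::function<void(int)>> slots_;
};
```

One caveat worth noting: a slot disconnected concurrently may still be invoked once more from a stale copy, which is one of the edge cases a fully thread-safe design has to decide about.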
Thanks again for all the help. Scott Meyers was right in his lectures about this being a helpful community! Oh, and if my formatting is screwed up, I'm truly sorry, but I'm writing this on my phone.
Cheers,
Joren
Hm I hate reasoning about edge-cases... they make life so much harder. Guess I'll have to think about this some more.
My original question was about how to benchmark my own version (fairly) against boost, hence this mailing list. As far as I was concerned, this discussion could have been over after the link to all the benchmarks. I never intended this discussion to be about thread safety. However, I can't stress enough that I'm very thankful for all the help! ;-)
The reason I wrote it in the first place was just for fun, I guess. After having used the idiom in Qt, it seemed like a nice challenge. Now that it's almost finished, I'd like to see how it holds up against the big guys. Also, it's just a little different from what I've seen of the others, with what I think is a nice syntax.
Cheers,
Joren
P.S. Is that implementation of your sqrt() available on github? Seems interesting! ;-)
I could be wrong, but isn't an executor exactly what a signal already
is, i.e. when you specify signal-of-function, and connect
to/disconnect from it? The signal/event source makes the callbacks,
which I assume are stored in FIFO order.
I can't speak to callback ordering, per se. It seems like FIFO would
be a natural assumption. It would be equally interesting to specify
that strategy, allowing for LIFO, for example, or even prioritized
callbacks.
> Does it seem useful to people using signals2 or a similar library to
> consider another kind of signal type which would take an Executor [1]
> concept instance on construction, the system's executor (thread-pool)
> by default, and would then use that executor for dispatching calls
> (one task pushed per listener)?
> That is, when a signal is called, it pushes as many tasks into the
> executor as there are listeners and returns.
> It makes the dispatching either "in order" if the executor is a strand
> or not if it's not.
> This signal type would not make any guarantee about which thread would
> call the listening objects.
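Reduced to a minimal sketch, the quoted proposal might look like this (the Executor here is a hypothetical stand-in for the concept in the linked proposal, cut down to a single post() operation; all names are illustrative):

```cpp
#include <functional>
#include <vector>

// Trivial executor: runs each posted task immediately on the caller's
// thread. A thread-pool or strand executor would queue instead.
struct InlineExecutor {
    void post(std::function<void()> task) { task(); }
};

// The signal only *posts* one task per listener and returns; the
// executor decides where and when the slots actually run.
template <typename Executor>
class ExecutorSignal {
public:
    explicit ExecutorSignal(Executor ex) : ex_(ex) {}

    void connect(std::function<void(int)> slot) {
        slots_.push_back(std::move(slot));
    }

    void operator()(int value) {
        // One task pushed per listener; this call does not wait for them.
        for (auto& slot : slots_)
            ex_.post([slot, value] { slot(value); });
    }

private:
    Executor ex_;
    std::vector<std::function<void(int)>> slots_;
};
```

With a strand executor the tasks would run in connection order; with a plain thread-pool executor no ordering (or calling thread) is guaranteed, exactly as the quoted text describes.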
I've considered parallelism, concurrency, threading, and reentrancy
to be the job of the callee. At the source of the signal/event, these
concerns are assumed, and the callee must handle such potential
collisions.
> It seems to me that in some cases I have encountered (highly concurrent
> task-based systems with a lot of message dispatching
> between "actors"-like objects),
> this kind of model might be more interesting performance-scalability-wise
> than the current mutex-based dispatching of signals2.
> I didn't try to compare and measure performance though and I might be
> totally wrong.
I haven't measured for performance myself, but it at least 'works' in
a multi-threaded situation, i.e. where I may be passing messages to a
pub/sub endpoint, which are subsequently packaged for delivery to
subscribers. This works pretty well, guarding the boundaries with
appropriate mutex-based or other locking.
> [1] https://github.com/chriskohlhoff/executors
I think it would be nice to have an executor on the slot, too.
> That is, when a signal is called, it pushes as many tasks into the
> executor as there are listeners and returns.
> It makes the dispatching either "in order" if the executor is a strand
> or not if it's not.
> This signal type would not make any guarantee about which thread
> would call the listening objects.
An executor in the slot lets the subscriber decide which thread calls
into its code.
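A minimal sketch of what a per-slot executor could look like (the Executor interface here is a hypothetical simplification, not the actual proposal's concept; all names are illustrative):

```cpp
#include <functional>
#include <vector>

// Hypothetical executor interface: each subscriber supplies the
// executor that will run its callback, so the subscriber decides
// which thread calls into its code.
struct Executor {
    virtual void post(std::function<void()> task) = 0;
    virtual ~Executor() = default;
};

// Runs tasks immediately on the emitting thread.
struct InlineExecutor : Executor {
    void post(std::function<void()> task) override { task(); }
};

class SlotExecutorSignal {
public:
    void connect(Executor& ex, std::function<void(int)> slot) {
        slots_.push_back({&ex, std::move(slot)});
    }

    void operator()(int value) {
        // Each slot is dispatched through its own executor.
        for (auto& s : slots_)
            s.ex->post([cb = s.slot, value] { cb(value); });
    }

private:
    struct Entry { Executor* ex; std::function<void(int)> slot; };
    std::vector<Entry> slots_;
};
```

A subscriber that wants to be called on, say, its own event loop would pass an executor that posts into that loop instead of InlineExecutor.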
If the slots are primarily synchronous, you might prefer an asynchronous
signal; if they are primarily asynchronous, a synchronous signal may
suffice.
This presents you with a fun game for return values.
I think the default for both should be synchronous though, otherwise the
benchmarks look worse :P
This then starts to look a lot more like observe_on and subscribe_on
from rxcpp:
https://github.com/Reactive-Extensions/RxCpp/
http://www.introtorx.com/Content/v1.0.10621.0/15_SchedulingAndThreading.html
Ben
On Sat, Feb 7, 2015 at 2:12 PM, Klaim - Joël Lamotte <mjk...@gmail.com> wrote:
>
> This might be a slightly off-topic question, not sure, but it's related I
> think:
Using Boost.Signals2, for example, it is easy AFAIK to typedef
a signal and re-use that type anywhere that signal is required. So a
listener could receive a signal, and subsequently do whatever it
wanted with that message: dispatch it again, process it,
whatever...
> Basically, when the signal calls the observers, it's triggering the
> execution of work/tasks.
> I am then suggesting to dissociate the work done by the signal, that
> is, triggering the dispatching, from the actual way the observers
> would be called, which would depend on the executor kind.
>
> If you will, it's as if the current signals2 had a hard-coded executor
> inside, and some would want to pay for the flexibility of being able
> to specify the executor on construction.
I haven't looked at the signals2 code recently, except from recent
experience using Boost.Signals2. It would be interesting to inject a
Dispatcher handler. Default might be a synchronous call; one option
might be to provide an async/futures based dispatcher, for example.
See above; my two cents re: potentially injecting an asynchronous dispatcher.
On Sun, Feb 8, 2015 at 8:02 AM, Klaim - Joël Lamotte <mjk...@gmail.com> wrote:
>
>
> An executor takes arbitrary tasks as input and only
> guarantees that these tasks will be executed, under some constraints
> defined by the specific executor type.
> A signal dispatches its call and parameters to a set of observers.
Yes, I understand. See my prior note concerning Dispatcher interfaces.
I'm not sure there is such an animal in the current Boost.Signals2
version, per se. A default Dispatcher might be synchronous, whereas an
asynchronous Dispatcher could be provided. My knowledge of async,
futures, and promises is at best intermediate, so I'm not sure whether
something like that is even possible with Signals2.
On 7 Feb 2015 at 3:12, Eric Prud'hommeaux wrote:
> > But comparing thread-safe implementations with implementations that
> > are not thread-safe seems a bit unfair to me.
>
> Is it realistic that folks would want a variant of signals that's not
> thread-safe, trading some callback restrictions for performance?
I think a version which is compile time switchable is the ideal.
An atomic increment/decrement use count, maybe? Slow when contended,
but possibly faster than the alternatives.
BTW you can fold a lock into a pointer by using its bottom bit as the
lock bit. I have an implementation in Boost.Spinlock if you want it.
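For illustration, a minimal sketch of the lock-in-the-bottom-bit idea (this is not the actual Boost.Spinlock code; it assumes T has alignment of at least 2, so the low bit of any valid pointer is always free):

```cpp
#include <atomic>
#include <cstdint>

// A pointer and its spinlock folded into one word: bit 0 is the lock
// bit, the remaining bits are the pointer value.
template <typename T>
class LockedPtr {
public:
    // Spin until the lock bit can be set; returns the stored pointer.
    T* lock() {
        std::uintptr_t v;
        do {
            // Expected value: current bits with the lock bit cleared.
            v = bits_.load(std::memory_order_relaxed) & ~std::uintptr_t(1);
        } while (!bits_.compare_exchange_weak(v, v | 1,
                                              std::memory_order_acquire));
        return reinterpret_cast<T*>(v);
    }

    // Store the (possibly updated) pointer; clearing bit 0 releases.
    void unlock(T* p) {
        bits_.store(reinterpret_cast<std::uintptr_t>(p),
                    std::memory_order_release);
    }

private:
    std::atomic<std::uintptr_t> bits_{0};
};
```

The pure spin loop is only a sketch; a production version would back off under contention.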
On 8 Feb 2015 at 19:25, Nevin Liber wrote:
> > > Is it realistic that folks would want a variant of signals that's not
> > > thread-safe, trading some callback restrictions for performance?
> >
> > I think a version which is compile time switchable is the ideal.
> >
>
> -1.
>
> Unless you can always build everything from source (as opposed to linking
> against libraries built by others), this becomes a nightmare when trying to
> avoid ODR violations.
It would be a very poor implementation that had that problem, Nevin.
> You may define the preprocessor constant FUSION_MAX_VECTOR_SIZE before
> including any Fusion header to change the default. Example:
> #define FUSION_MAX_VECTOR_SIZE 20
You'd almost certainly use namespaces or template parameters to make
the symbols of the two implementations unique.
>
> #define FUSION_MAX_VECTOR_SIZE 20
I could be wrong, but isn't Fusion (or Spirit, etc) header-only?
> So, without rebuilding the entire world, how do I increase the maximum size
> of a Fusion vector without an ODR violation??
I'm not sure what you mean by an ODR violation here. To appreciate
where you are coming from: I assume you have at least built Boost and
have worked with it to some extent?
> This is a huge problem with global flags.
I don't think it's the mountain you think it is.
Once you build the Boost libs for an iteration or application life
cycle, you generally never need to touch them again unless there's
been a Boost patch or upgrade.
Everything else is header-only. Yes, you sometimes declare fully or
partially specialized templates (or at least, I do), but this is a
'trivial' part of using template header-only resources.
> [Note: I am not criticizing Boost.Fusion here, as they really didn't have a
> choice in a pre-variadic template world.]
>
>>
>> You'd almost certainly use namespaces or template parameters to make
>> the symbols of the two implementations unique.
>
> Again, how do you do this if you are using compile time switches? Please
> post some sample code showing the technique you envision for users.
AFAIK, compile time switches are injected via whatever project headers
you are using. In a Microsoft Visual C++ world, for example, that is
the *vcxproj* file, to my knowledge, usually via project settings. But
once these are established, your focus can (and should) be on the
problem solution / application at hand.
Your example above requires much more than a useful ABI break, as was
originally being discussed. A compile-time selectable option for
whether thread safety is there or not can easily be encoded into a
boolean at the end of every template parameter list.
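For illustration, such a boolean parameter could look like the following (a minimal sketch with hypothetical names, not an actual signals2 interface). Because the two variants are distinct types with distinct mangled symbols, mixing them across translation units cannot cause an ODR violation:

```cpp
#include <functional>
#include <mutex>
#include <type_traits>
#include <vector>

// No-op stand-in used when ThreadSafe == false; satisfies BasicLockable.
struct NullMutex {
    void lock() {}
    void unlock() {}
};

template <bool ThreadSafe>
class BasicSignal {
    using mutex_type = std::conditional_t<ThreadSafe, std::mutex, NullMutex>;

public:
    void connect(std::function<void(int)> slot) {
        std::lock_guard<mutex_type> lock(mutex_);
        slots_.push_back(std::move(slot));
    }

    void operator()(int value) {
        std::lock_guard<mutex_type> lock(mutex_);
        for (auto& slot : slots_)
            slot(value);
    }

private:
    mutex_type mutex_;                            // empty when unsafe
    std::vector<std::function<void(int)>> slots_;
};

using Signal = BasicSignal<true>;         // thread-safe by default
using UnsafeSignal = BasicSignal<false>;  // opt-in fast path
```

The locking in the unsafe variant compiles down to nothing, so the performance trade-off is selected per signal type rather than by a global preprocessor flag.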