core.async and performance


kandre

Nov 28, 2013, 11:08:52 PM
to clo...@googlegroups.com
Hi there,
I've started playing with core.async, but I'm not sure I'm using it the way it was intended.
Running a simple benchmark with two go blocks (one writing events to a channel, the other reading them out) seems quite slow:

;; assumes (require '[clojure.core.async :refer [chan go >! <! <!! thread close! put!]])
(time (let [c (chan 100) stop (chan)]
  (go 
    (dotimes [i 100000]
      (>! c i)))
  (go
    (while (<! c))
    (>! stop true))
  (<!! stop)))

"Elapsed time: 1226.072003ms"

I presume the way I am using core.async is fundamentally flawed, so I'd be grateful if someone would point it out to me.

Cheers
Andreas

Thomas Heller

Nov 29, 2013, 4:09:56 AM
to clo...@googlegroups.com
Hey,

I'm actually surprised you get to "stop" at all. You send a couple of items onto the channel but don't close it, so the second go block may park forever waiting for more.

I'm far from an expert in core.async, but I think the solution is to close! the channel. As a suggestion: go blocks themselves return channels that "block" until the body completes, so you could write this as:

(time
 (let [c (chan 100)]
   (go 
    (dotimes [i 100000]
      (>! c i))
    (close! c))
   ;; go and wait for its result
   (<!! (go
         (while (<! c))
         :done))))

HTH,
/thomas

Sean Corfield

Nov 29, 2013, 11:40:34 AM
to clo...@googlegroups.com
On Fri, Nov 29, 2013 at 1:09 AM, Thomas Heller <th.h...@gmail.com> wrote:
> I'm actually surprised you get to "stop" at all. You send a couple of
> items onto the channel but don't close it, so the second go block may
> park forever waiting for more.

Indeed. When I tried Andreas' code in the REPL, it didn't terminate.

> I'm far from an expert in core.async, but I think the solution is to
> close! the channel. As a suggestion: go blocks themselves return
> channels that "block" until the body completes, so you could write this as:

Your code completes in around 330ms for me.
--
Sean A Corfield -- (904) 302-SEAN
An Architect's View -- http://corfield.org/
World Singles, LLC. -- http://worldsingles.com/

"Perfection is the enemy of the good."
-- Gustave Flaubert, French realist novelist (1821-1880)

Thomas Heller

Nov 29, 2013, 12:04:31 PM
to clo...@googlegroups.com
Ah, I forgot that the core.async folks mentioned that if you want performance, it's best to use real threads.

(time
 (let [c (chan 100)]
   (thread
    (dotimes [i 100000]
      (>!! c i))
    (close! c))
   (prn (<!! (thread (loop [i nil]
                       (if-let [x (<!! c)]
                         (recur x)
                         i)))))))


finishes in about 100ms, which seems reasonable; you just can't have too many of those threads open.




Timothy Baldridge

Nov 29, 2013, 12:16:51 PM
to clo...@googlegroups.com
This is all good advice. Also notice that these examples don't really match real-life use cases of core.async. Here you have only two threads, and the execution time is dominated by message passing. In most situations you'll have dozens (or hundreds) of gos, with actual work being done in each logical thread. In those cases, I highly doubt the performance of core.async will be the bottleneck.
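To make that point concrete, here is a minimal sketch (invented numbers and a hypothetical busy-work helper, not from the thread) of a benchmark where each message carries real work, spread across a hundred go blocks, so channel overhead is amortized instead of dominating:

(require '[clojure.core.async :as async :refer [chan go <! >! <!! close!]])

(defn busy-work [x]
  ;; stand-in for real per-message work
  (reduce + (map #(* % %) (range (mod x 1000)))))

(time
 (let [in  (chan 100)
       out (chan 100)
       workers (doall
                (for [_ (range 100)]
                  (go (loop []
                        (when-let [v (<! in)]
                          (>! out (busy-work v))
                          (recur))))))]
   (go (dotimes [i 100000] (>! in i))
       (close! in))
   (go (doseq [w workers] (<! w))  ;; all workers done -> close out
       (close! out))
   (<!! (go (loop [n 0]
              (if (<! out) (recur (inc n)) n))))))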

But give it a try on a real project and tell us how it goes.

Timothy 





--
“One of the main causes of the fall of the Roman Empire was that–lacking zero–they had no way to indicate successful termination of their C programs.”
(Robert Firth)

kandre

Nov 29, 2013, 4:13:16 PM
to clo...@googlegroups.com
Thanks for all the replies. I accidentally left out the close! when I contrived the example. I am using core.async for a discrete event simulation system. There are hundreds of go blocks, each doing little but putting a sequence of events onto a channel, and one go block taking these events and advancing the time, similar to simpy.readthedocs.org/

The basic one-car example under the previous link executes about 10 times faster than the same example using core.async.

Ben Mabey

Nov 29, 2013, 5:52:22 PM
to clo...@googlegroups.com
On Fri Nov 29 14:13:16 2013, kandre wrote:
> Thanks for all the replies. I accidentally left out the close! when I contrived the example. I am using core.async for a discrete event simulation system. There are hundreds of go blocks, each doing little but putting a sequence of events onto a channel, and one go block taking these events and advancing the time, similar to simpy.readthedocs.org/
>
> The basic one-car example under the previous link executes about 10 times faster than the same example using core.async.
>

Hi Andreas,
I've been using core.async for DES as well since I think the
process-based approach is useful. I could try doing the same
simulation you're attempting to see how my approach compares
speed-wise. Are you talking about the car wash or the gas station
simulation? Posting a gist of what you have will be helpful so I can
use the same parameters.

-Ben




kandre

Nov 29, 2013, 6:01:08 PM
to clo...@googlegroups.com
I think I can provide you with a little code snippet.
I am talking about the very basic car example (driving->parking->driving). Running the sim using core.async takes about 1s for 10^5 steps, whereas the simpy version takes less than 1s for 10^6 iterations on my VM.
Cheers
Andreas

kandre

Nov 29, 2013, 7:04:59 PM
to clo...@googlegroups.com
Here is the gist: https://gist.github.com/anonymous/7713596
Please note that there's no ordering of time in this simple example, and there's only one event type (timeout). This is not what I intend to use, but it shows the problem.
Simulating 10^5 steps this way takes ~1.5s

Cheers
Andreas

Ben Mabey

Nov 29, 2013, 10:13:23 PM
to clo...@googlegroups.com
On Fri Nov 29 17:04:59 2013, kandre wrote:
> Here is the gist: https://gist.github.com/anonymous/7713596
> Please note that there's no ordering of time in this simple example,
> and there's only one event type (timeout). This is not what I intend
> to use, but it shows the problem.
> Simulating 10^5 steps this way takes ~1.5s
>
> Cheers
> Andreas

I've verified your results and compared them with an implementation using
my library. My version runs 1.25x faster than yours, and that is with
an actual priority queue behind the scheduling for correct
simulation/time semantics. However, mine is still 2x slower than the
simpy version. Gist with benchmarks:

https://gist.github.com/bmabey/7714431

simpy is a mature library with lots of performance tweaking, and I have
done no optimizations so far. My library is a thin wrapper around
core.async with a few hooks into the internals, so I would expect
that most of the time is being spent in core.async (again, I have done
zero profiling to actually verify this). So it may be that core.async
is slower than Python generators for this particular use case. I
should say that this use case is odd in that our task is a serial one,
so we don't get any benefit from having a thread pool to multiplex
across (in fact the context switching may be harmful).

In my case the current slower speeds are vastly outweighed by the
benefits:
* can run multiple simulations in parallel for sensitivity analysis
* I plan on eventually targeting ClojureScript for visualization
  (right now an event stream from the JVM is used)
* ability to leverage CEP libraries for advanced stats
* being integrated into my production systems via channels, which do
  all the real decision making in the sims.
  This means I can do sensitivity analysis on different policies
  using actual production code. A nice side benefit of this is that I
  get a free integration test. :)

Having said all that, I am still exploring the use of core.async for DES
and have not yet replaced my event-based simulator. I most likely will
replace at least the parts of my simulations that have a lot of nested
callbacks that make things hard to reason about.

-Ben

Cedric Greevey

Nov 29, 2013, 10:33:05 PM
to clo...@googlegroups.com
Have you checked for other sources of performance hits? Boxing, var lookups, and especially reflection.

I'd expect a reasonably optimized Clojure version to outperform a Python version by a very large factor -- 10x just for being JITted JVM bytecode instead of interpreted Python, times another however-many-cores-you-have for core.async keeping all your processors warm vs. Python and its GIL limiting the Python version to single-threaded performance. If your Clojure version is 2.5x *slower*, then it's probably capable of a *hundredfold* speedup somewhere, which suggests that reflection (typically a 10x penalty if happening heavily in inner loops) *and* another sizable performance degrader* are combining here. Unless, again, you're measuring mostly overhead rather than real workload on the Clojure side, but not on the Python side. Put a significant load into each goroutine in both versions and compare them then; see if that helps the Clojure version much more than the Python one for some reason.

* The other degrader would need to multiply with, not just add to, the reflection. That suggests either blocking (reflection in one thread/go holding up progress systemwide for 10x as long as without reflection) or else excess/discarded work (a 10x penalty for reflection, times 10x as many calls as needed to get the job done due to transaction retries, a poor algorithm, or something else, would get you a 100-fold slowdown -- but retries of swap! or dosync shouldn't be a factor if you're eschewing those in favor of go blocks for coordination...).
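For reference, reflection is easy to check for. A minimal sketch (hypothetical function names, not from the thread) showing the compiler warning and the type-hint fix:

(set! *warn-on-reflection* true)

(defn slow-len [s]
  (.length s))        ;; unhinted arg: the compiler emits a reflection warning here

(defn fast-len [^String s]
  (.length s))        ;; hinted: resolved to String.length() at compile time, no warning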





Cedric Greevey

Nov 29, 2013, 10:48:11 PM
to clo...@googlegroups.com
Hmm. Another possibility, though remote, is that the Clojure version manages to trigger pathological worst-case behavior in the JIT and/or hardware (frequent cache misses, usually-wrong branch prediction, etc.) while the Python version doesn't (no JIT, and maybe the interpreter is simply too slow for processor caches and branch prediction to count for much).

Ben Mabey

Nov 29, 2013, 11:03:23 PM
to clo...@googlegroups.com
On 11/29/13, 8:33 PM, Cedric Greevey wrote:
> Have you checked for other sources of performance hits? Boxing, var lookups, and especially reflection.

As I said, I haven't done any optimization yet. :) I did check for reflection though and didn't see any.

> I'd expect a reasonably optimized Clojure version to outperform a Python version by a very large factor [...]

This task does not benefit from the multiplexing that core.async provides, at least not in the case of a single simulation, which has no clear logical partition that can be run in parallel. The primary benefit core.async provides in this case is escape from callback hell.

> If your Clojure version is 2.5x *slower*, then it's probably capable of a *hundredfold* speedup somewhere [...]

Yeah, I think a real-life simulation may have different results than this micro-benchmark.


Cedric Greevey

Nov 29, 2013, 11:16:24 PM
to clo...@googlegroups.com
On Fri, Nov 29, 2013 at 11:03 PM, Ben Mabey <b...@benmabey.com> wrote:
> This task does not benefit from the multiplexing that core.async provides, at least not in the case of a single simulation, which has no clear logical partition that can be run in parallel. The primary benefit core.async provides in this case is escape from callback hell.

Hmm. Then you're still looking for a 25-fold slowdown somewhere. It's hard to get Clojure to run that slowly *without* reflection, unless you're hitting one of those cases where parallelizing actually makes things worse. Hmm; core.async will be trying to multithread your code even while the nature of the task limits it to effectively serial performance due to blocking. Perhaps some of the slowdown comes from context switches that aren't buying you anything for what they cost? The GIL-afflicted Python code, by contrast, wouldn't be impacted by the cost of context switches.

kandre

Nov 29, 2013, 11:42:49 PM
to clo...@googlegroups.com
I am simulating a network of roads, sources and sinks of materials, and trucks hauling between the sources and sinks. There is not much of a workload; the complexity arises from having hundreds of trucks going through their states and queuing at the sources/sinks. So the bulk of the simulation consists of putting events on a priority queue.
Maybe channels are simply not the right tool for that - or maybe I should just be happy with being able to simulate 10^5 trucks in a few seconds ;)
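For illustration only, a minimal sketch (invented names, not Andreas' gist) of the shape described above: many truck processes put timestamped events onto one channel, and a single scheduler collects them into a priority queue and advances the clock in time order. It is simplified in that it collects everything before replaying; a real DES would interleave via per-event reply channels.

(require '[clojure.core.async :as async :refer [chan go <! >! <!! close!]])
(import 'java.util.PriorityQueue)

(defn truck [id events]
  ;; each truck emits a handful of timestamped events
  (go
    (doseq [t (range 0 100 10)]
      (>! events {:truck id :time (+ t (rand-int 10)) :type :arrive}))))

(defn scheduler [events]
  (go
    (let [pq (PriorityQueue. 64 (comparator #(< (:time %1) (:time %2))))]
      (loop []                          ;; collect until producers are done
        (when-let [ev (<! events)]
          (.add pq ev)
          (recur)))
      (loop [now 0, n 0]                ;; replay in time order, advancing the clock
        (if-let [ev (.poll pq)]
          (recur (max now (:time ev)) (inc n))
          {:clock now :events n})))))

(let [events (chan 100)
      trucks (doall (map #(truck % events) (range 100)))
      result (scheduler events)]
  (go (doseq [t trucks] (<! t))         ;; close the channel once every truck is done
      (close! events))
  (println (<!! result)))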

Ben Mabey

Nov 30, 2013, 12:10:10 AM
to clo...@googlegroups.com
I had expected the context switching to take a toll, but I never tried to see how much of a hit it is. I just did, and I got a 1.62x speed improvement[1], which means the Clojure version is only 1.2x slower than the simpy version. :)

Right now the thread pool in core.async is hardcoded. So for this experiment I hacked in a fixed thread pool of size one. I asked about making the core.async thread pool swappable/parametrizable at the Conj during the unsession, and the idea was not received well. For most use cases I think the current thread pool is fine, but for this particular one it appears it is not...
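For what it's worth, later core.async releases made this configurable without patching the library: the go dispatch pool size is read from a JVM system property. A hedged sketch, assuming such a newer version (the property did not exist at the time of this thread):

;; Assumption: a core.async release newer than this thread.
;; Either pass -Dclojure.core.async.pool-size=1 to the JVM, or set the
;; property before core.async is loaded:
(System/setProperty "clojure.core.async.pool-size" "1")
(require '[clojure.core.async :as async])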

-Ben

1. Full benchmark... compare to times here: https://gist.github.com/bmabey/7714431
WARNING: Final GC required 5.486725933787122 % of runtime
WARNING: Final GC required 12.905903007134539 % of runtime
Evaluation count : 6 in 6 samples of 1 calls.
             Execution time mean : 392.457499 ms
    Execution time std-deviation : 8.225849 ms
   Execution time lower quantile : 384.192999 ms ( 2.5%)
   Execution time upper quantile : 401.027249 ms (97.5%)
                   Overhead used : 1.847987 ns

kandre

Nov 30, 2013, 12:30:45 AM
to clo...@googlegroups.com
Maybe I'll just use my simpy models for now and wait for clj-sim ;)
Any chance of sharing?
Cheers
Andreas

Ben Mabey

Nov 30, 2013, 1:07:33 AM
to clo...@googlegroups.com
On Fri Nov 29 22:30:45 2013, kandre wrote:
> Maybe I'll just use my simpy models for now and wait for clj-sim ;)
> Any chance of sharing?

I'm planning on open-sourcing clj-sim but it is not quite ready. It
should be soon, but you shouldn't put your project on hold for it. :)

I should point out that while clj-sim provides process-based simulation
similar to Simula and simpy, it does not give processes the ability to
look at each other's state. This is of course very limiting, since at
certain decision points some processes need to know the state of others
(e.g. how many workers are available). I've thought of ways of making
the state of the bound variables in a process accessible to others, but
I don't think that is a good idea (and it is very un-clojurey). Instead,
I've been using the processes to generate events that feed into my
production system, which builds up the state again (generally in a
single atom, or perhaps Datomic) and then makes decisions that in turn
affect the simulated processes. Viewing the simulation processes as one
big generator of stochastic events for my production system provides a
nice separation of concerns. This approach may not be the best for
someone who just wants to write a simulation, though, so I'm going to
give it a bit more thought.
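A minimal sketch of that separation (hypothetical event types and state shape, not clj-sim's API): the simulation is just an event source, and a consumer folds events into a single atom the way the production system would.

(require '[clojure.core.async :as async :refer [chan go <! close! put!]])

(defonce world (atom {:available-workers 5 :completed 0}))

(defn apply-event
  "Pure fold of one event into the state map."
  [state {:keys [type]}]
  (case type
    :worker-busy (update state :available-workers dec)
    :worker-free (-> state
                     (update :available-workers inc)
                     (update :completed inc))
    state))

(defn consume! [events]
  ;; the same consumer works for simulated or production event streams,
  ;; which is what makes the sims double as integration tests
  (go (loop []
        (when-let [ev (<! events)]
          (swap! world apply-event ev)
          (recur)))))

(let [events (chan 10)]
  (consume! events)
  (put! events {:type :worker-busy})
  (put! events {:type :worker-free})
  (close! events))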

I'll let you know when I have released something and you can try it out.

-Ben

Timothy Baldridge

Nov 30, 2013, 10:49:44 AM
to clo...@googlegroups.com
There are several ways the performance of this code can be improved.

Firstly, these are exactly the same (semantically), but the latter is much faster: (go (>! c v)) and (put! c v). Use put! whenever possible.

Secondly, the function "event" uses <!! (a blocking take), but it is later used inside a go block. Don't do this; it can cause issues in go's fixed thread pool.

I think you can also improve this code quite a bit by reducing the number of puts/takes the system is doing. So consider changing this:

(defn event [env type val]
  (let [rc (async/chan)]
    (async/<!!
     (async/go
       (async/>! (:queue @env)
                 {:type type :val val :rc rc :time (:now @env)})
       (async/<! rc)
       (async/close! rc)))))

;; called as: (event ....)

to this:

(defn event [env type val]
  (let [rc (async/chan)]
    (async/put! (:queue @env)
                {:type type :val val :rc rc :time (:now @env)})
    rc))

;; called as: (<! (event ...))

I have not benchmarked this new code, but it should run much faster.

Timothy



kandre

Nov 30, 2013, 7:08:54 PM
to clo...@googlegroups.com
That indeed provided a huge speed-up (0.5s compared to 1.5s originally).
Cheers, Timothy

Mathias Picker

Dec 1, 2013, 8:16:45 AM
to clo...@googlegroups.com
Did you look into Pulsar (https://github.com/puniverse/pulsar)?

I'm using core.async in the browser, but I don't see it as a multithreading mechanism. Pulsar puts an Erlang-like API around Quasar's lightweight threads and actors for Java. It looks really nice and seems a good fit for DES-type apps, but I haven't used it yet; it's just on my todo list to check out.

Look here https://groups.google.com/forum/#!msg/clojure/1xxxTti6Vi0/VjIngrSnG8MJ for an interesting performance comparison of a core.async implementation on top of Pulsar, which might give you some hints about how both projects might fare for your use case.

Cheers, Mathias
