--
--
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clo...@googlegroups.com
Note that posts from new members are moderated - please be patient with your first post.
To unsubscribe from this group, send email to
clojure+u...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en
The basic one-car example under the previous link executes about 10 times faster than the same example using core.async.
I've verified your results and compared them with an implementation using my library. My version runs 1.25x faster than yours, and that is with an actual priority queue behind the scheduling for correct simulation/time semantics. However, mine is still 2x slower than the simpy version. Gist with benchmarks:
https://gist.github.com/bmabey/7714431
simpy is a mature library with lots of performance tweaking, and I have done no optimizations so far. My library is a thin wrapper around core.async with a few hooks into the internals, so I would expect that most of the time is being spent in core.async (again, I have done zero profiling to actually verify this). So it may be that core.async is slower than Python generators for this particular use case. I should say that this use case is odd in that our task is a serial one, so we don't get any benefit from having a threadpool to multiplex across (in fact the context switching may be harmful).
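For concreteness, the kind of simpy-style process being discussed can be written directly on core.async roughly like this. This is a minimal sketch of my own, not code from Ben's library: `timeout` here uses wall-clock milliseconds, so it only stands in for the simulated-time priority queue a real DES scheduler would provide.

```clojure
(require '[clojure.core.async :as async :refer [go chan timeout <! >! <!!]])

;; A minimal "car" process in the style of the simpy tutorial example.
;; Illustrative sketch only: real DES scheduling would replace `timeout`
;; with a priority queue keyed on simulated time.
(defn car [out]
  (go
    (dotimes [_ 2]
      (>! out :parking)
      (<! (timeout 5))   ; "park" for 5 ms of real time
      (>! out :driving)
      (<! (timeout 2)))  ; "drive" for 2 ms
    (async/close! out)))

(let [events (chan)]
  (car events)
  (<!! (async/into [] events)))
;; => [:parking :driving :parking :driving]
```

Even in this toy form the work is strictly serial: each event must be consumed before the process advances, so the go threadpool adds context switches without adding parallelism.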
In my case the current slower speeds are vastly outweighed by the benefits:
* can run multiple simulations in parallel for sensitivity analysis
* I plan on eventually targeting ClojureScript for visualization (right now an event stream from the JVM is used)
* ability to leverage CEP libraries for advanced stats
* being integrated into my production systems via channels, which do all the real decision making in the sims
This means I can do sensitivity analysis on different policies using actual production code. A nice side benefit of this is that I get a free integration test. :)
Having said all that, I am still exploring the use of core.async for DES and have not yet replaced my event-based simulator. I will most likely replace at least the parts of my simulations that have a lot of nested callbacks that make things hard to reason about.
-Ben
Have you checked for other sources of performance hits? Boxing, var lookups, and especially reflection.
I'd expect a reasonably optimized Clojure version to outperform a Python version by a very large factor -- 10x just for being JITted JVM bytecode instead of interpreted Python, times another however-many-cores-you-have for core.async keeping all your processors warm vs. Python and its GIL limiting the Python version to single-threaded performance.
If your Clojure version is 2.5x *slower*, then it's probably capable of a *hundredfold* speedup somewhere, which suggests reflection (typically a 10x penalty if happening heavily in inner loops) *and* another sizable performance degrader are combining here. Unless, again, you're measuring mostly overhead and not real workload on the Clojure side, but not on the Python side. Put a significant load into each goroutine in both versions and compare them then; see if that helps the Clojure version much more than the Python one for some reason.
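As a concrete way to act on the reflection point above (my sketch, not code from the thread): Clojure can warn about every reflective call at compile time, and a type hint usually removes the reflection entirely.

```clojure
;; Ask the compiler to warn whenever it falls back to reflection.
;; Reflective calls in inner loops are often a ~10x penalty.
(set! *warn-on-reflection* true)

(defn slow-len [s]
  (.length s))           ; warns: .length can't be resolved, uses reflection

(defn fast-len [^String s]
  (.length s))           ; type hint -> direct, non-reflective method call
```

Both functions return the same result; only the hinted one compiles to a direct `String.length()` call.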
As I said, I haven't done any optimization yet. :) I did check for reflection, though, and didn't see any.

On 11/29/13, 8:33 PM, Cedric Greevey wrote:
> Have you checked for other sources of performance hits? Boxing, var lookups, and especially reflection.
This task does not benefit from the multiplexing that core.async provides, at least not in the case of a single simulation which has no clear logical partition that can be run in parallel. The primary benefit that core.async is providing in this case is to escape from call-back hell.
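To illustrate that point, here is a toy example of my own (not code from the simulator): the same two-step asynchronous pipeline written with nested callbacks versus as straight-line code in a go block.

```clojure
(require '[clojure.core.async :as async :refer [go <! <!!]])

;; Stand-in for an asynchronous operation that "completes" via callback.
(defn fetch-cb [x k] (k (inc x)))

;; Callback style: each async step nests one level deeper.
(defn pipeline-cb [x done]
  (fetch-cb x (fn [a]
    (fetch-cb a (fn [b]
      (done (* 10 b)))))))

;; Channel-returning version of the same operation.
(defn fetch-ch [x] (go (inc x)))

;; core.async style: the go macro turns the same logic into
;; sequential-looking code that parks instead of blocking.
(defn pipeline-ch [x]
  (go (let [a (<! (fetch-ch x))
            b (<! (fetch-ch a))]
        (* 10 b))))

(<!! (pipeline-ch 1))
;; => 30
```

The two versions compute the same value; the difference is that the go-block version reads top to bottom, which is exactly the benefit being described for deeply nested simulation logic.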
> I'd expect a reasonably optimized Clojure version to outperform a Python version by a very large factor -- 10x just for being JITted JVM bytecode instead of interpreted Python, times another however-many-cores-you-have for core.async keeping all your processors warm vs. Python and its GIL limiting the Python version to single-threaded performance.
Try changing this:

    (defn event [env type val]
      (let [rc (async/chan)]
        (async/<!! (async/go
                     (async/>! (:queue @env)
                               {:type type :val val :rc rc :time (:now @env)})
                     (async/<! rc)
                     (async/close! rc)))))

    (event ....)

To this:

    (defn event [env type val]
      (let [rc (async/chan)]
        (async/put! (:queue @env)
                    {:type type :val val :rc rc :time (:now @env)})
        rc))

    (<! (event ...))

I have not benchmarked this new code, but it should run much faster.

Timothy
Have a look at https://groups.google.com/forum/#!msg/clojure/1xxxTti6Vi0/VjIngrSnG8MJ for an interesting performance comparison of core.async implemented on top of Pulsar, which might give you some hints about how both projects might fare for your use case.
Cheers, Mathias