Modifying matrices - what are my risks?


Mars0i

Apr 4, 2014, 3:28:10 PM4/4/14
to numerica...@googlegroups.com
I'm in the process of rewriting a (functional-ish but imperative) agent-based Common Lisp simulation program in Clojure.  Each agent has a small (50x50, 100x100) neural network matrix and some core.matrix vectors that will repeatedly be modified in response to communication between agents.

When I first started learning Clojure, I was a little concerned that (a) speed and code simplicity would require imperative modification of matrices, but (b) this would require using Clojure's state management operators.  I'd already avoided state problems in CL.  It turns out, though, I can completely ignore the usual Clojure state operators.  Procedures such as 'mset!' are just traditional imperative operators.  Am I missing something?  What are the risks?
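To make the point concrete, here is a minimal sketch of what I mean (this assumes clojure.core.matrix on the classpath with an implementation that supports mutation, such as the bundled NDArray or vectorz-clj; the `activations` name is just for illustration):

```clojure
(require '[clojure.core.matrix :as m])

;; m/mutable produces a mutable copy of an array
(def activations (m/mutable (m/array [0.1 0.2 0.3])))

;; mset! writes in place -- no atom/ref/agent needed, because the
;; mutability lives in the underlying array, not in a Clojure
;; reference type
(m/mset! activations 0 0.9)
(m/mget activations 0)   ; => 0.9
```

So the mutation happens entirely outside Clojure's state-management machinery, which is exactly what worries me.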

Thanks!

For those who want more information, here are further details:

A. I use the obvious method of separating communication steps from neural network settling steps; these three steps are iterated in sequence:

(1) Neural networks are partially settled using matrix multiplication.  (New activation values can be stored back into the old activation vector imperatively using mset!, or the entire vector can be updated using assoc to create a "new" agent record.)
(2) The state of each agent's neural networks determines what that agent will communicate.
(3) Then all agents communicate; each agent's neural networks and other state are modified in response to the communication.
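In the immutable style, the cycle above looks roughly like this (a hypothetical skeleton with placeholder step functions, not code from the actual program -- each step returns new agent records rather than mutating anything):

```clojure
;; Placeholder step functions -- purely illustrative stand-ins.
(defn settle-step [agent]
  ;; (1) would partially settle the agent's network; here, just a marker
  (update agent :settled (fnil inc 0)))

(defn choose-step [agent]
  ;; (2) the agent decides what to say based on its network state
  (assoc agent :message :hello))

(defn communicate-step [agents]
  ;; (3) every agent receives all messages at once
  (mapv #(assoc % :inbox (mapv :message agents)) agents))

(defn one-tick
  "Run one iteration of the simulation over a collection of agents."
  [agents]
  (->> agents
       (map settle-step)
       (map choose-step)
       (communicate-step)))

(one-tick [{:id 1} {:id 2}])
```

The mutable alternative would replace the `assoc`/`update` calls in step (1) with `mset!` on each agent's activation vector.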

B. There is no significant benefit to allowing communication to happen asynchronously rather than as a separate step.

C. It's unlikely that it will be beneficial to try to run each agent's operations on different CPU cores.  There's a random element to communication, so I do 50 to 200 runs of a simulation for each set of parameters.  It seems more efficient to use multiple cores for simultaneous simulation runs than to split a single run between cores.  (Nevertheless, if it later turns out to be beneficial to split agents between cores, I don't think it will be difficult, for example, to parallelize steps 1 and 2.  Maybe this would be as simple as replacing map or doseq with pmap.  It's not clear to me that I'd need any kind of protection/synchronization operators even then.)
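The map-to-pmap swap I have in mind is roughly this (plain Clojure; `run-simulation` is a hypothetical stand-in for one full, independent simulation run):

```clojure
(defn run-simulation
  "Stand-in for one expensive, independent simulation run."
  [seed]
  (reduce + (range seed)))

;; Sequential runs:
(def results-seq (map run-simulation [1000 2000 3000]))

;; Parallel runs across cores -- identical results, and no locking
;; needed, because each run touches only its own data:
(def results-par (pmap run-simulation [1000 2000 3000]))

(= (doall results-seq) (doall results-par))   ; => true
```

As long as runs share no mutable state, this is the whole change.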

If you read this far, double thanks!

Mike Anderson

Apr 4, 2014, 10:45:24 PM4/4/14
to numerica...@googlegroups.com
Yes, mset! and friends are fairly traditional imperative operations.

They are pretty fast as well, though be aware that it is much better in general to use a bulk operation like add! rather than calling mset! on each individual element (that way you pay the protocol dispatch cost only once).
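A quick sketch of the difference (again assuming clojure.core.matrix with a mutable implementation such as NDArray or vectorz-clj; the array names are illustrative):

```clojure
(require '[clojure.core.matrix :as m])

(def a (m/mutable (m/array [[1.0 2.0] [3.0 4.0]])))
(def b (m/array [[10.0 10.0] [10.0 10.0]]))

;; Slow: one protocol dispatch per element
(dotimes [i 2]
  (dotimes [j 2]
    (m/mset! a i j (+ (m/mget a i j) (m/mget b i j)))))

;; Fast: a single dispatch for the whole array
(m/add! a b)
```

Both loops add b into a in place; the bulk version also lets the implementation use an optimized inner loop.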

As always, mutable state introduces complexity, so I wouldn't use it unless you are sure that you need it. core.matrix is designed so that you can use it in an immutable / functional style most of the time, and only drop down to mutation when you *really* need performance.

Some risks:
- Mutable ops in core.matrix aren't guaranteed to be thread-safe (this depends on the underlying implementation, but most aren't).
- The usual risks around needing defensive copies when passing parameters.
- Mutable ops will fail on immutable arrays, so you need to be somewhat careful about which array types you are handling.
- Laziness can throw some curve-balls at you: Clojure code isn't always executed in the order you expect.
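The laziness point in particular is easy to trip over when mixing lazy sequences with side effects. A toy illustration (plain Clojure, no core.matrix needed):

```clojure
(def log (atom []))

;; map is lazy: nothing is recorded until the sequence is realized
(def lazy-writes (map #(swap! log conj %) [1 2 3]))
(count @log)          ; => 0  -- the side effects haven't happened yet!

(doall lazy-writes)   ; force realization
(count @log)          ; => 3

;; For side-effecting loops over mutable arrays, prefer eager forms
;; such as doseq, dotimes, or run! so effects happen when you expect.
```

The same deferral can silently reorder mset! calls if they are buried inside an unrealized lazy sequence.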

--
You received this message because you are subscribed to the Google Groups "Numerical Clojure" group.
To unsubscribe from this group and stop receiving emails from it, send an email to numerical-cloj...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Mars0i

Apr 7, 2014, 1:31:10 AM4/7/14
to numerica...@googlegroups.com, mi...@mikera.net
Thanks Mike.  Good tips.  I think watching out for laziness gotchas will be what I most need to worry about; even when I don't think I have to worry about it, I could be wrong!  I hadn't been thinking about laziness, although it's bitten me before, so it's good that you mentioned it.
