This is probably well known, but I fell into this pitfall recently while
working with Brian, so I think it is worth mentioning here as well.
The discussion on the numpy mailing list on the subject can be found
here:
http://mail.scipy.org/pipermail/numpy-discussion/2008-December/039179.html
In brief, when using numpy.random to generate random numbers in
individual processes via the multiprocessing module, "the different
instances of the RNG are created by forking from the main process".
Every sub-process therefore inherits the same RNG state from the parent
and generates the same sequence of numbers.
The output of the following program:
http://pastebin.com/QhfSzgLy
is:
n: 0 r: 0.985201182352
n: 1 r: 0.985201182352
n: 2 r: 0.985201182352
n: 3 r: 0.985201182352
n: 4 r: 0.985201182352
n: 5 r: 0.985201182352
n: 6 r: 0.985201182352
n: 7 r: 0.985201182352
n: 8 r: 0.985201182352
n: 9 r: 0.985201182352
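Since the pastebin may no longer be available, here is a minimal sketch
of the kind of program that produces output like the above. It is my own
reconstruction, not the original code, and it assumes a platform where
multiprocessing uses the fork start method (e.g. Linux):

```python
import numpy
from multiprocessing import get_context

def draw(n):
    # Each forked worker inherits the parent's RNG state, so the
    # first draw in every worker is the same "random" number.
    return numpy.random.random()

# 'fork' start method (the Linux default); maxtasksperchild=1 makes
# every task run in a freshly forked worker, so every draw is a
# worker's first draw.
with get_context('fork').Pool(4, maxtasksperchild=1) as pool:
    draws = pool.map(draw, range(10))

for n, r in enumerate(draws):
    print('n:', n, 'r:', r)
```

All ten printed values are identical, because every child starts from
an identical copy of the parent's Mersenne Twister state.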
In Brian, this causes problems when trying to run something like the
following:
http://pastebin.com/WrWsP3tL
The 4 neurons all behave identically and fire at exactly the same
times.
This can be remedied by importing numpy and calling
"numpy.random.seed()" (with no argument, so each process is reseeded
from a fresh source of entropy) inside each sub-process before defining
the neuron, e.g., before line 17 in the second example.
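Applied to the sketch above (again my own reconstruction, not the
pastebin code), the fix looks like this:

```python
import numpy
from multiprocessing import get_context

def draw(n):
    # Reseeding from the OS entropy pool inside each worker gives
    # every process its own independent random stream.
    numpy.random.seed()
    return numpy.random.random()

# Same setup as before: fork start method, one task per forked worker.
with get_context('fork').Pool(4, maxtasksperchild=1) as pool:
    draws = pool.map(draw, range(10))

for n, r in enumerate(draws):
    print('n:', n, 'r:', r)
```

With the seed() call in place, the workers no longer share a stream and
the ten printed values differ.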
Again, sorry if I'm wasting space with common knowledge, but I spent
about a day trying to figure this out before stumbling on the numpy
discussion thread. Since Brian uses numpy's RNG, I figured other people
using multiprocessing might fall into the same trap.