I've had some similar issues where even though the Python objects have
been deleted, Python's garbage collection doesn't get called often
enough. After calling clear(True), you could try doing:

    import gc
    gc.collect()

This forces Python to do a garbage collection.
This solved a similar problem I had, but it may be that your issue is
something else that is specific to multiprocessing. If it doesn't work -
perhaps you could post a minimal example that reproduces the problem?
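To see why an explicit gc.collect() can matter, here is a minimal sketch (nothing Brian-specific; the Node class is just a stand-in): objects that form reference cycles are not freed by reference counting alone, only by a collection pass.

```python
import gc

class Node:
    """Toy object that participates in a reference cycle."""
    def __init__(self):
        self.ref = None

def make_cycle():
    a, b = Node(), Node()
    a.ref, b.ref = b, a   # cycle: neither refcount ever drops to zero

make_cycle()              # the two Nodes are now unreachable, but still in memory
freed = gc.collect()      # force a full collection pass
print(freed)              # the cyclic pair is only reclaimed here (freed >= 2)
```

If Brian's monitors and network objects end up in cycles like this, each worker process can accumulate dead-but-uncollected objects until you force a pass.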
Also, you may already know this, but to use Brian efficiently, you
should have as many neurons as possible in the network. If your
simulation runs with different parameter sets are independent and each
one is a smallish network, you could probably gain in efficiency by
vectorising your simulation (e.g. if you have N neurons and M runs you
can make a single run with N*M neurons). Our initial experiments with
this show that you need to have around 10000+ neurons in the network
before using multiprocessing to subdivide the computations becomes
worthwhile, and you don't get significant gains until you have quite a
few more than that.
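The vectorisation idea can be sketched outside Brian with plain NumPy (all names and numbers here are illustrative, not Brian API): instead of M separate N-neuron runs, make one state vector of N*M neurons with a per-run parameter tiled across it, and advance everything in one step.

```python
import numpy as np

N, M = 10, 5                  # N neurons per network, M parameter sets
dt, T = 0.1, 100              # hypothetical time step and number of steps
# one tau per parameter set, repeated for each of that run's N neurons
tau = np.repeat(np.linspace(5.0, 20.0, M), N)

# one big state vector of N*M neurons instead of M separate N-neuron runs
v = np.ones(N * M)
for _ in range(T):
    v += dt * (-v / tau)      # leaky decay; all M runs advance in one vectorised step

# reshape back to (M, N) to read off each parameter set's result separately
results = v.reshape(M, N)
```

In Brian itself the same trick amounts to building a single NeuronGroup of N*M neurons with a per-neuron parameter, rather than M networks.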
Dan
On 08/06/2010 22:00, tom wrote:
> Hi all,
>
> I'm currently running a simulation with many different parameters
> using the multiprocessing package. In general, the simulation runs
> fine. However, despite clearing the Brian objects [using clear(True)]
> there is a continuous memory build-up which ultimately results in a
> memory error.
>
> A kind of 'work around' is to explicitly terminate the pool (see
> below); let's say computing the first 100 parameters, and generating a
> new pool thereafter with the next 100 parameters. This is, however,
> not very efficient.
> As I just recently moved to Python I'm sure I missed something. Are
> there any suggestions to prevent the memory build-up?
>
> I'm using Python(x,y) 2.6.5.1 on 64-bit Win7.
>
> The basic structure of the simulation is like this:
> ...
> import multiprocessing
> ...
> def myBrianSim( current_parameters ):
>     clear(True)
>     # build the simulation and set up the monitors
>     # run the simulation
>     # plot and save graphs
>     # save relevant monitor and custom data
>
> if __name__=='__main__':
Of course - no need to spend time optimising your code to use
vectorisation if it's already fast enough! :)
Another possible issue might be that clear(True) only clears objects that
would be included in the network if you called run(). So, for example,
if you did something like:
    def f(p):
        ... run simulation ...
        return results

    for p in params:
        clear(True)
        x = f(p)
        ...

This might still have a memory build-up, because clear(True) only clears
objects created in the "for p in params" loop itself, i.e. no objects at
all, since the network is built inside f where clear cannot see it
(although garbage collection in the for loop should still fix that
problem).
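The safer arrangement, then, is to create everything inside the per-run function and collect after each call. A minimal sketch of the loop structure (run_one here is a stand-in for a Brian simulation; in a real script clear(True) would sit next to gc.collect() in the loop):

```python
import gc

def run_one(p):
    # build the network, monitors, etc. inside the function, so that
    # everything becomes garbage as soon as the function returns
    data = [p * i for i in range(1000)]   # stand-in for simulation state
    return sum(data)

results = []
for p in [0.1, 0.2, 0.3]:
    results.append(run_one(p))
    # clear(True) would go here in a Brian script; gc.collect() then
    # frees anything reference counting alone cannot reclaim
    gc.collect()
```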
Dan