Parallel processing error


dkoller

May 31, 2021, 3:36:35 AM5/31/21
to ANNarchy

Hello all,

I encountered what appears to be a memory error when simulating a custom neuron model with parallel_run():

python(13321,0x700011e2b000) malloc: *** error for object 0x7fa122ffffe0: pointer being freed was not allocated
python(13321,0x700011e2b000) malloc: *** set a breakpoint in malloc_error_break to debug
[1]    13321 abort      python scripts/simulations/01_simulation.py 
/Users/dk/miniforge3/envs/ANNarchy/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 2 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

While trying to reduce this to a minimal working example, I found that the error also occurs with the built-in neuron models whenever the i_offset parameter is non-zero:

from ANNarchy import *

def simulation(idx, net):
    # parallel_run() calls this once per network, passing the network's
    # index and its handle
    net.simulate(100.)

if __name__ == '__main__':
    p1 = Population(name='p1', geometry=10000, neuron=IF_cond_exp)
    p1.i_offset = 1.0  # the crash only occurs when i_offset is non-zero

    # Create two identical copies of the network
    networks = []
    for i in range(2):
        networks.append(Network(everything=True))
        networks[i].compile()

    # Crashes with max_processes=2, works with max_processes=1
    parallel_run(method=simulation, networks=networks, max_processes=2)

The error does not occur if I set max_processes=1. I couldn’t figure out why this is the case and would appreciate any help!
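For now I am working around it by looping over the networks sequentially instead of using parallel_run(), which matches the observation that max_processes=1 is fine. A minimal sketch of that pattern, using a stand-in network class so it runs without ANNarchy (untested against the real library):

```python
# Stand-in for an ANNarchy Network, used only to illustrate the
# sequential-fallback pattern; the real object comes from Network(...).
class DummyNet:
    def simulate(self, duration):
        # The real net.simulate() advances the network; here we just
        # return the duration so the pattern is checkable.
        return duration

def simulation(idx, net):
    return net.simulate(100.)

networks = [DummyNet() for _ in range(2)]

# Sequential fallback: run each network one after the other in-process,
# equivalent in spirit to parallel_run(..., max_processes=1).
results = [simulation(i, net) for i, net in enumerate(networks)]
```

This of course gives up the parallel speed-up, so it is only a stopgap until the crash is understood.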

Best, Dominik

julien...@gmail.com

May 31, 2021, 4:20:16 AM5/31/21
to ANNarchy
Thanks for the report. The MWE works for me. What is your setup? macOS and conda?