Kill all workers and start fresh

Katharine Doubleday

Jan 10, 2018, 3:26:47 PM
to scoop-users
Hello all,

I am wondering if there is a straightforward way in SCOOP to ensure that each process completes only one task before being killed and replaced by a newly spawned process. I am working on a genetic algorithm in which each individual in the population is evaluated on a separate process. Each of those evaluations takes a significant amount of memory (due to a memory leak that unfortunately cannot be addressed at present), so I want to ensure that workers are not reused when I move from one generation to the next. Instead, I want to kill and restart all processes during the transition to the next generation. I previously used the multiprocessing package, where this was a straightforward fix:

pool = multiprocessing.Pool(processes, maxtasksperchild=1)
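
For context, here is roughly how that pool sits inside a generation loop; evaluate, the worker count, and the toy population below are just placeholders for my real code:

import multiprocessing

def evaluate(individual):
    # stand-in for the real, memory-leaking fitness evaluation
    return sum(individual)

if __name__ == '__main__':
    population = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # toy individuals
    # one pool for the whole run: with maxtasksperchild=1 each worker exits
    # after a single evaluation and the pool spawns a fresh one in its place,
    # so leaked memory is reclaimed without rebuilding the pool
    pool = multiprocessing.Pool(processes=4, maxtasksperchild=1)
    for gen in range(5):
        fitnesses = pool.map(evaluate, population)
        # selection / crossover / mutation would produce the next population here
    pool.close()
    pool.join()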

So far it looks like I need to play with adding calls to ScoopApp.close and ScoopApp.run at each generation, but if anyone has any insight or suggestions, that would be much appreciated.

Cheers,
Kate

Derek Tishler

Jan 10, 2018, 7:57:52 PM
to scoop-users
I tried to review the SCOOP docs and source, but beyond messing with map's timeout (which sounds like a bad idea as a general fix), I am unsure how to solve this properly.

It did, however, remind me of a weird case I had to address recently. In a trading system I had to run my evolution 100% externally, one generation at a time. So as my trading system ran on a loop for each new data step, it would launch the evolution process externally and run it like a normal DEAP program (using SCOOP) for a single generation, with checkpointing:

# How I launched a SCOOP evolution from a different Python script
# (assumes: import platform, subprocess, time; self.Log, self.SetHoldings and
#  self.symbol come from the surrounding trading-system class)
        start_time = time.time()
        if platform.system() == 'Windows':
            command = ['python', 'external_evolution_np.py']  # Windows
        else:
            command = ['python external_evolution_np.py']  # Linux: single string since shell=True
        process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE,
                                   universal_newlines=True)  # , stderr=subprocess.STDOUT
        for line in process.stdout:
            elapsed_time = time.time() - start_time
            print(line.replace("\n", "") + "\t\t%0.2f s" % elapsed_time)
            self.Log(line + "\t%0.2f s" % elapsed_time)
        process.wait()
        if process.returncode != 0:
            self.SetHoldings(self.symbol, 0.)  # flatten the position and shut down
            exit()  # failed to run the evolution cycle externally
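
The per-generation checkpointing itself can be as simple as pickling the population and RNG state between launches. A rough sketch (the checkpoint file name and toy population here are placeholders, not my actual script):

import os
import pickle
import random

CHECKPOINT = "evolution_checkpoint.pkl"  # placeholder file name for this sketch

def run_one_generation():
    if os.path.exists(CHECKPOINT):
        # resume exactly where the previous external launch left off
        with open(CHECKPOINT, "rb") as f:
            state = pickle.load(f)
        population = state["population"]
        generation = state["generation"]
        random.setstate(state["rng"])
    else:
        # first launch: build an initial (toy) population
        population = [[random.random() for _ in range(5)] for _ in range(20)]
        generation = 0

    # ... evaluate / select / vary the population for exactly one generation ...
    generation += 1

    # persist everything needed to continue on the next launch
    with open(CHECKPOINT, "wb") as f:
        pickle.dump({"population": population,
                     "generation": generation,
                     "rng": random.getstate()}, f)

if __name__ == "__main__":
    run_one_generation()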

Katharine Ann Doubleday

Jan 12, 2018, 10:05:38 AM
to scoop...@googlegroups.com
Hi Derek,

Thanks for the suggestion. I haven't added checkpointing yet, and that looks like quite a feasible workaround.

Thank you,
Kate


--
Kate Doubleday
Ph.D. Student
Electrical, Computer, and Energy Engineering
University of Colorado Boulder
Katharine...@Colorado.edu