Ok...my example here (I applied it arbitrarily to the onemax_mp.py example) gets a bit messy, so please bear with me. I could use a hand cleaning/refining it, as I think I got mixed up trying to combine Ray's ActorPool with decorators.
A big note here, as with SCOOP: parallel evaluation can be really slow for inexpensive evals due to network overhead, so expect the example below to run slower in Ray mode (toggled at the top via the use_old_method bool). If you actually wanted to use it for quick evals and large populations, you would need to batch out groups of individuals to remote workers to iterate through.
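To illustrate the batching idea (a sketch of mine, not from the original example): splitting the population into a few large slices lets each remote call amortize Ray's per-task overhead over many cheap evaluations. The `chunk` helper below is hypothetical; each batch would then be shipped to one remote worker, which loops over its slice locally and returns a list of fitness tuples.

```python
def chunk(population, n_batches):
    """Split a population into n_batches roughly equal slices,
    preserving order so fitnesses can be zipped back onto individuals."""
    k, r = divmod(len(population), n_batches)
    batches, start = [], 0
    for i in range(n_batches):
        size = k + (1 if i < r else 0)  # spread the remainder over the first batches
        batches.append(population[start:start + size])
        start += size
    return batches

print(chunk(list(range(10)), 3))  # → [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```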
I posted here for more discussion regarding Ray vs SCOOP: https://github.com/DEAP/deap/issues/404

You can get around this pickle issue for things like ephemeral constants or delta penalties by using Ray instead of multiprocessing or SCOOP. This requires a newer version of Python (> 3.5, I think). I switched to Ray because SCOOP is out of development and unsupported, and I required cluster support. I highly suggest Ray!
I was able to test it, but it was a bit of a hack job (applying the decorator got very messy...I must have overcomplicated it). To recreate map via Ray I use the ActorPool, but it is specific to population evaluation rather than usable for islands and the other map cases in the DEAP examples. Ray, like SCOOP, has overhead, so this is expectedly slower for inexpensive evaluations, but I was able to show the same behavior with the old eval+decorator as with Ray+decorator:
Example of using Ray to scale population evaluations and get around pickle issues for DEAP, such as DeltaPenalty: