import multiprocessing
...
cores = multiprocessing.cpu_count()
pool = multiprocessing.Pool(cores)
parallelized_results = pool.map(compute, verticals)
pool.close()  # release the worker processes once the work is done
pool.join()
...
Any advice on how to solve this problem?
Thank you,
Marius
I solved the problem by implementing my own protocol for data exchange over stdin/stdout, together with an adapter that starts a separate Python process for each script execution. I can start as many processes as needed, and the performance is very good: CPU usage finally reaches 100% when multiple scripts run in parallel.
Doing the data exchange with JSON was straightforward, since JSON is easy to parse in both Java and Python.
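The answer doesn't show the protocol itself, but a minimal sketch of such a worker could look like this: one JSON object per line on stdin, one JSON reply per line on stdout. The handler and field names here are hypothetical stand-ins for the real script logic, not Marius's actual code.

```python
import json
import sys

def handle(request):
    # Hypothetical handler: in the real setup this would run the
    # user's script against the fields of the incoming request.
    return {"id": request.get("id"), "result": request.get("x", 0) * 2}

def serve(infile=sys.stdin, outfile=sys.stdout):
    # Line-delimited JSON: read one request per line, write one reply per line.
    for line in infile:
        line = line.strip()
        if not line:
            continue
        response = handle(json.loads(line))
        outfile.write(json.dumps(response) + "\n")
        outfile.flush()  # flush so the Java parent sees the reply immediately

if __name__ == "__main__":
    serve()
```

On the Java side, the adapter would launch this script (e.g. via `ProcessBuilder`), write requests to the child's stdin, and read replies from its stdout; because each script runs in its own Python process, the parallel processes are not limited by a single interpreter.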
I hope this helps.
Best Regards,
Marius