Hey all,
I'm using the multiprocessing package to split up a large computing job, using a worker pool and apply_async to dispatch my worker function. All the arrays in question live in shared memory, to eliminate communication overhead.

I'm having one issue, however. Sometimes when I run the function in question, my CPU usage goes to near 100 percent, with all 12 cores in the 80-90% range. Other times, though, each core only sits at about 16-30% usage, which, as you can guess, drastically slows down the computation. The thing is, I can't find any reason for the difference. I can even stop the thread several times with Ctrl-C, and eventually I'll get near 100% usage. It will work at that level for a number of runs of the function, then will randomly run slowly again. Does anyone have ideas on what could be causing this?
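In case it helps, here's a rough sketch of the kind of setup I'm describing. This isn't my actual code; the worker function, array, and sizes are just placeholders, and it assumes the pool workers inherit the shared array via fork (Linux-style start method):

import ctypes
import multiprocessing as mp

import numpy as np

# Shared buffer created once in the parent; forked workers inherit it,
# so no array data has to be pickled and sent to the pool.
shared_arr = mp.RawArray(ctypes.c_double, 12000000)

def worker(start, stop):
    # Wrap the shared buffer as a numpy array (no copy) and fill one slice.
    arr = np.frombuffer(shared_arr, dtype=np.float64)
    arr[start:stop] = np.sqrt(np.arange(start, stop, dtype=np.float64))
    return stop - start

if __name__ == "__main__":
    n = len(shared_arr)
    chunk = n // 12
    pool = mp.Pool(processes=12)
    # One async task per core; results collected with .get().
    results = [pool.apply_async(worker, (i * chunk, (i + 1) * chunk))
               for i in range(12)]
    total = sum(r.get() for r in results)
    pool.close()
    pool.join()
    print(total)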
nate