[gsoc] Multi-queue scheduler

Jessica Yu

Aug 18, 2014, 8:49:51 PM
to plan9...@googlegroups.com
Hello all!

The mqs scheduler is functional and the project code is publicly
available on bitbucket: http://bitbucket.org/flaming-toast/mqs-nix, on
the *stable-final* branch. There are still a few leftover cruft
branches from testing and bottleneck hunting.

I've posted a more detailed analysis of the new scheduler's behavior
here: http://flaming-toast.github.io/2014/08/18/plan-9-multiqueue-scheduler-stats-and-updates/

Here's a quick run-down of its structure and features:
Torus configuration:
The new scheduler arranges cpus in a torus-esque configuration, where
each cpu is connected to a fixed set of neighbors, the number depending
on the dimension of the torus. Since a cpu only deals with its
neighbors' run queues, this reduces run queue contention and keeps
every cpu from targeting the same single idle cpu for push migration
operations.
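
To make the torus idea concrete, here is a minimal sketch in plain C of
how the 2*dim neighbors of a cpu on a dim-dimensional torus could be
computed. All names here (torusneighbors, MAXDIM, and so on) are made
up for illustration and are not taken from the mqs-nix source.

/*
 * Sketch only: compute the 2*dim neighbors of cpu `me' on a
 * dim-dimensional torus with `side' cpus per dimension
 * (assumes dim <= MAXDIM).  Hypothetical, not mqs-nix code.
 */
enum { MAXDIM = 3 };

int
torusneighbors(int me, int dim, int side, int *nbr)
{
    int d, i, c, n, coord[MAXDIM];

    /* unpack the cpu index into torus coordinates */
    for(d = 0, i = me; d < dim; d++, i /= side)
        coord[d] = i % side;

    /* one neighbor in each direction along each dimension, with wraparound */
    n = 0;
    for(d = 0; d < dim; d++){
        c = coord[d];

        coord[d] = (c + 1) % side;
        for(i = dim - 1, nbr[n] = 0; i >= 0; i--)
            nbr[n] = nbr[n]*side + coord[i];
        n++;

        coord[d] = (c + side - 1) % side;
        for(i = dim - 1, nbr[n] = 0; i >= 0; i--)
            nbr[n] = nbr[n]*side + coord[i];
        n++;

        coord[d] = c;    /* restore */
    }
    return n;    /* 2*dim neighbors */
}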

Fork balancing:
Newly forked processes are queued on an idle neighbor if one exists,
otherwise on the least loaded neighbor.
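
For illustration only, the fork-time placement decision amounts to
something like the sketch below; Mach, nrdy and nbr are hypothetical
stand-ins for the real kernel structures, not the actual mqs-nix code.

/*
 * Sketch only: pick a target cpu for a newly forked process.
 * Mach, nrdy and nbr are stand-ins for the real kernel structures.
 */
typedef struct Mach Mach;
struct Mach {
    int     nrdy;       /* ready processes on this cpu's run queue */
    Mach    *nbr[6];    /* torus neighbors */
    int     nnbr;       /* number of neighbors */
};

Mach*
forktarget(Mach *m)
{
    int i;
    Mach *n, *best;

    best = m;
    for(i = 0; i < m->nnbr; i++){
        n = m->nbr[i];
        if(n->nrdy == 0)
            return n;           /* idle neighbor: queue the child there */
        if(n->nrdy < best->nrdy)
            best = n;           /* otherwise remember the least loaded */
    }
    return best;
}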

Load balancing / push migration scheme:
The load balancer uses a push migration scheme: each cpu checks for an
imbalance every 0.5ms. It looks for idle neighbors first and only
compares loads if no neighbor is idle. If a neighboring cpu is idle and
the local cpu has processes waiting on its run queue, a push is
initiated.
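
As with the earlier sketches, the following is only a rough picture of
the shape of that check, not the real code: pushproc() is a
hypothetical helper, the 0.5ms tick is assumed to be driven elsewhere,
and Mach is the made-up struct from the fork sketch above.

/*
 * Sketch only: push-migration check, assumed to run every 0.5ms.
 * pushproc() is a hypothetical helper; Mach is the made-up struct
 * from the fork sketch.
 */
void pushproc(Mach *src, Mach *dst);    /* move one ready process from src to dst */

void
balance(Mach *m)
{
    int i;
    Mach *n, *target;

    if(m->nrdy == 0)
        return;                 /* nothing to push */

    /* first choice: push to an idle neighbor */
    for(i = 0; i < m->nnbr; i++){
        n = m->nbr[i];
        if(n->nrdy == 0){
            pushproc(m, n);
            return;
        }
    }

    /* no idle neighbor: compare loads and push toward the lightest one */
    target = m;
    for(i = 0; i < m->nnbr; i++)
        if(m->nbr[i]->nrdy < target->nrdy)
            target = m->nbr[i];
    if(target != m)
        pushproc(m, target);
}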

As for performance, the tl;dr version is this: the new scheduler is
more resilient under continuous high load, whereas the old scheduler
fluctuates greatly in performance but can be quicker than the new
scheduler at times. I encourage you to read the link above if you'd
like to learn more about how it behaves compared to the old 9atom
scheduler.

I will be continuing work on the scheduler with my mentor over the
fall; we hope to run more tests on larger machines,
since this summer's work was done on a 4-core machine. It will be very
interesting to see how the new scheduler stacks up against the old one
on machines with >8 cores, where run queue contention is likely to be
more visible.

That's all for now, thanks everyone!
Jessica