Why not implement CFS (Completely Fair Scheduler) in Go?


陆家靖

Aug 18, 2015, 11:54:41 AM
to golang-nuts
The goroutine scheduler is not pre-emptive,

so an empty for loop in a goroutine will block the whole process if I set runtime.GOMAXPROCS(1);

a reschedule happens only if I call a function inside the for loop, or the goroutine calls runtime.Gosched() itself.

So I wonder whether CFS could be used as the scheduling mechanism:

if a goroutine runs for a long time, the scheduler would switch to other goroutines.
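
A minimal sketch of the behavior being described, assuming a runtime without asynchronous preemption of tight loops (illustrative only; the exact behavior depends on the runtime version):

package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    runtime.GOMAXPROCS(1)

    go func() {
        for {
            // No function calls, allocations, or channel/map operations here,
            // so a cooperative scheduler has no safe point at which to
            // preempt this goroutine.
        }
    }()

    // Sleeping hands the only P to the busy goroutine; on a runtime without
    // asynchronous preemption, main never gets it back and the line below
    // is never reached.
    time.Sleep(10 * time.Millisecond)
    fmt.Println("done")
}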

Ian Lance Taylor

Aug 18, 2015, 12:18:10 PM
to 陆家靖, golang-nuts
There are two distinct issues here, and I'm not sure which one you are
asking.

One issue is that, as you say, a goroutine that runs an infinite loop
without making any function calls or allocating any memory or using
any map or channel operations can currently run forever without being
preempted. That is a characteristic of the current gc implementation.
We could change it. I expect that one day we will. The only question
is how to do it most efficiently.
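
A minimal sketch of the usual workaround in the meantime -- giving the loop an explicit scheduling point such as a runtime.Gosched call or a channel operation (illustrative only, not part of the runtime):

package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    runtime.GOMAXPROCS(1)
    stop := make(chan struct{})

    go func() {
        for {
            select {
            case <-stop:
                return
            default:
                // Explicit yield: gives other runnable goroutines a chance
                // to run even though the loop makes no other calls.
                runtime.Gosched()
            }
        }
    }()

    time.Sleep(10 * time.Millisecond)
    close(stop)
    fmt.Println("main was able to run and stop the loop")
}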

A different issue is the scheduling algorithm used to select among
runnable goroutines. The current scheduling algorithm is reasonably
simple. We could use a more sophisticated one, but we wouldn't want
to simply adopt some standard algorithm wholesale. For Go it will
always be appropriate to consider channel operations when making
scheduling decisions.

Ian

Roberto Zanotto

Aug 18, 2015, 12:56:25 PM
to golang-nuts, lujiaj...@gmail.com
It could be interesting to know why fair scheduling is not guaranteed by the language spec. Do you have some insight on the pros/cons or a pointer to a discussion?

Ian Lance Taylor

Aug 18, 2015, 1:43:43 PM
to Roberto Zanotto, golang-nuts, 陆家靖
On Tue, Aug 18, 2015 at 9:56 AM, Roberto Zanotto <roby...@gmail.com> wrote:
> It could be interesting to know why fair scheduling is not guaranteed by the
> language spec. Do you have some insight on the pros/cons or a pointer to a
> discussion?

As far as the language spec is concerned, the scheduler is a quality
of implementation issue.

Also, at least in the current implementation the Go scheduler is
inherently dependent on the operating system's thread scheduler, so
there isn't much the language can promise about that.

Ian

Aug 18, 2015, 3:15:09 PM
to golang-nuts, lujiaj...@gmail.com
Cons: Preemption of goroutines in a single OS process, as well as non-preemptive multi-threading of goroutines with shared memory, render Go programs less safe unless the runtime and generated code make extensive use of mutexes and of compare&swap. Without extensive usage of mutexes and compare&swap, only single-threaded Go implementations with cooperative scheduling are truly safe in the sense of the language spec; multi-process Go programs without shared memory, where each OS process runs one thread and has cooperative scheduling, are also safe.
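
For illustration, a minimal sketch of the kind of synchronization being referred to; the shared counter is just an example:

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    var counter int64
    var wg sync.WaitGroup

    for i := 0; i < 4; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                // Once goroutines can be preempted and run on multiple OS
                // threads, a plain counter++ here is a data race; an atomic
                // add (or a mutex around the update) is required.
                atomic.AddInt64(&counter, 1)
            }
        }()
    }

    wg.Wait()
    fmt.Println(counter) // 4000
}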

David Chase

Aug 19, 2015, 12:04:53 PM
to golang-nuts, lujiaj...@gmail.com
I have some experience with this, having implemented a best-effort fair scheduler in a Java implementation with a threading system similar to Go's.
Putting scheduling checks in loops is a compiler issue; I'm a little surprised that Go works as well as it does without this, but results are results, and they look decently good so far.
Checking in a loop has a little bit of an overhead, but loops can be unrolled; there's no need to check at every single iteration.
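
As a rough sketch of the idea (illustrative only; a compiler would insert something cheaper than a real call and would pick the interval itself):

package main

import (
    "fmt"
    "runtime"
)

// checkEvery is an arbitrary illustrative interval; unrolling the loop or
// raising the interval amortizes the cost of the check.
const checkEvery = 1024

func sum(xs []int) int {
    total := 0
    for i, x := range xs {
        if i%checkEvery == 0 {
            // Stand-in for a compiler-inserted "should I yield?" check;
            // runtime.Gosched simply yields unconditionally.
            runtime.Gosched()
        }
        total += x
    }
    return total
}

func main() {
    xs := make([]int, 1000000)
    for i := range xs {
        xs[i] = 1
    }
    fmt.Println(sum(xs)) // 1000000
}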

There are places where fairness is out of your control -- if the OS does something silly with your threads, what do you do? -- and there are places where a little unfairness is generally thought to improve system performance.  For example, we found that in the event that Thread A exited a (Java) monitor and noticed that Thread B would like to enter, performance was generally improved by handing the rest of A's quantum directly to B (these threads were like goroutines -- the internal "worker thread" would set aside the state of Thread A, adopt the state of Thread B, and make it go).  There are at least two reasons for this -- one was that it had the heuristic effect of driving contended monitors into the uncontended state, and the other was the belief that if A and B were interested in the same locked object, it was likely that they would be interested in the same cache lines as well, and the virtualized threading implementation meant that the two "threads" would then execute on the very same core against the very same cache.

However, best-effort fairness had very favorable effects on things like the distribution of transaction completion times; they would tend to cluster nicely around the average, instead of being smeared out so that there was always a non-trivial number of timing-out losers.

As far as the hazards of exposing MP bugs go, I think it's 2015.  We bought the multiprocessor PC on which I implemented that Java runtime back in 1997.  Phones have multiprocessors in them now.  The language provides the tools for MP safety, and it's a better bet to use those than to rely on loops not running concurrently with other code.