I don't think the full generality of an OS scheduler is required or desired; the set of scheduler targets in a Go program is far more controlled than with an OS. If a normal (high-priority) goroutine depends on a batch (low-priority) goroutine, it just waits. By waiting, it stops competing for the CPU, which should give cycles to the batch goroutine. If your dashboard is still unresponsive, go fix your dependencies. So, no to priority inversion.
Note that the way I see this being used is that most goroutines will be normal (the default), and only specially tagged ones will be batch. So, generally, helper goroutines will be normal.
If you think this approach isn't feasible, then do you have an alternative suggestion to address the class of problems I described?
You received this message because you are subscribed to the Google Groups "golang-dev" group.
How do you figure it's confusing? The API seems quite clear to me. If you're saying that dealing with scheduling decisions is confusing, then that would be the case for a solution entirely in user code. In my view, adding a hook to the runtime ensures full coverage and fewer surprises.
On Fri, Mar 10, 2017 at 9:31 AM, Richard Gooch <rg+go...@safe-mbox.com> wrote:
> How do you figure it's confusing? The API seems quite clear to me. If you're saying that dealing with scheduling decisions is confusing, then that would be the case for a solution entirely in user code. In my view, adding a hook to the runtime ensures full coverage and fewer surprises.

I'm with Brad, this is confusing. The runtime is about to block your goroutine, so it must send on a channel. What if that send blocks?
Which channel is the runtime supposed to send its "my block notification is about to block" notification to? Or doesn't it?
I'd also like to understand how this API would solve the original problem. How do you propose to use this API to set priorities?
Currently we have LockOSThread to lock the current goroutine to a thread.
What if we had a way to declare that every goroutine spawned from this one will stay on the same thread forever?
It would solve the original problem: just lock the HTTP serving goroutine to a thread, and all goroutines spawned for client connections would run on that separate thread.
It would also allow single-threaded data structures to be used for in-memory storage while still having concurrency between goroutines. (OK, this case is not easy to implement, since a goroutine may be preempted at many points.)
So, in your proposal, who writes to the channel when we have
determined that the goroutine has blocked? What happens if nothing is
reading from the channel? (Note that obviously the answer can not be
"the system monitor thread blocks writing to the channel.")
I think a modified version of this would work. The mechanism you described makes it difficult when goroutines do not share a convenient common ancestor and also when you have multiple CPUs and thus want to pin the goroutines to a pool of OS threads rather than a specific thread.
An API which allows you to create a pool of OS threads and lets you pin a goroutine to the pool of threads would work and I think it would be easy to support a variety of goroutine groupings and hierarchies.
If the pool is created by inheriting the properties of the thread the creating goroutine is running on, that would also allow other cool tricks like putting some of your goroutines in a separate mount namespace.