goroutine priorities


Richard Gooch

Aug 20, 2016, 1:06:37 AM
to golang-dev
Hi, all. There's currently no way to set priorities on goroutines, which makes it difficult to write a programme which can effectively balance throughput and responsiveness. Here is a relatively straightforward use-case: an HTTP server which presents a dashboard showing information about the server. We'd like servicing of this dashboard to be responsive. The server also makes TLS-encrypted connections to other servers and issues RPC requests over those connections. Each of these RPC-over-TLS calls is run inside a goroutine, and there are typically over 10 times more active calls (and hence goroutines) than there are CPUs; sometimes even 100 times more. The current behaviour is that the dashboard is painfully slow. If the priority of the goroutines running the RPC transactions could be lowered, this would solve the problem.

Reducing the number of concurrent RPC goroutines improves the dashboard responsiveness, but comes at the cost of RPC throughput. Furthermore, it is difficult to tune the number of concurrent RPC goroutines since they rapidly alternate between blocking on the network and performing computations (mostly the TLS crypto code or the GOB encoder/decoder). The fraction of time they spend blocking versus computing is unpredictable because it depends on network congestion and the responsiveness of the various RPC callees. Also, due to the rapid alternating between blocking and computing, pinning each RPC goroutine to an OS thread and lowering the priority of those threads has limited effect, since now there are a much larger number of OS threads which are competing for the CPU. Even niced, this large number of OS threads competes heavily with the dashboard. It's also much more expensive than using goroutines.

Note that I'm not so concerned about RPC latency, throughput matters more. Dashboard latency does matter, of course. For my purposes, a very coarse grained set of priority levels would suffice. In fact, just two levels (normal and batch) would be fine. Batch goroutines would only get CPU time if there are no normal goroutines wanting the CPU.

Is this something that would be acceptable to include in the runtime?

Regards,

Richard....

Brad Fitzpatrick

Aug 20, 2016, 2:35:55 PM
to Richard Gooch, golang-dev
Seems very unlikely to be included.

What happens when a high priority goroutine depends on its work being completed by a helper goroutine which also services low priority goroutines? (This isn't hypothetical; I find it frequently.)

Are you going to do priority inversion? How? Based on channel sends? How do you know when the work is complete and should end the priority boost?

etc.


--
You received this message because you are subscribed to the Google Groups "golang-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-dev+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Richard Gooch

Aug 20, 2016, 2:58:34 PM
to golang-dev, rg+go...@safe-mbox.com
I don't think the full generality of an OS scheduler is required or desired; the set of scheduler targets in a Go programme is far more controlled than with an OS. If a normal (high priority) goroutine depends on a batch (low priority) goroutine, it just waits. By waiting, it stops competing for the CPU, which should give cycles to the batch goroutine. If your dashboard is still unresponsive, go fix your dependencies. So, no to priority inversion.

Note that the way I see this being used is that most goroutines will be normal (the default), and only specially tagged ones will be batch. So, generally, helper goroutines will be normal.

If you think this approach isn't feasible, then do you have an alternative suggestion to address the class of problems I described?

I have an alternative suggestion: allow the creation of OS thread pools and pinning goroutines to thread pools. Then, one could create a thread pool with lower priority and pin the batch goroutines to that thread pool. The normal goroutines wouldn't be quite as responsive as with the original approach I suggested, but at least there would be some relief from the current behaviour.

Brad Fitzpatrick

Aug 20, 2016, 4:09:01 PM
to Richard Gooch, golang-dev
On Sat, Aug 20, 2016 at 11:58 AM, Richard Gooch <rg+go...@safe-mbox.com> wrote:
I don't think the full generality of an OS scheduler is required or desired; the set of scheduler targets in a Go programme is far more controlled than with an OS. If a normal (high priority) goroutine depends on a batch (low priority) goroutine, it just waits. By waiting, it stops competing for the CPU, which should give cycles to the batch goroutine. If your dashboard is still unresponsive, go fix your dependencies. So, no to priority inversion.

Note that the way I see this being used is that most goroutines will be normal (the default), and only specially tagged ones will be batch. So, generally, helper goroutines will be normal.

If you think this approach isn't feasible, then do you have an alternative suggestion to address the class of problems I described?

Don't run at 100% CPU?

runtime.LockOSThread + syscall.Setpriority maybe for your batch routines?
 
Other than that, not really.
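Brad's LockOSThread + Setpriority suggestion might look roughly like this. This is a Linux-specific sketch, not tested against the poster's workload: on Linux, nice values are per-thread, so syscall.Setpriority(syscall.PRIO_PROCESS, 0, ...) renices only the calling thread.

```go
package main

import (
	"fmt"
	"runtime"
	"syscall"
)

// runBatch runs fn on a dedicated OS thread at a raised nice value
// (lower CPU priority). The goroutine never unlocks the thread, so
// the reniced thread is not reused for normal-priority goroutines.
func runBatch(fn func(), done chan<- struct{}) {
	go func() {
		runtime.LockOSThread()
		// Best effort: raising one's own nice value needs no
		// privilege; lowering it back would.
		_ = syscall.Setpriority(syscall.PRIO_PROCESS, 0, 19)
		fn()
		close(done)
	}()
}

func main() {
	done := make(chan struct{})
	runBatch(func() { _ = 1 + 1 }, done)
	<-done
	fmt.Println("batch task finished")
}
```

As Richard points out in his reply, doing this for hundreds of batch goroutines means hundreds of OS threads, which defeats much of the point of goroutines.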

Richard Gooch

Aug 20, 2016, 8:39:24 PM
to golang-dev, rg+go...@safe-mbox.com
Just "not running at 100% CPU" is problematic. We're paying by the hour for CPU, so we have a strong motivation to avoid idle time. We basically make a throughput-versus-cost tradeoff. Unless we pay > 10x for CPU, the programme is going to be using 100% of the CPU. Sure, we'd get 10x throughput, but it's not worth it.

Pinning the batch goroutines each to their own OS thread doesn't work so well. We'd have hundreds of OS threads. So we'd throw out the benefit of lightweight goroutines. Also, as I pointed out in my first post, all those OS threads would compete with the OS thread for the dashboard. Even if their priorities were lowered, they would compete since there would be so many of them.

I've tried pinning the HTTP server and dashboard goroutines to OS threads, but that didn't help.

Brad Fitzpatrick

Aug 20, 2016, 8:54:10 PM
to Richard Gooch, golang-dev
I meant you could write your own minimal scheduler just for your batch stuff.

You'd have N normal (GOMAXPROCS) threads running at normal priority managed by the Go runtime, and then your few (and presumably CPU-heavy) batch goroutines could be managed on LockOSThread-locked threads, pulling from a shared chan func() of batch work to do. It does impose restrictions on how they're used (e.g. they can't block) but if they're few enough and simple enough, just computing stuff, it might work.

Not saying it's the ultimate answer, but it might work.




Richard Gooch

Aug 20, 2016, 9:06:54 PM
to golang-dev, rg+go...@safe-mbox.com
That won't work in the case I described. There are only a couple of normal goroutines (the HTTP server and dashboard goroutine, as well as any handlers that libraries spawn under the covers). There are hundreds of goroutines doing batch work (RPC transactions), and they constantly flip between blocking (on the network) and running crypto code (since the connections are TLS encrypted). If I limit the number of concurrent RPC transactions to the number of CPUs, the CPUs spend a lot of time idle, since they block on the network/remote endpoint. If I limit the number of concurrent RPC transactions to 10x the number of CPUs, the dashboard takes multiple seconds to redraw. With no RPC transactions in flight the dashboard redraws in a tiny fraction of a second (it's simple HTML and is easy to compute). Even with the 10x #CPUs limit on concurrent RPC transactions, the programme sometimes uses all the CPU and sometimes has idle CPU, depending on what's happening in the network and with the remote endpoints. If I boost the limit on concurrent RPC transactions the idle CPU goes to zero but the dashboard can take minutes to redraw.

Brad Fitzpatrick

Aug 20, 2016, 9:10:01 PM
to Richard Gooch, golang-dev
It'd be interesting if you could hack up a change to the scheduler to add two run queues (normal and batch), a runtime package call to modify the current goroutine's priority, and report back with any numbers from your application.


Richard Gooch

Aug 20, 2016, 9:24:55 PM
to golang-dev, rg+go...@safe-mbox.com
I could give that a try. I haven't poked around the runtime package before, so it won't be quick :-)

kennyl...@gmail.com

Aug 21, 2016, 2:40:37 AM
to golang-dev
Alternatively, make a data-aggregator process that performs massively parallel RPC requests to gather the data for your dashboard, which could then be fetched with one short request at a high frequency. The aggregator request would never block for long, as the aggregator just sends its current image of the stats. If priority needs adjustment, the process can be reniced.

A batch queue might have its uses, but it might never run if the normal goroutines never all block simultaneously. If they do, there will have to be logic to occasionally schedule "normally" from the batch queue, which, with a short batch queue and a long normal queue, might actually result in the batch jobs executing more often than normal jobs. It'll end up quite quirky.

On a side note: you mention paying for CPU time by the hour, yet mention that this is a dashboard. As a dashboard will run indefinitely (and usually be quite a low-power process, but it is hard to know what you mean by "dashboard"), this seems like a very expensive way to host the dashboard...

Michael Jones

Aug 21, 2016, 9:37:44 AM
to kennyl...@gmail.com, golang-dev
Maybe “go fast xxx()” and “go slow xxx()”


Richard Gooch

Mar 8, 2017, 12:11:30 PM
to golang-dev, rg+go...@safe-mbox.com
To follow up on this: rather than hack the scheduler/runtime, I put together something simple that I can use without having to maintain patches for each new Go release :-) It demonstrates the principle I described (goroutines with batch priority).

My approach is: create a CpuSharer with GrabCpu and ReleaseCpu methods and create a Dialer which wraps net.Dialer. The CpuSharer has NumCPU slots available, thus limiting the number of concurrent runnable goroutines.

My Dialer wraps calls to blocking methods with ReleaseCpu/GrabCpu pairs (blocking methods are: the Dial method and the Read and Write methods of the connection). This is not ideal, since I have to take care to wrap all potentially blocking methods that my herd of goroutines may call and I also have to be careful to match up grab and release calls. I would of course prefer that the runtime manages this for me :-). Another limitation is that I've got short windows where a goroutine releases the CPU for others and then calls some blocking method but is not yet blocked by the runtime. Again, if the runtime implemented this functionality, this would be better.

Normal goroutines, such as those servicing the dashboard, don't participate in the CpuSharer pool, so they compete with fewer runnable goroutines and thus are more responsive.

So now, even if I have 100 times more goroutines than I have CPUs, my dashboard is quite responsive. Each of those goroutines is running a GOB decoder over a TLS connection, so they switch back and forth between computing (crypto operations, GOB decode, reflect, reflect and more reflect) and blocking on Read. Because I've wrapped the Read, none of the library code had to be changed, but I have more control over the CPU the library code uses. It used to take hundreds of milliseconds to several seconds (depending on how many herdbeasts were thundering at once) to render the dashboard. With this scheme, despite the limitations, it's 10 to hundreds of milliseconds. This is good enough for now. It's stopped being painful.

While I would like to see support for this in the runtime, some thought is required on how to make this generic. For example, I've implemented a simple FIFO scheduler. Grab requests succeed in the order in which they were made (i.e. the first goroutine which tries to grab a CPU is the first one to start running once a CPU is available). Another scheduling policy which interests me is the "drive to completion" model, where goroutines are placed in a priority queue and the highest priority (first to start) goroutine gets first pick when a CPU becomes available. In my workload, I start a goroutine for each transaction. When the transaction completes, the goroutine terminates. I think the "drive to completion" policy will also reduce the memory consumption, as there will be fewer halfway "in-flight" transactions.

For anyone who's interested in the code I threw together to prove out the concept:
https://godoc.org/github.com/rgooch/Dominator/lib/cpusharer
https://godoc.org/github.com/rgooch/Dominator/lib/net

Russ Cox

Mar 8, 2017, 12:18:19 PM
to Richard Gooch, golang-dev
FWIW, being able to do this in user code and customize as appropriate for the app (as you have) seems a lot better than trying to get a policy (or a policy language) into the runtime.

Russ



Richard Gooch

Mar 8, 2017, 12:27:13 PM
to golang-dev, rg+go...@safe-mbox.com
I definitely agree that you don't want some complex description of the scheduling policy rammed into the language. However, I think there may be a way to split the responsibilities, letting the runtime/scheduler do what they do best and leave the policy in user code. Instead of explicit "batch mode" support, add support for launching a goroutine with an optional channel or function that is read from/called when the runtime wants to put the goroutine on the run queue.

This would allow user code to control and implement the policy for who gets the CPU, and the language specification and implementation remains simple and generic.

Richard Gooch

Mar 10, 2017, 11:28:54 AM
to golang-dev, rg+go...@safe-mbox.com, r...@golang.org
So, would you be open to an API such as:
    func SetSchedBlockChannel(ch chan<- struct{})
    func SetSchedRunChannel(ch chan<- struct{})

or:
    func SetSchedBlockFunc(func())
    func SetSchedRunFunc(func())

SetSchedBlock* is used by the scheduler to notify when the goroutine is about to block.
SetSchedRun* is used by the scheduler to notify/wait when the goroutine unblocks and wants the CPU.
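Since SetSchedRun*/SetSchedBlock* do not exist in the runtime, the handshake they propose can only be simulated with ordinary channels. This sketch models a user-level scheduler admitting runnable goroutines one at a time, FIFO; all names are hypothetical:

```go
package main

import "fmt"

// runAdmitted simulates the proposed handshake: each worker announces
// that it wants the CPU by sending its personal grant channel to the
// scheduler, then blocks until the scheduler says OK. With the real
// hook, the runtime would drive this instead of the workers.
func runAdmitted(workers int) int {
	wantCh := make(chan chan struct{})
	done := make(chan int)

	// The application scheduler: grant requests one at a time.
	// A real scheduler would also wait for a "blocked" notification
	// before admitting the next goroutine.
	go func() {
		for grant := range wantCh {
			grant <- struct{}{} // admit this goroutine
		}
	}()

	for i := 0; i < workers; i++ {
		go func(i int) {
			grant := make(chan struct{})
			wantCh <- grant // "I am runnable and want the CPU"
			<-grant         // blocked until the scheduler says OK
			done <- i
		}(i)
	}

	total := 0
	for i := 0; i < workers; i++ {
		total += <-done
	}
	close(wantCh)
	return total
}

func main() {
	fmt.Println(runAdmitted(3)) // 3 (workers 0+1+2 all ran)
}
```

The scheduler goroutine here is where a priority policy would live: it could hold several grant channels and pick which to signal first.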

If this has a reasonable potential to be accepted, then I'll assign some engineering resources to it.

Brad Fitzpatrick

Mar 10, 2017, 12:18:45 PM
to Richard Gooch, golang-dev, r...@golang.org
I'm going to guess that is too confusing and magic and a pain to maintain and thus not acceptable. 

Richard Gooch

Mar 10, 2017, 12:31:19 PM
to golang-dev, rg+go...@safe-mbox.com, r...@golang.org
How do you figure it's confusing? The API seems quite clear to me. If you're saying that dealing with scheduling decisions is confusing, then that would be the case for a solution entirely in user code. In my view, adding a hook to the runtime ensures full coverage and fewer surprises.

As for being "magic" and "a pain to maintain", this looks like a simple way to give goroutines some control over their scheduling policy. Control over the scheduler is a must for real-world systems. Most solve it with priority levels, but that complicates the scheduler. My suggestion leaves the complexity up to the user and doesn't impose noticeable costs on the default case.

Keith Randall

Mar 10, 2017, 1:47:09 PM
to Richard Gooch, golang-dev, Russ Cox
On Fri, Mar 10, 2017 at 9:31 AM, Richard Gooch <rg+go...@safe-mbox.com> wrote:
How do you figure it's confusing? The API seems quite clear to me. If you're saying that dealing with scheduling decisions is confusing, then that would be the case for a solution entirely in user code. In my view, adding a hook to the runtime ensures full coverage and fewer surprises.


I'm with Brad, this is confusing.  The runtime is about to block your goroutine, so it must send on a channel.  What if that send blocks?

Which channel is the runtime supposed to send its "my block notification is about to block" notification to?  Or doesn't it?
 
I'd also like to understand how this API would solve the original problem.  How do you propose to use this API to set priorities?


Ian Lance Taylor

Mar 10, 2017, 2:15:16 PM
to Keith Randall, Richard Gooch, golang-dev, Russ Cox
On Fri, Mar 10, 2017 at 10:47 AM, 'Keith Randall' via golang-dev
<golan...@googlegroups.com> wrote:
>
> On Fri, Mar 10, 2017 at 9:31 AM, Richard Gooch <rg+go...@safe-mbox.com>
> wrote:
>>
>> How do you figure it's confusing? The API seems quite clear to me. If
>> you're saying that dealing with scheduling decisions is confusing, then that
>> would be the case for a solution entirely in user code. In my view, adding a
>> hook to the runtime ensures full coverage and fewer surprises.
>>
>
> I'm with Brad, this is confusing. The runtime is about to block your
> goroutine, so it must send on a channel. What if that send blocks?
>
> Which channel is the runtime supposed to send its "my block notification is
> about to block" notification to? Or doesn't it?
>
> I'd also like to understand how this API would solve the original problem.
> How do you propose to use this API to set priorities?

It is not clear to me how this could work in general. The runtime
writes to a channel telling the goroutine it is about to be preempted.
Then it sets the preempt flag for the goroutine. The goroutine isn't
even going to see the notification, because it will block as soon as
it calls select or reads from the channel.

If the scheduler has to notify a goroutine, and then let it run for a
while, and then set the preempt flag, that makes scheduling
significantly more complicated. And it's still random whether the
goroutine would even notice before it gets preempted.

I think I'm missing something.

Ian

Keith Randall

Mar 10, 2017, 2:48:16 PM
to Ian Lance Taylor, Richard Gooch, golang-dev, Russ Cox
I think either you or I are confused.  I was talking about blocking, like reading from a network socket.  The channel would presumably be read from another goroutine (a master user-level scheduler goroutine, presumably).  You're talking about preemption and having the same goroutine read from the channel.  I'm not sure which the original poster is talking about.

Richard Gooch

Mar 10, 2017, 9:33:27 PM
to golang-dev, rg+go...@safe-mbox.com, r...@golang.org
On Friday, 10 March 2017 10:47:09 UTC-8, Keith Randall wrote:


On Fri, Mar 10, 2017 at 9:31 AM, Richard Gooch <rg+go...@safe-mbox.com> wrote:
How do you figure it's confusing? The API seems quite clear to me. If you're saying that dealing with scheduling decisions is confusing, then that would be the case for a solution entirely in user code. In my view, adding a hook to the runtime ensures full coverage and fewer surprises.


I'm with Brad, this is confusing.  The runtime is about to block your goroutine, so it must send on a channel.  What if that send blocks?

It blocks. This blocking operation would not generate a block notification.
 
Which channel is the runtime supposed to send its "my block notification is about to block" notification to?  Or doesn't it?

Blocking notifications are not recursive.
 
I'd also like to understand how this API would solve the original problem.  How do you propose to use this API to set priorities?

I get a notification that a goroutine is unblocked and wants the CPU. The runtime blocks that goroutine until I say "OK" (which could be quite a while later depending on my scheduling policy). My user code scheduler can implement priorities, if that's useful to me. The runtime doesn't see any of that. All it needs is to allow me to interject when transitioning between runnable and blocked.
 

Richard Gooch

Mar 10, 2017, 9:39:44 PM
to golang-dev, ia...@golang.org, rg+go...@safe-mbox.com, r...@golang.org

I am not talking about messing with the pre-emption logic. I am only talking about adding hooks to the runnable/blocked transitions. That allows me to exert control over the goroutines that are participating in the co-operative pool.
 

Ian Lance Taylor

Mar 11, 2017, 12:16:25 AM
to Richard Gooch, golang-dev, Russ Cox
I'm confused about which goroutines are writing to the channel and
which goroutines are reading from the channel. Goroutine G1 is
running, and some other goroutine decides that it needs to be
preempted. Who sends what on which channels?

Ian

Richard Gooch

Mar 11, 2017, 1:40:24 AM
to golang-dev, rg+go...@safe-mbox.com, r...@golang.org

Again, I'm not talking about preemption. I'm only talking about adding hooks for the transition between runnable and blocking. The goroutine registers a pair of channels which the scheduler will talk to for these transitions. I expect that the application will create a managing goroutine (the application scheduler) which listens/writes to the channels. The application scheduler thus has control over which (and how many) goroutines get onto the run queue. It's brilliantly simple :-)

Preemption doesn't come into play. Once you transition to runnable, you may or may not be preempted, but you're always runnable (i.e. on the run queue), so there is no transition just because you get preempted.

Ian Lance Taylor

Mar 11, 2017, 3:59:26 PM
to Richard Gooch, golang-dev, Russ Cox
On Fri, Mar 10, 2017 at 10:40 PM, Richard Gooch
Thanks, that helps a little.

I think one of the difficulties I'm having in understanding this
proposal is how to connect it to the current Go runtime. Currently,
there are operating system threads and there is a set, of size
GOMAXPROCS, of rights to "run a goroutine." Threads move in and
out of the set based on what the goroutines they are executing do.
When thinking about the Go scheduler it helps to think about
goroutines and threads separately, although of course goroutines
always execute on threads.

Let's say that a goroutine executes some system call that, as it
happens, does not complete immediately. When it enters the system
call it does not know it is going to block; the obvious case here is
network I/O, which is sometimes ready to go and sometimes must wait.
So the goroutine is not going to send a notification on a channel that
it is going to block. What happens today (and it has been different
in the past) is that there is a system monitor thread (not goroutine)
that wakes up every so often (the wakeup frequency depends on various
things). The system monitor thread will check on all the executing
goroutines. It will observe that one of the goroutines entered a
system call some time ago but has not come out, so it is presumably
blocked. Because goroutines run on threads, this of course means that
the thread is blocked in a system call. The system monitor thread
takes away that thread's "right to run a goroutine" and grants it to a
different thread (creating a new thread if necessary). That other
thread will look at the list of runnable goroutines, and start
executing one. When the thread/goroutine blocked in the system call
returns from the call, the thread will try to reacquire the "right to
run a goroutine;" if it fails, the goroutine will be put on the queue
of goroutines waiting to run, and the thread will go to sleep until
the system monitor thread wakes it up. This is all complicated by the
concurrent garbage collector, but probably not in a relevant way.

So, in your proposal, who writes to the channel when we have
determined that the goroutine has blocked? What happens if nothing is
reading from the channel? (Note that obviously the answer can not be
"the system monitor thread blocks writing to the channel.")

Ian

Yura Sokolov

Mar 12, 2017, 7:28:00 AM
to golang-dev
Excuse me for an orthogonal proposal.

Currently we have LockOsThread to lock current goroutine to thread.
What if we have a way to declare that every goroutine, spawned from this one, will stay on the same thread forever?

It will solve the original problem: just lock the HTTP-serving goroutine to a thread, and all goroutines spawned for client connections will work on this separate thread.

Also it will allow the use of single-threaded data structures for in-memory storages while still having concurrency with goroutines. (OK, this case is not easy to implement, because goroutines may be preempted at many points.)

Richard Gooch

Mar 12, 2017, 11:48:20 PM
to golang-dev, rg+go...@safe-mbox.com, r...@golang.org

 Thanks for the detailed description of what's happening under the covers. I always wondered how you handled blocking system calls (for operations where there isn't an equivalent non-blocking variant).

So, in your proposal, who writes to the channel when we have
determined that the goroutine has blocked?  What happens if nothing is
reading from the channel?  (Note that obviously the answer can not be
"the system monitor thread blocks writing to the channel.")

Given the complexity above, I think the simplest approach is to declare all system calls as being potentially blocking: send the notification when a goroutine is about to enter a system call, and send the corresponding "unblocked" notification when the system call returns. Remember that the goal is to exert control (limits) over the number of runnable goroutines which are chewing the CPU. System calls tend to use little CPU time, as they usually block.

Richard Gooch

Mar 13, 2017, 12:04:10 AM
to golang-dev

I think a modified version of this would work. The mechanism you described makes it difficult when goroutines do not share a convenient common ancestor and also when you have multiple CPUs and thus want to pin the goroutines to a pool of OS threads rather than a specific thread.

An API which allows you to create a pool of OS threads and lets you pin a goroutine to the pool of threads would work and I think it would be easy to support a variety of goroutine groupings and hierarchies.

If the pool is created by inheriting the properties of the thread the creating goroutine is running on, that would also allow other cool tricks like putting some of your goroutines in a separate mount namespace.

Leonardo Santagada

Mar 13, 2017, 4:37:40 AM
to Richard Gooch, golang-dev
On Mon, Mar 13, 2017 at 5:04 AM, Richard Gooch <rg+go...@safe-mbox.com> wrote:

I think a modified version of this would work. The mechanism you described makes it difficult when goroutines do not share a convenient common ancestor and also when you have multiple CPUs and thus want to pin the goroutines to a pool of OS threads rather than a specific thread.

An API which allows you to create a pool of OS threads and lets you pin a goroutine to the pool of threads would work and I think it would be easy to support a variety of goroutine groupings and hierarchies.

If the pool is created by inheriting the properties of the thread the creating goroutine is running on, that would also allow other cool tricks like putting some of your goroutines in a separate mount namespace.

This seems simple enough and would allow goroutine groups with different priorities, but I wonder if it can be done without messing with the language much... maybe a pooledGoroutine(f, pool) or some other function that internally would make the goroutine be launched on the specific thread pool, so no changes in the language are needed. If there is consensus that this would be a good idea, someone would need to really work on this proposal.

This does remind me of Facebook's Wangle concurrency pools: https://code.facebook.com/posts/215466732167400/wangle-an-asynchronous-c-networking-and-rpc-library/

--

Leonardo Santagada

Russ Cox

Mar 13, 2017, 1:44:16 PM
to Leonardo Santagada, Richard Gooch, golang-dev
It's not obvious to me that this problem is encountered commonly enough to be worth the complexity of additional mechanism in the runtime, especially when you've demonstrated a decent solution that doesn't involve the runtime. At the least, we'd need more examples of a variety of real programs hitting this kind of problem.

Russ




she...@coolpage.com

Jan 22, 2018, 8:46:31 PM
to golang-dev
This or Richard Gooch’s simplified API proposal is needed to enable more control over _lockless_ concurrency design:

https://stackoverflow.com/questions/30646391/does-runtime-lockosthread-allow-child-goroutines-to-run-in-same-os-thread#comment83771585_30646391

(click the links until you end up at GitHub)

she...@coolpage.com

Feb 9, 2018, 6:50:15 AM
to golang-dev
Furthermore another very important use case is eliminating ”spaghetti of listeners” with event-based programming (e.g. GUIs) by employing a functional reactive paradigm (which inherently should have the spawned goroutines running in the same OS thread):

https://github.com/keean/zenscript/issues/17#issuecomment-364370599

I’m going to open a (or add this to any preexisting) Github issues thread on this matter.

she...@coolpage.com

Feb 9, 2018, 8:35:16 AM
to golang-dev

she...@coolpage.com

Feb 9, 2018, 9:41:16 PM
to golang-dev
Since the go issues thread was closed due to the inability of participants to cherish mutual respect and patience of exposition, I’m sharing my conclusion about the thread:

https://github.com/keean/zenscript/issues/35#issuecomment-364614070

That will end my participation in this thread. Thank you.

she...@coolpage.com

Feb 12, 2018, 8:45:01 AM
to golang-dev
Someone asked me in private what specifically about the many variants of FRP that I thought might require restricting goroutines to a single OS thread. I decided to reply publicly because the prior publicly linked discussion derailed and I wasn't able to follow-up publicly there with later insights and conclusions.

The discussion of what *I* had in mind for FRP continued:

https://github.com/keean/zenscript/issues/17#issuecomment-364790961

I mentioned that the inability to have first-class control over the continuation is perhaps a weakness of goroutines, and I stated that I'm not sure yet whether, for what I want, I need the option for all continuations to run in the same OS thread:

https://github.com/keean/zenscript/issues/17#issuecomment-364725226

And what *we* may want for FRP became more focused with a concise example here:

https://github.com/keean/zenscript/issues/17#issuecomment-364913864

I may be misapplying the term FRP??

I suppose the point about restricting to a single OS thread distills down to this: if we have mutable state or other order-dependent invariants in an algorithm combined with continuations, then we can't use goroutines to help model continuations if the assumptions of our algorithm are those of a single OS thread. Even with referential transparency, I'm not sure we can avoid order-dependent effects. After all, the point of modeling effects is that even though we're better off declaring the invariants, that doesn't mean that implicit parallelism is allowed by the invariants. So here we have these wonderfully efficient continuations of green threads in Go, but afaik they're limited analogously to delimited continuations and we can't use them outside of an M:N parallelism model. To some extent both of these deficiencies can probably be overcome with significant scaffolding boilerplate, but as I had mentioned in the derailed Go issues thread, I think there are race conditions (c.f. my 3rd link above for a simplified example of a race condition in this context) created down at the OS level that can't be overcome with scaffolding. This is all very roughed out in my head, so I may have mistakes. At least hopefully that gives readers the gist of my concern, whether valid or not.

Apologies for my part in derailing that thread. I did say in that thread that it was 5am for me (implying to please be patient). I still don’t understand why the rabid rush to beat down anyone who wants to analyze whether M:N parallelism is sufficient to address all needs of concurrent programming. Why can’t we all get along and enjoy discussing ideas on their merits with a modicum of patience? I thought about this, and I guess they get slammed with hundreds of frivolous feature requests and have to shoot them down efficiently. Maybe not laziness, just overload. But one would hope that in an open-source context, there are many eyeballs and we could relax a bit and let the process run more decentralized. But I guess projects do require top-down organization. Sigh, I don’t have a solution. Cheers.

matthe...@gmail.com

unread,
Feb 12, 2018, 10:50:51 AM2/12/18
to golang-dev
The proposal is still open and has received significant discussion.

Apologies for my part in derailing that thread.

Interpreting written communication is difficult. The code of conduct is a base we all agree on to participate here. I don’t think characterizing people outside of a technical context helps us understand and discuss the proposal.

I still don’t understand why the rabid rush to beat down anyone who wants to analyze whether M:N parallelism is sufficient to address all needs of concurrent programming. Why can’t we all get along and enjoy discussing ideas on their merits with a modicum of patience?

The goal of a proposal issue is enough detail for a design document (https://github.com/golang/proposal/) but this proposal doesn’t have all of the tradeoffs enumerated concisely and doesn't seem to be developing toward a draft doc or implementation.

I personally would start a golang-nuts thread on the topic to narrow down the tradeoffs, but maybe golang-dev would be agreeable to continuing the discussion here.

Matt

she...@coolpage.com

unread,
Feb 13, 2018, 5:31:32 AM2/13/18
to golang-dev
Finally got some holistic coherence of the issues:

https://github.com/keean/zenscript/issues/17#issuecomment-365214882

So we don’t need to restrict a set of goroutines to the same OS thread, but there’s an argument for the request to restrict a set of goroutines to running round-robin, i.e. removing parallelism within the set.

Afaics, the compelling aspect of the argument is that copying and message passing (of Hoare’s CSP) does not resolve the unsafety of parallelism. It’s more complex than @bcmills’ and @ianlancetaylor’s oversimplification, which led to the immediate rejection of my proposal. Imperativity is inherent in programming regardless (even with referential transparency and immutability, because there are other order-dependent effects such as I/O). So for provable safety, either you accept Rust’s tsuris or you consider my idea of relaxing the invariants to non-parallelized concurrency with declared context-switch opportunities. I think this may enable more flexible reasoning about the safety invariants. Those two guys apparently think that by discouraging any reasoning about safety invariants the invariants can somehow be made to disappear, but that’s not reality.

The proposal is not open for me to add to; it seems only contributors can comment in that Go issues thread. I’ll agree the proposal was not well studied before raising the issues thread.

Okay I will end the discussion here unless someone indicates it should continue here. I will consider starting a golang-nuts discussion if I feel I have adequate sleep and time to handle it well. Thank you.