If I want to build a service that for some reason doesn't gain from multithreading, and would anyway require an abundance of synchronization from channels or mutexes, is there then a better/prettier way to do this than using a channel (and the contention overhead that implies) between the http.HandlerFunc and the service, or setting GOMAXPROCS to 1 (which feels a bit like an ugly hack, for some reason)?
What I would like is a setting for maximum concurrently running HandlerFuncs, but lacking that, what is the most convenient way?
Martin
--
On Oct 10, 2012 9:43 PM, "Kyle Lemons" <kev...@google.com> wrote:
> Concurrency is not parallelism. By default, your Go program runs single-threaded right now. You can still have more than one goroutine running concurrently, however, even with only one thread.
My problem is that I want neither. And I don't want my program to start exhibiting race conditions when GOMAXPROCS is removed some day in the future.
And the only way to avoid concurrency with the stdlib servers seems to be to funnel all requests through a mutex or channel, which seems a bit roundabout.
In other environments you work hard to get the kind of concurrency Go offers for free. But it is oddly hard to get rid of it when it isn't wanted.
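[Editorial sketch of the "funnel all requests through a channel" pattern being discussed; the `counter` service and its method names are my own illustration, not code from the thread.]

```go
package main

import "fmt"

// getReq asks the service loop for a value and carries a reply channel back.
type getReq struct {
	key   string
	reply chan int
}

// counter owns its state in a single goroutine; all access is funneled
// through channels, so the map needs no mutex.
type counter struct {
	incr chan string
	get  chan getReq
}

func newCounter() *counter {
	c := &counter{incr: make(chan string), get: make(chan getReq)}
	go c.loop()
	return c
}

// loop is the only goroutine that ever touches counts.
func (c *counter) loop() {
	counts := map[string]int{}
	for {
		select {
		case k := <-c.incr:
			counts[k]++
		case r := <-c.get:
			r.reply <- counts[r.key]
		}
	}
}

func (c *counter) Incr(k string) { c.incr <- k }

func (c *counter) Get(k string) int {
	reply := make(chan int)
	c.get <- getReq{key: k, reply: reply}
	return <-reply
}

func main() {
	c := newCounter()
	c.Incr("hits")
	c.Incr("hits")
	fmt.Println(c.Get("hits")) // prints: 2
}
```

This is the roundabout version the message complains about, but it stays correct no matter what GOMAXPROCS is or how the scheduler changes.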
Martin
--
If only one core is available you never get lock contention, since the last goroutine will have released the lock and gone to sleep before the next one tries to acquire it. Or does the scheduler switch in places I don't know about?
It's just my guess that having multiple simultaneously running goroutines costs so much due to lock contention, but feel free to correct me :)
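[Editorial sketch: a rough way to test the contention guess above by timing the same mutex-guarded counter under one and several OS threads. This is my own micro-benchmark, not the one linked in the thread, and its numbers are only indicative.]

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// hammer has `goroutines` workers each do `iters` locked increments of a
// shared counter, returning the final count and the elapsed time.
func hammer(goroutines, iters int) (int, time.Duration) {
	var mu sync.Mutex
	var wg sync.WaitGroup
	counter := 0
	start := time.Now()
	for g := 0; g < goroutines; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < iters; i++ {
				mu.Lock()
				counter++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	return counter, time.Since(start)
}

func main() {
	// On one thread the lock is almost never contended; on several threads
	// the same workload pays for cross-thread contention.
	runtime.GOMAXPROCS(1)
	n, d := hammer(4, 100000)
	fmt.Printf("GOMAXPROCS=1: %d increments in %v\n", n, d)
	runtime.GOMAXPROCS(4)
	n, d = hammer(4, 100000)
	fmt.Printf("GOMAXPROCS=4: %d increments in %v\n", n, d)
}
```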
--
In principle, according to specs, yes.
But if I understand the current implementation correctly, it IS actually cooperative multitasking right now?
--
On Fri, Oct 12, 2012 at 1:08 AM, Martin Bruse <zond...@gmail.com> wrote:
> In principle, according to specs, yes.
> But if I understand the current implementation correctly, it IS actually cooperative multitasking right now?
That still doesn't mean that it waits to reschedule your goroutine until you release the lock. Any other action you take (system call, a.k.a. printf or logging or net or disk i/o, channel, even memory allocation or a defer) within the critical section could call out to the scheduler.
> The setting for GOMAXPROCS is the problem with your benchmark:
A problem? I don't know if it's a problem. It's just the way locks behave when one has multiple threads...
> BenchmarkSingle 10000000 205 ns/op
> BenchmarkMulti 2000000 905 ns/op
> BenchmarkMultiSingle 10000000 195 ns/op
>
> The first is your single test with one consumer/one producer/GOMAXPROCS=1.
> The second is your test with one consumer/four producers/GOMAXPROCS=4.
> The third is your test with one consumer/four producers/GOMAXPROCS=1.
And I am not surprised. I tried that as well. With only one goroutine active at a time there is no contention and almost no overhead from the locking.
> This is a more realistic run of the benchmark.
Yes, and it is what I want.
But, from the docs:
"This call will go away when the scheduler improves."
What I want (again) is the simplest way to limit the number of active goroutines spawned by a service. And I want it to work even "when the scheduler improves".
Using a channel or other mutex WILL incur extra costs (perhaps not noticeable before the scheduler decides on its own to increase GOMAXPROCS), but maybe it is the best way to future-proof the solution.
Using (the default) GOMAXPROCS=1 is simplest and probably cheapest performance wise, but is not guaranteed to work when GOMAXPROCS disappears. And it will not guarantee safety even for maps and slices when/if the scheduler turns preemptive.
My problem is a luxury problem; in other languages it might not exist, but nevertheless I want to create a good solution.
I have more or less decided to use an RWMutex around everything my service does, but damn it's a lot of extra code when I want it fine grained around each getter and setter.
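[Editorial sketch of the RWMutex-around-everything approach described above. The `Service` type and its fields are illustrative, not Martin's actual code; the point is the per-getter/per-setter boilerplate he is complaining about.]

```go
package main

import (
	"fmt"
	"sync"
)

// Service guards all of its state with one RWMutex: readers take the shared
// lock, writers take the exclusive one. Every accessor repeats the same
// lock/defer-unlock dance, which is the boilerplate cost of this approach.
type Service struct {
	mu    sync.RWMutex
	name  string
	count int
}

func (s *Service) Name() string {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.name
}

func (s *Service) SetName(n string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.name = n
}

func (s *Service) Count() int {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.count
}

func (s *Service) Incr() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.count++
}

func main() {
	s := &Service{}
	s.SetName("demo")
	s.Incr()
	fmt.Println(s.Name(), s.Count()) // prints: demo 1
}
```

Unlike relying on GOMAXPROCS=1, this stays correct under a preemptive scheduler, including for the maps and slices mentioned above.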
I am talking about the benchmark I linked to, nothing else.
That benchmark was run on current hardware, compiled with a current compiler.
I am not trying to fight the language, no matter how trendy it is to accuse people of it.
If you don't want contention, then don't create a situation wherein there will be contention: http://play.golang.org/p/3seVsKCcZk