Single threaded server


Martin Bruse

Oct 10, 2012, 6:53:23 AM
to golan...@googlegroups.com

If I want to build a service that for some reason doesn't gain from multi threading, and would anyway require an abundance of synchronization from channels or mutexes, is there then a better/prettier way to do this than using a channel (and the contention overhead that implies) between the http.HandlerFunc and the service, or setting GOMAXPROCS to 1 (which feels a bit like an ugly hack, for some reason).

What I would like is a setting for maximum concurrently running HandlerFuncs, but lacking that, what is the most convenient way?

Martin

Kyle Lemons

Oct 10, 2012, 3:43:24 PM
to Martin Bruse, golan...@googlegroups.com
Concurrency is not parallelism.  By default, your Go program runs single-threaded right now.  You can still have more than one goroutine running concurrently, however, even with only one thread.

Martin


bryanturley

Oct 10, 2012, 4:12:08 PM
to golan...@googlegroups.com
The real question here is why would you want "to build a service that for some reason doesn't gain from multi threading"  ?
Don't hide your baskets from your eggs.

Martin Bruse

Oct 10, 2012, 5:23:51 PM
to golan...@googlegroups.com

On Oct 10, 2012 9:43 PM, "Kyle Lemons" <kev...@google.com> wrote:
> Concurrency is not parallelism.  By default, your Go program runs single-threaded right now.  You can still have more than one goroutine running concurrently, however, even with only one thread.

My problem is that I want neither. And I don't want my program to start exhibiting race conditions when GOMAXPROCS is removed some day in the future.

And the only way to avoid concurrency with the stdlib servers seems to be to funnel all requests through a mutex or channel, which seems a bit roundabout.

In other environments you work hard to get the kind of concurrency Go offers for free. But it is oddly hard to get rid of it when it isn't wanted.

Brad Fitzpatrick

Oct 10, 2012, 5:37:33 PM
to Martin Bruse, golan...@googlegroups.com
Channel overhead isn't as high as you're thinking.  The Go http package will automatically create at least one goroutine for each incoming request, and already uses channels behind the scenes.

But if you really want to run all your handlers one at a time, you'll need EVEN MORE channels to fight it:

package main

import (
	"fmt"
	"net/http"
	"runtime"
	"sync"
	"syscall"
)

var (
	reqc = make(chan req)
	once sync.Once
)

type req struct {
	w     http.ResponseWriter
	r     *http.Request
	donec chan struct{}
}

func main() {
	counter := 0
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		counter++ // I can now assume this is safe! At the loss of multicore!
		fmt.Fprintf(w, "I am thread %d; counter = %d", syscall.Gettid(), counter)
	})
	http.ListenAndServe(":8080", http.HandlerFunc(forceSameGoroutineHandler))
}

func forceSameGoroutineHandler(w http.ResponseWriter, r *http.Request) {
	once.Do(handleRequests)
	donec := make(chan struct{})
	reqc <- req{w, r, donec}
	<-donec
}

func handleRequests() {
	go func() {
		runtime.LockOSThread()
		for req := range reqc {
			http.DefaultServeMux.ServeHTTP(req.w, req.r)
			close(req.donec)
		}
	}()
}



bryanturley

Oct 10, 2012, 5:38:11 PM
to golan...@googlegroups.com
That is what I was asking: why don't you want it (concurrency)?

Go seems like a strange language to use if you don't want concurrency...
I don't think you can use the standard library without it being concurrent behind your back, no matter how many locks you wrap it in.

To me that is like saying I need to get some exercise I am going to drive for 5 miles.

Rémy Oudompheng

Oct 10, 2012, 5:59:22 PM
to Martin Bruse, golan...@googlegroups.com
On 2012/10/10 Martin Bruse <zond...@gmail.com> wrote:
> On Oct 10, 2012 9:43 PM, "Kyle Lemons" <kev...@google.com> wrote:
>> Concurrency is not parallelism. By default, your Go program runs
>> single-threaded right now. You can still have more than one goroutine
>> running concurrently, however, even with only one thread.
>
> My problem is that I want neither. And I don't want my program to start
> exhibiting race conditions when GOMAXPROCS is removed some day in the
> future.

You can have race conditions even with GOMAXPROCS=1.
Happily, Dmitriy has just finished pushing race detection into the
standard library, and it will be available in Go 1.1.

Rémy.

Kyle Lemons

Oct 10, 2012, 6:19:00 PM
to Martin Bruse, golan...@googlegroups.com
On Wed, Oct 10, 2012 at 2:23 PM, Martin Bruse <zond...@gmail.com> wrote:
> On Oct 10, 2012 9:43 PM, "Kyle Lemons" <kev...@google.com> wrote:
>> Concurrency is not parallelism.  By default, your Go program runs single-threaded right now.  You can still have more than one goroutine running concurrently, however, even with only one thread.
>
> My problem is that I want neither. And I don't want my program to start exhibiting race conditions when GOMAXPROCS is removed some day in the future.

It's not that hard to avoid race conditions, though.  Let the handlers run concurrently, and communicate with channels.  If you truly have something shared, add methods to it which grab a lock.  At tip, we even have a race detector that can run during your tests (or on a compiled binary) that will tell you when you've done something wrong.

> And the only way to avoid concurrency with the stdlib servers seems to be to funnel all requests through a mutex or channel, which seems a bit roundabout.
>
> In other environments you work hard to get the kind of concurrency Go offers for free. But it is oddly hard to get rid of it when it isn't wanted.


Martin Bruse

Oct 11, 2012, 2:23:14 AM
to Brad Fitzpatrick, golan...@googlegroups.com
Oh, yeah, something like that.

And yeah, channel or lock overhead isn't that bad, but contention is
quite bad: http://pastie.org/5033612

PASS
BenchmarkSingle 10000000 246 ns/op
BenchmarkMulti 2000000 1078 ns/op

Of course, even if the operation costs four times as much when doing it
with four goroutines, the actual cost is only about 750 ns. Maybe I
should just accept that and move on :)

Martin Bruse

Oct 11, 2012, 2:24:56 AM
to Rémy Oudompheng, golan...@googlegroups.com
Of course, but at the moment (perhaps only with this scheduler, or is
this specced?) I can at least know where tasks will switch, and I
won't cause any maps or slices to get race-trashed.

Martin Bruse

Oct 11, 2012, 2:26:43 AM
to Kyle Lemons, golan...@googlegroups.com
True, not that hard. But it's a lot of extra work. But I guess I'll
just have to accept it and move on :D

Rémy Oudompheng

Oct 11, 2012, 2:28:10 AM
to Martin Bruse, Brad Fitzpatrick, golan...@googlegroups.com
You have measured the overhead of using GOMAXPROCS=4. What did you
want to illustrate?

Rémy.



Martin Bruse

Oct 11, 2012, 2:35:40 AM
to Rémy Oudompheng, Brad Fitzpatrick, golan...@googlegroups.com
I wanted to show the cost of contention over a single channel.

The two benchmarks do the same amount of work, but in one of them
there is no contention at all while in the other there is.

Have I misunderstood something?

Rémy Oudompheng

Oct 11, 2012, 2:39:27 AM
to Martin Bruse, Brad Fitzpatrick, golan...@googlegroups.com
On 2012/10/11 Martin Bruse <zond...@gmail.com> wrote:
> I wanted to show the cost of contention over a single channel.
>
> The two benchmarks do the same amount of work, but in one of them
> there is no contention at all while in the other there is.
>
> Have I misunderstood something?

Yes, you have used different values for GOMAXPROCS, so you have
measured the combined effect of contention on a channel and
GOMAXPROCS=4. Thus it is difficult to say which of the two changes had
the performance impact.

Rémy.

Martin Bruse

Oct 11, 2012, 2:55:29 AM
to Rémy Oudompheng, Brad Fitzpatrick, golan...@googlegroups.com
I think you are wrong, because I believe I must have GOMAXPROCS > 1 to
have any contention at all.

How could there be lock contention if only one goroutine runs at a
time? It would always unlock before the next goroutine gets there.

Oskar Karlsson

Oct 11, 2012, 4:43:49 AM
to golan...@googlegroups.com, Rémy Oudompheng, Brad Fitzpatrick
I think you have misunderstood GOMAXPROCS.

http://golang.org/src/pkg/runtime/debug.go?s=980:1006#L14

GOMAXPROCS sets the maximum number of CPUs that can be executing simultaneously, not the number of currently allocated goroutines.

Martin Bruse

Oct 12, 2012, 3:07:05 AM
to golan...@googlegroups.com
Of course, but the number of actually simultaneously running goroutines is limited to the number of available cpu cores.

If only one core is available you never get lock contention, since the last goroutine will have released the lock and gone to sleep before the next one tries to acquire it. Or does the scheduler switch in places I don't know about?

It's just my guess that having multiple simultaneously running goroutines costs so much due to lock contention, but feel free to correct me :)

Jonathan Chayce Dickinson

Oct 12, 2012, 4:02:25 AM
to Martin Bruse, golan...@googlegroups.com
You will get lock contention. Remember that a goroutine can be scheduled out absolutely anywhere - not only outside of a lock. Even though "things are not actually happening at the same time" doesn't mean you should expect it to behave like that - the exact same reason you can multitask on single-core machines.

How it actually works: http://en.wikipedia.org/wiki/Computer_multitasking#Preemptive_multitasking.2Ftime-sharing

If you want to avoid contention, use lock-free structures or channels.

Jonathan




Martin Bruse

Oct 12, 2012, 4:08:34 AM
to Jonathan Chayce Dickinson, golan...@googlegroups.com

In principle, according to specs, yes.

But if I understand the current implementation correctly it IS actually cooperative multitasking right now?

minux

Oct 12, 2012, 4:12:15 AM
to Martin Bruse, Jonathan Chayce Dickinson, golan...@googlegroups.com
On Fri, Oct 12, 2012 at 4:08 PM, Martin Bruse <zond...@gmail.com> wrote:
> In principle, according to specs, yes.
>
> But if I understand the current implementation correctly it IS actually cooperative multitasking right now?

But that might change in the future.
It seems like you're fighting the language.

Kyle Lemons

Oct 12, 2012, 4:13:05 AM
to Martin Bruse, Jonathan Chayce Dickinson, golan...@googlegroups.com
On Fri, Oct 12, 2012 at 1:08 AM, Martin Bruse <zond...@gmail.com> wrote:
> In principle, according to specs, yes.
>
> But if I understand the current implementation correctly it IS actually cooperative multitasking right now?

That still doesn't mean that it waits to reschedule your goroutine until you release the lock.  Any other action you take (system call, a.k.a. printf or logging or net or disk i/o, channel, even memory allocation or a defer) within the critical section could call out to the scheduler. 

minux

Oct 12, 2012, 4:14:57 AM
to Kyle Lemons, Martin Bruse, Jonathan Chayce Dickinson, golan...@googlegroups.com
On Fri, Oct 12, 2012 at 4:13 PM, Kyle Lemons <kev...@google.com> wrote:
> On Fri, Oct 12, 2012 at 1:08 AM, Martin Bruse <zond...@gmail.com> wrote:
>> In principle, according to specs, yes.
>>
>> But if I understand the current implementation correctly it IS actually cooperative multitasking right now?
>
> That still doesn't mean that it waits to reschedule your goroutine until you release the lock.  Any other action you take (system call, a.k.a. printf or logging or net or disk i/o, channel, even memory allocation or a defer) within the critical section could call out to the scheduler.
Exactly. 

Martin Bruse

Oct 12, 2012, 4:29:46 AM
to minux, golan...@googlegroups.com
I am talking about the benchmark I linked to, nothing else.

That benchmark was run on current hardware, compiled with a current compiler.

I am not trying to fight the language, no matter how trendy it is to
accuse people of it.

Martin Bruse

Oct 12, 2012, 4:32:03 AM
to Kyle Lemons, Jonathan Chayce Dickinson, golan...@googlegroups.com
I am still talking about the benchmark I published.

The only lock used there was in the channel.

No system call, printf, logging, net, disk etc etc happened in the
"channel <-" or "-> channel", unless I am really mistaken about what
happens within channels.

Jim Whitehead II

Oct 12, 2012, 5:34:30 AM
to Martin Bruse, Kyle Lemons, Jonathan Chayce Dickinson, golan...@googlegroups.com
On Fri, Oct 12, 2012 at 10:32 AM, Martin Bruse <zond...@gmail.com> wrote:
> I am still talking about the benchmark I published.
>
> The only lock used there was in the channel.
>
> No system call, printf, logging, net, disk etc etc happened in the
> "channel <-" or "-> channel", unless I am really mistaken about what
> happens within channels.

The setting for GOMAXPROCS is the problem with your benchmark:

BenchmarkSingle 10000000 205 ns/op
BenchmarkMulti 2000000 905 ns/op
BenchmarkMultiSingle 10000000 195 ns/op

The first is your single test with one consumer/one producer/GOMAXPROCS=1.
The second is your test with one consumer/four producers/GOMAXPROCS=4.
The third is your test with one consumer/four producers/GOMAXPROCS=1.

This is a more realistic run of the benchmark.

- Jim

Martin Bruse

Oct 12, 2012, 12:11:18 PM
to Jim Whitehead II, golan...@googlegroups.com, Jonathan Chayce Dickinson, Kyle Lemons

> The setting for GOMAXPROCS is the problem with your bencmark:

Problem, I don't know if it's a problem. It's the way locks behave when one has multiple threads...

> BenchmarkSingle 10000000               205 ns/op
> BenchmarkMulti   2000000               905 ns/op
> BenchmarkMultiSingle    10000000               195 ns/op
>
> The first is your single test with one consumer/one producer/GOMAXPROCS=1.
> The second is your test with one consumer/four producers/GOMAXPROCS=4.
> The third is your test with one consumer/four producers/GOMAXPROCS=1.

And I am not surprised. I tried that as well. With only one goroutine active at a time there is no contention and almost no overhead from the locking.

> This is a more realistic run of the benchmark.

Yes, and it is what I want.

But, from the docs:

"This call will go away when the scheduler improves."

What I want (again) is the simplest way to limit the number of active goroutines spawned by a service. And I want it to work even "when the scheduler improves".

Using a channel or other mutex WILL incur extra costs (perhaps not noticeable before the scheduler decides on its own to increase GOMAXPROCS), but maybe it is the best way to future-proof the solution.

Using (the default) GOMAXPROCS=1 is simplest and probably cheapest performance wise, but is not guaranteed to work when GOMAXPROCS disappears. And it will not guarantee safety even for maps and slices when/if the scheduler turns preemptive.

My problem is a luxury problem; in other languages this problem might not exist, but nevertheless I want to create a good solution.

I have more or less decided to use an RWMutex around everything my service does, but damn it's a lot of extra code when I want it fine grained around each getter and setter.

roger peppe

Oct 12, 2012, 1:01:36 PM
to Martin Bruse, Jim Whitehead II, golan...@googlegroups.com, Jonathan Chayce Dickinson, Kyle Lemons
What you're doing sounds quite a bit like premature optimisation. Are you
sure that locking overhead is really going to be a problem for you in practice?

DisposaBoy

Oct 12, 2012, 2:50:58 PM
to golan...@googlegroups.com, minux


On Friday, October 12, 2012 9:30:13 AM UTC+1, Martin Bruse wrote:
> I am talking about the benchmark I linked to, nothing else.
>
> That benchmark was run on current hardware, compiled with a current compiler.
>
> I am not trying to fight the language, no matter how trendy it is to
> accuse people of it.


If you don't want contention, then don't create a situation wherein there will be contention: http://play.golang.org/p/3seVsKCcZk

BenchmarkSingle 5000000       470 ns/op
BenchmarkMulti 1000000      2397 ns/op
BenchmarkMultiX 5000000       383 ns/op

 

Kyle Lemons

Oct 12, 2012, 4:09:43 PM
to Martin Bruse, Jim Whitehead II, golan...@googlegroups.com, Jonathan Chayce Dickinson
On Fri, Oct 12, 2012 at 9:11 AM, Martin Bruse <zond...@gmail.com> wrote:
>> The setting for GOMAXPROCS is the problem with your benchmark:
>
> Problem, I don't know if it's a problem. It's the way locks behave when one has multiple threads...
>
>> BenchmarkSingle 10000000               205 ns/op
>> BenchmarkMulti   2000000               905 ns/op
>> BenchmarkMultiSingle    10000000               195 ns/op
>>
>> The first is your single test with one consumer/one producer/GOMAXPROCS=1.
>> The second is your test with one consumer/four producers/GOMAXPROCS=4.
>> The third is your test with one consumer/four producers/GOMAXPROCS=1.
>
> And I am not surprised. I tried that as well. With only one goroutine active at a time there is no contention and almost no overhead from the locking.

Where you see contention, I see the overhead incurred by multiple goroutines bouncing from one thread to another because the scheduler is naive and doesn't understand data locality.  Did you try running BenchmarkSingle with GOMAXPROCS=4?  You may well see its performance decrease too.

>> This is a more realistic run of the benchmark.
>
> Yes, and it is what I want.
>
> But, from the docs:
>
> "This call will go away when the scheduler improves."
>
> What I want (again) is the simplest way to limit the number of active goroutines spawned by a service. And I want it to work even "when the scheduler improves".

I understand that you *think* you want this, but I don't understand *why* you want this, and I think you misunderstand what GOMAXPROCS does (as it has zero impact on the concurrency of your program).  Idiomatic Go is all about writing your code in ways that make sense and work naturally together, not fighting the language and trying to hobble some of the tools it gives you.  You might want only one HTTP handler to be running at a time (this is called mutual exclusion, i.e. use a mutex), but do you actually want none of those handlers to spawn off other goroutines?

> Using a channel or other mutex WILL incur extra costs (perhaps not noticeable before the scheduler decides on its own to increase GOMAXPROCS), but maybe it is the best way to future-proof the solution.
>
> Using (the default) GOMAXPROCS=1 is simplest and probably cheapest performance wise, but is not guaranteed to work when GOMAXPROCS disappears. And it will not guarantee safety even for maps and slices when/if the scheduler turns preemptive.

Maps and slices are perfectly safe as long as you don't try to share them between goroutines at the same time as you might be mutating them.  Have you caught yourself doing so accidentally and that's why you are worried?

> My problem is a luxury problem; in other languages this problem might not exist, but nevertheless I want to create a good solution.

I don't think you'll find a "good" solution to fighting the language. 

> I have more or less decided to use an RWMutex around everything my service does, but damn it's a lot of extra code when I want it fine grained around each getter and setter.

I'm not sure I understand.  Do you want each field to be independently locked?  It really sounds like you might want to have one goroutine that acts as an internal "rpc server" of sorts via channels if you have this much data that's being shared.

bryanturley

Oct 12, 2012, 4:13:01 PM
to golan...@googlegroups.com, minux

> If you don't want contention then don't create a situation wherein there will be contention http://play.golang.org/p/3seVsKCcZk
About to say something similar: you seem to be fighting/not aware of http://en.wikipedia.org/wiki/Cache_Coherency.
There is no way for multiple CPUs to use the same tiny bit of memory at the same time without slowdown.
The last few gens of x86 have a 64-byte cache line (the next gen of Intel chips is 128 bytes).  If multiple actual CPUs try to *write* to the same cache line in current x86 land, some of them end up being blocked until the others finish.
This is less of a problem if no one is writing to, and everyone is reading from, the same cache line.

Read this for more understanding: http://en.wikipedia.org/wiki/MOESI.  I don't think the Intel/AMD chips follow this exact coherency protocol but it is something very similar.
Also remember there are multiple levels of memory cache, some of which are not shared with any other cores, and some that are shared with some but not all cores.

Rémy Oudompheng

Oct 12, 2012, 4:14:14 PM
to Martin Bruse, Jim Whitehead II, golan...@googlegroups.com, Jonathan Chayce Dickinson, Kyle Lemons
On 2012/10/12 Martin Bruse <zond...@gmail.com> wrote:
> Using (the default) GOMAXPROCS=1 is simplest and probably cheapest
> performance wise, but is not guaranteed to work when GOMAXPROCS disappears.
> And it will not guarantee safety even for maps and slices when/if the
> scheduler turns preemptive.
>
> My problem is a luxury problem, in other languages this problem might not
> exist, but nevertheless I want to.create a good solution.

If you want to share variables without locks, channels or any proper
synchronization, it doesn't matter whether GOMAXPROCS=1 works or
whatever. Your code is invalid and not guaranteed to do anything
interesting. There is no solution.

Rémy.

Martin Bruse

Oct 12, 2012, 4:21:53 PM
to roger peppe, Jim Whitehead II, golan...@googlegroups.com, Jonathan Chayce Dickinson, Kyle Lemons
I totally admit to premature optimization!

But it is just a plaything and a hobby, so I do it for fun :)

Rémy Oudompheng

Oct 12, 2012, 4:33:33 PM
to Martin Bruse, Jim Whitehead II, golan...@googlegroups.com, Jonathan Chayce Dickinson, Kyle Lemons
Additionally, a compliant Go implementation could be such that each
goroutine has a complete copy of the program state, which evolves
independently, is flushed between goroutines when they meet at a
synchronization point, and is duplicated when a goroutine is
launched.

It could be compliant, single-threaded and have completely cooperative
scheduling, and you would get race conditions if you don't use proper
synchronization.

Rémy.

Martin Bruse

Oct 12, 2012, 4:38:31 PM
to Kyle Lemons, Jim Whitehead II, golan...@googlegroups.com, Jonathan Chayce Dickinson
>> And I am not surprised. I tried that as well. With only one goroutine
>> active at a time there is no contention and almost no overhead from the
>> locking..
>
> Where you see contention, I see the overhead incurred by multiple goroutines
> bouncing from one thread to another because the scheduler is naive and
> doesn't understand data locality. Did you try running BenchmarkSingle with
> GOMAXPROCS=4? You may well see its performance decrease too.

If I just do it with multiple goroutines, but without locks, I get:
http://pastie.org/5047318

And it produces:

BenchmarkSingle 10000000 258 ns/op
BenchmarkAtomic 50000000 42.9 ns/op
BenchmarkMulti 1000000 1066 ns/op

The atomic version does no locking at all, but with 4 concurrently
running goroutines.

And of course BenchmarkSingle becomes much slower if I introduce
contention by running multiple threads competing for the same lock.

>> What I want (again) is the simplest way to limit the number of active
>> goroutines spawned by a service. And I want it to work even "when the
>> scheduler improves".
>
> I understand that you *think* you want this, but I don't understand *why*
> you want this,

Why?

Because I want to serve a non thread safe data structure to clients.
This means that I either have to use a single threaded server, or
introduce lots of locking.

> and I think you misunderstand what GOMAXPROCS does (as it has
> zero impact on the concurrency of your program).

The concurrency is not the problem, the concurrently executing code is.

> Idiomatic Go is all about
> writing your code in ways that make sense and work naturally together, not
> fighting the language and trying to hobble some of the tools it gives you.

If a tool does more than I need, am I hobbling it when I want it to do
less? Perhaps. But then it needs a bit of hobbling ;)

> You might want only one HTTP handler to be running at a time (this is called
> mutual exclusion, i.e. use a mutex), but do you actually want none of those
> handlers to spawn off other goroutines?

No, I don't want them to spawn other goroutines. They are all about
responding to a single request and then handling the next.

> Maps and slices are perfectly safe as long as you don't try to share them
> between goroutines at the same time as you might be mutating them. Have you
> caught yourself doing so accidentally and that's why you are worried?

Since I want my requests to both mutate and read the data structure, I
will get this problem.

> I don't think you'll find a "good" solution to fighting the language.

Bah, you have to stop accusing me of this, it just sounds plaintive.
If this is a general purpose language I should be able to serve data
structures over the net. Of course I may have to resort to mutexing
every single request, but I was hoping not to have to do that, mainly
because other more primitive languages do it single-threaded by
default...

>> I have more or less decided to use an RWMutex around everything my service
>> does, but damn it's a lot of extra code when I want it fine grained around
>> each getter and setter.
>
> I'm not sure I understand. Do you want each field to be independently
> locked? It really sounds like you might want to have one goroutine that
> acts as an internal "rpc server" of sorts via channels if you have this much
> data that's being shared.

I am going to use an RWMutex that locks every operation on my data structure.

Using a single goroutine as server and communicating with it through a
channel seems less efficient (since the channel doesn't do the
read/write logic that the RWMutex does, and a channel may be slightly
slower than a regular Mutex as well). This way I will even get some
juice out of multi threading if multiple requests just do reading.

Martin Bruse

Oct 12, 2012, 4:40:57 PM
to Rémy Oudompheng, Jim Whitehead II, golan...@googlegroups.com, Jonathan Chayce Dickinson, Kyle Lemons
That is another reason why I don't like the GOMAXPROCS=1 solution.
Even if it seems to work right now, it isn't specced to work and I
can't really trust it.

Martin Bruse

Oct 12, 2012, 4:43:47 PM
to Rémy Oudompheng, Jim Whitehead II, golan...@googlegroups.com, Jonathan Chayce Dickinson, Kyle Lemons
Yeah :)

As I said, the RWMutex solution seems best, even if the code gets a
bit spammy with all the self.lock.RLock(); defer self.lock.RUnlock();


Ian Lance Taylor

Oct 12, 2012, 5:05:16 PM
to Martin Bruse, Kyle Lemons, Jim Whitehead II, golan...@googlegroups.com, Jonathan Chayce Dickinson
On Fri, Oct 12, 2012 at 1:38 PM, Martin Bruse <zond...@gmail.com> wrote:
>
> Because I want to serve a non thread safe data structure to clients.
> This means that I either have to use a single threaded server, or
> introduce lots of locking.

Perhaps you could simply use a channel to serialize the requests.

Ian

bryanturley

Oct 12, 2012, 5:13:51 PM
to golan...@googlegroups.com, Rémy Oudompheng, Jim Whitehead II, Jonathan Chayce Dickinson, Kyle Lemons
If you embed the mutex you don't have to use it so verbosely.

type MyThing struct {
	sync.RWMutex // no field name, just a type; its methods are promoted onto MyThing.
	x uint32
}

y := new(MyThing)
y.RLock()

Wrote this in here so it might be slightly wrong ;) but you get the idea.

Read this bit: http://golang.org/ref/spec#Struct_types
