Re: [go-nuts] Concurrency: channel vs mutex


Donovan Hide

Jan 28, 2013, 4:33:58 AM
to pku...@gmail.com, golang-nuts
Have a read of this thread; it has a similar discussion:


Messy/clean are hard to judge without seeing some code. Post an example of what you are doing and I'm sure people will help. If you are frequently requesting the values of multiple keys at once, RWMutex might be what you are looking for:


Lars Pensjö

Jan 31, 2013, 7:06:23 AM
to golan...@googlegroups.com, pku...@gmail.com
My experience with the case of "many" readers is that the channel way is harder and can risk not being as effective as using a mutex.

On Monday, 28 January 2013 16:55:23 UTC+1, pku...@gmail.com wrote:
Oh, thank you. I think RWMutex is exactly what I'm looking for. The whole channel thing was really nice when I was just dropping things onto them and moving on, but I think just moving to a mutex is going to be better going forward. Plus, it will allow simultaneous reads.

bryanturley

Jan 31, 2013, 2:11:46 PM
to golan...@googlegroups.com, pku...@gmail.com


On Thursday, January 31, 2013 6:06:23 AM UTC-6, Lars Pensjö wrote:
My experience with the case of "many" readers is that the channel way is harder and can risk not being as effective as using a mutex.

 
I have much more experience with mutexes than with channels, but in Go I generally prefer channels.
Though I still use both.  You have to change the way you "flow" your data around your code; at least I did.

(mutices? ugh...)

Hamish Ogilvy

Jan 31, 2013, 7:44:11 PM
to golan...@googlegroups.com, pku...@gmail.com
Lots of people suggest locking the map itself, but if you read and write the map a lot, you should avoid that. We just had a situation where channels accessing a map (reads and writes) were becoming very messy, but because the map was under high contention, a mutex on the whole map caused too much delay. In the end we made the map hold pointers and put a mutex in the target struct instead, which let us lock individual map elements rather than the whole map. We can update thousands of map elements concurrently because the map itself is never modified, only the structs it points to; only inserting a new element needs to be treated differently. Anyway, different solutions for different problems, but in my view locking a whole map is rarely a good idea.

For us it was something like:

map[uint32]*Target

type Target struct {
    Whatever string
    sync.Mutex
}


On Monday, 28 January 2013 05:30:56 UTC+11, pku...@gmail.com wrote:
So, I've got this struct which contains a map and which is accessed from many goroutines. It's also got a bunch of channels and a select on those channels. If you want something done with this map, you just put a message on the proper channel. This works great when you don't need a response, but when you want to do something like get some subset of keys from the map, you have to put your own channel on a channel, then wait on it for a response. That seems at least as messy as just globally locking the map. I'm just wondering if maybe there's a cleaner way to go about this? Is it something that comes up often? Thanks!

Lars Pensjö

Feb 1, 2013, 7:45:34 AM
to golan...@googlegroups.com, pku...@gmail.com
The trivial channel implementation to lock an item only allows one reader (or writer). I am not sure how to use channels for the many readers/few writers case. All feedback is welcome.

I have used the proposal by Hamish, with a map of pointers, and I agree it can be a good solution in some situations.

Ian Lance Taylor

Feb 1, 2013, 10:39:03 AM
to Lars Pensjö, golan...@googlegroups.com, pku...@gmail.com
On Fri, Feb 1, 2013 at 4:45 AM, Lars Pensjö <lars....@gmail.com> wrote:
> The trivial channel implementation to lock an item only allows one reader
> (or writer). I am not sure how to use channels for the many readers/few
> writers case. All feedback is welcome.

Have each reader/writer pass in a channel on which to receive the response.

Ian

Hamish Ogilvy

Feb 1, 2013, 8:45:29 PM
to golan...@googlegroups.com, pku...@gmail.com
Whenever you have writers you need to either lock, or all readers need to be aware of the writes so they don't clash. If you want to use multiple channels instead of mutexes, I typically create an array of channels equal to the number of CPUs and use a modulus of the key to decide which channel to send to. So with 8 CPUs you create 8 channels; for each read/write you compute key % 8 and put the request on that numbered channel. All those channels can proceed concurrently and will never clash on the same element. The throughput is very high.

bryanturley

Feb 2, 2013, 12:49:08 AM
to golan...@googlegroups.com, pku...@gmail.com
On Friday, February 1, 2013 7:45:29 PM UTC-6, Hamish Ogilvy wrote:
Whenever you have writers you need to either lock, or all readers need to be aware of the writes so they don't clash. If you want to use multiple channels instead of mutexes, i typically create an array of channels equal to the number of CPU's and then use a modulus of the key to decide which channel to send to. So if you have 8 CPU's, you would create 8 channels. For each read/write you do "key%8" and then put it on that numbered channel. All those channels can read/write concurrently and they'll never clash on the same element. The throughput is very high.

I think that method will still have problems when the map grows.  You need to guard the growth as well as the writes.

I have a map mostly written that keeps 8 pointers to bucket groups and uses the highest 3 bits of the hashed key to pick which group to read/write.
Each group has its own lock and grows by itself.  I wanted writers not to block other writers (hopefully) when a grow/copy happens; splitting growth into chunks seemed like a good idea.
It is hard-coded to 8 because I didn't want to add another lock just to grow the bucket-group list.

key & 7 should be faster than key % 8 as well.  *IF* you also hard-code to 8.
Though the compiler might translate that % behind your back.

Hamish Ogilvy

Feb 2, 2013, 1:02:07 AM
to bryanturley, golan...@googlegroups.com, pku...@gmail.com
Interesting. Yeah, I agree buckets make a lot of sense. In my case I didn't think to do that, as I had other operations afterwards that would take the map and sort it, so multiple buckets would mean a more expensive merge/sort.

I've both hard-coded the 8 and used "concurrency := runtime.NumCPU()" to set it dynamically; I don't notice any difference. I was also initially worried about the cost of the modulus, but I checked at the time and remember it was very quick, not an issue at all.


bryanturley

Feb 2, 2013, 1:20:30 AM
to golan...@googlegroups.com, bryanturley, pku...@gmail.com


On Saturday, February 2, 2013 12:02:07 AM UTC-6, Hamish Ogilvy wrote:
Interesting. Yeah i agree buckets makes a lot of sense. In my case i didn't think to do that as i had other operations afterwards that would then take the map and sort it, so multiple buckets would be more expensive merge/sort. 


Not sure if we are defining bucket the same way.

Each bucket group has a slice of struct { key/value } (aka buckets).
The slice index is picked from the lower X bits of the hashed key; the bucket-group index from the upper bits.

I actually used twice as many groups as CPUs to minimize lock contention, but I was still testing whether that was needed.
Then I was going to make a version that used channels and pit the two against each other for speed.

bryanturley

Feb 2, 2013, 1:23:04 AM
to golan...@googlegroups.com, bryanturley, pku...@gmail.com


On Saturday, February 2, 2013 12:20:30 AM UTC-6, bryanturley wrote:


On Saturday, February 2, 2013 12:02:07 AM UTC-6, Hamish Ogilvy wrote:
Interesting. Yeah i agree buckets makes a lot of sense. In my case i didn't think to do that as i had other operations afterwards that would then take the map and sort it, so multiple buckets would be more expensive merge/sort. 


Not sure if we are defining bucket the same way.

Each bucket group has a slice of struct { key/value } (aka bucket). 
The slice index is picked by the lower X bits of the hashed key.

Then there is some linear probing to find a free index; I probably shouldn't have left that out.

John Nagle

Feb 2, 2013, 1:12:47 PM
to golan...@googlegroups.com
On 1/31/2013 4:06 AM, Lars Pensjö wrote:
> My experience with the case of "many" readers is that the channel way is
> harder and can risk not being as effective as using a mutex.

If you're sending the data to be processed over a channel, and
getting the answer back over a channel, then the channel is useful.
If the channel is just being used as a "done" flag, there's
no gain in using a channel over a mutex.

John Nagle

bryanturley

Feb 2, 2013, 1:33:36 PM
to golan...@googlegroups.com, na...@animats.com


On Saturday, February 2, 2013 12:12:47 PM UTC-6, John Nagle wrote:

This may be true, but you don't have to remember to Unlock() a channel.
"Done" channels can also work with multiple readers/writers in a way a single mutex could not.

I am not 100% on this, but from what I remember...

sync.Mutexes that spin too long end up in a system call, waiting on the kernel's scheduler.
I do not believe that is the case for channels.

Though for a channel to work properly on multicore systems there has to be a mutex (or mutex-like access) around each read and write.
I think that instead of doing the futex thing, the channels just return their goroutines to the runtime's waiting queue.

Someone more knowledgeable feel free to correct me.

John Nagle

Feb 2, 2013, 1:37:39 PM
to bryanturley, golan...@googlegroups.com
On 2/2/2013 10:33 AM, bryanturley wrote:
>
>
> On Saturday, February 2, 2013 12:12:47 PM UTC-6, John Nagle wrote:
>>
>> On 1/31/2013 4:06 AM, Lars Pensjö wrote:
>>> My experience with the case of "many" readers is that the channel way is
>>> harder and can risk not being as effective as using a mutex.
>>
>> If you're sending the data to be processed over a channel, and
>> getting the answer back over a channel, then the channel is useful.
>> If the channel is just being used as a "done" flag, there's
>> no gain in using a channel over a mutex.
>>
>> John Nagle
>>
>>
> This may be true but you don't have to remember to Unlock() a channel.

That's what "defer" is for.

m.Lock()
defer m.Unlock()

John Nagle

bryanturley

Feb 2, 2013, 1:58:20 PM
to golan...@googlegroups.com, bryanturley, na...@animats.com

You're not seeing the full picture: a channel is more than a mutex, even in the simplest case.
You said earlier that a done flag is easily replaced by a mutex; please do so.

http://play.golang.org/p/adcY5LdTyj  <-- done flag example.

I use sync.Mutex in my Go code, but it is not as useful or interesting as channels.

John Nagle

Feb 2, 2013, 3:01:06 PM
to bryanturley, golan...@googlegroups.com
On 2/2/2013 10:58 AM, bryanturley wrote:
> http://play.golang.org/p/adcY5LdTyj <-- done flag example.

If the "work" parameter there actually did anything, you'd have
a race condition. The input parameter to four instances of a
goroutine is the same shared slice.

John Nagle

bryanturley

Feb 2, 2013, 3:21:11 PM
to golan...@googlegroups.com, bryanturley, na...@animats.com

Nice avoidance.  It wasn't about the work; it was about the mutex vs. the channel.
The work slice was just representative.

Hamish Ogilvy

Feb 2, 2013, 3:36:20 PM
to bryanturley, golan...@googlegroups.com, bryanturley, pku...@gmail.com
Ok, got you. I thought we were still talking about maps. All good. In the case I was describing, my map keys were very sparse, so the various slice-based solutions I tried all ended up with more overhead to get around that.

bryanturley

Feb 2, 2013, 3:41:58 PM
to golan...@googlegroups.com, bryanturley, pku...@gmail.com


On Saturday, February 2, 2013 2:36:20 PM UTC-6, Hamish Ogilvy wrote:
Ok got you. I thought we were still talking maps. All good. In the case I was talking about, my map keys were very sparse, so various solutions i tried based on slices all ended up with more overhead to get around that. 

I was talking about a hash map; it sounds like you were talking about some kind of ordered map.
Should have realized when you mentioned sorting, heh.



John Nagle

Feb 2, 2013, 4:10:59 PM
to golan...@googlegroups.com
Here's the program written using Add/Done/Wait, which is the
appropriate pattern for this case.

http://play.golang.org/p/VP8To6GkaC

If you're not sending data over channels, channels don't do much.
They're really just two counted semaphores, a counter, and a
buffer. Read C.A.R. Hoare's classic paper from 1974

http://www.matematicas.unam.mx/jloa/Articulos/CARHoareMonitors.pdf

where these concepts were first introduced. Example 4 describes
a Go channel.

John Nagle






bryanturley

Feb 2, 2013, 8:04:08 PM
to golan...@googlegroups.com, na...@animats.com
On Saturday, February 2, 2013 3:10:59 PM UTC-6, John Nagle wrote:
On 2/2/2013 12:21 PM, bryanturley wrote:
>
>
> On Saturday, February 2, 2013 2:01:06 PM UTC-6, John Nagle wrote:
>>
>> On 2/2/2013 10:58 AM, bryanturley wrote
>>  > http://play.golang.org/p/adcY5LdTyj  <-- done flag example.
>>
>>    If the "work" parameter there actually did anything, you'd have
>> a race condition.  The input parameter to four instances of a
>> goroutine is the same shared slice.
>>
>>                                 John Nagle
>>
>
> Nice avoidance.  It wasn't about the work it was about the mutex, channel.
> The work slice was just representative.

   Here's the program written using Add/Done/Wait, which is the
appropriate pattern for this case.


So, not a mutex, but something that uses a mutex...
A better response would have been to point out that a channel uses a mutex internally.

You couldn't replace the channel with a mutex, which was your original statement.
Knowing you couldn't replace the channel with only a mutex, you then attempted to
divert the situation with an insulting remark.  Read this paper, then you may be
as smart as me!

   If you're not sending data over channels, channels don't do much.
They're really just two counted semaphores, a counter, and a
buffer.  Read C.A. Hoare's classic paper from 1974

http://www.matematicas.unam.mx/jloa/Articulos/CARHoareMonitors.pdf

where these concepts were first introduced.  Example 4 describes
a Go channel.

                                John Nagle


I need to stop feeding trolls.

roger peppe

Feb 3, 2013, 8:37:45 AM
to John Nagle, golang-nuts
On 2 February 2013 21:10, John Nagle <na...@animats.com> wrote:
> If you're not sending data over channels, channels don't do much.

One advantage of channels is that you can use them with other channels in a select statement; non-blocking operations are available too.

The semantics are often more obvious too, because of the directionality syntax.