Re: [go-nuts] Concurrent map - newbie question


Dave Cheney

Jan 9, 2013, 8:22:35 PM
to jac...@gmail.com, golan...@googlegroups.com
> If I update a key value in it while another reader is reading the same key
> what is going to happen?

Memory corruption, http://golang.org/ref/mem

> Are we supposed to wrap it all via locks?

Yes, try a sync.RWMutex

> How do channels fit into this?

You could place your map inside a goroutine and communicate with it
via channels, but it's simpler to use a mutex.

Ian Lance Taylor

Jan 9, 2013, 8:25:34 PM
to jac...@gmail.com, golan...@googlegroups.com
On Wed, Jan 9, 2013 at 4:52 PM, <jac...@gmail.com> wrote:
>
> first, sorry for the newbie question, but despite searching on the web I am
> not sure I fully understand how to best implement what we need.
>
> We basically have a close-to-realtime app (SLA in milliseconds) that relies
> heavily on in-memory caching (no I/O hits to DB when replying to a request).
>
> The cache is used mostly for reads, but there is a background process that
> periodically refreshes it from the DB.
> So we basically have a single WRITER, multiple READERs.
> In our Java app, this is a simple use of the built-in ConcurrentMap object
> that allows for high performance in-memory maps that are thread safe.
>
> How would we go about implementing the same paradigm in Go?
>
> Is the built-in map object thread safe?

No.

> If I update a key value in it while another reader is reading the same key
> what is going to happen?

Unpredictable results. Don't do this.

> Are we supposed to wrap it all via locks?

Yes, one way or another.

> How do channels fit into this?

Not sure, they seem unrelated.

> We are basically evaluating Go as an alternative to Java for some of our
> real-time needs and this is sort of the first big hurdle that we're not sure
> how to handle to ensure maximum performance (which is our primary concern).

There are a few approaches you could take.

You could maintain the map in a single goroutine, and communicate with
that goroutine via channels: a channel to read values and a channel to
update values. That is clearly safe, but may not give you the
performance you need.

A simple solution would be to use sync.RWMutex to lock access to the map.

You could get a bit more efficiency via functions in sync/atomic but I
wouldn't go there unless you really need to.

Ian

jac...@gmail.com

Jan 9, 2013, 9:00:44 PM
to golan...@googlegroups.com, jac...@gmail.com
Wouldn't the use of a single RWMutex for the entire map have negative repercussions as well?

If I recall, the Java ConcurrentHashMap uses a lot of smaller fine-grained locks to avoid it...

Dave Cheney

Jan 9, 2013, 9:06:16 PM
to jac...@gmail.com, golan...@googlegroups.com
Maybe. ConcurrentHashMap stripes the lock over 16 (from memory)
buckets. I guess it depends on the ratio of readers to updaters.

jac...@gmail.com

Jan 9, 2013, 9:10:56 PM
to golan...@googlegroups.com, jac...@gmail.com
Yes, that is what I read too.
There is an interesting article on the internal design:


I will read up on it more and see how much of it could be applicable to Go.

Ian Lance Taylor

Jan 9, 2013, 9:14:13 PM
to jac...@gmail.com, golan...@googlegroups.com
On Wed, Jan 9, 2013 at 6:00 PM, <jac...@gmail.com> wrote:
> Wouldn't the use of a single RWMutex for the entire map have negative
> repercussions as well?

The OP described a scenario in which the map is used mostly for reads
and there is one occasional writer. I would expect an RWMutex to
work fine in that scenario. Of course it would have to be tried and
measured.

> If I recall, the Java ConcurrentHashMap uses a lot of smaller fine-grained locks to avoid it...

Yes, there are times when that is desirable. But the OP appeared to
be describing a case in which the whole map was updated at once, so
ConcurrentHashMap would seem to introduce additional unnecessary
overhead. If you are going to update the whole map, you just build a
new map, get a writer lock from the RWMutex, and replace the old map
with the new one. The write lock will be held for a very small period
of time. Even the read locks will be held for very short periods of
time: just long enough to load the variable.
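
(A minimal sketch of that rebuild-and-swap approach, with hypothetical names:)

package cache

import "sync"

var (
    mu sync.RWMutex
    m  map[string]string
)

// Get holds the read lock just long enough to index the map.
func Get(key string) (string, bool) {
    mu.RLock()
    defer mu.RUnlock()
    v, ok := m[key]
    return v, ok
}

// Refresh receives a map that was built outside the lock; the write
// lock is held only for the assignment.
func Refresh(next map[string]string) {
    mu.Lock()
    m = next
    mu.Unlock()
}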

Ian

Patrick Mylund Nielsen

Jan 9, 2013, 9:17:03 PM
to jac...@gmail.com, golang-nuts

jac...@gmail.com

Jan 9, 2013, 9:17:32 PM
to golan...@googlegroups.com, jac...@gmail.com
The map would be too large to be rebuilt from scratch and replaced, since it would require double the RAM.

We are talking about up to 64 GB of data. So we have to update one key at a time, via some sort of DB iterator (in our case Cassandra),
and then do a second pass to remove deleted keys that are no longer in the DB...

Asta Xie

Jan 9, 2013, 9:39:06 PM
to Dave Cheney, jac...@gmail.com, golan...@googlegroups.com
The following code is what I use in my project. I hope it is what you want:

import (
    "encoding/json"
    "io/ioutil"
    "net/http"
    "sync"

    "github.com/astaxie/beego"
)

var RouterL RouterLock

type RouterLock struct {
    lock       sync.RWMutex
    RouterIdcs map[string]string
}

// Get returns the map's value for key, or "" if the key is absent.
func (this *RouterLock) Get(key string) (v string) {
    this.lock.RLock()
    defer this.lock.RUnlock()
    if v, ok := this.RouterIdcs[key]; ok {
        return v
    }
    return ""
}

// LoadRouter updates the map; it is called whenever the data changes.
// Note that the write lock is held for the whole HTTP round trip.
func LoadRouter() {
    RouterL.lock.Lock()
    defer RouterL.lock.Unlock()
    resp, err := http.Get(beego.AppConfig.String("routerurl"))
    if err != nil {
        beego.Critical("http get info error")
        return
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        beego.Critical("read routerurl body error:", err)
        return
    }
    if err := json.Unmarshal(body, &RouterL.RouterIdcs); err != nil {
        beego.Critical("json.Unmarshal routerurl error:", err)
    }
}




2013/1/10 Dave Cheney <da...@cheney.net>

jac...@gmail.com

Jan 9, 2013, 9:57:59 PM
to golan...@googlegroups.com, Dave Cheney, jac...@gmail.com
I notice you wrap the map in a struct.

Is there no way to use Go's duck typing to create a custom implementation that has the same API?

e.g.

var myMap ConcurrentMap

myMap[key] = value

etc.

Does the built-in map have an interface a custom type could implement as well?

si guy

Jan 9, 2013, 10:11:53 PM
to golan...@googlegroups.com
No, the map semantics will not show through embedding; that is, anonymously embedding a map in a struct will not give the struct a map's index syntax.

-Simon Watt

jac...@gmail.com

Jan 9, 2013, 10:17:27 PM
to golan...@googlegroups.com
Got it (seems like a shortcoming though).

Cheers
Jacek

Dustin Sallings

Jan 9, 2013, 10:17:21 PM
to golan...@googlegroups.com
Dave Cheney <da...@cheney.net> writes:

>> Are we supposed to wrap it all via locks?
>
> Yes, try a sync.RWMutex

I find most of the time when someone asks for an RWMutex, it's really
not well suited for the task.

If this is really a read-mostly map with a periodic refresh, I'd just
have a pointer to the map and allow clients to access it freely and
directly for reads. For updates, rebuild it separately and swap in the
pointer. This is entirely non-blocking all the time.
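
(A sketch of that pointer-swap scheme. Note that under the Go memory model the swap itself should be published atomically; this version uses atomic.Value, which entered the standard library after this thread, and all names are hypothetical:)

package cache

import "sync/atomic"

var current atomic.Value // always holds a map[string]string

func init() {
    current.Store(map[string]string{})
}

// Lookup reads the latest snapshot with no locking at all; a
// snapshot is never mutated after it has been published.
func Lookup(key string) (string, bool) {
    m := current.Load().(map[string]string)
    v, ok := m[key]
    return v, ok
}

// Publish swaps in a freshly built replacement map.
func Publish(next map[string]string) {
    current.Store(next)
}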

If there are periodic updates to the map, put a regular lock on it for
the "small" updates, but still do the full rebuild independently as
described above (but go ahead and just swap it out with the lock held).

If you prove this is the bottleneck in your system, imagine how much
worse RWMutex would be. :)

--
dustin

jac...@gmail.com

Jan 9, 2013, 10:20:43 PM
to golan...@googlegroups.com
As I mentioned, the map may be too large to have two instances of it in memory on a single server during the build-and-swap process (64 GB+)

steve wang

Jan 9, 2013, 10:21:01 PM
to golan...@googlegroups.com
Regarding LoadRouter(), I wonder why not just protect the smallest piece of code where the map is updated?
like this:
err = func() error {
    RouterL.lock.Lock()
    defer RouterL.lock.Unlock()
    return json.Unmarshal(body, &RouterL.RouterIdcs)
}()

Dustin Sallings

Jan 9, 2013, 11:19:34 PM
to golan...@googlegroups.com
jac...@gmail.com writes:

> As I mentioned, the map may be too large to have two instances of it
> in memory on a single server during the build-and-swap process (64
> GB+)

How long will the write lock block reads while this update is
occurring? You could do batches of updates while holding the lock, I
suppose, in such a way as to free up time for readers.

When you're updating, does the entire data set change? i.e. will you
actually have two of everything sitting around, or will there be a lot
of common data that wouldn't need to diverge?

--
dustin

Ian Lance Taylor

Jan 10, 2013, 1:03:36 AM
to jac...@gmail.com, golan...@googlegroups.com
I see. This is a case where it might wind up being faster to have a
goroutine mediate access to the map, with communication over channels.
Kind of depends how much processing you do other than map lookups.

Ian

Donovan Hide

Jan 10, 2013, 5:44:15 AM
to Ian Lance Taylor, jac...@gmail.com, golang-nuts
I think Dave touched on this, but another approach is a slotted RWMutex. You need to hash the key yourself and then modulo (or AND, if the number of stripes is a power of two) the hash with the number of stripes. This is what Kyoto Cabinet does for locking sections of its key-value stores:


It doesn't reduce the number of atomic operations, but it does reduce the probability of a read hitting a write lock. Maybe this belongs in the sync package. It's very easy to layer on top of the existing RWMutex. Would happily write a patch for this...


Cheers,
Donovan.

Donovan Hide

Jan 10, 2013, 6:39:13 AM
to Ian Lance Taylor, jac...@gmail.com, golang-nuts
Here's an example for a map with keys and values of type int:
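
(The playground link has been lost. A minimal reconstruction, consistent with the code quoted downthread, might look like the following: one shared map guarded by slotted locks. As steve wang points out below, this version is not actually safe:)

package slotted

import "sync"

const slots = 16 // power of two, so we can mask instead of taking a modulo

var (
    locks [slots]sync.RWMutex // one lock per slot...
    m     = make(map[int]int) // ...but still a single shared map
)

func set(key, value int) {
    l := &locks[uint(key)&(slots-1)]
    l.Lock()
    m[key] = value // unsafe: a goroutine holding a different slot's lock may write concurrently
    l.Unlock()
}

func get(key int) (int, bool) {
    l := &locks[uint(key)&(slots-1)]
    l.RLock()
    defer l.RUnlock()
    v, ok := m[key]
    return v, ok
}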


Sure, this could be tidied up and made generic with interface{} keys and values, and the repetition in the code removed, but it shows the idea of reducing write-lock contention through the use of slots. I'm fairly sure the playground only has access to one thread, so running it there doesn't really prove anything, but this should(?) give you better performance than a single RWMutex.

Hope that helps,
Donovan.

steve wang

Jan 10, 2013, 7:17:45 AM
to golan...@googlegroups.com, Ian Lance Taylor, jac...@gmail.com
When two goroutines hold the two different locks selected by two different keys, they can update the map simultaneously.
// goroutine1
locks[1].Lock()
m[1] = 1
locks[1].Unlock()

// goroutine2
locks[2].Lock()
m[2] = 2
locks[2].Unlock()

How could the map be goroutine-safe?
Did I misread the code?

Donovan

Jan 10, 2013, 7:37:29 AM
to golan...@googlegroups.com

When two goroutines hold the two different locks selected by two different keys, they can update the map simultaneously.
// goroutine1
locks[1].Lock()
m[1] = 1
locks[1].Unlock()

// goroutine2
locks[2].Lock()
m[2] = 2
locks[2].Unlock()

How could the map be goroutine-safe?
Did I misread the code?

Oops, you are right. The lock for sets and deletes on a single map needs to be global in case either triggers a resize. A solution would be to slot the map as well, which would be easily possible with ints as the key type. Often these problems boil down to the fact that the map type is a builtin with its source in C, so you can't modify its behaviour. Would be great to have a Go version of map in the standard library with methods instead of the builtin keywords (and a hash set as well :-)). Brings up the old generics debate about how to specialise for specific types without using reflection...

I'll fix the example to use multiple maps.
 

Donovan Hide

Jan 10, 2013, 7:50:49 AM
to golang-nuts
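
(The updated playground link has also been lost. A hypothetical sketch of the fixed version, in which each slot owns its own map as well as its own lock:)

package slotted

import "sync"

const slots = 16

// SlottedMap shards the data as well as the locks, so goroutines
// holding different slot locks touch different maps and never race.
type SlottedMap struct {
    locks [slots]sync.RWMutex
    maps  [slots]map[int]int
}

func NewSlottedMap() *SlottedMap {
    s := &SlottedMap{}
    for i := range s.maps {
        s.maps[i] = make(map[int]int)
    }
    return s
}

func (s *SlottedMap) slot(key int) int { return int(uint(key) % slots) }

func (s *SlottedMap) Get(key int) (int, bool) {
    i := s.slot(key)
    s.locks[i].RLock()
    defer s.locks[i].RUnlock()
    v, ok := s.maps[i][key]
    return v, ok
}

func (s *SlottedMap) Set(key, value int) {
    i := s.slot(key)
    s.locks[i].Lock()
    s.maps[i][key] = value
    s.locks[i].Unlock()
}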


 


Donovan Hide

Jan 10, 2013, 10:23:22 AM
to golang-nuts
This question was of interest to me too, so I did a bit of benchmarking (I can hear the collective groan :-)). I mocked up a slotted map against another struct which uses channels to arbitrate access to the map. The benchmarks show a doubling in speed for the slotted map. There may be bugs and I might be drawing false conclusions. I'd appreciate some scrutiny :-)


Also I wasn't sure of a good pattern for returning results from the channel-arbitrated map, so I'm making a new channel for each request, which might be a bit heavy. Any alternative patterns that could be used?
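
(One common alternative, sketched with hypothetical names: carry a reply channel inside the request struct, and let each caller allocate one buffered reply channel and reuse it across requests:)

package main

// getReq carries its own reply channel, so the serving goroutine
// never has to construct one.
type getReq struct {
    key   string
    reply chan int
}

var gets = make(chan getReq)

// serve owns the map and answers lookup requests one at a time.
func serve(m map[string]int) {
    for r := range gets {
        r.reply <- m[r.key]
    }
}

// lookup reuses a caller-provided buffered reply channel instead of
// allocating a fresh channel per request.
func lookup(key string, reply chan int) int {
    gets <- getReq{key, reply}
    return <-reply
}

func main() {
    go serve(map[string]int{"a": 1})
    reply := make(chan int, 1) // allocated once, reused for every call
    _ = lookup("a", reply)
    _ = lookup("b", reply)
}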


Cheers,
Donovan. 

Jacek Furmankiewicz

Jan 10, 2013, 10:29:10 AM
to Donovan Hide, golang-nuts
Looking at ConcurrentIntMap in your example, it seems you need to hardcode the types that are stored in the map.

Does Go have some sort of concept equivalent to generics that would allow you to code a generic ConcurrentMap implementation where the underlying map key/value types could be pluggable?

Or would we need to code ConcurrentStringMap, ConcurrentDoubleMap, ConcurrentMyTypeMap, ConcurrentThatTypeMap, etc.? That seems like a very painful approach.

--
 
 

Donovan Hide

Jan 10, 2013, 10:36:39 AM
to Jacek Furmankiewicz, golang-nuts

Does Go have some sort of concept equivalent to generics that would allow you to code a generic ConcurrentMap implementation where the underlying map key/value types could be pluggable?

The ConcurrentIntMap interface could have been made generic for keys and values of type interface{}, with a type assertion used for common types to work out a method for calculating the slot, but that would have made the test more complex. I'm more interested in what the best way to share a map amongst goroutines is, in terms of performance.

Have a look at type switches to see how it could have been done:
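
(The link has been lost; a hypothetical sketch of the idea, with a type switch choosing the slot calculation per key type:)

package main

import "hash/fnv"

// slotFor maps an interface{} key to one of n slots, using a type
// switch to special-case the common key types.
func slotFor(key interface{}, n int) int {
    switch k := key.(type) {
    case int:
        return int(uint(k) % uint(n))
    case string:
        h := fnv.New32a()
        h.Write([]byte(k))
        return int(h.Sum32() % uint32(n))
    default:
        panic("unsupported key type")
    }
}

func main() {
    _ = slotFor(42, 16)
    _ = slotFor("hello", 16)
}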


Cheers,
Donovan.



Donovan Hide

Jan 10, 2013, 11:08:47 AM
to Jacek Furmankiewicz, golang-nuts
Amusingly, the slotted RWMutex and maps were slower than having a single mutex protecting a single map.


Keep it simple :-)

jac...@gmail.com

Jan 10, 2013, 11:41:59 AM
to golan...@googlegroups.com, Jacek Furmankiewicz
Interesting. I will take your basic RWMutex implementation and fool around with it. It's a bit painful to have to create a different implementation for every type we need to store, but we only have < 10 of them, so it should be OK.

So, here is an interesting follow-up question:

If I load a massive amount of data into memory (let's say 64 GB), should I expect some massive GC pauses from the Go runtime?

Tuning Java to handle this type of heap is a bit of a black art, so I was wondering how Go behaves when managing such a massive amount of in-memory data and what we could realistically expect at runtime.

Thanks
Jacek

minux

Jan 10, 2013, 1:06:07 PM
to jac...@gmail.com, golan...@googlegroups.com
On Fri, Jan 11, 2013 at 12:41 AM, <jac...@gmail.com> wrote:
Interesting. I will take your basic RWMutex implementation and fool around with it. It's a bit painful to have to create a different implementation for every type we need to store, but we only have < 10 of them, so it should be OK.

So, here is an interesting follow up question:

If I load a massive amount of data into memory (let's say 64 GB), should I expect some massive GC pauses from the Go runtime?

I guess so. Note also that the upcoming Go 1.1 has only just gained the ability to handle a 64 GiB heap, so if your data is about 64 GiB,
Go might not be able to keep it all in memory (you can raise the heap limit, but there will be
problems such as long GC pause times and other subtle ones).

Ian Lance Taylor

Jan 10, 2013, 1:55:52 PM
to jac...@gmail.com, golan...@googlegroups.com
On Thu, Jan 10, 2013 at 8:41 AM, <jac...@gmail.com> wrote:
>
> If I load a massive amount of data into memory (let's say 64GB), should I expect
> some massive GC pauses from the Go runtime?

The answer in part depends on the nature of the data. The GC time is
roughly proportional to the amount of live data that may contain
pointers (this is being tightened down to only looking at data that
definitely does contain pointers). So 64G of strings shouldn't take
an inordinate amount of GC time. 64GB of pointers will take a lot.
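
(A hypothetical illustration of the distinction Ian describes: the first map holds no pointers for the collector to chase, while every entry of the second one does.)

package main

type record struct {
    name *string
    next *record
}

func main() {
    // Values without pointers: relatively cheap for the GC to scan.
    flat := make(map[int64]int64, 1<<20)

    // Pointer-rich values: every entry adds pointers the GC must follow.
    rich := make(map[int64]*record, 1<<20)

    _, _ = flat, rich
}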

Ian

Kyle Lemons

Jan 10, 2013, 2:19:18 PM
to jac...@gmail.com, golang-nuts
If your map is read only, you don't need to lock it.  It sounds like you produce a series of read only maps and that the background process will produce a new one (as opposed to modifying the old one) so you could use a scheme like this:

var nextMap = make(chan map[string]int)

func runMap() {
    m := make(map[string]int)
    for {
        select {
        case m = <-nextMap:
        case nextMap <- m:
        }
    }
}

With this, you can get a map that you can access without locking periodically in each process when it is safe to do so.  Whenever a new map is ready from the update process, you store it and it will then start being the map that is served on subsequent requests.  As long as the individual concurrent processes don't need to guarantee that they all receive the updated data simultaneously, this can help dramatically, especially if you can design the processes such that there are many map accesses per update to amortize the cost of the channel synchronization (which is somewhat higher than that of a mutex).
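
(A hypothetical usage sketch building on the snippet above: a reader pulls the current snapshot once and then reads it freely, while the refresher pushes a replacement.)

// Reader: grab the current snapshot once, then do many lock-free reads.
func reader(keys []string) {
    m := <-nextMap
    for _, k := range keys {
        _ = m[k]
    }
}

// Refresher: publish a freshly built map; runMap hands it out from then on.
func refresh(fresh map[string]int) {
    nextMap <- fresh
}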

This is only one of a number of solutions (RWMutex, as suggested, being another), and which one works best is probably something you'll have to determine empirically.


On Wed, Jan 9, 2013 at 4:52 PM, <jac...@gmail.com> wrote:
Hi everyone,

first, sorry for the newbie question, but despite searching on the web I am not sure I fully understand how to best implement what we need.

We basically have a close-to-realtime app (SLA in milliseconds) that relies heavily on in-memory caching (no I/O hits to DB when replying to a request).

The cache is used mostly for reads, but there is a background process that periodically refreshes it from the DB.
So we basically have a single WRITER, multiple READERs.
In our Java app, this is a simple use of the built-in ConcurrentMap object that allows for high performance in-memory maps that are thread safe.

How would we go about implementing the same paradigm in Go?

Is the built-in map object thread safe?
If I update a key value in it while another reader is reading the same key what is going to happen?
Are we supposed to wrap it all via locks?
How do channels fit into this?

etc., etc.

We are basically evaluating Go as an alternative to Java for some of our real-time needs and this is sort of the first big hurdle that we're not sure
how to handle to ensure maximum performance (which is our primary concern).

Thanks in advance
Jacek


Dan Kortschak

Jan 10, 2013, 2:23:39 PM
to minux, jac...@gmail.com, golan...@googlegroups.com
My understanding was that this limit is 128GiB. Has something changed since a310cb32c278?

Donovan Hide

Jan 10, 2013, 2:45:32 PM
to Kyle Lemons, Jacek Furmankiewicz, golang-nuts

On 10 January 2013 19:19, Kyle Lemons <kev...@google.com> wrote:
If your map is read only, you don't need to lock it.  It sounds like you produce a series of read only maps and that the background process will produce a new one (as opposed to modifying the old one) so you could use a scheme like this:

I like this solution a lot. It's very much like double-buffering used in graphics cards:


The only drawback is that for a map which needs 64 GB of RAM, you'd need 128 GB to hold both copies!

Kyle Lemons

Jan 10, 2013, 3:23:11 PM
to Donovan Hide, Jacek Furmankiewicz, golang-nuts
At first blush, yes.   The other posters' suggestions to shard the map by key could be merged with this approach to minimize the size of the chunks you precompute.  Assuming that the keys aren't interdependent in a way which makes this sharding impossible, of course.  At that point, though, it would be harder to amortize the channel send and goroutine context switch across enough reads to make it worth it, so the RWMutex (which has fast paths in the case of no contention) is probably the right approach.

Dmitry Vyukov

Jan 11, 2013, 1:05:39 AM
to golan...@googlegroups.com
That would be a good case for the distributed rwmutex:
Unfortunately it is not supported now...

Donovan Hide

Jan 11, 2013, 5:11:08 AM
to Dmitry Vyukov, golang-nuts

That would be a good case for the distributed rwmutex:
Unfortunately it is not supported now...

Interesting! Could you please describe what the advantage of ProcLocal is over a straightforward atomic add and possible semaphore/signal as used in the normal RWMutex:


I'm assuming it's the idea of having a pool of locks which are processor-specific and therefore are not contended, but I could just be totally misreading the code :-)

Thanks for any explanation. I'm sure I've learnt more about programming than at any time before by reading the Go source and patches!

Cheers,
Donovan.

 

Dmitry Vyukov

Jan 11, 2013, 5:13:55 AM
to Donovan Hide, golang-nuts
On Fri, Jan 11, 2013 at 2:11 PM, Donovan Hide <donov...@gmail.com> wrote:
>
>> That would be a good case for the distributed rwmutex:
>>
>> https://codereview.appspot.com/4850045/diff2/1:3001/src/pkg/co/distributedrwmutex.go
>> Unfortunately it is not supported now...
>
>
> Interesting! Could you please describe what the advantage of ProcLocal is
> over a straight forward atomic add and possible semaphore/signal as used in
> the normal RWMutex:
>
> https://codereview.appspot.com/4850045/patch/3002/3024
> http://tip.golang.org/src/pkg/sync/rwmutex.go#L29
>
> I'm assuming it's the idea of having a pool of locks which are
> processor-specific and therefore are not contended, but I could just be
> totally misreading the code :-)


Yes, per-CPU mutexes avoid contention (not thread blocking, but
low-level contention on mutex cache-line). It should be much faster
than plain rwmutex for read-mostly data. I observed up to 300x speedup
on synthetic benchmark.
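
(A rough sketch of the idea; this is not the co package API, which was never released. Each slot gets its own RWMutex, ideally one per CPU or thread: readers lock a single slot, writers lock them all.)

package drw

import "sync"

// DRWMutex gives each slot (ideally one per CPU or thread) its own
// RWMutex. Readers lock a single slot, so they do not all bounce the
// same cache line; a writer must take every slot.
type DRWMutex []sync.RWMutex

func New(n int) DRWMutex { return make(DRWMutex, n) }

// RLock read-locks the slot chosen by a non-negative hint (e.g. a
// thread id) and returns the index so the caller can unlock the
// same slot.
func (d DRWMutex) RLock(hint int) int {
    i := hint % len(d)
    d[i].RLock()
    return i
}

func (d DRWMutex) RUnlock(i int) { d[i].RUnlock() }

// Lock takes the write side of every slot, excluding all readers.
func (d DRWMutex) Lock() {
    for i := range d {
        d[i].Lock()
    }
}

func (d DRWMutex) Unlock() {
    for i := range d {
        d[i].Unlock()
    }
}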

Donovan Hide

Jan 11, 2013, 5:21:49 AM
to Dmitry Vyukov, golang-nuts

On 11 January 2013 10:13, Dmitry Vyukov <dvy...@google.com> wrote:
Yes, per-CPU mutexes avoid contention (not thread blocking, but
low-level contention on mutex cache-line). It should be much faster
than plain rwmutex for read-mostly data. I observed up to 300x speedup
on synthetic benchmark.

Thanks! This might be a bit hypothetical given the co package is not in place, but for the concurrent map example in this thread, how might you ensure that a shard/slot of the map is only processed by the same CPU? Would this require goroutines to have processor affinity? If that were the case, would you even need locks at all, seeing as the same CPU would always be responsible for the shard?

Sorry for too many questions, just trying to understand the pros and cons of all the different approaches for this particular use case, which is directly related to what I'm working on right now!

Dmitry Vyukov

Jan 11, 2013, 5:38:20 AM
to Donovan Hide, golang-nuts
On Fri, Jan 11, 2013 at 2:21 PM, Donovan Hide <donov...@gmail.com> wrote:
>> Yes, per-CPU mutexes avoid contention (not thread blocking, but
>> low-level contention on mutex cache-line). It should be much faster
>> than plain rwmutex for read-mostly data. I observed up to 300x speedup
>> on synthetic benchmark.
>
> Thanks! This might be a bit hypothetical given the co package is not in
> place, but for the concurrent map example in this thread, how might you
> ensure that a shard/slot of the map is only processed by the same CPU?

That's not required. Just read-lock distributed rwmutex, read from
global map. No shards.

> Would
> this require goroutines to have processor affinity?

The distributed rwmutex does not have real per-CPU data; it has
per-Machine data. A Machine is a Go runtime concept that is roughly
equivalent to an OS thread.

> If that was the case
> would you even need locks at all, seeing as the same CPU would always be
> responsible for the shard?

For very simple data structures it could be possible (to not have
locks at all), but a hashmap cannot be used this way, because during
hashmap access the goroutine can be rescheduled to another Machine.

Donovan Hide

Jan 11, 2013, 5:56:37 AM
to Dmitry Vyukov, golang-nuts
Hi Dmitry,

many thanks for the explanation! Much appreciated.

That's not required. Just read-lock distributed rwmutex, read from
global map. No shards.

Sounds great! In that case, +1 for co package inclusion in the standard library!
 
The distributed rwmutex does not have real per-CPU data, it has
per-Machine data, Machine is a Go runtime concept that is roughly
equivalent to OS threads.
 
I'm going to have a good long read of proc.c and your scheduler patches, I think I need to understand this better :-)


All the best,
Donovan.

Dmitry Vyukov

Jan 11, 2013, 6:02:06 AM
to Donovan Hide, golang-nuts
On Fri, Jan 11, 2013 at 2:56 PM, Donovan Hide <donov...@gmail.com> wrote:
> Hi Dmitry,
>
> many thanks for the explanation! Much appreciated.
>
>> That's not required. Just read-lock distributed rwmutex, read from
>> global map. No shards.
>
>
> Sounds great! In that case, +1 for co package inclusion in the standard
> library!

You are welcome!
co package won't be included this that form. First, I need to commit
new scheduler, then it will be possible to expose "per-CPU" data to Go
packages, and them it will be possible to create scalable
ResourcePool, Counter and perhaps the RWMutex.

Martin Bruse

Jan 11, 2013, 6:51:04 AM
to golan...@googlegroups.com
I built https://github.com/zond/gotomic a while ago for a similar situation. It is a thread-safe mapping for Go, based on fairly recent research in data structures using CPU CAS instructions.

It would allow you to read, write and iterate concurrently on multiple parallel threads with minimal lock overhead.

Dmitry Vyukov

Jan 11, 2013, 7:31:18 AM
to Martin Bruse, golang-nuts
On Fri, Jan 11, 2013 at 3:51 PM, Martin Bruse <zond...@gmail.com> wrote:
> I built https://github.com/zond/gotomic a while ago for a similar situation.
> It is a thread safe mapping for Go, based on fairly recent research in data
> structures using cpu CAS instructions.
>
> It would allow you to read, write and iterate concurrently on multiple
> parallell thread with minimum lock overhead.


What exactly is that "minimum lock overhead"?

Martin Bruse

Jan 11, 2013, 7:39:31 AM
to Dmitry Vyukov, golang-nuts
Well, strictly no lock overhead. But there is additional indirection
overhead, as well as some iterations to get the operation finished in
cases of heavy contention.
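
(The retry behaviour is the classic CAS loop; a sketch, not gotomic's actual code:)

package main

import "sync/atomic"

var counter int64

// casAdd retries until its compare-and-swap wins; under heavy
// contention the loop may run several times, which is the extra
// work mentioned above.
func casAdd(delta int64) {
    for {
        old := atomic.LoadInt64(&counter)
        if atomic.CompareAndSwapInt64(&counter, old, old+delta) {
            return
        }
    }
}

func main() {
    casAdd(1)
}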