Assuming the status strings themselves don't change, you could have
..
Status *string
..
and have that point to different constants. That said, if Status has a
finite number of possibilities, it's probably easier and more
efficient to do
type Status int

const (
    OK     = Status(0)
    ENOENT = Status(22)
)

func (s Status) String() string {
    ..
}
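For completeness, here is a minimal sketch of how the elided String method
could be filled in (the names and messages are my own illustrations, not
from the original post):

package main

import "fmt"

type Status int

const (
    OK     = Status(0)
    ENOENT = Status(22)
)

// String maps each known status value to a readable name.
func (s Status) String() string {
    switch s {
    case OK:
        return "OK"
    case ENOENT:
        return "no such file or directory"
    }
    return fmt.Sprintf("Status(%d)", int(s))
}

func main() {
    fmt.Println(ENOENT) // prints "no such file or directory" via String
}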
--
Han-Wen Nienhuys
Google Engineering Belo Horizonte
han...@google.com
The recommended way is not to access objects from multiple goroutines.
That is what channels are for, passing around the 'ownership' of
objects.
"Don't communicate by sharing memory; share memory by communicating."
uriel
Only if you don't care what the value is. If you want the value
to be up to date you have to do some kind of synchronization
between goroutines, at least according to the memory model.
Of course, if you have a single-word field that is
independent of all the others, in the current Go implementations
it is okay to read and write it without a lock: you'll get a
relatively recent value, and your program won't crash.
However, the memory model doesn't guarantee either of
those important behaviors: it doesn't guarantee that the
compiler won't change to break one or both at some point in
the future. If either behavior is important to your program,
synchronize.
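For a genuinely independent single-word field, one way to get both guarantees
(a sketch of mine, not from Russ's post) is the sync/atomic package, which
makes the individual loads and stores explicit:

package main

import (
    "fmt"
    "sync/atomic"
)

// Server has one int32 flag that is independent of its other state and is
// only ever accessed through atomic operations.
type Server struct {
    healthy int32 // 0 or 1
}

func (s *Server) SetHealthy(ok bool) {
    var v int32
    if ok {
        v = 1
    }
    atomic.StoreInt32(&s.healthy, v)
}

func (s *Server) Healthy() bool {
    return atomic.LoadInt32(&s.healthy) == 1
}

func main() {
    var s Server
    s.SetHealthy(true)
    fmt.Println(s.Healthy()) // true
}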
> Is [locked getters]
> really the recommended way of dealing with objects that might be being used
> in multiple goroutines?
If you must have such an object, then yes, you have to
synchronize. You can use mutexes if you want, or you
can use channels, like in John Asmuth's example.
> It just seems like a lot of boilerplate code and
> an easy-to-make, hard-to-catch pitfall for Go programs.
Saying it is boilerplate suggests that you want something
to generate it for you, as some other languages do.
One problem (of many) with automatically locked getters
and setters is that they provide a false sense of security.
Maybe in this case you really do have a field that is
unrelated to everything else in the data structure, so that
you can read it and write it independently of all other
memory. In that case, an auto-generated locking getter/setter
wrapper would be just perfect. However, as soon as you
have two or more fields that must be adjusted in tandem,
you need a lock around the whole update, or a lock around
the read of the entire set, in order to get a consistent snapshot.
That's harder to auto-generate.
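To make that concrete (my sketch, not code from the thread): when two fields
only make sense together, a single lock has to cover both the whole update and
the whole read.

package main

import (
    "fmt"
    "sync"
)

// Window holds a pair of fields that must stay consistent: per-field
// getters and setters could never guarantee lo <= hi.
type Window struct {
    mu sync.Mutex
    lo int
    hi int
}

// SetRange changes both fields under one lock, so readers never observe a
// half-applied update.
func (w *Window) SetRange(lo, hi int) {
    w.mu.Lock()
    defer w.mu.Unlock()
    w.lo, w.hi = lo, hi
}

// Range returns a consistent snapshot of the pair.
func (w *Window) Range() (lo, hi int) {
    w.mu.Lock()
    defer w.mu.Unlock()
    return w.lo, w.hi
}

func main() {
    var w Window
    w.SetRange(3, 7)
    lo, hi := w.Range()
    fmt.Println(lo, hi) // 3 7
}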
Of course, one programmer's boilerplate is another
programmer's opportunity to write a function. If this kind of
atomic string field comes up over and over in your program, you can write
the type once:
type SyncString struct {
    mu  sync.RWMutex
    val string
}

func (s *SyncString) Get() string {
    s.mu.RLock()
    defer s.mu.RUnlock()
    return s.val
}

func (s *SyncString) Set(x string) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.val = x
}
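A quick usage sketch (mine, assuming the SyncString type above is in scope
and "fmt" is imported), inside some function:

var status SyncString

// Safe to call from any number of goroutines.
go status.Set("running")

// Also safe; Get never returns a half-written string.
fmt.Println(status.Get())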
Russ
> The recommended way is not to access objects from multiple goroutines.
> That is what channels are for, passing around the 'ownership' of
> objects.
>
> "Don't communicate by sharing memory; share memory by communicating."
In this particular case I just need to make sure that I don't read a corrupt
string structure and crash. It doesn't matter if I return the one just before
or just after someone else is changing it. I just need to not crash. Of
course, there is an expectation that the value is not too stale.

Russ's suggestion of basically building thread-safe strings (and maps and
slices) is not a bad solution, but it has the cost of a mutex per element. The
common wisdom of "well, if you need to protect a map you probably need to
protect something else" does suggest a lock for the collection of data rather
than just the map, but at the cost of a new method per element.
> In this particular case I just need to make sure that I don't read a corrupt
> string structure and crash. It doesn't matter if I return the one just
> before or just after someone else is changing it. I just need to not crash.
> Of course, there is expectation that the value is not too stale.
>
> Russ's suggestion of basically building thread safe strings (and maps and
> slices) is not a bad solution but has the cost of a mutex per element. The
> common wisdom of "well, if you need to protect a map you probably need to
> protect something else" does suggest a lock for the collection of data
> rather than just the map but at a cost of a new method per element.
>
> It perhaps is an unfair association of mine. If the language is going to
> tell me it is going to handle all my memory allocation/freeing (i.e.,
> garbage collection) then I start to feel short changed if it is going to
> claim concurrency but make me do the equivalent of alloc/free for
> concurrency (lock/unlock). Not enough friction on my slope, I guess.
We understand how to use garbage collection to more or less handle all
memory allocation issues. As somebody else said in a different way, we
don't understand how to handle all concurrency issues. The reason we
don't understand them is that any comprehensible approach requires some
notion of what collection of changes must be handled atomically. If
there are several associated fields that must all be changed as a unit,
then it doesn't help to make each individual field change atomic.
Go takes a nonintrusive approach, in that you have to specify the
locking yourself. Because that is hard to do correctly, Go strongly
encourages having a single owner for each data item. If a data item has
a single owner, there are no concurrency issues. But sometimes that
doesn't work, and then you need to use locks.
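A sketch of the single-owner pattern (my example, details invented): one
goroutine owns the map, and every other goroutine reads or writes it only by
sending requests over channels, so the map itself needs no lock.

package main

import "fmt"

type getReq struct {
    key   string
    reply chan string
}

type setReq struct {
    key, val string
}

// owner is the only goroutine that ever touches m.
func owner(gets chan getReq, sets chan setReq) {
    m := make(map[string]string)
    for {
        select {
        case r := <-gets:
            r.reply <- m[r.key]
        case r := <-sets:
            m[r.key] = r.val
        }
    }
}

func main() {
    gets := make(chan getReq)
    sets := make(chan setReq)
    go owner(gets, sets)

    sets <- setReq{"status", "ok"}
    reply := make(chan string)
    gets <- getReq{"status", reply}
    fmt.Println(<-reply) // "ok"
}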
Other languages take different approaches. There is no clearly correct
approach in this space, at least not yet.
Ian
> On Wednesday, 5 October 2011 00:08:51 UTC+2, Uriel K wrote:
>>
>> "Don't communicate by sharing memory; share memory by communicating."
>>
> It is a very nice way of doing things, but it doesn't work well if the
> number of readers is much bigger than the number of writers. They would be
> forced to wait, only allowing access one at a time.
It really depends on how your data is structured and how your program is
structured. It's hard to discuss the idea abstractly without concrete
details.
E.g., when there are many readers and few writers, it is often the case
that an update need not be pushed atomically to all readers. When that
is the case, each reader can have a channel which it checks via select
when it is ready to use an updated value. Then writes could go through
a single goroutine which would then push out the updated value via
channels to all readers. This is just an example of an alternate
approach which will work in some cases.
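A rough sketch of that shape (mine, with invented details): a single writer
goroutine fans an updated value out to per-reader channels, and each reader
picks up the newest value with a non-blocking select whenever it is
convenient.

package main

import (
    "fmt"
    "time"
)

func reader(id int, updates <-chan string) {
    val := "initial"
    for i := 0; i < 5; i++ {
        // Non-blocking check: take the newest value if one has arrived,
        // otherwise keep working with what we already have.
        select {
        case v := <-updates:
            val = v
        default:
        }
        fmt.Printf("reader %d using %q\n", id, val)
        time.Sleep(10 * time.Millisecond)
    }
}

func main() {
    // One buffered channel per reader; the single writer pushes to all.
    chans := make([]chan string, 3)
    for i := range chans {
        chans[i] = make(chan string, 1)
        go reader(i, chans[i])
    }

    time.Sleep(25 * time.Millisecond)
    for _, c := range chans {
        c <- "updated" // buffer of 1, so this single update never blocks the writer
    }
    time.Sleep(50 * time.Millisecond)
}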
Ian