I am implementing a C++ thread-safe class as follows:
class foo {
private:
    pthread_mutex_t mutex;
public:
    void fooMethod() { /* do something: lock and unlock mutex */ }
};
What is the difference between declaring the mutex as static or not?
Thanks
Perseus
What's your implied meaning of this "thread safe class" thing,
exactly, please?
> class foo {
> private:
> pthread_mutex_t mutex;
> public:
> void fooMethod() { /* do something: lock and unlock mutex */ }
What are the associated data/objects you want to protect
(associate) with class- or object-instance-specific mutex(es)?
> };
>
> What is the difference between declaring the mutex as static or not?
Storage duration aside, FYI:
http://www.opengroup.org/onlinepubs/007904975/functions/pthread_mutex_init.html
"....
In cases where default mutex attributes are appropriate,
the macro PTHREAD_MUTEX_INITIALIZER can be used to
initialize mutexes that are statically allocated.
The effect shall be equivalent to dynamic initialization
by a call to pthread_mutex_init() with parameter attr
specified as NULL, except that no error checks are
performed.
...."
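Concretely, the two routes look something like this (a small sketch, not
taken from the spec; the function name init_member_mutex is just for
illustration):

#include <pthread.h>

// Static storage duration: the macro alone initializes it.
static pthread_mutex_t static_mtx = PTHREAD_MUTEX_INITIALIZER;

// Anything without static storage duration (e.g. a class member) is
// initialized dynamically; a NULL attr gives the same default attributes,
// but pthread_mutex_init() can also report errors.
void init_member_mutex(pthread_mutex_t *m)
{
    pthread_mutex_init(m, NULL);
}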
regards,
alexander.
In C++, a static member is one that is defined for the class rather than
the object (a particular instance of that class). So you would use static
if you want to lock _all_ foos when inside fooMethod(); you would not use
static if you want to lock only the foo you are working with.
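Roughly, the two choices look like this (a minimal sketch; the class names
foo_per_object and foo_class_wide are invented for illustration, and error
checking is omitted):

#include <pthread.h>

class foo_per_object {
public:
    foo_per_object()  { pthread_mutex_init(&mutex_, NULL); }
    ~foo_per_object() { pthread_mutex_destroy(&mutex_); }
    void fooMethod() {
        pthread_mutex_lock(&mutex_);   // excludes other threads from *this* object only
        // ... operate on this object's data ...
        pthread_mutex_unlock(&mutex_);
    }
private:
    pthread_mutex_t mutex_;            // one mutex per instance
};

class foo_class_wide {
public:
    void fooMethod() {
        pthread_mutex_lock(&mutex_);   // excludes other threads from every instance's fooMethod()
        // ... operate on data shared by all instances ...
        pthread_mutex_unlock(&mutex_);
    }
private:
    static pthread_mutex_t mutex_;     // one mutex shared by all instances
};

// Static storage duration, so PTHREAD_MUTEX_INITIALIZER applies here.
pthread_mutex_t foo_class_wide::mutex_ = PTHREAD_MUTEX_INITIALIZER;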
A few examples of where you'd want static:
- generating a unique ID for each object. You'd want a static mutex
around the function that increments the sequence, since the sequence is
also static - it applies to all foos (see the sketch after this list).
- Dealing with many objects of the class in the same operation, where you
want to be sure to avoid deadlock. Using a static mutex is a very easy way
to do that. Some smart pointer classes use a doubly-linked circular list to
keep track of references; it would be complicated to correctly lock an
entire chain otherwise, if it's even possible.
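For the unique-ID case, a rough sketch might look like this (the member
names are invented for illustration, and error checking is omitted):

#include <pthread.h>

class foo {
public:
    foo() : id_(next_id()) {}
    unsigned long id() const { return id_; }
private:
    // The sequence and its mutex are both static: one shared counter,
    // one shared lock, covering every foo ever constructed.
    static unsigned long next_id() {
        pthread_mutex_lock(&seq_mutex_);
        unsigned long id = ++sequence_;
        pthread_mutex_unlock(&seq_mutex_);
        return id;
    }
    unsigned long id_;
    static unsigned long sequence_;
    static pthread_mutex_t seq_mutex_;
};

unsigned long foo::sequence_ = 0;
pthread_mutex_t foo::seq_mutex_ = PTHREAD_MUTEX_INITIALIZER;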
And per-object:
- when there is a lot of contention. More granular locking helps.
- when the resources you are accessing are more tied to that particular
object. This is really true in most situations, I think.
--
Scott Lamb
Gee! Forget about this "per-object" locking thing. *Objects*
(and their methods) aren't {generally} *observable application
specific* TRANSACTIONS on some thread-shared stuff -- THAT's
what is really needed to be "protected"/formed via locking
(NOT just this or that "object", which could actually be
thread-private [no locking needed at all] in many cases,
to begin with... would you EVER want/need SYNCHRONIZED
char/int/etc.built-in-type "classes" with "synchronized"
inc/dec/etc.?!).
regards,
alexander.
P.S. Please don't tell me about MS-"interlocked" stuff. ;-)
Hmmmmm. Nearly all of my locks/mutexes protect a single instance of
some class, possibly a container class such as a queue that is shared
among producers and consumers. Using a single (static) lock to
protect all instances of a container class would have a negative
impact on potential concurrency. It would also cause a deadlock if
one instance of that class requests a service from another. YMMV.
Tom Payne
Synchronization (higher level) utility classes (like queues
"shared among producers and consumers") and the fact that
one could indeed wrap/decorate almost everything as a
"class" aside, our experiences just differ here, I guess.
> Using a single (static) lock to
> protect all instances of a container class would have a negative
> impact on potential concurrency.
Sure/maybe. Locking granularity is a very important
design decision; I agree. ;-)
> It would also cause a deadlock if
> one instance of that class requests a service from another.
You mean calling something from inside synchronized
region(s) that is meant to blindly lock it "again"?
> YMMV.
Yep. That would be a serious design or coding error,
for sure (and use of *recursive locking* in general,
is just another frequent *design error*, BTW ;-)).
regards,
alexander.
: Sure/maybe. Locking granularity is a very important
: design decision; I agree. ;-)
The finer the granularity, the greater the potential concurrency, but
the higher the lock-handling overhead. I'm told that the Solaris
kernel has lots of locks. I'm told that Linus Torvalds, fearing
lock-handling overhead, opted instead for coarse granularity in the
Linux kernel. I presume that is why Linux's performance falls behind
other SMP Unixes when the degree of multiprocessing exceeds about
ten.
:> It would also cause a deadlock if
:> one instance of that class requests a service from another.
: You mean calling something from inside synchronized
: region(s) meant to blindly lock it "again"?
Not necessarily "it". It's common to encounter situations (say, in
modeling and simulation) where one instance of a class, say Widget,
performs an operation on another Widget and where that operation needs
exclusive access to the second Widget's data. If they share a common
lock that doesn't support recursive acquisition, the thread will
self-deadlock. If it does support recursive acquisition, there's the
issue of what happens to the lock if the operation waits on a condition
in the second Widget. If it stays locked, we have a deadlock. If it
does not, we have unlocked the first Widget without necessarily having
restored its invariants. Too many opportunities for astonishment.
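To make the self-deadlock concrete, here is a rough sketch (Widget's
members are invented for illustration), assuming one class-wide,
non-recursive mutex shared by all Widgets:

#include <pthread.h>

class Widget {
public:
    Widget() : value_(0) {}
    void transfer_to(Widget& other) {
        pthread_mutex_lock(&class_mutex_);   // locked on behalf of *this* Widget
        other.receive(value_);               // receive() relocks the same mutex below
        value_ = 0;
        pthread_mutex_unlock(&class_mutex_);
    }
    void receive(int v) {
        pthread_mutex_lock(&class_mutex_);   // same thread, same non-recursive mutex:
                                             // self-deadlock (undefined behavior for
                                             // the default mutex type)
        value_ += v;
        pthread_mutex_unlock(&class_mutex_);
    }
private:
    int value_;
    static pthread_mutex_t class_mutex_;
};

pthread_mutex_t Widget::class_mutex_ = PTHREAD_MUTEX_INITIALIZER;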
:> YMMV.
: Yep. That would be a serious design or coding error,
: for sure (and use of *recursive locking* in general,
: is just another frequent *design error*, BTW ;-)).
Agreed. I've never seen a design decision to which the proper answer
was "recursive acquisition". In my experience, recursive locking just
makes bugs harder to detect and locate.
Tom Payne
Even with a single lock per instance, object-level locking does not
necessarily provide correct behavior in many cases when some object state is
required to be maintained across multiple calls.
class BadContainer {
public:
    // (value_type, reference, Mutex, Lock, and the element storage are elided)
    reference operator[](size_t index) // "synchronized"
    {
        Lock l(m_mx);                  // locks only for the duration of this one call
        // ... return a reference to the element at index ...
    }

    class iterator {
    public:
        BadContainer::reference operator*()
        { return m_cont[m_index]; }    // each dereference locks separately
    private:
        BadContainer& m_cont;
        size_t m_index;
    };

    iterator begin(); // "synchronized"
    iterator end();   // "synchronized"

private:
    Mutex m_mx;       // per-instance mutex ("Mutex"/"Lock" stand for some RAII wrapper)
};
{
    BadContainer cont;

    // thread 1: each individual call into cont is "synchronized",
    // but nothing locks the traversal as a whole
    for_each(cont.begin(), cont.end(), mem_fun(&BadContainer::DoSomething));

    // thread 2: concurrently shrinks the container, so thread 1's
    // iterators can be invalidated between its individually locked calls
    while (!cont.empty()) {
        // use cont.top()
        cont.pop_back();
    }
}
hys
<t...@cs.ucr.edu> wrote in message news:abr5t0$4c0$1...@glue.ucr.edu...
> Not necessarily "it". It's common to encounter situations (say, in
> modeling and simulation) where one instance of a class, say Widget,
> performs an operation on another Widget and where that operation needs
> exclusive access to the second Widget's data. If they share a common
> lock that doesn't support recursive acquisition, the thread will
> self-deadlock. If it does support recursive acquisition, there's the
> issue of what happens to the lock if the operation waits on a condition
> in the second Widget. If it stays locked, we have a deadlock. If it
> does not, we have unlocked the first Widget without necessarily having
> restored its invariants. Too many opportunities for astonishment.
>
--
Hillel Y. Sims
hsims AT factset.com
Perhaps there is a semantic issue here. In my paradigm, an object
(i.e., server) offers services via service routines (a.k.a. "methods"
or "member functions"), which, if need be, acquire the surrounding
server's local lock to ensure exclusive access. In my view, it is the
client thread, which requests the service (i.e., invokes the service
routine), that acquires the lock. But if you take the position that
the service routine acquires the lock on behalf of its surrounding
server, I can't really argue the matter.
: -- the client code that uses Widgets a and b would acquire the
: shared lock protecting the access to both of them, and then a and b would do
: their tangled business to each other normally without worrying about
: recursive locks / deadlocks / broken invariants / astonishment.
The point here is to untangle that "tangled business" in such a way
that it does not tie up each and every Widget and in such a way that
the designers of Widget don't have to worry that waiting while that
"tangled business" is under way will leave all Widgets deadlocked.
: This also
: provides an effective metric to judge appropriate level of granularity (at
: what level can you avoid redundant/recursive locking and guarantee properly
: synchronized access to objects/resources).
For performance reasons I prefer much finer granularity than
class-wide locking.
: Even with a single lock per instance, object-level locking does not
: necessarily provide correct behavior in many cases when some object state is
: required to be maintained across multiple calls.
Every tool has its limitations and is subject to abuse.
Tom Payne