
How to properly initialize shared memory on Unix?


Andrew Schetinin

Aug 13, 2006, 3:26:15 AM
Hi,

Looks like a newbie question :-) but I've searched a lot on the Internet,
and checked quite a lot of books looking for an answer, with no success
yet.

The question is: suppose there is a shared library which may be loaded
by different processes, and potentially simultaneously. This shared
library needs to create (open) a shared memory region for synchronizing
its operations. How is it possible to ensure that only a single process
will create and initialize the shared memory, and that the other processes
will only be able to open it after initialization is complete?

Most books/discussions I found assume there is a master process which
is the only one who creates the shared memory object and initializes it
before usage. But even then, suppose the kernel object is created, and
not yet initialized, and a client process opens that kernel object -
how to prevent the client from reading uninitialized data?

Sincerely,

Andrew

Jim Langston

Aug 13, 2006, 3:34:12 AM
"Andrew Schetinin" <asche...@gmail.com> wrote in message
news:1155453975.3...@74g2000cwt.googlegroups.com...

Sounds like you need a locking mechanism. Ask in comp.programming.threads.

I know there are exclusive locks that work at the OS level, I just can't
remember how to access them; they'll know in there.


Jim Langston

Aug 13, 2006, 3:55:07 AM
"Jim Langston" <tazm...@rocketmail.com> wrote in message
news:ZBADg.733$n8....@newsfe04.lga...

Er, this is comp.programming.threads. Don't I feel like a n00b


Arnold Hendriks

Aug 13, 2006, 6:33:21 AM
"Andrew Schetinin" <asche...@gmail.com> wrote in message
news:1155453975.3...@74g2000cwt.googlegroups.com...

> The question is: suppose there is a shared library which may be loaded
> by different processes, and potentially simultaneously. This shared
> library needs to create (open) a shared memory region for synchronizing
> its operations. How is it possible to ensure that only single process
> will create and initialize the shared memory, and the other processes
> will only be able to open it after initialization is complete?

If you implement the shared memory as a mmap-ed file, perhaps name it
'mymemory.initializing' and rename it to 'mymemory' when other processes
are allowed to access it? Perhaps OS-provided forms of advisory/mandatory
locking at the filesystem level (e.g. flock) will do?
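Arnold's rename trick can be sketched as follows; the file names and size are made up for illustration and error handling is minimal. The key point is that rename() is atomic on POSIX filesystems, so other processes either see no file at all or a fully initialized one:

```c
/* Sketch of the rename trick (hypothetical file names): the creator
 * builds and fills "mymemory.initializing", then publishes it with an
 * atomic rename(); readers only ever see a fully initialized file. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define SHM_SIZE 4096

int create_and_publish(const char *tmp, const char *final) {
    int fd = open(tmp, O_RDWR | O_CREAT | O_EXCL, 0600);
    if (fd < 0) return -1;                 /* someone else is creating */
    if (ftruncate(fd, SHM_SIZE) < 0) { close(fd); return -1; }

    char *p = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { close(fd); return -1; }

    strcpy(p, "initialized data");         /* ... full initialization ... */
    msync(p, SHM_SIZE, MS_SYNC);
    munmap(p, SHM_SIZE);
    close(fd);

    /* atomic publish: after this, openers of 'final' see complete data */
    return rename(tmp, final);
}
```

Openers would then simply retry opening 'mymemory' until it appears.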


Andrew Schetinin

Aug 13, 2006, 6:50:16 AM
Hi Arnold,

> If you implement the shared memory as a mmap-ed file, perhaps naming it

I agree, that would work for a file. The file system can often serve as a
useful synchronization tool. But in this case it is not based on a
file. It is created with either shmget (System V) or shm_open
(POSIX).

One way I thought of initially was relying on the shared memory being
zeroed, but that's not the case on all platforms, as far as I've heard.

Another way I'm thinking about right now is to create the initial
structure with a smaller size, initialize it with the most important stuff,
then resize the shared memory (assuming that the new memory after
resizing does not need to be initialized). This way clients will need
to wait until the memory is resized from some known value to some larger
size. Not sure if resizing is portable.

Sincerely,

Andrew

David Hopwood

Aug 13, 2006, 9:22:01 AM
Andrew Schetinin wrote:
> Hi Arnold,
>
>>If you implement the shared memory as a mmap-ed file, perhaps naming it
>
> I agree, that would work for a file.
>
> File system can often serve as a
> useful synchronization tool. But in this case it is not based on a
> file. It is created with either of shmget (SystemV) or shm_open
> (POSIX).
>
> One way I thought initially was relying on the shared memory to be
> zeroed, but that's not the case on all platforms, as far as I heard.

Isn't it? That would be a security bug. It would also be a violation of
POSIX:

<http://www.opengroup.org/onlinepubs/009695399/functions/shmget.html>

# When the shared memory segment is created, it shall be initialized with
# all zero values.

Similarly, using ftruncate on an fd returned by shm_open will result in
a shared memory object initialized to all zeroes (as shown in the example
at <http://www.opengroup.org/onlinepubs/009695399/functions/shm_open.html>).

<http://www.opengroup.org/onlinepubs/009695399/functions/ftruncate.html>

# If the file size is increased, the extended area shall appear as if it
# were zero-filled.
[...]
# [SHM] [Option Start] If fildes refers to a shared memory object, ftruncate()
# shall set the size of the shared memory object to length. [Option End]

(If each process does the ftruncate before trying to access the memory,
it can't get a SIGBUS from trying to access it when it is still zero-length.
This assumes that the memory has a fixed size known to all the processes.)
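The zero-fill guarantee quoted above is easy to check; this sketch uses a regular temporary file instead of shm_open (purely to avoid the -lrt link step on some systems), but per the ftruncate specification quoted, the semantics for a shared memory object are the same:

```c
/* Demonstrate POSIX ftruncate zero-fill: extending a file (or a shared
 * memory object) exposes the new area as all-zero bytes, so a nonzero
 * "magic" field can safely signal "initialization complete". */
#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE 4096

unsigned char *map_zeroed_region(const char *path) {
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0) return NULL;
    if (ftruncate(fd, REGION_SIZE) < 0) { close(fd); return NULL; }

    unsigned char *p = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
    close(fd);
    return (p == MAP_FAILED) ? NULL : p;
}
```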

> Another way I'm thinking about right now is to create the initial
> structure of smaller size, initialize it with most important staff,
> than resize the shared memory (assuming that the new memory after
> resizing does not need to be initialized). This way clients will need
> to wait until the memory is resized to some larger size from some known
> value. Not sure if resizing is portable.

POSIX says that all systems supporting the SHM option should support
resizing, with zero-fill.

--
David Hopwood <david.nosp...@blueyonder.co.uk>

Andrew Schetinin

Aug 13, 2006, 9:56:02 AM
Hi David,

> > One way I thought initially was relying on the shared memory to be
> > zeroed, but that's not the case on all platforms, as far as I heard.
>
> Isn't it? That would be a security bug. It would also be a violation of
> POSIX:

Thank you for pointing that out. I've read the shmget man page too many
times (LOL) to pay attention to these "little" details :-) shame on
me...

Well, then initializing some field to a "magic" constant that signals
completion of shared memory initialization would work. At
least for the case when there is a single creator process.

For the case with simultaneous creators it is not possible to rely on
the shared memory content. In that case, probably, another named
synchronization object (like a mutex or semaphore) would work.

Sincerely,

Andrew

Andrew Schetinin

Aug 14, 2006, 11:23:24 AM

Andrew Schetinin wrote:

> For the case with simultaneous creators it is not possible to rely on
> the shared memory content. In this case, probably, another named
> synchronization object (like mutex or semaphore), would work.

The problem with such a named mutex (please correct me if I'm wrong) is
that a semaphore-based mutex does not "unlock" when the
process crashes.
This would mean that once the process which locked the mutex crashes,
no other process could access the locked object.

By contrast, named mutexes in Windows do have such a feature - they
automatically unlock if the process terminates.

Sincerely,

Andrew

David Hopwood

Aug 14, 2006, 1:00:37 PM

You mean "misfeature".

Automatic unlocking on a crash is not the desired behaviour. Some way of
detecting this situation and *explicitly* bypassing the lock is what
should be desired.

--
David Hopwood <david.nosp...@blueyonder.co.uk>

Andrew Schetinin

Aug 15, 2006, 3:11:53 AM
David Hopwood wrote:
> You mean "misfeature".
>
> Automatic unlocking on a crash is not the desired behaviour. Some way of
> detecting this situation and *explicitly* bypassing the lock is what
> should be desired.

Well, I see your point, but do not agree with it.

Let's take a simple example: we need to create and initialize some
object atomically. There is a flag (external or internal) which serves
as a signal that the object was initialized properly, and this flag is
set at the end. Now, the entire process of creation, initialization,
and setting that flag, is surrounded by lock/unlock.

Let's take a Windows mutex - the process which tries to initialize the
object will either complete the work or crash in the middle, unlocking
the mutex in either case. The other process will lock the mutex and
right away detect that the object is not initialized (in the case of a
crash) - which would allow the second process to re-initialize the
object. I really do not see a problem with that.

In the case of Unix semaphores, the only way (see sidenote) to detect that
another process crashed and left the mutex locked is to wait for some
timeout, and then assume that the other process crashed (not a healthy
assumption, IMHO). But the timeout should allow the first process a
relatively long time to complete the initialization, so it's not
very convenient in routines which should return fast (my case). Please
advise me if there is a better way....

Sidenote: well, System V semaphores also record the pid of the last
process that performed an operation on the semaphore (probably POSIX
semaphores do too), and it is possible to check whether that process is
alive, which is not a trivial task by itself - such a test is usually
done by sending signal zero to the process in question. But this
technique is not something recommended in books - it's more like a
hack, and it's not good to rely on hacks in such a critical area as
synchronization.
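The "signal zero" test mentioned in the sidenote would look roughly like this (a sketch; as noted above, pid reuse is one reason it counts as a hack rather than a reliable liveness check):

```c
/* Check whether a process is (probably) alive by sending signal 0:
 * no signal is delivered, but existence and permission are checked.
 * Caveat: pids are reused, so this can report a *different* process
 * as "alive" - one reason the technique is considered a hack. */
#include <errno.h>
#include <signal.h>
#include <sys/types.h>

int process_alive(pid_t pid) {
    if (kill(pid, 0) == 0)
        return 1;               /* exists and we may signal it */
    return errno == EPERM;      /* exists but owned by someone else */
}
```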

BTW, System V semaphores seem to have such a feature of automatic
unlocking (according to Stevens; it's not clear and I want to test it).
But System V semaphores also have a race condition because they are
created and initialized in two separate system calls. Stevens says
there is a way to work around this problem by verifying the timestamp in
the semaphore structure, but this is not reliable, especially
on multi-CPU computers.

Sincerely,

Andrew

Andrew Schetinin

Aug 15, 2006, 5:24:06 AM

> BTW, SystemV semaphores seem to have such a feature of automated
> unlocking (according to Stevens, it's not clear and I want to test it).

One guy in another forum pointed out that he tested SEM_UNDO flag on
Linux a couple of years ago, and found it not very reliable.

Sincerely,

Andrew

David Hopwood

Aug 15, 2006, 11:34:30 AM
Andrew Schetinin wrote:
> David Hopwood wrote:
>
>>You mean "misfeature".
>>
>>Automatic unlocking on a crash is not the desired behaviour. Some way of
>>detecting this situation and *explicitly* bypassing the lock is what
>>should be desired.
>
> Well, I see your point, but do not agree with it.
>
> Let's take a simple example: we need to create and initialize some
> object atomically. There is a flag (external or internal) which serves
> as a signal that the object was initialized properly, and this flag is
> set at the end. Now, the entire process of creation, initialization,
> and setting that flag, is surrounded by lock/unlock.
>
> Let's take Windows mutex - the process which tries to initialize the
> object will either complete the work, or crash in the middle, unlocking
> the mutex in either case. The other process will lock the mutex and
> right away detect that the object is not initialized (in a case of
> crash) - it would allow to the second process to re-initialize the
> object.

If the process crashes while it holds the mutex, then without relying on
platform-specific memory ordering properties, we can only say that some
unspecified subset of the write operations executed by that process before
the crash will be visible -- and so detecting that the object is not
initialized may not be as simple as you suggest.

It is possible to rely on memory ordering, essentially doing lock-free
programming for the case in which the process crashed. Now that I think
about it, this is less complicated than I thought when writing my previous
post. It is sufficient (even for operations other than initialization) for
the object to have a 'consistent' flag, and for each process to do the
following:

lock mutex
if not consistent {
    ... recovery ...
} else {
    consistent := false
    full barrier
    ... normal processing ...
}
full barrier
consistent := true
unlock mutex

which should work in any memory model that supports full (compiler and
processor) memory barriers [*]. Note, however, that it may be tricky to
write correct recovery code for cases where we are not just doing
initialization.
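As a single-process illustration of the pattern above (the struct layout, the names, and the GCC __sync_synchronize() builtin standing in for "full barrier" are my choices, not part of the original pseudocode; a real version would place the object in shared memory behind the mutex):

```c
/* Sketch of the consistent-flag pattern: clear the flag before
 * mutating, full barrier, mutate, full barrier, set the flag.
 * A locker that finds the flag clear knows a writer died mid-update
 * and must run recovery instead of trusting the data. */
#include <string.h>

struct shared_obj {
    int consistent;     /* 1 = data is trustworthy */
    int data[16];
};

void update(struct shared_obj *o, int value, int simulate_crash) {
    /* caller is assumed to hold the (crash-releasing) mutex */
    if (!o->consistent) {
        memset(o->data, 0, sizeof o->data);   /* ... recovery ... */
    } else {
        o->consistent = 0;
        __sync_synchronize();                 /* full barrier */
        o->data[0] = value;                   /* ... normal processing ... */
        if (simulate_crash) return;           /* died inside the update */
    }
    __sync_synchronize();                     /* full barrier */
    o->consistent = 1;
}
```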

> In a case of Unix sockets the only way (see sidenote) to detect that
> another process crashed and left the mutex locked is to wait for some
> timeout, and then assume that the other process crashed (not a healthy
> assumption, IMHO).

I agree; I was not defending the POSIX semantics.


[*] Using less than a full barrier would be pointless over-optimization,
given that we're also using an OS-level mutex.

--
David Hopwood <david.nosp...@blueyonder.co.uk>

Chris Thomasson

Aug 15, 2006, 6:54:23 PM
"Andrew Schetinin" <asche...@gmail.com> wrote in message
news:1155453975.3...@74g2000cwt.googlegroups.com...
> Hi,
>
> Looks like a newbie question :-) but I've searched a lot on Internet,
> and checked quite a lot of books looking for an answer. With no success
> yet.
>
> The question is: suppose there is a shared library which may be loaded
> by different processes, and potentially simultaneously. This shared
> library needs to create (open) a shared memory region for synchronizing
> its operations. How is it possible to ensure that only single process
> will create and initialize the shared memory, and the other processes
> will only be able to open it after initialization is complete?

Well, you could use something like this:


/* crude pseudo-code for shm init functionality */

#define EFATAL (-666)

int xsem_open_and_lock(char const *name, sem_t **psem) {
    int retry = 0;

    /* try to create a locked semaphore */
    sem_t *sem = sem_open(name, O_CREAT | O_EXCL, ..., 0);

    /* did we fail to create? */
    while (sem == SEM_FAILED) {

        /* infinite loop prevention */
        ++retry;
        if (retry >= 100) { return ETIMEDOUT; }

        /* did the semaphore already exist? */
        if (errno == EEXIST) {

            /* this should not create a semaphore! */
            sem = sem_open(name, 0);

            /* did we attach? */
            if (sem != SEM_FAILED) {
                int ret;
                *psem = sem;

                /* okay, we attached to a semaphore; lock it! */
                ret = xsem_lock(sem);
                if (!ret) { return EEXIST; }

                /* the SHI% hit the fuc%king fan! */
                return EFATAL;
            }
        }

        /* retry the whole damn thing! */
        sem = sem_open(name, O_CREAT | O_EXCL, ..., 0);
    }

    *psem = sem;
    return 0;
}

int xsem_lock(sem_t *sem) {
    return sem_wait(sem);
}

int xsem_unlock(sem_t *sem) {
    return sem_post(sem);
}

/* simple usage */

/* your shm "descriptor" */
struct xshm_t {
    sem_t *sem;
    void *buf;
};

int xshm_init_whatever(struct xshm_t *_this) {

    /* lock the "shm lock/guard" */
    int ret = xsem_open_and_lock("xshmSemaphore", &_this->sem);

    /* we are in a locked region wrt _this->sem */

    /* did we create? */
    if (!ret) {

        /* safe to fully initialize your shm to whatever */

    } else {

        assert(ret == EEXIST);

        /* safe to read your shm */
    }

    /* unlock the "shm lock/guard" */
    ret = xsem_unlock(_this->sem);
    assert(!ret);

    return 0;
}


This "should" do what you want... It basically guards shared memory creation
and initialization by locking an external POSIX semaphore.


Any thoughts?


Andrew Schetinin

Aug 16, 2006, 2:45:44 AM

David Hopwood wrote:
> If the process crashes while it holds the mutex, then without relying on
> platform-specific memory ordering properties, we can only say that some
> unspecified subset of the write operations executed by that process before
> the crash will be visible -- and so detecting that the object is not
> initialized may not be as simple as you suggest.

In my case working with the memory is relatively atomic - at the end of
each operation a 4-byte value is updated, and if it is not updated
because of a crash at any of the previous stages, all changes are
"invisible". Kind of a natural rollback. And I assume that it is
possible to rely on a consistent update of 4 bytes, assuming they are
aligned.

Chris Thomasson wrote:
> int xsem_open_and_lock(char const* name, sem_t **psem) {

> /* try to create a locked semaphore */
> sem_t *sem = sem_open(name, O_CREAT | O_EXCL, ..., 0);

> /* this should not create a semaphore! */
> sem = sem_open(name, 0);

Nice code :-) I just can't get this trick with
try-create-then-try-open...
Why not simply call sem_open(name, O_CREAT, ..., 0)?

Locking a named semaphore is simple; my problem with it was that if the
process exits by signal or crashes while holding the lock on the semaphore,
no other process could open the shared memory until system restart (or
forever, on platforms where named semaphores are file-system
persistent).

To solve that it is necessary to define a timeout after which the lock
is assumed to have expired. Not a very healthy assumption, and the timeout
would have to be fairly long, but I really don't see another way.

Andrew

Chris Thomasson

Aug 16, 2006, 3:55:45 PM
"Andrew Schetinin" <asche...@gmail.com> wrote in message
news:1155710744....@m79g2000cwm.googlegroups.com...

>
> David Hopwood wrote:
>> If the process crashes while it holds the mutex, then without relying on
>> platform-specific memory ordering properties, we can only say that some
>> unspecified subset of the write operations executed by that process
>> before
>> the crash will be visible -- and so detecting that the object is not
>> initialized may not be as simple as you suggest.
>
> In my case working with memory is relatively atomic - at the end of
> each operation a 4 byte value is updated, and if it is not updated
> because of a crash at any of the previous stages, all changes are
> "invisible". Kind of a natural rollback. And I assume that it is
> possible to rely on a consistent update of 4 bytes, assuming they are
> aligned.

You need to be sure that the operations' effects on shared memory are
visible before the updates to the "4-byte value" become visible... You
would use a memory barrier instruction (e.g., membar #StoreLoad|#StoreStore)
after the operations. Of course, when a crash occurs you might not be so
sure that the membar instruction was even executed...


> Chris Thomasson wrote:
>> int xsem_open_and_lock(char const* name, sem_t **psem) {
>> /* try to create a locked semaphore */
>> sem_t *sem = sem_open(name, O_CREAT | O_EXCL, ..., 0);
>> /* this should not create a semaphore! */
>> sem = sem_open(name, 0);
>
> Nice code :-) I just cannot get this trick with
> try-create-then-try-open...
> Why not to simply call sem_open(name, O_CREAT, ..., 0) ?

I wanted to be able to distinguish between the "creator" of the semaphore
and an "opener" of the semaphore. The "creator" is responsible for
"completely initializing" the shared memory to whatever it wants. The
semaphore is created in a locked state, so a "creator" has atomic mutual
exclusion wrt concurrent "openers". An "opener" waits for the "creator" to
do whatever it is going to do before it takes a reference or whatever to the
shared memory.

The semaphore in the example code is only used when a process
attaches/detaches from the shared memory that it is protecting. You are
free to use other synchronization methods after the initialization of shared
memory is complete... The example code has functionality that is basically
equivalent to pthread_once()...


> Locking a named semaphore is simple, my problem with it was that if the
> process exits-by-signal/crashes while holding lock on the semaphore, no
> other process could open the shared memory till system restart (or
> forever, for platforms where named semaphores are file-system
> persistant).

The thread that "locked" the semaphore does not have to be the thread that
"unlocks" it; semaphores have no concept of ownership. So, another process
can post to the semaphore to forcefully unlock it. You have got to be very
careful here... If you don't do things correctly, a very nasty
race condition can and will occur.


David Hopwood

Aug 16, 2006, 3:48:12 PM
Andrew Schetinin wrote:
> David Hopwood wrote:
>
>>If the process crashes while it holds the mutex, then without relying on
>>platform-specific memory ordering properties, we can only say that some
>>unspecified subset of the write operations executed by that process before
>>the crash will be visible -- and so detecting that the object is not
>>initialized may not be as simple as you suggest.
>
> In my case working with memory is relatively atomic - at the end of
> each operation a 4 byte value is updated, and if it is not updated
> because of a crash on any of the previous stage, all changes are
> "invisible". Kind of a natural rollback.

That isn't sufficient. Since the compiler may reorder the write to
the 4-byte value before previous writes, you really do need the memory
barriers (even on architectures with fairly strong memory ordering such
as x86 and SPARC).

--
David Hopwood <david.nosp...@blueyonder.co.uk>

Chris Thomasson

Aug 16, 2006, 6:13:26 PM
"David Hopwood" <david.nosp...@blueyonder.co.uk> wrote in message
news:aQlEg.104267$9d4....@fe2.news.blueyonder.co.uk...

I remember doing something like this to a critical shared data-structure:

lock mutex
if consistent < 5 {
    ... specialized recovery depending on what stage we're at ...
} else {
    consistent := 0
    full barrier

    ... normal processing for stage 1 ...
    full barrier
    consistent := 1

    ... normal processing for stage 2 ...
    full barrier
    consistent := 2

    ... normal processing for stage 3 ...
    full barrier
    consistent := 3

    ... normal processing for stage 4 ...
    full barrier
    consistent := 4
}

... final processing for consistent data at stage 4 ...
full barrier
consistent := 5

... normal processing complete; stage 5 means coherent data ...

unlock mutex

Technically, you can get an "arbitrarily fine-grained" recovery scheme for highly
critical data that way... IMHO, this simple method can make recovery code
more concise...


Chris Thomasson

Aug 17, 2006, 4:34:03 AM
"David Hopwood" <david.nosp...@blueyonder.co.uk> wrote in message
news:aQlEg.104267$9d4....@fe2.news.blueyonder.co.uk...
> Andrew Schetinin wrote:
>> David Hopwood wrote:
[...]

I should point out that a #LoadLoad barrier is needed after you load the
consistent flag, and before the check... Like this:


lock mutex
local := consistent; /* load the flag */
membar #LoadLoad /* order the load of the flag */
if not local { /* is local not consistent */

Chris Thomasson

Aug 17, 2006, 4:41:07 AM

"Chris Thomasson" <cri...@comcast.net> wrote in message
news:jeSdnTviNadSunnZ...@comcast.com...

> "David Hopwood" <david.nosp...@blueyonder.co.uk> wrote in message
> news:aQlEg.104267$9d4....@fe2.news.blueyonder.co.uk...
>> Andrew Schetinin wrote:
>>> David Hopwood wrote:
> [...]
>
> I should point out that a #LoadLoad barrier is needed after you load the
> consistent flag, and before the check... Like this:
>
[...]

Now that I think about it some more, the semantics of the lock itself should
make the #LoadLoad unnecessary...


However, a process crashing in the middle of a critical
section may send its locking semantics off to "undefined behavior" land...


So, the explicit load ordering wrt the consistent flag will probably be
needed after all...


Humm....


David Hopwood

Aug 17, 2006, 2:11:42 PM
Chris Thomasson wrote:

> "David Hopwood" <david.nosp...@blueyonder.co.uk> wrote:
>>Andrew Schetinin wrote:
>>>David Hopwood wrote:
>
> [...]
>
> I should point out that a #LoadLoad barrier is needed after you load the
> consistent flag, and before the check... Like this:
>
> lock mutex
> local := consistent; /* load the flag */
> membar #LoadLoad /* order the load of the flag */
> if not local { /* is local not consistent */
> ... recovery ...
> } else {
> consistent := false
> full barrier
> ... normal processing ...
> }
> full barrier
> consistent := true
> unlock mutex

I believe you're mistaken; it is not needed.

(Even though the load of 'consistent' may be reordered after any loads
in the recovery code, this does not matter because all such loads are of
locations protected by the mutex.)

You wrote:
> However, in cases where a processes crashes in the middle of a critical
> section may send its locking semantics off to "undefined behavior" land...

Remember that this pseudocode was for a platform that implicitly releases
shared mutexes on a crash. By assumption, the behaviour of a shared mutex
on a crash is well-defined on such a platform.

Even for POSIX, a platform would be severely broken if a crash could cause
the state of a *shared* mutex to become undefined.

--
David Hopwood <david.nosp...@blueyonder.co.uk>

Alexander Terekhov

Aug 17, 2006, 6:26:40 PM

David Hopwood wrote:
[...]

> Remember that this pseudocode was for a platform that implicitly releases
> shared mutexes on a crash. By assumption, the behaviour of a shared mutex
> on a crash is well-defined on such a platform.
>
> Even for POSIX, a platform would be severely broken if a crash could cause
> the state of a *shared* mutex to become undefined.

In a sense, the state is indeed undefined. A watchdog process should
emergently recycle the whole crew and stuff (hoping that the platform
is smart enough to reclaim all its internal shared mutex resources
(if any) that it might hold... at least when the corresponding shared
memory is reclaimed).

regards,
alexander.

Andrew Schetinin

Aug 20, 2006, 8:52:00 AM

Chris Thomasson wrote:
> You need to be sure that the "operations" effects on shared memory are
> visible, before the updates to the "4-byte value" become visible... You
> would use a memory barrier instruction (e.g., membar #StoreLoad|#StoreStore)
> after the operations. Of course, when a crash occurs you might not be so be
> sure that the membar instruction was even executed...

Agree with you on memory/compiler barriers. As David said, a mutex (and
most other synchronization primitives) enforces a memory barrier,
that's for sure. But it does not necessarily enforce a compiler
optimization barrier (the optimizer may also reorder writes). In most compilers
there are intrinsic functions which enforce compiler and/or memory
barriers. Well, anyway a combination is necessary, to ensure that the
"state consistent" flag is updated last, after everything else is
stored to memory, and neither the CPU nor the optimizer reorders this flag
write. Repeating the pseudo-code provided by one of you earlier:

.............
writing to the shared memory
write or full memory barrier
write the consistency flag
unlock the mutex

Once again, in my case the data structures allow committing the
transaction with a single value, which extremely simplifies my life
:-) if the value is not written, it's like nothing happened.

David Hopwood wrote:
> That isn't sufficient. Since the compiler may reorder the write to
> the 4-byte value before previous writes, you really do need the memory
> barriers (even on architectures with fairly strong memory ordering such
> as x86 and SPARC).

Thanks. After I read more on compiler and memory barriers, I've got a
much better understanding of the picture. I've tried to play with the VC++
compiler and its optimizer. Unfortunately, barrier support is poorly
documented. I found several barrier functions in MSDN, but some of them
are missing, others are only defined for AMD64, and when I finally found
those that work, it turned out they require defining bizarre #pragmas
or asm {} insertions :-) On other platforms (Linux, AIX, Solaris) I
also found the barrier functions but did not try them. HP-UX docs say
volatile is enough for their platform :-)

Alexander Terekhov wrote:


> David Hopwood wrote:
> > Even for POSIX, a platform would be severely broken if a crash could cause
> > the state of a *shared* mutex to become undefined.
> In a sense, the state is indeed undefined. A watchdog process should
> emergently recycle the whole crew and stuff (hoping that the platform
> is smart enough to reclaim all its internal shared mutex resources
> (if any) that it might hold... at last when corresponding shared
> memory is reclaimed).

That's quite unfortunate (difficult to use) in the case when the shared
memory is used from a shared library which is opened by multiple
independent processes, and there is no single master process. If one
of the processes crashes and leaves the mutex locked, it becomes
difficult to decide when to unlock it. One potential solution I
thought of would be to store the process id of the locking process
in the shared memory; on a timeout, another process could
test whether that process is alive.
Once again, comparing Windows mutexes and POSIX semaphore-based
mutexes, I don't see anything bad in the auto-unlock feature (assuming
the consistency of my memory structures can be controlled by some flag,
which is necessary anyway).

BTW, the ACE library implements System V mutexes (semaphores) with this
auto-unlock semantic. They have a very nice implementation there, with a
workaround for the race condition on initialization, and even reference
counting :-)

Could somebody advise me where to look for cross-platform sources for
memory barriers?...

Sincerely,

Andrew
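A minimal cross-platform full-barrier wrapper along the lines Andrew is asking about might look like this; it is a sketch covering only GCC-compatible compilers and MSVC, built from the primitives mentioned elsewhere in this thread, and the macro name is made up:

```c
/* Minimal full-barrier wrapper: compiler barrier + CPU barrier.
 * GCC's __sync_synchronize() emits both; on MSVC, _ReadWriteBarrier()
 * stops compiler reordering and MemoryBarrier() orders the CPU. */
#if defined(_MSC_VER)
# include <windows.h>
# include <intrin.h>
# pragma intrinsic(_ReadWriteBarrier)
# define FULL_BARRIER() do { _ReadWriteBarrier(); MemoryBarrier(); } while (0)
#elif defined(__GNUC__)
# define FULL_BARRIER() __sync_synchronize()
#else
# error "no barrier implementation for this compiler"
#endif
```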

Andrew Schetinin

Aug 20, 2006, 8:54:42 AM
A quick fix for the code example: added a mention of the compiler barrier
:-)

.............
writing to the shared memory

write or full memory barrier + compiler optimization barrier


write the consistency flag
unlock the mutex

Sincerely,

Andrew

David Hopwood

Aug 20, 2006, 11:32:17 AM
Andrew Schetinin wrote:
> David Hopwood wrote:
>
>>[...] Since the compiler may reorder the write to

>>the 4-byte value before previous writes, you really do need the memory
>>barriers (even on architectures with fairly strong memory ordering such
>>as x86 and SPARC).
>
> Thanks. After I read more on compiler and memory barriers, I've got
> much better understanding of the picture. I've tried to play with VC++
> compiler and its optimizer. Unfortunately, barrier support is poorly
> documented. I found several barrier functions in MSDN, but some of them

> are missing, others are only defined for AMD64, and finally I found
> those that work, it's just that they require defining bizarre #pragmas
> or asm {} insertions :-)

Yes, it's a bit of a mess. For Windows, I ended up with:

#include <windows.h>

extern "C" void _ReadWriteBarrier();
#pragma intrinsic(_ReadWriteBarrier)

/* The distinction between acquire and release barriers is for
* documentation. */

#ifdef MemoryBarrier
# define ReleaseBarrier() { MemoryBarrier(); _ReadWriteBarrier(); }
# define AcquireBarrier() { MemoryBarrier(); _ReadWriteBarrier(); }
#else
# define ReleaseBarrier() _ReadWriteBarrier()
# define AcquireBarrier() _ReadWriteBarrier()
#endif

... although I have no non-x86, Windows Server platform on which to test
whether the MemoryBarrier macro actually works. (_ReadWriteBarrier seems
to work, but it only prevents compiler reordering.)

--
David Hopwood <david.nosp...@blueyonder.co.uk>

Chris Thomasson

Aug 20, 2006, 6:02:41 PM
"Andrew Schetinin" <asche...@gmail.com> wrote in message
news:1156078320.6...@74g2000cwt.googlegroups.com...

>
> Chris Thomasson wrote:
>> You need to be sure that the "operations" effects on shared memory are
>> visible, before the updates to the "4-byte value" become visible... You
>> would use a memory barrier instruction (e.g., membar
>> #StoreLoad|#StoreStore)
>> after the operations. Of course, when a crash occurs you might not be so
>> be
>> sure that the membar instruction was even executed...
>
> Agree with you on memory/compiler barriers. As David said, a mutex (and
> most other synchronization primitives) enforces a memory barrier,
> that's for sure. But it does not necessarily enforce a compiler
> optimization barrier (which may also reorder writes). In most compilers
> there are intrinsic functions which enforce compiler and/or memory
> barriers. Well, anyway a combination is necessary, to ensure that the
> "state consistent" flag is updated last, after everything else is
> stored to the memory, and neither CPU nor optimizer reorder this flag
> write. Repeating the pseudo-code provided by one of you earlier:
[...]

http://groups.google.com/group/comp.programming.threads/msg/423df394a0370fa6

http://groups.google.com/group/comp.programming.threads/browse_frm/thread/29ea516c5581240e/423df394a0370fa6?#423df394a0370fa6
(read whole thread, if you haven't already)

Externally assembled functions usually inhibit code motion... However, with
the advent of link-time optimization, this may not always hold true...

Luckily, most compilers have decorations that can prevent this kind of
optimization. The Sun C compiler allows one to decorate their critical function
declarations with a pragma that tells the compiler not to optimize... The MS
compiler has a pragma that allows one to adjust the optimization level... etc.

IMHO, I would advise you to externally assemble and decorate your critical
function declarations with stuff that prevents link-time optimization, for
this specific function.


FWIW, GCC and MS have compiler optimization barrier functionality. I can't
remember the GCC stuff you need to use right now... I would investigate...
decorating a dummy GCC inline asm with __volatile and "memory" should act as
a compiler barrier. Humm...

On MS, calls to the Interlocked API behave as a memory barrier, and the volatile
keyword in the Interlocked API blocks compiler optimization... However, I am
not too sure I totally trust this behavior... I still use external assembly
on MS platforms...

I know that what I do works fine for me, but you may find it difficult or
nearly impossible to create a strictly C-compiler-based solution... You will
probably be forced to drop down into assembly to actually get at the memory
barrier instructions.


I think the support for this stuff is going to get better over time... MS's
new compilers seem to be able to handle a DCL pattern:

http://groups.google.com/group/comp.programming.threads/msg/a369a6e499791e25?hl=en
(look for part on MS DCL, toward middle of msg)

In the near future, it might be a very bad business decision for a compiler
vendor to not support some sort of memory barrier and compiler barrier
functionality throughout their application suites... I hope this is not
wishful thinking!

:O


Chris Thomasson

Aug 20, 2006, 6:07:29 PM
to
"David Hopwood" <david.nosp...@blueyonder.co.uk> wrote in message
news:5g%Fg.130587$F8.6...@fe3.news.blueyonder.co.uk...

IIRC, the MemoryBarrier() macro boils down to a lock xadd on a dummy
location. _ReadWriteBarrier() is an external function, so code motion will
be reduced or halted... Again, IIRC, _ReadWriteBarrier() could also be a
volatile store to a dummy location, which should prevent code motion...

I don't really trust MS compilers "enough" to actually "use" these functions
in my critical production code. However, I do trust my hand crafted
assembly...

;)


Andrew Schetinin

Aug 21, 2006, 2:16:11 AM
to
David Hopwood wrote:
> Yes, it's a bit of a mess. For Windows, I ended up with:
..........

> # define ReleaseBarrier() { MemoryBarrier(); _ReadWriteBarrier(); }
> # define AcquireBarrier() { MemoryBarrier(); _ReadWriteBarrier(); }
...........

> ... although I have no non-x86, Windows Server platform on which to test
> whether the MemoryBarrier macro actually works. (_ReadWriteBarrier seems
> to work, but it only prevents compiler reordering.)

I eventually came up with a similar technique for Windows, with
_ReadWriteBarrier for preventing compiler reordering, and a
construction found somewhere in MSDN for memory barriers:
{ LONG Barrier; __asm { xchg Barrier, eax } };
instead of MemoryBarrier() which is not available in VC++ 2003 (better
to say, in the Platform SDK installed with it).

Sincerely,

Andrew

Andrew Schetinin

Aug 21, 2006, 11:32:34 AM
to
Chris Thomasson wrote:
> I don't really trust MS compilers "enough" to actually "use" these functions
> in my critical production code. However, I do trust my hand crafted
> assembly...

It's only a matter of having time to research what's going on behind the
curtains, IMHO.
For instance, I played with different barrier functions and looked at
disassembly of the following piece of code:

a = 2
<barrier>
a = 3

Looks dumb, but it showed perfectly where the optimizer injected the xchg
and where it reordered (or eliminated) instructions.

The funny thing is that at the beginning I started to play with these
things in a function called from main, and the compiler decided that
the function is not necessary, and optimized it out together with the
_ReadWriteBarrier inside :-) After that I continued in main() and
there it worked (as a compiler barrier, not a memory barrier). Probably
an asm construct would have kept the function alive. In VC++ 2005 (I'm using
2003) they say that barriers propagate down the call chain, which might
be useful when inlining.

Regards,

Andrew

David Hopwood

Aug 21, 2006, 8:41:41 PM
to
Chris Thomasson wrote:
> "David Hopwood" <david.nosp...@blueyonder.co.uk> wrote:
>

I don't trust MS compilers, or most other C compilers, either -- that's why
I normally compile most of my code without optimizations, and just apply
optimization to a file containing functions where performance is significant [*].

_ReadWriteBarrier is a compiler intrinsic. Even if an external function
or a volatile store would happen to work for the current compiler,
_ReadWriteBarrier appears to be the only *documented* way to inhibit
reordering, where there is an obligation for MS not to break it in future
versions.

> However, I do trust my hand crafted assembly...

Only if it includes an explicit memory clobber. But we've had this discussion
before (for gcc):
<http://groups.google.co.uk/group/comp.lang.java.programmer/msg/75a40c3c5f60b126>


[*] It's not that I think compilers shouldn't be doing aggressive optimization;
quite the contrary. But the semantics of C, and the "as long as it usually
works" C culture, leave me with little confidence that all optimizations
will be correct, especially in the case of multithreaded code.

--
David Hopwood <david.nosp...@blueyonder.co.uk>
