
Does anyone think 'volatile' is a platform-independent way to make variable access thread safe?


David Schwartz

Jun 30, 2003, 4:40:40 PM
http://www.cuj.com/documents/s=7998/cujcexp1902alexandr/alexandr.htm

Here's a reputable journal and a reputable person claiming, among other
things:

"Although both C and C++ Standards are conspicuously silent when it comes to
threads, they do make a little concession to multithreading, in the form of
the volatile keyword.


Just like its better-known counterpart const, volatile is a type modifier.
It's intended to be used in conjunction with variables that are accessed and
modified in different threads. Basically, without volatile, either writing
multithreaded programs becomes impossible, or the compiler wastes vast
optimization opportunities."

*SIGH* Anyone got a cluebat?

DS

Alexander Terekhov

Jun 30, 2003, 4:57:37 PM

David Schwartz wrote:
>
> http://www.cuj.com/documents/s=7998/cujcexp1902alexandr/alexandr.htm
[...]

> *SIGH* Anyone got a cluebat?

http://groups.google.com/groups?threadm=3EE84807.DD00F4D0%40web.de
http://groups.google.com/groups?threadm=3EE861E5.13B60F31%40web.de
(Subject: Re: volatile keyword usage philosophy (long!))

regards,
alexander.

Michael Furman

Jun 30, 2003, 5:20:27 PM

"David Schwartz" <dav...@webmaster.com> wrote in message
news:bdq788$icm$1...@nntp.webmaster.com...

David,
that is what I spent more than enough time explaining to you, but you refuse
(or just can't?) to listen.

Michael Furman


SenderX

Jun 30, 2003, 6:51:22 PM
No. It's only one half of the equation...

Volatile should affect the way the compiler treats the vars.


You will need to use other mechanisms, sometimes combined with volatile, to
provide thread and/or memory visibility safety:

Critical Section
Interlocked APIs
Mutex
etc.

So by using volatile AND using thread-safe means to access the variable, you
should be fine:

LONG volatile lVar = 0;

InterlockedIncrement( &lVar );

Will work for Windows.

Compilers written for non-TSO processors might inject fences on volatile var
access. It seems like they would have to. Especially MS!

--
The designer of the SMP and HyperThread friendly, AppCore library.

http://AppCore.home.comcast.net


David Schwartz

Jun 30, 2003, 7:12:05 PM

"Michael Furman" <Michae...@Yahoo.com> wrote in message
news:bdq9is$10420c$1...@ID-122417.news.dfncis.de...

> David,
> that is what I spend more then enough time to explain you, but you refuse
> (or just can't?) listen.

I'm listening, it's just totally wrong. Stop me when you disagree:

1) The 'volatile' keyword is a C and C++ keyword.

2) Neither of those standards says anything about threading, leaving it
up to the implementation.

3) So if volatile were to be useful for concurrent accesses from
different threads, the implementation documentation would be the only place
that could say so.

4) I know of no implementation documentation that says so. Can you cite
any?

DS


Michael Furman

Jun 30, 2003, 7:47:05 PM

"David Schwartz" <dav...@webmaster.com> wrote in message
news:bdqg45$nnu$1...@nntp.webmaster.com...

>
> "Michael Furman" <Michae...@Yahoo.com> wrote in message
> news:bdq9is$10420c$1...@ID-122417.news.dfncis.de...
>
> > David,
> > that is what I spent more than enough time explaining to you, but you
> > refuse (or just can't?) to listen.
>
> I'm listening, it's just totally wrong. Stop me when you disagree:

I was trying to stop you many times over the last couple of days :-)
OK, one more time.

>
> 1) The 'volatile' keyword is a C and C++ keyword.

Correct.

>
> 2) Neither of those standards says anything about threading, leaving
> it up to the implementation.

The first half, "Neither of those standards says anything about threading,"
is correct. The second part, "leaving it up to the implementation," is wrong:
the C/C++ standards do not leave threads up to anything, and a standard C/C++
implementation does not have threads.

> 3) So if volatile were to be useful for concurrent accesses from
> different threads, the implementation documentation would be the only
> place that could say so.

Not just incorrect but totally senseless. Documentation of implementation
of what?

> 4) I know of no implementation documentation that says so. Can you
> cite any?

Documentation of what? And what do you want to see: the exact word "useful"?


David Schwartz

Jul 1, 2003, 12:20:35 AM

"SenderX" <x...@xxx.xxx> wrote in message
news:u83Ma.78$a45...@rwcrnsc52.ops.asp.att.net...

> No. Its only one half of the equation...

No, it's not.

> Volatile should affect the way the compiler treats the vars.

Yes, but not in a way you can rely on.

> You will need to use other mechanisms, sometimes combined with volatile,
> to provide thread and/or memory visibility safety:

You never need to combine them with volatile.

> So by using volatile AND using thread-safe means to access the variable,
> you should be fine:

Same without volatile.

> LONG volatile lVar = 0;
>
> InterlockedIncrement( &lVar );
>
> Will work for Windows.

Yes, and it will work equally well without 'volatile'.

> Compilers written for non-TSO processors might inject fences on volatile
> var access. It seems like they would have to. Especially MS!

They might, but they might not. So you can't rely on it. So you're back
to mutexes or inherently thread-safe operations like the Interlocked ones.
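
    For example, both of these are fine on Win32 with no 'volatile' in sight
(a minimal sketch; the function names are mine, not part of any API):

#include <windows.h>

static LONG g_counter = 0;              // a plain LONG; no 'volatile' anywhere
static CRITICAL_SECTION g_lock;         // protects g_counter for the locked variant

void InitCounter()                      // call once at startup
{
    InitializeCriticalSection(&g_lock);
}

// Inherently thread-safe operation: the API supplies the atomicity.
void BumpInterlocked()
{
    InterlockedIncrement(&g_counter);
}

// Ordinary variable protected by a lock: the lock supplies the mutual
// exclusion and the visibility; again, no 'volatile' required.
void BumpLocked()
{
    EnterCriticalSection(&g_lock);
    ++g_counter;
    LeaveCriticalSection(&g_lock);
}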

DS


David Schwartz

Jul 1, 2003, 12:22:46 AM

"Michael Furman" <Michae...@Yahoo.com> wrote in message
news:bdqi5q$uq27l$1...@ID-122417.news.dfncis.de...

> > 3) So if volatile were to be useful for concurrent accesses from
> > different threads, the implementation documentation would be the only
> > place
> > that could say so.

> Not just incorrect but totally senseless. Documentation of implementation
> of what?

Either the compiler or the threading API.

> > 4) I know of no implementation documentation that says so. Can you
> > cite
> > any?

> documentation of what? And what do you want to see: exact word "useful"?

Either a compiler, a threading library, or anything else that would be
in a position to do it. I don't care what the exact wording is, anything
sufficient to create a way to use 'volatile' to protect variables accessed
concurrently by multiple threads.

Note that it *MUST* be implementation-specific documentation. Anything
that claims that the C or C++ standards say 'volatile' will work for
concurrent access among threads would just be an incorrect interpretation of
the standards. They say nothing of the sort.

DS


SenderX

Jul 1, 2003, 12:37:46 AM
> > Volatile should affect the way the compiler treats the vars.
>
> Yes, but not in a way you can rely on.

So there is no good time to use volatile, anywhere?

> You never need to combine them with volatile.

If you don't want the compiler to mess with them (optimize), then volatile
will help.

> Yes, and it will work equally well without 'volatile'.

It should.

=)

> you're back
> to mutexes or inherently thread-safe operations like the Interlocked ones.

Absolutely.


Also, remember the interlocked API's take the dest address as volatile...


LONG InterlockedIncrement(
LPLONG volatile lpAddend
);


;)

Ole Reinartz

Jul 1, 2003, 7:52:29 AM
SenderX wrote:

>>> Volatile should affect the way the compiler treats the vars.
>>
>> Yes, but not in a way you can rely on.
>
> So there is no good time to use volatile, anywhere?

When accessing HW registers ( i.e. memory locations where changes are
not caused by threads, so that no thread interlocking mechanism would help)?

>> You never need to combine them with volatile.
>
> If you don't want the compiler to mess with them (optimize), then volatile
> will help.

Yes, but why would you want that when they are changed only by threads?

Ole

Ulrich Eckhardt

Jul 1, 2003, 8:26:56 AM
[snip]

>
> *SIGH* Anyone got a cluebat?

Apart from the thing you pointed out (i.e. the wrong interpretation of the
meaning of volatile), would you call this article sound ?

Ulrich Eckhardt

--
Questions ?
see C++-FAQ Lite: http://parashift.com/c++-faq-lite/ first !

David Butenhof

Jul 1, 2003, 10:23:15 AM
Ulrich Eckhardt wrote:

> David Schwartz wrote:
>> http://www.cuj.com/documents/s=7998/cujcexp1902alexandr/alexandr.htm
>>
> [snip]
>>
>> *SIGH* Anyone got a cluebat?
>
> Apart from the thing you pointed out (i.e. the wrong interpretation of the
> meaning of volatile), would you call this article sound ?

The main intent of this article is to "hack" the C++ typing system. He's not
presuming any semantics for "volatile" -- he's using it purely as a
syntactic tag that's universally supported by C++ and not commonly used by
applications. The idea is that one declares variables that are shared in a
way that the compiler will complain if you reference them in a context
where you don't hold a lock. That is, you lock in a way that removes the
"volatile tag" from the data and allows you to reference it normally. While
this idea is "sound" (in a bizarre sort of way) it's also a misuse of the
compiler typing mechanism.
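
Roughly, the pattern looks like this -- my own reconstruction rather than the
article's code, with illustrative names:

#include <windows.h>

class Mutex {                                  // trivial CRITICAL_SECTION wrapper
public:
    Mutex()       { InitializeCriticalSection(&cs_); }
    ~Mutex()      { DeleteCriticalSection(&cs_); }
    void lock()   { EnterCriticalSection(&cs_); }
    void unlock() { LeaveCriticalSection(&cs_); }
private:
    CRITICAL_SECTION cs_;
};

template <typename T>
class LockingPtr {                             // acquire the lock and strip the tag together
public:
    LockingPtr(volatile T& obj, Mutex& m)
        : p_(const_cast<T*>(&obj)), m_(&m) { m_->lock(); }
    ~LockingPtr() { m_->unlock(); }
    T* operator->() const { return p_; }
    T& operator*()  const { return *p_; }
private:
    T* p_;
    Mutex* m_;
};

struct Counter {
    void bump() { ++n; }                       // deliberately not a volatile member function
    int n;
};

volatile Counter shared;                       // 'volatile' used purely as a "not locked" tag
Mutex sharedLock;

void worker()
{
    // shared.bump();                          // compile error: non-volatile member function
                                               // called on a volatile object -- the "complaint"
    LockingPtr<Counter> locked(shared, sharedLock);
    locked->bump();                            // fine: lock held, tag removed
}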

This sort of thing isn't a bad idea, and could be used as an argument for an
extensible "attributes" mechanism in C++, as in Lisp or some other
languages.

It also, however, suggests that having "volatile" for unlocked references
helps the program -- and in general, for all the reasons David Schwartz has
been trying to explain, that's silly, pointless, and dangerously wrong
unless you're targeting a SPECIFIC implementation that you happen to KNOW
provides a NONSTANDARD meaning to "volatile" which is relevant to threads.

--
/--------------------[ David.B...@hp.com ]--------------------\
| Hewlett-Packard Company Tru64 UNIX & VMS Thread Architect |
| My book: http://www.awl.com/cseng/titles/0-201-63392-2/ |
\----[ http://homepage.mac.com/dbutenhof/Threads/Threads.html ]---/

Pete Becker

Jul 1, 2003, 10:27:45 AM
SenderX wrote:
>
> > > Volatile should affect the way the compiler treats the vars.
> >
> > Yes, but not in a way you can rely on.
>
> So there is no good time to use volatile, anywhere?
>

Not portably. Neither the C standard nor the C++ standard imposes any
observable requirements on code that uses volatile. It's strictly a hook
for implementation-specific changes to code generation.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

Alexander Terekhov

Jul 1, 2003, 10:45:32 AM

Pete Becker wrote:
>
> SenderX wrote:
> >
> > > > Volatile should affect the way the compiler treats the vars.
> > >
> > > Yes, but not in a way you can rely on.
> >
> > So there is no good time to use volatile, anywhere?
> >
>
> Not portably. ...

Except that both C and C++ standards require volatiles for the
static sig_atomic_t stuff (in order to handle "single-threaded
asynchrony"). You would also need volatiles in C for any auto
stuff that can be modified between the "two returns" from the
setjmp() creature (note its use is severely restricted in C++).
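
A minimal illustration of those two blessed uses (my own sketch; the names
are illustrative only):

#include <csignal>
#include <csetjmp>

static volatile std::sig_atomic_t got_signal = 0;  // the flag an async handler may set

extern "C" void on_signal(int) { got_signal = 1; } // nothing else is portable in a handler

static std::jmp_buf env;

int try_work()
{
    volatile int attempts = 0;   // auto object modified between the two "returns" of
                                 // setjmp(); without volatile its value would be
                                 // indeterminate after a longjmp() back here
    if (setjmp(env) != 0 && attempts > 3)
        return -1;               // give up after repeated longjmp()s
    ++attempts;
    // ... work that may call std::longjmp(env, 1) on failure
    //     and that polls got_signal ...
    return 0;
}

int main()
{
    std::signal(SIGINT, on_signal);
    return try_work();
}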

regards,
alexander.

Pete Becker

Jul 1, 2003, 11:18:53 AM

You're right, I forgot about those. After all, this is
comp.programming.threads. <g>

Alexander Terekhov

Jul 1, 2003, 11:24:20 AM

David Butenhof wrote:
[...]

> applications. The idea is that one declares variables that are shared in a
> way that the compiler will complain if you reference them in a context
> where you don't hold a lock. That is, you lock in a way that removes the
> "volatile tag" from the data and allows you to reference it normally. While
> this idea is "sound" (in a bizarre sort of way) it's also a misuse of the
> compiler typing mechanism.

C99:

"If an attempt is made to refer to an object defined with a volatile-
qualified type through use of an lvalue with non-volatile-qualified
type, the behavior is undefined."

C++:

"If an attempt is made to refer to an object defined with a volatile-
qualified type through the use of an lvalue with a non-volatile-
qualified type, the program behaviour is undefined."

regards,
alexander.

--
http://lists.boost.org/MailArchives/boost/msg49026.php
(Subject: [boost] Re: Boost::thread feature request: thread priority)

David Butenhof

Jul 1, 2003, 3:06:12 PM
Alexander Terekhov wrote:

> David Butenhof wrote:
> [...]
>> applications. The idea is that one declares variables that are shared in
>> a way that the compiler will complain if you reference them in a context
>> where you don't hold a lock. That is, you lock in a way that removes the
>> "volatile tag" from the data and allows you to reference it normally.
>> While this idea is "sound" (in a bizarre sort of way) it's also a misuse
>> of the compiler typing mechanism.
>
> C99:
>
> "If an attempt is made to refer to an object defined with a volatile-
> qualified type through use of an lvalue with non-volatile-qualified
> type, the behavior is undefined."
>
> C++:
>
> "If an attempt is made to refer to an object defined with a volatile-
> qualified type through the use of an lvalue with a non-volatile-
> qualified type, the program behaviour is undefined."

Yeah, exactly. Or at least, this is one of the most obvious tips of the
iceberg range underlying my statement that this is a "misuse" of the typing
mechanism. ;-)

David Butenhof

Jul 1, 2003, 3:08:24 PM
Alexander Terekhov wrote:

Yes, yes, of course. But that's all beside the point, because those are
precisely the defined standard domain of "volatile", and have nothing to do
with threads. What David meant, clearly, in introducing this topic was the
use of "volatile" for THREADED programming.

David Schwartz

Jul 1, 2003, 4:05:17 PM

"SenderX" <x...@xxx.xxx> wrote in message
news:ue8Ma.4625$fG.3540@sccrnsc01...

> > > Volatile should affect the way the compiler treats the vars.

> > Yes, but not in a way you can rely on.

> So there is no good time to use volatile, anywhere?

I didn't say that. You can see another post of mine where I demonstrated
a way to use 'volatile' to provide a form of synchronization between two
threads without any locking in cases where the shared variable doesn't
change. It's also sometimes useful as a platform-specific optimization when
dealing with shutdown flags. But that's IT.
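
    For the shutdown-flag case I mean something like this -- and note that
this is a platform-specific idiom, not something the C or C++ standards
promise (sketch only; the names are mine):

#include <windows.h>

static volatile LONG g_shutdown = 0;    // written once by one thread, only polled by others;
                                        // 'volatile' merely keeps the compiler from hoisting
                                        // the load out of the loop on this platform

DWORD WINAPI Worker(LPVOID)
{
    while (!g_shutdown)
    {
        // ... do a bounded chunk of work ...
        Sleep(10);
    }
    return 0;
}

void RequestShutdown()                  // called from another thread, at most once
{
    g_shutdown = 1;
}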

> > You never need to combine them with volatile.

> If you don't want the compiler to mess with them (optimize), then volatile
> will help.

How can a call to InterlockedIncrement be optimized?

> > Yes, and it will work equally well without 'volatile'.

> It should.

> =)

Exactly. Any code that is guaranteed to work with volatile will work
without it, assuming you are using volatile to synchronize access to
variables across threads. I've explained why -- 'volatile' is part of the
C/C++ specification and those specifications don't talk about threads, they
leave that up to the implementation. Neither the WIN32 nor the pthreads
implementations add any such semantics for volatile, so it doesn't have them.

> > you're back
> > to mutexes or inherently thread-safe operations like the Interlocked ones.

> Absolutely.

> Also, remember the interlocked API's take the dest address as volatile...

Sure, any function can take parameters any way it wants to.

> LONG InterlockedIncrement(
> LPLONG volatile lpAddend
> );

So what? You don't think it's the 'volatile' keyword that provides the
synchronization between threads, do you? I know you're smarter than that.

DS


David Schwartz

Jul 1, 2003, 4:07:09 PM

"Ulrich Eckhardt" <doom...@knuut.de> wrote in message
news:bdrukl$108cu2$1...@ID-178288.news.dfncis.de...

> David Schwartz wrote:
> > http://www.cuj.com/documents/s=7998/cujcexp1902alexandr/alexandr.htm
> >
> [snip]
> >
> > *SIGH* Anyone got a cluebat?

> Apart from the thing you pointed out (i.e. the wrong interpretation of the
> meaning of volatile), would you call this article sound ?

I think propagating such dangerous misinformation about 'volatile'
precludes calling the article sound "apart from that". If you have a doctor
who sets his patients on fire, you really can't talk about his medical
judgment "apart from that". If you had seen misuse of 'volatile' cause one
tenth of the pain I've seen it cause, you'd probably feel the same way.

DS


SenderX

Jul 1, 2003, 8:08:09 PM
> So what? You don't think it's the 'volatile' keyword that provides the
> synchronization between threads, do you? I know you're smarter than that.

Volatile is important only if you want the compiler to leave the vars out of
code optimizations.

I was just thinking, MS must have the dest address of the Interlocked API's
as volatile for a reason?


> How can a call to InterlockedIncrement be optimized?

Code it yourself:

// Atomic add
__declspec( naked ) __inline long __fastcall

IA32_AtomicAdd
(
long volatile *plDest,
long lAddend = 1
)

{
__asm
{
xadd dword ptr [ecx], edx
mov eax, edx
ret
}
}

David Schwartz

Jul 1, 2003, 10:09:50 PM

"SenderX" <x...@xxx.xxx> wrote in message
news:JnpMa.10569$926.216@sccrnsc03...

> I was just thinking, MS must have the dest address of the Interlocked
> API's as volatile for a reason?

I think they just use it as an arbitrary tag. Basically, the idea is
that it provides a way to distinguish variables that are intended to be used
in a thread-safe manner with those that aren't. (As discussed in another
thread, this is an abuse of the C/C++ type-safety mechanism.) Another
possibility is that they did it to make pure read accesses 'just work' (at
least if they're properly aligned and on x86 processors).

It's really hard to say.

> > How can a call to InterlockedIncrement be optimized?

> Code it yourself:
>
> // Atomic add
> __declspec( naked ) __inline long __fastcall
>
> IA32_AtomicAdd
> (
> long volatile *plDest,
> long lAddend = 1
> )
>
> {
> __asm
> {
> xadd dword ptr [ecx], edx
> mov eax, edx
> ret
> }
> }

This isn't a call to InterlockedIncrement. What I'm trying to say is
that they're not using 'volatile' to avoid any compiler optimizations since
there's no possible way the compiler could optimize a call to
'InterlockedIncrement' other than if it knew something special about
InterlockedIncrement, in which case, how would the 'volatile' help?

DS


David Schwartz

Jul 1, 2003, 10:21:29 PM

"David Schwartz" <dav...@webmaster.com> wrote in message
news:bdtetf$em2$1...@nntp.webmaster.com...

> > // Atomic add
> > __declspec( naked ) __inline long __fastcall
> >
> > IA32_AtomicAdd
> > (
> > long volatile *plDest,
> > long lAddend = 1
> > )
> >
> > {
> > __asm
> > {
> > xadd dword ptr [ecx], edx
> > mov eax, edx
> > ret
> > }
> > }

One side note: The use of '__declspec(naked)' and '__fastcall' has two
disadvantages that might outweigh the benefits.

First, I've never seen VC++ actually inline a naked/fastcall function,
while it does inline normal functions quite often (you can even force it
to). So this might be a net performance *loss*. (I wonder what would happen
if you tried to force it to inline it?)

Second, Microsoft has made it clear that the fastcall calling convention
is not stable across compiler versions, so you have to be prepared for a new
compiler version to break this code. It should be wrapped in preprocessor
logic to at least issue a warning if the compiler version is not one on
which this is known to work.

DS


SenderX

Jul 1, 2003, 11:44:18 PM
> This isn't a call to InterlockedIncrement.

On an NT 4.0 Server for IA-32, InterlockedIncrement uses xadd.

SenderX

Jul 1, 2003, 11:46:42 PM
> Second, Microsoft has made it clear that the fastcall calling
> convention is not stable across compiler versions, so you have to be
> prepared for a new compiler version to break this code.

Yeah, I just typed it in off the top of my head.

;)

David Schwartz

Jul 2, 2003, 3:28:19 AM

"SenderX" <x...@xxx.xxx> wrote in message
news:mysMa.80701$R73.9547@sccrnsc04...

> > This isn't a call to InterlockedIncrement.

> On an NT 4.0 Server for IA-32, InterlockedIncrement uses xadd.

Somehow I guess I didn't make myself clear. My point is that there's no
way the compiler could optimize a call to InterlockedIncrement, so the only
thing the volatile could do, avoid compiler optimizations, can't apply.
There's no difference between how you push a regular variable's address onto
the stack and how you push a volatile variable's address on the stack.

On another note, newer versions of VC++ have _InterlockedIncrement,
which can be made to fully inline with an appropriate #pragma. I haven't
looked at the quality of the assembly code generated.
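
    Something along these lines, if memory serves -- treat it as a sketch,
since the exact intrinsic and #pragma support depends on the compiler
version:

// No windows.h needed for the intrinsic itself.
extern "C" long _InterlockedIncrement(long volatile *);
#pragma intrinsic(_InterlockedIncrement)

static long g_count = 0;

void Bump()
{
    _InterlockedIncrement(&g_count);    // expands inline to a locked increment,
                                        // no call into a DLL
}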

DS


Ziv Caspi

Jul 3, 2003, 5:35:01 PM
On Wed, 2 Jul 2003 00:28:19 -0700, "David Schwartz"
<dav...@webmaster.com> wrote:

>My point is that there's no
>way the compiler could optimize a call to InterlockedIncrement,

Why not?

>so the only
>thing the volatile could do, avoid compiler optimizations, can't apply.

Actually, that's exactly why it's there.

Ziv

David Schwartz

Jul 3, 2003, 6:05:39 PM

"Ziv Caspi" <zi...@netvision.net.il> wrote in message
news:3f034e85.177551805@newsvr...

> On Wed, 2 Jul 2003 00:28:19 -0700, "David Schwartz"
> <dav...@webmaster.com> wrote:

> >My point is that there's no
> >way the compiler could optimize a call to InterlockedIncrement,

> Why not?

Because the function takes a pointer. There's no conceivable difference
between how the compiler would get the address of a 'volatile' variable and
how the compiler would get the address of one that wasn't volatile.

> >so the only
> >thing the volatile could do, avoid compiler optimizations, can't apply.

> Actually, that's exactly why it's there.

What type of compiler optimization could there be? How could it handle a
pointer to volatile data different from a pointer to non-volatile data? The
pointer value itself is constant and never changes.

DS


Ziv Caspi

Jul 4, 2003, 5:04:18 AM
On Thu, 3 Jul 2003 15:05:39 -0700, "David Schwartz"
<dav...@webmaster.com> wrote:

>
>"Ziv Caspi" <zi...@netvision.net.il> wrote in message
>news:3f034e85.177551805@newsvr...
>
>> On Wed, 2 Jul 2003 00:28:19 -0700, "David Schwartz"
>> <dav...@webmaster.com> wrote:
>
>> >My point is that there's no
>> >way the compiler could optimize a call to InterlockedIncrement,
>
>> Why not?
>
> Because the function takes a pointer. There's no conceivable difference
>between how the compiler would get the address of a 'volatile' variable and
>how the compiler would get the address of one that wasn't volatile.

Wrong. Modifying a non-volatile object is not an observable operation,
so the optimizer might move/modify/eliminate it, as long as it thinks
the program semantics are not changed. The function's being in another
DLL does not change this fact (assume, for the moment, that you have
the sources for this function and are compiling it).

>
>> >so the only
>> >thing the volatile could do, avoid compiler optimizations, can't apply.
>
>> Actually, that's exactly why it's there.
>
> What type of compiler optimization could there be? How could it handle a
>pointer to volatile data different from a pointer to non-volatile data? The
>pointer value itself is constant and never changes.

It changes the semantics of what it means to read/write to the
variable -- it is an observable operation.

Ziv

David Schwartz

Jul 4, 2003, 5:30:14 AM

"Ziv Caspi" <zi...@netvision.net.il> wrote in message
news:3f05422b.305462221@newsvr...

> On Thu, 3 Jul 2003 15:05:39 -0700, "David Schwartz"
> <dav...@webmaster.com> wrote:

> >"Ziv Caspi" <zi...@netvision.net.il> wrote in message
> >news:3f034e85.177551805@newsvr...

> >> On Wed, 2 Jul 2003 00:28:19 -0700, "David Schwartz"
> >> <dav...@webmaster.com> wrote:

> >> >My point is that there's no
> >> >way the compiler could optimize a call to InterlockedIncrement,
> >
> >> Why not?

> > Because the function takes a pointer. There's no conceivable
> >difference between how the compiler would get the address of a 'volatile'
> >variable and how the compiler would get the address of one that wasn't
> >volatile.

> Wrong. Modifying a non-volatile object is not an observable operation,
> so the optimizer might move/modify/eliminate it, as long as it thinks
> the program semantics are not changed. The function's being in another
> DLL does not change this fact (assume, for the moment, that you have
> the sources for this function and are compiling it).

But the call to InterlockedIncrement doesn't modify anything.

> >> >so the only
> >> >thing the volatile could do, avoid compiler optimizations, can't apply.
> >
> >> Actually, that's exactly why it's there.
> >
> > What type of compiler optimization could there be? How could it
> >handle a pointer to volatile data different from a pointer to non-volatile
> >data? The pointer value itself is constant and never changes.

> It changes the semantics of what it means to read/write to the
> variable -- it is an observable operation.

Except a call to InterlockedIncrement doesn't read/write the variable.
It just pushes it onto the stack and invokes the InterlockedIncrement
function.

Perhaps we're talking across each other. The question is why
InterlockedIncrement is declared like this:

LONG InterlockedIncrement(LONG volatile *lpAddend);

Rather than say:

LONG InterlockedIncrement(LONG *lpAddend);

Now, the code for InterlockedIncrement is already generated and already
exists in a DLL somewhere, it was written in assembly language by Microsoft.
We're talking about how you invoke it. To invoke it, the compiler pushes the
address of lpAddend on the stack and jumps into the DLL. So why is the
'volatile' there?

That's the question. Your answer, that it disables some compiler
optimization, doesn't make sense. There is no compiler optimization to
disable, the compiler will push the address of lpAddend on the stack and
jump to InterlockedIncrement the same whether the 'volatile' is there or
not.
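
    Concretely -- and this is just a sketch with made-up stand-in functions,
not the real declarations:

// Two declarations that differ only in the qualifier on the pointee:
long Increment  (long volatile *p) { return ++*p; }   // like InterlockedIncrement's prototype
long IncrementNV(long          *p) { return ++*p; }   // hypothetical variant without 'volatile'

long counter = 0;                   // the caller's variable is not volatile

void demo()
{
    Increment(&counter);            // fine: long* converts implicitly to long volatile*
    IncrementNV(&counter);          // the code at the call site is the same either way:
                                    // pass the address, call the function
}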

DS


Alexander Terekhov

Jul 4, 2003, 5:30:56 AM

Ziv Caspi wrote: ...

Ziv, "volatile" stuff IS brain-damaged. Really. kinda-<copy&paste>

Originally, C/C++ volatiles were indeed designed having memory mapped
I/O ("registers"/"ports") in mind. At that time hardware was rather
simplistic and didn't do any reordering in either "ordinal" or "I/O"
space. Modern hardware COULD reorder memory accesses in ALL spaces.
So, you basically need almost the same reordering constraints (see the
links below) for portable device driver programming as for threading.
The situation is actually kinda more complicated because for device
drivers you'd need to "control" BOTH spaces -- e.g. if you publish a
bunch of structure addresses and those structure meant to be
read/write by a device, you'll need to ensure "cross-space" ordering.

Unfortunately, the upcoming <iohw.h> and <hardware> still lack
facilities (ala "msync" stuff -- pls see below) to do that... unless
they really want to impose "total ordering" if you use any read/write
IO calls.

Later, volatiles were kinda {re-}used for setjmp/longjmp stuff in
order to spill the locals to memory (and to reload them after the
jump). Finally, volatiles are also needed for static sig_atomic_t's.
Both these uses of volatiles are purely "single-threaded". Ideally,
we should get rid completely of setjmp/longjmp (ISO C should adopt
C++ exceptions... but after they will be repaired, of course) and
also asynchronous signals (threads shall replace them). Given the
upcoming <iohw.h> and <hardware>, C/C++ volatiles could then be
deprecated as well. Now, back to threading and C++... here's some
C++ code for "curious" folks who can "speak C++":

http://www.terekhov.de/pthread_refcount_t/experimental/refcount.cpp

Some wording about msync "semantics" can be found here:

http://www.google.com/groups?selm=3EE0CA46.593F938B%40web.de
(Subject: Re: Asynchronous queue without synchronization primitives)

Reference counting operation are described here:

http://www.terekhov.de/pthread_refcount_t/draft-edits.txt

Comments are quite welcome. TIA.

regards,
alexander.

Ziv Caspi

Jul 5, 2003, 6:09:23 AM
On Fri, 4 Jul 2003 02:30:14 -0700, "David Schwartz"
<dav...@webmaster.com> wrote:

>> It changes the semantics of what it means to read/write to the
>> variable -- it is an observable operation.
>
> Except a call to InterlockedIncrement doesn't read/write the variable.
>It just pushes it onto the stack and invokes the InterlockedIncrement
>function.

And when that function executes, it modifies a volatile object.

> Perhaps we're talking across each other. The question is why
>InterlockedIncrement is declared like this:
>
>LONG InterlockedIncrement(LONG volatile *lpAddend);
>
> Rather than say:
>
>LONG InterlockedIncrement(LONG *lpAddend);
>
> Now, the code for InterlockedIncrement is already generated and already
>exists in a DLL somewhere, it was written in assembly language by Microsoft.

This doesn't matter.

>We're talking about how you invoke it.

No we're not. We're talking about what semantic knowledge the
compiler has of InterlockedIncrement. Unlike POSIX, synchronization
functions in Win32 do not require the compiler to have special
knowledge of them. So Win32 uses the only tool C/C++ gives it -- the
volatile keyword. A conforming C/C++ compiler can generate code that
reorders the assignments in the A, but not in B:

void f( long* p )          { (*p)++; }   // modifies *p through a non-volatile lvalue
void g( volatile long* p ) { (*p)++; }   // modifies *p through a volatile lvalue

int main()
{
    long a;
    long b;

    // A
    a = 1;
    f( &b );

    // B
    a = 1;
    g( &b );
}

(Note that this has nothing to do with multiple threading.)

> To invoke it, the compiler pushes the
>address of lpAddend on the stack and jumps into the DLL. So why is the
>'volatile' there?

To prevent compiler reordering.

> That's the question. Your answer, that it disables some compiler
>optimization, doesn't make sense.

I respectfully disagree.

Ziv

Ziv Caspi

Jul 5, 2003, 6:09:25 AM
On Fri, 04 Jul 2003 11:30:56 +0200, Alexander Terekhov
<tere...@web.de> wrote:

>
>Ziv Caspi wrote: ...
>
>Ziv, "volatile" stuff IS brain-damaged. Really. kinda-<copy&paste>

I appreciate the history lesson. I'm old enough (but not too old!) to
actually remember these days...

But this has nothing to do with the argument I've made, unfortunately.

Ziv

Alexander Terekhov

Jul 5, 2003, 8:14:46 AM

C/C++ volatiles don't prevent compiler reordering (hardware aside for
a moment) of either READS or WRITES to *non-volatile* data. "The
observable behavior of the abstract machine is its sequence of reads
and writes to volatile data and calls to library I/O functions". Your
argument is totally bogus, I'm afraid. Unless, of course, you always
declare each and every variable volatile.

regards,
alexander.

David Schwartz

Jul 5, 2003, 4:30:50 PM

"Ziv Caspi" <zi...@netvision.net.il> wrote in message
news:3f069f22.394797588@newsvr...

> > Except a call to InterlockedIncrement doesn't read/write the
> >variable. It just pushes it onto the stack and invokes the
> >InterlockedIncrement function.

> And when that function executes, it modifies a volatile object.

The function could modify a volatile object whether or not it takes a
volatile parameter.

> >We're talking about how you invoke it.

> No we're not. We're talking about what semantic knowledge the
> compiler has of InterlockedIncrement. Unlike POSIX, synchronization
> functions in Win32 do not require the compiler to have special
> knowledge of them. So Win32 uses the only tool C/C++ gives it -- the
> volatile keyword. A conforming C/C++ compiler can generate code that
> reorders the assignments in the A, but not in B:
>
> void f( long* p ) { (*p)++; }
> void g( volatile long* p ) { (*p)++; }
>
> int main()
> {
> long a;
> long b;
>
> // A
> a = 1;
> f( &b );
>
> // B
> a = 1;
> g( &b );
> }
>
> (Note that this has nothing to do with multiple threading.)

The compiler can only invoke such re-ordering if it has no visible
consequences. If 'f(&b)' could possibly get at the value of 'a', then this
optimization will fail no matter what. If 'f(&b)' cannot possibly get at the
value of 'a', then this optimization is safe no matter what.

So if this requires 'volatile' it is only because the compiler is making an
optimization that is not safe in any case.

> > To invoke it, the compiler pushes the
> >address of lpAddend on the stack and jumps into the DLL. So why is the
> >'volatile' there?
>
> To prevent compiler reordering.
>
> > That's the question. Your answer, that it disables some compiler
> >optimization, doesn't make sense.
>
> I respectfully disagree.

I will admit it's theoretically possible that the compiler might somehow
perform an optimization that 'volatile' disables and that this optimization
might cause the compiler to make code incorrect. However, in practice, this
never happens because any mechanism another thread could use to modify the
variable could also be used by InterlockedIncrement itself. Microsoft has
made sure the compiler can't see into InterlockedIncrement.

If you built a compiler that did look inside the function's
implementation, even after it's compiled into normal object code, you'd
either have to specifically teach it about threads or it would have to
recognize read-modify-write operations, locked bus cycles, or whatever the
equivalent is on the platform we're talking about.

Can you show an actual example using InterlockedIncrement where the
compiler will make an optimization without 'volatile' present that could
cause a problem? Remember, the compiler has no idea what
InterlockedIncrement actually does.

DS


Michael Furman

Jul 5, 2003, 4:39:34 PM

"Alexander Terekhov" <tere...@web.de> wrote in message
news:3F06C136...@web.de...

Which of Ziv's argument is "totally bogus" in your opinion? What
you quoted is:
"Ziv Caspi wrote: ..."

Michael Furman


>
> regards,
> alexander.


Ziv Caspi

Jul 5, 2003, 5:06:13 PM
On Sat, 5 Jul 2003 13:30:50 -0700, "David Schwartz"
<dav...@webmaster.com> wrote:

>
>"Ziv Caspi" <zi...@netvision.net.il> wrote in message
>news:3f069f22.394797588@newsvr...
>
>> > Except a call to InterlockedIncrement doesn't read/write the
>variable.
>> >It just pushes it onto the stack and invokes the InterlockedIncrement
>> >function.
>
>> And when that function executes, it modifies a volatile object.
>
> The function could modify a volatile object whether or not it takes a
>volatile parameter.

But it doesn't. Apparently you're thinking along the lines of "I
didn't compile this function, so the compiler can't assume anything
about it". This is false reasoning.


>> >We're talking about how you invoke it.
>
>> No we're not. We're talking about what semantic knowledge the
>> compiler has of InterlockedIncrement. Unlike POSIX, synchronization
>> functions in Win32 do not require the compiler to have special
>> knowledge of them. So Win32 uses the only tool C/C++ gives it -- the
>> volatile keyword. A conforming C/C++ compiler can generate code that
>> reorders the assignments in the A, but not in B:
>>
>> void f( long* p ) { (*p)++; }
>> void g( volatile long* p ) { (*p)++; }
>>
>> int main()
>> {
>> long a;
>> long b;
>>
>> // A
>> a = 1;
>> f( &b );
>>
>> // B
>> a = 1;
>> g( &b );
>> }
>>
>> (Note that this has nothing to do with multiple threading.)
>
> The compiler can only invoke such re-ordering if it has no visible
>consequences.

Not according to the C/C++ standards. Accessing volatile objects and
invoking I/O functions are observable operations. If you like, the
standard answers the question "if a tree falls in the forest, but
nobody is there to hear it, is there still noise?" with a resounding
"yes".

> I will admit it's theoretically possible that the compiler might somehow
>perform an optimization that 'volatile' disables and that this optimization
>might cause the compiler to make code incorrect. However, in practice, this
>never happens because any mechanism another thread could use to modify the
>variable could also be used by InterlockedIncrement itself. Microsoft has
>made sure the compiler can't see into InterlockedIncrement.

a. Win32 is not intended just for Microsoft compilers.

b. No it didn't.

> If you built a compiler that did look inside the function's
>implementation, even after it's compiler into normal object code, you'd
>either have to specifically teach it about threads or it would have to
>recognize read-modify-write operations, locked bus cycles, or whatever the
>equivalent is on the platform we're talking about.

No you wouldn't. Note that POSIX compilers don't need to understand
such issues either -- they just need to take some actions when
synchronization functions are called, and leave the rest to the OS.



> Can you show an actual example using InterlockedIncrement where the
>compiler will make an optimization without 'volatile' present that could
>cause a problem? Remember, the compiler has no idea what
>InterlockedIncrement actually does.

I'm having a deja vu here. When another poster told you that he tried
implementing a synchronization primitive and it worked, therefore it
must work, you (rightly) refused to accept that as a valid argument.

Ziv

Ziv Caspi

Jul 5, 2003, 5:13:00 PM
On Sat, 05 Jul 2003 14:14:46 +0200, Alexander Terekhov
<tere...@web.de> wrote:

>C/C++ volatiles don't prevent compiler reordering (hardware aside for
>a moment) of either READS or WRITES to *non-volatile* data.

I never said it did.

>"The
>observable behavior of the abstract machine is its sequence of reads
>and writes to volatile data and calls to library I/O functions". Your
>argument is totally bogus, I'm afraid.

Careful here. I never made any argument like the one you apparently
think I did.

> Unless, of course, you always
>declare each and every variable volatile.

No I don't. Like DS, you seem to think I'm claiming something I don't.

Ziv

David Schwartz

Jul 5, 2003, 5:32:53 PM

"Ziv Caspi" <zi...@netvision.net.il> wrote in message
news:3f073ad7.434657905@newsvr...

> > The function could modify a volatile object whether or not it takes a
> >volatile parameter.

> But it doesn't. Apparently you're thinking along the lines of "I
> didn't compile this function, so the compiler can't assume anything
> about it". This is false reasoning.

No, I'm thinking along the lines of "this function is in fact written in
assembly language and the compiler in fact doesn't know anything about it."
We're not talking about a totally abstract situation here, we're talking
about InterlockedIncrement in VC++.

> > The compiler can only invoke such re-ordering if it has no visible
> >consequences.

> Not according to the C/C++ standards. Accessing volatile objects and
> invoking I/O functions are observable operations. If you like, the
> standard answers the question "if a tree falls in the forest, but
> nobody is there to hear it, is there still noise?" with a resounding
> "yes".

The standard allows the compiler to make any optimizations that have no
visible consequences, under the 'as if' rule. I do not believe that volatile
creates an exception to the 'as if' rule. But for the purposes of this
discussion, it doesn't matter. We're not talking about why some abstract
function has a 'volatile' parameter in some abstract implementation, we're
talking about InterlockedIncrement on VC++.

> >I will admit it's theoretically possible that the compiler might somehow
> >perform an optimization that 'volatile' disables and that this
> >optimization might cause the compiler to make code incorrect. However, in
> >practice, this never happens because any mechanism another thread could
> >use to modify the variable could also be used by InterlockedIncrement
> >itself. Microsoft has made sure the compiler can't see into
> >InterlockedIncrement.

> a. Win32 is not intended just for Microsoft compilers.

True. I suppose it's possible that Microsoft marked the parameter to
InterlockedIncrement 'volatile' with the idea that it might provide some
benefit to future or other compilers.

> b. No it didn't.

I'm not sure what your basis for saying "No it didn't" is, but it in
fact doesn't. In fact, it *can't*, because the implementation of
InterlockedIncrement is different for different CPUs, and applications can
be linked at run time to differing implementations.

> > If you built a compiler that did look inside the function's
> >implementation, even after it's compiled into normal object code, you'd
> >either have to specifically teach it about threads or it would have to
> >recognize read-modify-write operations, locked bus cycles, or whatever
> >the equivalent is on the platform we're talking about.

> No you wouldn't. Note that POSIX compilers don't need to understand
> such issues either -- they just need to take some actions when
> synchronization functions are called, and leave the rest to the OS.

I'm not sure I understand your argument. If a compiler claims to support
multi-threaded code but adds some optimizations that break multi-threaded
code, then it has to do something special to make sure the optimizations
don't break multi-threaded code.

That could be done in a function-specific way -- that is, disable these
optimizations for these named functions. It could also be done in a
non-function-specific way -- that is, disable these optimizations if
functions invoke operations that are often used for synchronization
purposes. Either way, InterlockedIncrement would be safe. If it's a named
function, then the optimizations wouldn't be made. If it's not, then the
compiler would recognize the locked bus cycle and disable them.

Yes, it's theoretically possible someone might concoct a new
optimization that doesn't detect these locked bus cycles but does detect
'volatile' and disables them only in that case. I highly doubt Microsoft put
the 'volatile' in there to disable this hypothetical future optimization,
specifically because it's really not possible to 'look inside'
InterlockedIncrement, since it's linked at run time.

> > Can you show an actual example using InterlockedIncrement where the
> >compiler will make an optimization without 'volatile' present that could
> >cause a problem? Remember, the compiler has no idea what
> >InterlockedIncrement actually does.

> I'm having a deja vu here. When another poster told you that he tried
> implementing a synchronization primitive and it worked, therefore it
> must work, you (rightly) refused to accept that as a valid argument.

Showing something that works is useless. Showing that it *must* work, on
the other hand, is very valuable.

DS


Sean Burke

Jul 5, 2003, 2:28:58 AM

"Michael Furman" <Michae...@Yahoo.com> writes:

> "David Schwartz" <dav...@webmaster.com> wrote in message

> news:bdq788$icm$1...@nntp.webmaster.com...
> > http://www.cuj.com/documents/s=7998/cujcexp1902alexandr/alexandr.htm
> >
> > Here's a reputable journal and a reputable person claiming, among
> > other things:
> >
> > "Although both C and C++ Standards are conspicuously silent when it comes
> > to threads, they do make a little concession to multithreading, in the
> > form of the volatile keyword.
> >
> >
> > Just like its better-known counterpart const, volatile is a type modifier.
> > It's intended to be used in conjunction with variables that are accessed
> > and modified in different threads. Basically, without volatile, either writing
> > multithreaded programs becomes impossible, or the compiler wastes vast
> > optimization opportunities."


> >
> > *SIGH* Anyone got a cluebat?
>

> David,
> that is what I spent more than enough time explaining to you, but you refuse
> (or just can't?) to listen.
>
> Michael Furman

I propose that we create a new group comp.volatile.advocacy
and move this thread there. All in favor?

-SEan

Alexander Terekhov

Jul 7, 2003, 3:20:43 AM

Michael Furman wrote:
[...]

> Which of Ziv's argument is "totally bogus" in your opinion? What
> you quoted is:
> "Ziv Caspi wrote: ..."

"A comforming C/C++ compiler can generate code that reorders the

assignments in the A, but not in B"

That's totally bogus.

regards,
alexander.

Ziv Caspi

Jul 12, 2003, 4:45:52 PM
On Sat, 5 Jul 2003 14:32:53 -0700, "David Schwartz"
<dav...@webmaster.com> wrote:

>We're not talking about a totally abstract situation here, we're talking
>about InterlockedIncrement in VC++.

For the last time: InterlockedIncrement is a Win32 function, and has
nothing to do with VC++. This is crucial in understanding the issue.

> The standard allows the compiler to make any optimizations that have no
>visible consequences, under the 'as if' rule. I do not believe that volatile
>creates an exception to the 'as if' rule.

May I suggest you actually read it? The relevant C++ part is section
1.9 (Program Execution). In particular, para 6 which says:

"The observable behavior of the abstract machine is its sequence of
reads and writes to volatile data and calls to library IO functions"

>> a. Win32 is not intended just for Microsoft compilers.
>
> True. I suppose it's possible that Microsoft marked the parameter to
>InterlockedIncrement 'volatile' with the idea that it might provide some
>benefit to future or other compilers.

As well as its own.

>> b. No it didn't.
>
> I'm not sure what your basis for saying "No it didn't" is, but it in
>fact doesn't. In fact, it *can't*, because the implementation of
>InterlockedIncrement is different for different CPUs, and applications can
>be linked at run time to differing implementations.

That's bogus reasoning, as I've indicated in previous postings. An
implementation is allowed to have a few "stock" implementations -- for
example, one per target CPU -- and switch between them at runtime.

> I'm not sure I understand your argument. If a compiler claims to support
>multi-threaded code but adds some optimizations that break multi-threaded
>code, then it has to do something special to make sure the optimizations
>don't break multi-threaded code.
>
> That could be done in a function-specific way -- that is, disable these
>optimizations for these named functions. It could also be done in a
>non-function-specific way -- that is, disable these optimizations if
>functions invoke operations that are often used for synchronization
>purposes.

Or, it could be done by disabling the optimizations because the
functions touch volatile memory and the standard *requires* it not to
re-order volatile access. So you see -- Win32, by adding volatile to
some function arguments, prevents any conforming C/C++ compiler from making
certain kinds of optimizations.

> Yes, it's theoretically possible someone might concoct a new
>optimization that doesn't detect these locked bus cycles but does detect
>'volatile' and disables them only in that case. I highly doubt Microsoft put
>the 'volatile' in there to disable this hypothetical future optimization,
>specifically because it's really not possible to 'look inside'
>InterlockedIncrement, since it's linked at run time.

That's more bogus reasoning.


>
>> > Can you show an actual example using InterlockedIncrement where the
>> >compiler will make an optimization without 'volatile' present that could
>> >cause a problem? Remember, the compiler has no idea what
>> >InterlockedIncrement actually does.
>
>> I'm having a deja vu here. When another poster told you that he tried
>> implementing a synchronization primitive and it worked, therefore it
>> must work, you (rightly) refused to accept that as a valid argument.
>
> Showing something that works is useless. Showing that it *must* work, on
>the other hand, is very valuable.

True; however, in no way does it answer the remark I've made.

Ziv

Ziv Caspi

Jul 12, 2003, 4:45:55 PM

Really.

Ziv

David Schwartz

Jul 13, 2003, 2:35:17 PM

"Ziv Caspi" <zi...@netvision.net.il> wrote in message
news:3f0fc8f1.34643765@newsvr...

> May I suggest you actually read it? The relevant C++ part is section
> 1.9 (Program Execution). In particular, para 6 which says:

> "The observable behavior of the abstract machine is its sequence of
> reads and writes to volatile data and calls to library IO functions"

Yes, except this is meaningless with respect to volatile data. See the
other post on this issue. For example, where are the reads and writes to
volatile data supposed to be observed? At the cache? On the memory bus? What
about out of order processors with speculative execution? What constitutes a
sequence point then? (The ordering is with respect to sequence points,
right?)

> > I'm not sure what your basis for saying "No it didn't" is, but it in
> >fact doesn't. In fact, it *can't*, because the implementation of
> >InterlockedIncrement is different for different CPUs, and applications
> >can be linked at run time to differing implementations.

> That's bogus reasoning, as I've indicated in previous postings. An
> implementation is allowed to have a few "stock" implementations -- for
> example, one per target CPU -- and switch between them at runtime.

If the implementation had all this specific smarts about
InterlockedIncrement, the specific smarts could also include the fact that
some optimizations can't be used. You can't have it both ways.

Either the implementation doesn't treat InterlockedIncrement specially,
in which case it can't be sure it doesn't access volatile data internally
and can't optimize, or the implementation treats InterlockedIncrement
specially, in which case it knows exactly what its requirements are and
doesn't need 'volatile'.

> > I'm not sure I understand your argument. If a compiler claims to
> >support multi-threaded code but adds some optimizations that break multi-threaded
> >code, then it has to do something special to make sure the optimizations
> >don't break multi-threaded code.
> >
> > That could be done in a function-specific way -- that is, disable
> >these optimizations for these named functions. It could also be done in a
> >non-function-specific way -- that is, disable these optimizations if
> >functions invoke operations that are often used for synchronization
> >purposes.

> Or, it could be done by disabling the optimizations because the
> functions touch volatile memory and the standard *requires* it not to
> re-order volatile access. So you see -- Win32, by adding volatile to
> some function arguments, prevents any conforming C/C++ from making
> certain kinds of optimizations.

The problem is that, with respect to compilers about which Microsoft
does nothing, there's no way to know whether the set of optimizations that
'volatile' disables and the set of optimizations that need to be disabled
have anything to do with each other. This makes your argument a 'who knows,
it might even make it do the right thing' type of argument.

DS


Ziv Caspi

Jul 16, 2003, 4:31:59 PM
On Sun, 13 Jul 2003 11:35:17 -0700, "David Schwartz"
<dav...@webmaster.com> wrote:

>For example, where are the reads and writes to
>volatile data supposed to be observed?

Nowhere. The as-if rule has been carefully constructed so that it
won't impose this type of meaning.

> If the implementation had all this specific smarts about
>InterlockedIncrement, the specific smarts could also include the fact that
>some optimizations can't be used. You can't have it both ways.

The question is not QOI. It is what the standard allows an
implementation to do and what it does not.

> Either the implementation doesn't treat InterlockedIncrement specially,
>in which case it can't be sure it doesn't access volatile data internally
>and can't optimize, or the implementations treats InterlockedIncrement
>specially, in which case it knows exactly what its requirements are and
>doesn't need 'volatile'.

Says you, but not the standard.

> The problem is that, with respect to compilers about which Microsoft
>does nothing, there's no way to know whether the set of optimizations that
>'volatile' disables and the set of optimizations that need to be disabled
>have anything to do with each other. This makes your argument a 'who knows,
>it might even make it do the right thing' type of argument.

No it doesn't. It provides a tool programmers can use to guarantee
correctness of their programs so long as they have conforming
compilers.

Ziv

David Schwartz

Jul 16, 2003, 6:01:03 PM

"Ziv Caspi" <zi...@netvision.net.il> wrote in message
news:3f11c297.164090139@newsvr...

> On Sun, 13 Jul 2003 11:35:17 -0700, "David Schwartz"
> <dav...@webmaster.com> wrote:

> >For example, where are the reads and writes to
> >volatile data supposed to be observed?

> Nowhere. The as-if rule has been carefully constructed so that it
> won't impose this type of meaning.

Okay, then I give up. What does it mean for the order of reads and
writes to volatile variables to be part of the observable behavior if they
always have different orders in different places and no place is the place
you're supposed to be observing them?

> > If the implementation had all this specific smarts about
> >InterlockedIncrement, the specific smarts could also include the fact
> >that some optimizations can't be used. You can't have it both ways.

> The question is not QOI. It is what the standard allows an
> implementation to do and what it does not.

No standard says what effect 'volatile' will have on concurrent accesses
from threads.

> > Either the implementation doesn't treat InterlockedIncrement
> >specially, in which case it can't be sure it doesn't access volatile data
> >internally and can't optimize, or the implementation treats
> >InterlockedIncrement specially, in which case it knows exactly what its
> >requirements are and doesn't need 'volatile'.

> Says you, but not the standard.

No standard requires 'volatile' for variables that might be accessed by
concurrent threads. No standard says anything about 'volatile' with respect
to concurrent threads. We're trying to figure out why Microsoft used
'volatile' on a variable with concurrent threads. No standard is relevant
except the design of the particular threading API, perhaps issues about this
particular platform and what types of compiler optimizations are possible,
and what 'volatile' actually happens to do with respect to concurrent
threads.

> > The problem is that, with respect to compilers about which Microsoft
> >does nothing, there's no way to know whether the set of optimizations
> >that 'volatile' disables and the set of optimizations that need to be
> >disabled have anything to do with each other. This makes your argument a
> >'who knows, it might even make it do the right thing' type of argument.

> No it doesn't. It provides a tool programmers can use to guarantee
> correctness of their programs so long as they have conforming
> compilers.

Please present me an example of a program that is 'guaranteed correct'
provided a 'conforming compiler' with 'volatile' present but not without,
where the 'volatile' keyword is applied to a variable shared by concurrent
threads. As you most certainly know, that's not possible. The 'volatile'
keyword at best only disables compiler optimizations, and the processor (or
the memory hardware, or who knows what else) can theoretically make any
optimization the compiler could.

The only part of the standard you could possibly point to as a guarantee
is the observability of accesses to volatile variables. But as I've already
argued, the standard doesn't say what it means to observe an access, so the
guarantee of observability isn't meaningful.

DS


Michael Furman

Jul 16, 2003, 7:37:22 PM

"David Schwartz" <dav...@webmaster.com> wrote in message
news:bf4hv0$qcn$1...@nntp.webmaster.com...
> [...]

> As you most certainly know, that's not possible. The 'volatile'
> keyword at best only disables compiler optimizations, and the processor (or
> the memory hardware, or who knows what else) can theoretically make any
> optimization the compiler could.

... again and again and again ... One more time:
Yes, the 'volatile' keyword disables certain compiler optimizations. To make
your code work you have to disable similar CPU optimizations too - use
mutexes or whatever else! You need to disable both kinds of optimizations -
it is not enough to disable just one.
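
Just to illustrate the compiler half of that (the variable names are made up
for the example; this is not from any particular compiler's documentation):

extern int stop_plain;              /* ordinary variable           */
extern volatile int stop_volatile;  /* volatile-qualified variable */

void spin_plain(void)
{
    /* Without volatile, the compiler is allowed to load stop_plain once,
       keep it in a register, and turn this into an infinite loop. */
    while (!stop_plain)
        ;
}

void spin_volatile(void)
{
    /* With volatile, the compiler must perform a fresh access on every
       iteration -- but that says nothing about what the CPU, its caches,
       or the rest of the memory system do with those accesses. */
    while (!stop_volatile)
        ;
}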

Michael Furman


David Schwartz

unread,
Jul 16, 2003, 8:22:32 PM7/16/03
to

"Michael Furman" <Michae...@Yahoo.com> wrote in message
news:bf4nk3$av8e3$1...@ID-122417.news.uni-berlin.de...

> "David Schwartz" <dav...@webmaster.com> wrote in message
> news:bf4hv0$qcn$1...@nntp.webmaster.com...

> > As you most certainly know, that's not possible. The 'volatile'
> > keyword at best only disables compiler optimizations, and the processor
> > (or the memory hardware, or who knows what else) can theoretically make
> > any optimization the compiler could.

> ... again and again and again ... One more time:
> Yes, the 'volatile' keyword disables certain compiler optimizations. To make
> your code work you have to disable similar CPU optimizations too - use
> mutexes or whatever else! You need to disable both kinds of optimizations -
> it is not enough to disable just one.

How can an application programmer know every possible source of such
optimizations? Read the subject line and note the words
"platform-independent" there.

If for some particular platform you know every possible thing that
could go wrong, you know precisely what 'volatile' does, and you can
somehow do all the other things needed, then it will work. But if you switch
any part of that equation -- including the compiler -- it won't work.

You cut what I was responding to. Ziv alleged that with 'volatile',
you can write code that's guaranteed to work on *any* "conforming
compiler". The rest of the post you're responding to explains why this is
not so -- the clause in the standard he is relying on (and the one he is
expecting a conforming compiler to conform to) is meaningless because it
uses a word that's not defined. What does it mean to "observe" a memory
access? What does it mean to make memory accesses "observable"?

If a program is supposed to read a before b, where does this mean the
reads occur in that order? In the instruction stream? In the processor
execution order? In the processor-to-cache bus? On the main memory bus?
Please show me where the C or C++ standard makes it clear that a conforming
compiler must order them a particular way in a particular place. You can't.
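
To make the question concrete, here's a sketch. Assume thread1() and
thread2() run at the same time on two different processors; the names are
just for the example.

volatile int x = 0, y = 0;
int r1, r2;

void thread1(void) { x = 1; r1 = y; }   /* volatile write, then volatile read */
void thread2(void) { y = 1; r2 = x; }   /* volatile write, then volatile read */

/* Each thread's write precedes its read in program order, and a conforming
   compiler must emit the two volatile accesses in that order.  Yet on real
   processors -- including common desktop CPUs -- a run can finish with
   r1 == 0 and r2 == 0, because each store can still be sitting in a store
   buffer when the other processor performs its read.  The standard's
   "observable behavior" wording never says at which point (instruction
   stream, cache, memory bus) the order has to hold, which is exactly the
   problem. */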

So given that you can't rely upon the standard, you might still be able
to rely upon knowledge that a particular compiler does certain particular
things. But that's not what Ziv is saying. He's saying you can have the
guarantee on any "conforming compiler". And that is what is wrong.

Also, you keep repeating one thing that's just so incredibly wrong it
deserves specific mention. You keep talking about using volatile in
conjunction with mutexes. There is no platform I know of where mutexes alone
are not sufficient. By constantly repeating this, you may confuse people
into thinking that they must use both 'volatile' and mutexes. This is
totally false. On every platform I know, mutexes are sufficient by
themselves, and using them with volatile just slows things down.
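
For instance, a shared counter protected this way needs no 'volatile' at
all. A minimal sketch using POSIX threads, error checking omitted:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;   /* deliberately NOT volatile */

void increment(void)
{
    /* POSIX promises that the lock/unlock pair provides all the compiler-
       and hardware-level ordering needed for data that is only touched
       while the mutex is held. */
    pthread_mutex_lock(&lock);
    counter++;
    pthread_mutex_unlock(&lock);
}

Marking 'counter' volatile on top of that buys nothing: the compiler already
has to assume the opaque lock and unlock calls might touch it, so it cannot
cache the value across them anyway.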

This has been pointed out to you in painful detail. So I'm puzzled why
you would keep repeating it, other than to mislead people.

DS


Michael Furman

unread,
Jul 16, 2003, 8:48:48 PM7/16/03
to

"David Schwartz" <dav...@webmaster.com> wrote in message
news:bf4q89$v74$1...@nntp.webmaster.com...

>
> "Michael Furman" <Michae...@Yahoo.com> wrote in message
> news:bf4nk3$av8e3$1...@ID-122417.news.uni-berlin.de...
>
> > "David Schwartz" <dav...@webmaster.com> wrote in message
> > news:bf4hv0$qcn$1...@nntp.webmaster.com...
>
> > > As you most certainly know, that's not possible. The 'volatile'
> > > keyword at best only disables compiler optimizations, and the processor
> > > (or the memory hardware, or who knows what else) can theoretically make
> > > any optimization the compiler could.
>
> > ... again and again and again ... One more time:
> > Yes, the 'volatile' keyword disables certain compiler optimizations. To make
> > your code work you have to disable similar CPU optimizations too - use
> > mutexes or whatever else! You need to disable both kinds of optimizations -
> > it is not enough to disable just one.
>
> How can an application programmer know every possible source of such
> optimizations? Read the subject line and note the words
> "platform-independent" there.

Silly question (and unrelated to the subject): he or she does not know and
does not need to know.


> [other unrelated silly stuff skipped]

Michael Furman

David Schwartz

unread,
Jul 16, 2003, 9:07:56 PM7/16/03
to

"Michael Furman" <Michae...@Yahoo.com> wrote in message
news:bf4rq1$b4cqr$1...@ID-122417.news.uni-berlin.de...

> > How can an application programmer know every possible source of such
> > optimizations? Read the subject line and note the words
> > "platform-independent" there.

> Silly question (and unrelated to the subject): he or she does not know and
> does not need to know.

It would save a lot of time if you gave complete answers. What good is
disabling compiler optimizations if those same optimizations could be made
by other things?

DS


Michael Furman

unread,
Jul 16, 2003, 11:19:13 PM7/16/03
to

"David Schwartz" <dav...@webmaster.com> wrote in message
news:bf4stc$166$1...@nntp.webmaster.com...

Again: because if you do not disable it, whatever you do with other things
would not help in making your code correct.

Michael Furman

David Schwartz

unread,
Jul 16, 2003, 11:45:44 PM7/16/03
to

"Michael Furman" <Michae...@Yahoo.com> wrote in message
news:bf54k0$at3vs$1...@ID-122417.news.uni-berlin.de...

> > It would save a lot of time if you gave complete answers. What good
> > is disabling compiler optimizations if those same optimizations could be
> > made by other things?

> Again: because if you do not disable it, whatever you do with other things
> would not help in making your code correct.

There are no "other things" that will not work without 'volatile' and
will work with it.

In any event, you are contradicting yourself. You said before:

> > > How can an application programmer know every possible source of such
> > > optimizations? Read the subject line and note the words
> > > "platform-independent" there.

> > Silly question (and unrelated to the subject): he or she does not know
> > and does not need to know.

Now you are saying they *do* need to know. Otherwise, how would you know
what "other things" you needed?

DS


Michael Furman

unread,
Jul 17, 2003, 1:13:49 AM7/17/03
to

"David Schwartz" <dav...@webmaster.com> wrote in message
news:bf5659$6e9$1...@nntp.webmaster.com...

>
> "Michael Furman" <Michae...@Yahoo.com> wrote in message
> news:bf54k0$at3vs$1...@ID-122417.news.uni-berlin.de...
>
> > > It would save a lot of time if you gave complete answers. What good
> > > is disabling compiler optimizations if those same optimizations could be
> > > made by other things?
>
> > Again: because if you do not disable it, whatever you do with other things
> > would not help in making your code correct.
>
> There are no "other things" that will not work without 'volatile' and
> will work with it.

That is what you think. What I said is that there are no "other things" that
would be guaranteed to work by the C/C++ standard without using 'volatile'.

>
> In any event, you are contradicting yourself. You said before:
>
> > > > How can an application programmer know every possible source of such
> > > > optimizations? Read the subject line and note the words
> > > > "platform-independent" there.
>
> > > Silly question (and unrelated to the subject): he or she does not know
> > > and does not need to know.
>
> Now you are saying they *do* need to know.

I did not say that.

> Otherwise, how would you know
> what "other things" you needed?

Why do you want me to know that?

Michael Furman

