Volatile and Multithreading Redux


Doug Harrison

Mar 15, 2004, 3:26:53 PM
In light of recent discussions I've read in this newsgroup on the
relationship between volatile and multithreading in C++ (e.g. see
"Subject: using volatile bools instead of mutexes when synchronizing
threads" from 2004-02-15), I'd be interested to hear what people have
to say about a new web page on the subject that has been brought to my
attention by its author in microsoft.public.vc.mfc, a newsgroup
devoted to the MFC C++ application framework. Here's the link:

http://www.flounder.com/volatile.htm

Here's an excerpt:

<q>
I consulted with Dr. Samuel P. Harbison, co-author of the
highly-respected book C: A Reference Manual. It is now in its fifth
edition.

Sam has been an implementor of C compilers for a couple decades. He
was project manager of several C and C++ projects, has been a member
of the C Standards group, was chairman of the C++ standards committee
for three years, and has more years of detailed C and C++ experience
than a considerable number of self-declared C or C++ experts.
...
He has given me permission to print his response to my question to
him. I am reproducing the email in its entirety below.

If you don't want to read the whole thing, the summary is this: one of
the world's experts on the C language says that volatile is necessary
in a multithreaded environment. Not sufficient, but necessary.
...
So make your own decision as to who knows what they're talking about
here. I will no longer participate in this debate, except to point to
this Web page each time the issue comes up.
...
So Sam is not just agreeing with me. He is expressing his opinion as a
C/C++ expert.
</q>

The web page goes on to make an argument which invokes sequence points
and the abstract machine but ignores synchronization. Its author
believes it proves that "volatile is necessary in a multithreaded
environment. Not sufficient, but necessary." As he has frequently
expressed in microsoft.public.vc.mfc, its author believes all
variables accessed by multiple threads must be declared volatile,
including those always accessed under the protection of a mutex.

Comments?

--
Doug Harrison
d...@mvps.org

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Graeme Prentice

Mar 16, 2004, 11:10:19 PM
On 15 Mar 2004 15:26:53 -0500, Doug Harrison wrote:

>In light of recent discussions I've read in this newsgroup on the
>relationship between volatile and multithreading in C++ (e.g. see
>"Subject: using volatile bools instead of mutexes when synchronizing
>threads" from 2004-02-15), I'd be interested to hear what people have
>to say about a new web page on the subject that has been brought to my
>attention by its author in microsoft.public.vc.mfc, a newsgroup
>devoted to the MFC C++ application framework. Here's the link:
>
>http://www.flounder.com/volatile.htm

[ snip ]

>
>The web page goes on to make an argument which invokes sequence points
>and the abstract machine but ignores synchronization. Its author
>believes it proves that "volatile is necessary in a multithreaded
>environment. Not sufficient, but necessary." As he has frequently
>expressed in microsoft.public.vc.mfc, its author believes all
>variables accessed by multiple threads must be declared volatile,
>including those always accessed under the protection of a mutex.
>
>Comments?


I note from some of your posts in the MS newsgroups that you (Doug) are
fairly knowledgeable and competent about the volatile/multithreading
issue and that you've expressed frustration with the author of that web
page, but I'll give my "less knowledgeable" opinion anyway.

I find the web page very puzzling. The author claims to have much
experience working with and writing about optimizing compilers yet
doesn't present a single example that backs up his claim that volatile
is necessary.

The "out of context" quote from Sam Harbison is also strange; it seems
to be addressing the very basic and well known issue that if an
interrupt routine modifies a variable, the non-interrupt code may need
to declare the variable volatile. I say this is strange because I'm
wondering why two apparently highly experienced C programmers are
discussing this as if it's even debatable - i.e. Sam Harbison says "it
is clear you need _volatile_". As far as I can see,
the only thing Sam Harbison can be referring to is the well known issue
that if an interrupt routine modifies a variable, the code probably
needs to declare the variable volatile or enclose access to it with
lock/unlock functions. In an embedded environment, volatile is
typically used because lock/unlock functions are less efficient. If
this is not what Sam Harbison is referring to, then why is there no
example illustrating "the clear need for volatile", and if it is what
Sam Harbison is referring to, then why are two highly experienced
programmers even discussing it? It is puzzling.
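
(For reference, the well known embedded case I have in mind looks
roughly like this - a minimal sketch with made-up names, assuming a
single CPU and no hardware write re-ordering:)

    // A hypothetical interrupt service routine sets a flag that the main
    // loop polls. Without volatile, the compiler could hoist the read of
    // data_ready out of the loop and spin forever on a stale register copy.
    volatile bool data_ready = false;   // shared between ISR and main loop

    void uart_isr()                     // installed as an ISR by platform code
    {
        data_ready = true;              // volatile write: cannot be elided
    }

    void main_loop()
    {
        for (;;)
        {
            while (!data_ready)         // volatile read: re-read each pass
                ;                       // busy-wait
            data_ready = false;
            // ... consume the received data ...
        }
    }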

If you translate "interrupt routines" to threads (as Sam Harbison does)
- i.e. because an interrupt routine can switch the current thread - then
from the point of view of the current thread, a second thread is the
same as an interrupt routine.

The argument that "volatile is necessary" as described by Sam Harbison
seems to apply to the simple case where one thread waits on a single
variable that is set by a second thread, i.e.

    thread1:  while (!shared_variable);
    thread2:  shared_variable = true;

You could argue that without volatile the first thread won't re-read
shared_variable, so this makes volatile necessary; however you can also
do

    while (true) { call_hidden_function(); if (shared_variable) break; }

and volatile is no longer necessary, because the compiler cannot see
into the hidden function.
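
To make that concrete, here is a minimal sketch of the second form;
call_hidden_function() is a made-up name standing for any out-of-line
call whose body the compiler cannot see (and it assumes no whole-program
optimisation):

    // file_a.cpp
    bool shared_variable = false;     // deliberately NOT volatile

    void call_hidden_function();      // defined in another translation unit

    void waiter()                     // runs in thread 1
    {
        while (true)
        {
            call_hidden_function();   // opaque call: the compiler must assume
                                      // it might read or write shared_variable,
                                      // so shared_variable is reloaded each
                                      // time around the loop
            if (shared_variable)
                break;
        }
    }

    void setter()                     // runs in thread 2
    {
        shared_variable = true;
    }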


The web page mentions that an aggressively optimising compiler might be
able to do some optimisations that cause the use of lock() unlock() to
fail to "protect" shared variables i.e.
lock();
// access shared variables
unlock();

because "C++ scope rules could allow an aggressive optimiser to violate
program semantics" or because lock() unlock() are global functions and
the shared variable is a class member that the global functions somehow
can't access. Without an example, these claims don't prove that
volatile is necessary unless you accept the argument that because it's a
complicated issue, the optimiser *might* get something wrong or there
*might* be a subtle case that fails - subtleties turn up in C++ all the
time.

Is it possible for an optimiser to determine that a function which calls
lock/unlock can't itself be called by the lock/unlock function (and on
the same object if it's a member function), and with the same
parameters? If it's a run-once function that accesses shared variables
then it should be protected elsewhere.

Can the compiler determine that f1() can't be called by the scoped_lock
constructor/destructor in the following code? What could this code do
with a conforming compiler, assuming no cache coherency or hardware
read/write re-ordering issues? Does making f1() non static solve the
potential problem? Could there be other less obvious cases where the
compiler could determine that a variable with internal linkage can't be
accessed by "hidden" lock() unlock() functions? Would volatile help?

// file1.cpp
#include <process.h>    // _beginthread
#include <stddef.h>     // NULL

void f2(int);
struct scoped_lock { scoped_lock(); ~scoped_lock(); };

static int f1()
{
    static int v1;
    scoped_lock x;
    return ++v1;
}

static void thread1(void *)
{
    while (true)
        f2( f1() );
}

static void thread2(void *)
{
    while (true)
        f2( f1() );
}

int main()
{
    // run thread1/ thread2 (in a real test, main would wait for them)
    _beginthread(thread1, 0, NULL);
    _beginthread(thread2, 0, NULL);
}

// file2.cpp
#include <iostream>

struct scoped_lock { scoped_lock(); ~scoped_lock(); };
void f2(int y) { std::cout << y; }
scoped_lock::scoped_lock() {}
scoped_lock::~scoped_lock() {}


Graeme

Ian McCulloch

Mar 16, 2004, 11:17:47 PM
Doug Harrison wrote:

> In light of recent discussions I've read in this newsgroup on the
> relationship between volatile and multithreading in C++ (e.g. see
> "Subject: using volatile bools instead of mutexes when synchronizing
> threads" from 2004-02-15), I'd be interested to hear what people have
> to say about a new web page on the subject that has been brought to my
> attention by its author in microsoft.public.vc.mfc, a newsgroup
> devoted to the MFC C++ application framework. Here's the link:
>
> http://www.flounder.com/volatile.htm
>
> Here's an excerpt:

[snip]

> If you don't want to read the whole thing, the summary is this: one of
> the world's experts on the C language says that volatile is necessary
> in a multithreaded environment. Not sufficient, but necessary.
> ...
> So make your own decision as to who knows what they're talking about
> here. I will no longer participate in this debate, except to point to
> this Web page each time the issue comes up.
> ...
> So Sam is not just agreeing with me. He is expressing his opinion as a
> C/C++ expert.
> </q>
>
> The web page goes on to make an argument which invokes sequence points
> and the abstract machine but ignores synchronization. Its author
> believes it proves that "volatile is necessary in a multithreaded
> environment. Not sufficient, but necessary." As he has frequently
> expressed in microsoft.public.vc.mfc, its author believes all
> variables accessed by multiple threads must be declared volatile,
> including those always accessed under the protection of a mutex.
>
> Comments?
>

Well, in a trivial sense, since "the standard does not mention processes or
threads" (quote from Dr Harbison's email), the semantics of volatile in a
multithreaded program are unspecified, and an implementation that supports
multithreaded programs as an extension could choose any semantics it
likes.

The argument for volatile seems to be that a compiler is free, under the "as
if" rule, to reorder arbitrary operations, as long as it makes no observable
difference. In the context of multithread operations, this is clearly a
potential problem.

However, "volatile" is only part of the solution. From the standard: "The
observable behavior of the abstract machine is its sequence of reads and
writes to volatile data and calls to library I/O functions. *[Footnote: An
implementation can offer additional library I/O functions as an extension.
Implementations that do so should treat calls to those functions as
``observable behavior'' as well. --- end footnote]" (1.9 [intro.execution])

So, volatile suppresses reordering of accesses. But this isn't enough,
because it ignores synchronization, and because you also *must* use memory
barriers. In some cases you can use 'atomic' operations which are
automatically synchronized, but you can never avoid the need for memory
barriers. Any use of 'volatile' without using memory barriers too might
happen to work if it is run on a single CPU but it will then break,
probably in mysterious and hard to reproduce ways, on SMP.

Since memory barriers are not mentioned in the standard, it is hard to write
an even slightly portable program using volatile and barriers. A better
way is using something like POSIX. POSIX doesn't define memory barriers
(AFAIK) but it does define 'synchronization', (ie. pthread_mutex et al).
The POSIX synchronization functions implicitly include memory barriers (on
implementations that require them), but they also *include the necessary
sequence point guarantees* that you need to use multiple threads.

Thus, if you are using POSIX synchronization primitives, 'volatile' is not
only unnecessary, it is a terrible anti-optimization because it adds lots
of additional restrictions on reordering that are not necessary (on a
properly coded program with correct use of synchronization primitives).
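
A minimal sketch of what I mean, assuming a POSIX platform - the mutex
provides both the mutual exclusion and the memory visibility, so the
shared counter is deliberately not volatile:

    #include <pthread.h>

    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;            // shared between threads, NOT volatile

    void* worker(void*)
    {
        for (int i = 0; i < 100000; ++i)
        {
            pthread_mutex_lock(&mtx);   // POSIX guarantees memory is
            ++counter;                  // synchronized at lock...
            pthread_mutex_unlock(&mtx); // ...and at unlock
        }
        return 0;
    }

    int main()
    {
        pthread_t t1, t2;
        pthread_create(&t1, 0, worker, 0);
        pthread_create(&t2, 0, worker, 0);
        pthread_join(t1, 0);
        pthread_join(t2, 0);
        return 0;                       // counter is reliably 200000
    }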

See the comp.programming.threads FAQ (http://www.lambdacs.com/cpt/FAQ.html);
in particular, Question 56 "Why don't I need to declare shared variables
VOLATILE?" answers this succinctly.

The issue that Dr Harbison might be concerned with is whether the compiler
might 'optimize' away a variable (into a register, say) and not update its
value as part of the synchronization. Surely, this would violate POSIX
semantics (I don't have a reference for this, but it must be the case); a
compiler that is advanced enough to do such optimizations must also
recognize POSIX synchronization functions and act accordingly (this puts
rather strong restrictions on optimizations across function calls). Thus,
still no use for 'volatile'. This is discussed at length in the email *to*
Dr Harbison, at the bottom of the linked page, I think with much more
insight than Dr Harbison's reply.

Maybe Microsoft does things differently, and defines its synchronization
primitives without specifying any sequence point guarantees. I have no
idea. But it certainly doesn't apply on the other side of the fence.

Was this guy really once chairman of the C++ standards committee? Is he still
on the committee?

Regards,
Ian McCulloch

ka...@gabi-soft.fr

Mar 16, 2004, 11:18:41 PM
Doug Harrison <d...@mvps.org> wrote in message
news:<78b9509p5ofd6siaq...@4ax.com>...

> http://www.flounder.com/volatile.htm

> Here's an excerpt:

> Comments?

The problem is that 1) he was asked the wrong question, and 2) he refers
strictly to the C standard.

He was asked the wrong question, because there was a quote in it: "the
semantics of the C language demand that all side effects be consolidated
at the sequence points." This statement is obviously wrong, and I don't
recall anyone having claimed it in the discussion. And I would expect
any C or C++ expert to point out that it is wrong.

Note that the exact argument he is rebutting talks about the compiler not
being able to know what goes on in a global function, such as m.lock()
in the example. This argument is definitely false -- modern optimizing
compilers DO look beyond function, and even module boundaries. There is
no question about this.

Which leads to the second point: he was asked a question apparently
about the C standard, and he responded according to the C standard.
Which is all fine and good, but the moment you invoke pthread_create,
you are outside of the C (or the C++) standard -- you are counting on
other guarantees as well. And the Posix standard says quite clearly
that the locking requests are required to guarantee synchronization in a
conforming implementation. If you try to base yourself strictly on the
C or the C++ standard here, without volatile, you have undefined
behavior (which the implementation may choose to define, say by claiming
conformance with the Posix standard), but with volatile, as well, you
have undefined behavior (because neither the C nor the C++ standard
define the semantics of pthread_mutex_lock or pthread_mutex_unlock).

In sum, if you only consider the C or the C++ standard, then you have no
guarantee of mutual exclusion with pthread_mutex_lock, so with or
without volatile, your code is undefined. And if you add the guarantees
of the Posix standard, you not only have mutual exclusion, you have
memory synchronization.

I'm less familiar with Windows, but I would find it surprising that they
didn't offer a similar guarantee. I'll admit, however, that I made
this judgement based on what seems to be standard programming procedures
in the Windows world, including at Microsoft. Some of the quotes from
Microsoft's documentation make me wonder however. In the end, maybe the
only real guarantee we have on that side is that the people at Microsoft
don't know how to write an optimizing compiler (but they could learn),
or that they won't accept breaking their own code. If there is anyone
who reads this group who has an in at Microsoft (Herb?), maybe they
could get some clarification.

And until the C or the C++ standards committee address the problem of
threads, all you can argue is about what guarantees the implementation
gives. In the case of Posix, for example, the real expert is Butenhof,
and he doesn't use volatile:-).

--
James Kanze GABI Software mailto:ka...@gabi-soft.fr
Conseils en informatique orientée objet/ http://www.gabi-soft.fr
Beratung in objektorientierter Datenverarbeitung
11 rue de Rambouillet, 78460 Chevreuse, France, +33 (0)1 30 23 45 16

Hyman Rosen

Mar 16, 2004, 11:22:40 PM
Doug Harrison wrote:
> variables accessed by multiple threads must be declared volatile,
> including those always accessed under the protection of a mutex.

If you want to write correct multithreaded code, you must have a compiler
which understands multithreaded code, and in particular, understands the
primitives used for protection. Then in the absence of a standard, it's
up to the compiler documentation to tell you whether volatile needs to be
used.

It's very easy to write code which looks correct but isn't. Look for the
threads on why double-checked locking doesn't work, for example. There are
real and common hardware platforms that do out-of-order writes to memory
completely irrespective of the order in which the assembly language does
its operations.
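
For readers who haven't seen those threads, the pattern in question
looks roughly like this (a sketch using POSIX mutexes; the fast path
reads the pointer without holding the lock):

    #include <pthread.h>

    // Classic (broken) double-checked locking. Even if 'instance' is made
    // volatile, the store that publishes the pointer can become visible to
    // another CPU before the stores done by the constructor, so a second
    // thread may use a half-constructed object.
    class Singleton
    {
    public:
        static Singleton* get()
        {
            if (instance == 0)                  // first check, no lock
            {
                pthread_mutex_lock(&mtx);
                if (instance == 0)              // second check, under the lock
                    instance = new Singleton;   // pointer store may be reordered
                pthread_mutex_unlock(&mtx);     // relative to the constructor's
            }                                   // stores
            return instance;
        }
    private:
        Singleton() : value(42) {}
        int value;
        static Singleton* instance;
        static pthread_mutex_t mtx;
    };

    Singleton* Singleton::instance = 0;
    pthread_mutex_t Singleton::mtx = PTHREAD_MUTEX_INITIALIZER;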

Maciej Sobczak

Mar 16, 2004, 11:23:02 PM
Hi,

Doug Harrison wrote:

> In light of recent discussions I've read in this newsgroup

[...]

What about the light on comp.programming.threads?
You have posted a similar question there (subject: "Harbison says
"volatile" necessary for MT programming!") and got insightful responses
from experts, especially from Mr. David Butenhof.

> Comments?

As far as I understand Mr. David Butenhof, the issue is that neither C
nor C++ standard alone is sufficient to draw *any* conclusions about
what is necessary or not in multithreading.
When you want to write MT programs, you have to rely on *separate*
standards, interfaces and object models, which always make specific ANSI
standard paragraphs more or less irrelevant. In particular, anything
that ANSI or ISO standard says about volatile is irrelevant to MT.
In other words, if you write MT programs, you *do not* write in standard
C nor standard C++.

In the case of POSIX Threads, volatile is *neither* necessary nor
sufficient in multithreading, because as far as the POSIX standard is
concerned, the memory visibility guarantees come from appropriate use of
synchronization primitives, not from (mis|ab)using the language type
system. If you write for *POSIX-conforming* platform, this is all you
need to know.

Similarly, whether volatile is sufficient or necessary in MFC code
compiled with Microsoft compilers is completely up to Microsoft, and I
would recommend that you consult *their* docs on this matter (starting
with MSDN), no matter how much expertise the experts cited on that
web page have.


My personal opinion is that the experts you refer to draw generic
conclusions relying on the fragile assumption that "threads are the same
case" as something else (processes, interruptions, signals, etc.,
depending on what analogy the author *needs* in each particular
context). From the standard viewpoint, threads are *not* "the same
case", they are not similar to anything else and therefore any
conclusions are subjective and always biased towards some specific
platform/compiler/library combination.
If you need threads, choose the model you like and do what *that* model
mandates.

And bringing all this to the topic of this newsgroup: have you
considered using Boost.Threads? ;)

--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/

Jeff Greif

Mar 17, 2004, 3:37:47 PM
It should be noted that on March 10, Mr. Harrison started a discussion on
this very topic on comp.programming.threads, where the vociferous contention
of people involved in developing the POSIX standard thread APIs and their
implementation was that S. Harbison was dead wrong. Those tempted to weigh
into the discussion here might profit from looking over what was said in the
other newsgroup.

Jeff

"Doug Harrison" <d...@mvps.org> wrote in message
news:78b9509p5ofd6siaq...@4ax.com...

> In light of recent discussions I've read in this newsgroup on the
> relationship between volatile and multithreading in C++ (e.g. see
> "Subject: using volatile bools instead of mutexes when synchronizing
> threads" from 2004-02-15), I'd be interested to hear what people have
> to say about a new web page on the subject that has been brought to my
> attention by its author in microsoft.public.vc.mfc, a newsgroup
> devoted to the MFC C++ application framework. Here's the link:
>
> http://www.flounder.com/volatile.htm
>

Ben Hutchings

Mar 17, 2004, 9:21:02 PM
Doug Harrison wrote:
> In light of recent discussions I've read in this newsgroup on the
> relationship between volatile and multithreading in C++ (e.g. see
> "Subject: using volatile bools instead of mutexes when synchronizing
> threads" from 2004-02-15), I'd be interested to hear what people have
> to say about a new web page on the subject that has been brought to my
> attention by its author in microsoft.public.vc.mfc, a newsgroup
> devoted to the MFC C++ application framework. Here's the link:
>
> http://www.flounder.com/volatile.htm
<snip>
> The web page goes on to make an argument which invokes sequence points
> and the abstract machine but ignores synchronization.
<snip>
> Comments?

First, note that the arguments there cover the C language but not the
C++ language. The formal specifications of the abstract machine and
of the semantics of volatile are somewhat different in the two
languages. However the issues are quite similar.

The expert that the author consulted (Samuel Harbison) rightly
answered the common misconception that sequence points are
synchronisation points. However, that's not the argument that
the no-volatile-needed side is making.

The text Harbison cited from the C standard actually seems to back up
what I and others have argued all along about synchronisation
functions: that

"...at the time of each function entry and function return where
the calling function and the called function are in different
translation units, the values of all externally linked objects and
of all objects accessible via pointers therein would agree with
the abstract semantics..."

I think it's non-normative text but it shows that the authors of the
standard agreed that an implementation cannot move memory accesses
across a function call if it cannot see the body of the function and
if that memory might be reachable from that function.

The author of the web page, Joseph Newcomer, says that

"...one view holds that a compiler which is doing aggressive
optimizations may detect, via either scope knowledge (as
particularly might be available in C++), global optimization
techniques up to and including inlining (whether explicitly
requested by the programmer, or implicitly determined by the
compiler), or whatever, that the call 'unlock(mutex)' in no way
can affect the computation, and would feel free perform code
motions that would move the memory accesses which are above the
unlock call to below the unlock call."

Nonsense. Either the implementation can't see the bodies of these
synchronisation functions or they use intrinsics and the semantics of
the intrinsics prevent it from moving memory accesses across in the
dangerous direction.

"The volatile-necessary view holds that in a compiler where, for
example, x is a member of a C++ class, and lock() and unlock() are
global (perhaps OS API) functions, it could be asserted by an
aggressive compiler that neither call could possibly modify the
value of x, and consequently the optimization suggested by [3]
would be legal."

If the other thread can access x then x must be accessible to the
function that the thread was started with or to some function that it
calls. Some critical part of the implementation of the thread
creation function is also invisible to the implementation. So far
as the implementation can tell, the synchronisation function might
call the thread start function, which might access x.
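
Concretely, the situation looks something like this (a sketch with
made-up names; lock(), unlock() and start_thread() live in a part of the
implementation the compiler cannot see into):

    extern void lock();                       // opaque synchronisation functions
    extern void unlock();
    extern void start_thread(void (*fn)());   // opaque thread creation

    struct Widget { int x; };
    Widget w;                                 // shared object

    void other_thread()                       // passed to start_thread() below
    {
        lock();
        w.x = 1;                              // the other thread accesses w.x
        unlock();
    }

    void main_thread()
    {
        start_thread(other_thread);           // for all the compiler knows, the
                                              // new thread is already running
                                              // and touching w.x
        lock();
        int local = w.x;                      // cannot be cached across the
        unlock();                             // opaque lock()/unlock() calls
        (void)local;
    }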

Ben.

Vladimir Kouznetsov

Mar 17, 2004, 9:26:52 PM
"Doug Harrison" <d...@mvps.org> wrote in message
news:78b9509p5ofd6siaq...@4ax.com...
[snip]
> Comments?

I think that the key phrase in the cited answer is "The standard does not
mention processes or threads". The standard can provide neither necessary
nor sufficient tools for multi-threaded environments. A compiler should be
specifically designed to generate thread-safe code (otherwise it will screw
up no matter what). How that is achieved is up to implementers. For example,
the compiler can (a) recognize synchronization primitives and make sure no
optimizations happen across boundaries of synchronized sections of code. It
can (b) rely on the fact that synchronization functions are not transparent
(by any means) and therefore make sure that the relevant values are updated.
It can even (c) put additional restrictions on the code, including mandatory
use of the volatile keyword. One can consider (c) as an argument in support
of using volatile for purposes of portability. I wouldn't. You would call
(c) a broken compiler; I mildly agree.

> Doug Harrison

thanks,
v

Doug Harrison

Mar 18, 2004, 7:51:29 AM
Thanks to all for your replies. I think I can summarize the response
so far by saying neither the C nor C++ Standard can be used in
isolation to prove things about multithreaded programs. You need
additional guarantees, such as those Posix makes explicit. But what about a
compiler intended for multithreaded programming whose documentation is
silent on the issue?

Let me pose a practical question. Is it possible to write a
multithreaded C++ program that is correct WRT its variables shared
between threads under a compiler that requires all such variables to
be declared volatile, even those accessed always under the protection
of a mutex? For programs that use standard library classes such as
std::vector, I'd say the answer to that is no.

***** Example 1

Suppose you have a std::vector<int> "v" that is grown by one thread
and accessed by another, and all access is done under the protection
of a mutex "mx". The access patterns look like the following
(exception safety ignored for clarity):

Thread 1:

    int x;
    while (read(x))
    {
        mx.lock();
        v.push_back(x);
        mx.unlock();
    }

Thread 2:

    mx.lock();
    size_t sz1 = v.size();
    mx.unlock();
    // ...stuff that doesn't alter v
    mx.lock();
    size_t sz2 = v.size();
    mx.unlock();
    cout << sz2 - sz1 << '\n';

Now, a super-duper optimizing compiler could see that the Thread 2
code doesn't change v, and it could conceivably propagate the value of
sz1 into sz2. So, what to do? Declare v volatile to kill the
optimizer? If you do that, forget about calling any of its member
functions. You'd have to cast volatile away first, which would seem to
sacrifice the benefit you were hoping to obtain from it in the first
place. There's also that pesky "undefined behavior" bit in the C++
Standard 7.1.5.1/7. So volatile cannot help here, and the compiler
intended for MT programming must restrain itself around the mutex
operations.

***** Example 2

Suppose you have the following struct, and r is initialized to refer
to an int:

struct X
{
    X();

    int& r;
};

There's an X "x" accessed by multiple threads under the protection of
a mutex "mx", and x.r initially equals 1.

Thread 1:

    mx.lock();
    x.r = 0;
    mx.unlock();

Thread 2:

    for (;;)
    {
        mx.lock();
        int r = x.r;
        mx.unlock();
        if (r == 0)
            break;
    }

Again, suppose you have that super-duper optimizing compiler which can
see that the Thread 2 code doesn't modify x.r. Is it allowed to
rewrite the loop as below?

    mx.lock();
    int r = x.r;
    mx.unlock();
    if (r != 0)
        for (;;) continue;

If so, what to do about x? Declare it volatile? As X::r is a
reference, that doesn't help. There doesn't seem to be a general way
to fix this problem except to modify the class X. Again, the compiler
intended for MT programming must restrain itself around the mutex
operations.

Examples like these have led me to conclude that a compiler which
requires all variables accessed by multiple threads under a proper
synchronization protocol to be declared volatile, and/or doesn't
provide the expected semantics for the mutex lock and unlock
operations, is useless for multithreaded programming. Therefore, a
compiler intended to be used for MT programming must not require
volatile for synchronized access and must provide the expected
semantics for the mutex operations. Where it violates these
expectations, it exhibits a bug, plain and simple.

Doug Harrison

Mar 18, 2004, 7:53:03 AM
Maciej Sobczak wrote:

>What about the light on comp.programming.threads?
>You have posted a similar question there (subject: "Harbison says
>"volatile" necessary for MT programming!") and got insightful responses
>from experts, especially from Mr. David Butenhof.

Different groups bring different perspectives, and I expect not
everyone who reads this group reads comp.programming.threads and vice
versa. FWIW, I agree with Butenhof, whose input to the threads FAQ and
newsgroup really clicked with me years ago.

>As far as I understand Mr. David Butenhof, the issue is that neither C
>nor C++ standard alone is sufficient to draw *any* conclusions about
>what is necessary or not in multithreading.
>When you want to write MT programs, you have to rely on *separate*
>standards, interfaces and object models, which always make specific ANSI
>standard paragraphs more or less irrelevant. In particular, anything
>that ANSI or ISO standard says about volatile is irrelevant to MT.
>In other words, if you write MT programs, you *do not* write in standard
>C nor standard C++.

Right, as I've noted elsewhere, the language standards are silent on
the issue of multithreading, but that didn't stop the web page. :)

>In the case of POSIX Threads, volatile is *neither* necessary nor
>sufficient in multithreading, because as far as the POSIX standard is
>concerned, the memory visibility guarantees come from appropriate use of
>synchronization primitives, not from (mis|ab)using the language type
>system. If you write for *POSIX-conforming* platform, this is all you
>need to know.
>
>Similarly, whether volatile is sufficient or necessary in MFC code
>compiled with Microsoft compilers is completely up to Microsoft, and I
>would recommend that you consult *their* docs on this matter (starting
>with MSDN), no matter how much expertise the experts cited on that
>web page have.

The documentation is silent on the issue. However, the absence of
volatile in examples could be considered suggestive. Moreover, I've
come to believe there are real constraints on compilers intended to be
used for multithreaded programming. More on that in another message.

>My personal opinion is that the experts you refer to draw generic
>conclusions relying on the fragile assumption that "threads are the same
>case" as something else (processes, interruptions, signals, etc.,
>depending on what analogy the author *needs* in each particular
>context). From the standard viewpoint, threads are *not* "the same
>case", they are not similar to anything else and therefore any
>conclusions are subjective and always biased towards some specific
>platform/compiler/library combination.
>If you need threads, choose the model you like and do what *that* model
>mandates.
>
>And bringing all this to the topic of this newsgroup: have you
>considered using Boost.Threads? ;)

Not yet. I've been using my own thread library for a long time, and I
currently have no reason to switch.

--
Doug Harrison
d...@mvps.org

Doug Harrison

Mar 18, 2004, 7:54:23 AM
Hyman Rosen wrote:

>Doug Harrison wrote:
>> variables accessed by multiple threads must be declared volatile,
>> including those always accessed under the protection of a mutex.

Please note that I do not believe the above. As was clear in my full
message, that's what the author of the web page I referenced believes.

>If you want to write correct multithreaded code, you must have a compiler
>which understands multithreaded code, and in particular, understands the
>primitives used for protection. Then in the absence of a standard, it's
>up to the compiler documentation to tell you whether volatile needs to be
>used.

The documentation doesn't say. However, I believe there are real
constraints on a compiler intended to be used for MT programming. More
in another message on that.

>It's very easy to write code which looks correct but isn't. Look for the
>threads on why double-checked locking doesn't work, for example. There are
>real and common hardware platforms that do out-of-order writes to memory
>completely irrespective of the order in which the assembly language does
>its operations.

FWIW, I've participated in a few of those threads right here in this
group. :)

--
Doug Harrison
d...@mvps.org

Doug Harrison

Mar 18, 2004, 8:42:03 AM
ka...@gabi-soft.fr wrote:

> The problem is that 1) he was asked the wrong question, and 2) he refers
> strictly to the C standard.
>
> He was asked the wrong question, because there was a quote in it: "the
> semantics of the C language demand that all side effects be consolidated
> at the sequence points." This statement is obviously wrong, and I don't
> recall anyone having claimed it in the discussion. And I would expect
> any C or C++ expert to point out that it is wrong.
>
> Note that the exact argument he is rebutting talks about the compiler not
> being able to know what goes on in a global function, such as m.lock()
> in the example. This argument is definitely false -- modern optimizing
> compilers DO look beyond function, and even module boundaries. There is
> no question about this.

Certainly. But I've anticipated that argument in past exchanges by
stating that if the compiler is able to perform inter-procedural
optimizations, a compiler intended for multithreaded programming must
somehow avoid doing so for the mutex operations. I believe there are
real constraints on a compiler intended for MT programming, and I'll
expand on that in another message. However, on Windows at least, the
mutex lock operations are in an opaque system DLL, so a large amount
of correct behavior falls into place naturally; the compiler would
have to go considerably out of its way to get things wrong.

> Which leads to the second point: he was asked a question apparently
> about the C standard, and he responded according to the C standard.
> Which is all fine and good, but the moment you invoke pthread_create,
> you are outside of the C (or the C++) standard -- you are counting on
> other guarantees as well. And the Posix standard says quite clearly
> that the locking requests are required to guarantee synchronization in a
> conforming implementation. If you try to base yourself strictly on the
> C or the C++ standard here, without volatile, you have undefined
> behavior (which the implementation may choose to define, say by claiming
> conformance with the Posix standard), but with volatile, as well, you
> have undefined behavior (because neither the C nor the C++ standard
> define the semantics of pthread_mutex_lock or pthread_mutex_unlock).
>
> In sum, if you only consider the C or the C++ standard, then you have no
> guarantee of mutual exclusion with pthread_mutex_lock, so with or
> without volatile, your code is undefined.

I've noted elsewhere that the language standards are silent on
multithreading, but that point didn't register with the web page's author.

> And if you add the guarantees
> of the Posix standard, you not only have mutual exclusion, you have
> memory synchronization.
>
> I'm less familiar with Windows, but I would find it surprising that they
> didn't offer a similar guarantee. I'll admit, however, that I made
> this judgement based on what seems to be standard programming procedures
> in the Windows world, including at Microsoft. Some of the quotes from
> Microsoft's documentation make me wonder however. In the end, maybe the
> only real guarantee we have on that side is that the people at Microsoft
> don't know how to write an optimizing compiler (but they could learn),
> or that they won't accept breaking their own code. If there is anyone
> who reads this group who has an in at Microsoft (Herb?), maybe they
> could get some clarification.
>
> And until the C or the C++ standards committee address the problem of
> threads, all you can argue is about what guarantees the implementation
> gives. In the case of Posix, for example, the real expert is Butenhof,
> and he doesn't use volatile:-).

I think what Butenhof says about Posix threads applies to any compiler
intended for MT programming. I'll post a follow-up on that.

--
Doug Harrison
d...@mvps.org

ka...@gabi-soft.fr

Mar 19, 2004, 4:31:11 PM
Ben Hutchings <do-not-s...@bwsint.com> wrote in message
news:<slrnc5f48t.okv....@shadbolt.i.decadentplace.org.uk>...

> Doug Harrison wrote:
> > In light of recent discussions I've read in this newsgroup on the
> > relationship between volatile and multithreading in C++ (e.g. see
> > "Subject: using volatile bools instead of mutexes when synchronizing
> > threads" from 2004-02-15), I'd be interested to hear what people
> > have to say about a new web page on the subject that has been
> > brought to my attention by its author in microsoft.public.vc.mfc, a
> > newsgroup devoted to the MFC C++ application framework. Here's the
> > link:
> > http://www.flounder.com/volatile.htm
> <snip>
> > The web page goes on to make an argument which invokes sequence
> > points and the abstract machine but ignores synchronization.
> <snip>
> > Comments?

> First, note that the arguments there cover the C language but not the
> C++ language. The formal specifications of the abstract machine and
> of the semantics of volatile are somewhat different in the two
> languages. However the issues are quite similar.

In fact, the authors of the C++ standard explicitly state in a note
(end of §7.1.5.1) that "the semantics of volatile are intended to be the
same in C++ as they are in C."

> The expert that the author consulted (Samuel Harbison) rightly
> answered the common misconception that sequence points are
> synchronisation points. However, that's not the argument that the
> no-volatile-needed side is making.

More precisely, Sam Harbison answered a question concerning the C
standard, and the C standard only. Given the following bit of code:

extern int i ;
// ...
i = 0 ;
pthread_mutex_lock( &someMutex ) ;
i = i + 1 ;
pthread_mutex_unlock( &someMutex ) ;

according to the C standard, there is no guarantee that the compiler
will not optimize out the read of i in the protected region. With
volatile, there is a guarantee, provided that the "implementation
defined" meaning of access is sufficient for what we want. (This isn't
the case for most compilers I've used. Which is pretty sorry, because
it means that I need assembler in cases where volatile is relevant, like
device drivers, etc.) According to the C standard, however, there is
also no guarantee that pthread_mutex_lock will ensure exclusion with
other processes which use it on the same mutex. Volatile or not, the
code has undefined behavior according to the C standard.

If we add the Posix standard, then we do have guaranteed semantics for
pthread_mutex_lock. We also have the guarantee that memory will be
synchronized by the call -- that a compiler *will* reread i after the
call to pthread_mutex_lock (and that, whatever the implementation
definition for "access" with regards to volatile).

If you are working under a non-Posix compliant system, you will have to
adapt the above text to your system. I have been unable to date to find
anything definitive for Windows, but the example code I've seen
(including some from Microsoft) would suggest that the guarantee is
somewhat similar to that of Posix.

> The text Harbison cited from the C standard actually seems to back up
> what I and others have argued all along about synchronisation
> functions: that

> "...at the time of each function entry and function return where
> the calling function and the called function are in different
> translation units, the values of all externally linked objects and
> of all objects accessible via pointers therein would agree with
> the abstract semantics..."

> I think it's non-normative text but it shows that the authors of the
> standard agreed that an implementation cannot move memory accesses
> across a function call if it cannot see the body of the function and
> if that memory might be reachable from that function.

But since as far as the standard is concerned, the if part is false,
it's not an argument:-). (In practice, while certainly not common,
there are a few compilers for which the if part really is false.)

> The author of the web page, Joseph Newcomer, says that

> "...one view holds that a compiler which is doing aggressive
> optimizations may detect, via either scope knowledge (as
> particularly might be available in C++), global optimization
> techniques up to and including inlining (whether explicitly
> requested by the programmer, or implicitly determined by the
> compiler), or whatever, that the call 'unlock(mutex)' in no way can
> affect the computation, and would feel free perform code motions
> that would move the memory accesses which are above the unlock call
> to below the unlock call."

> Nonsense. Either the implementation can't see the bodies of these
> synchronisation functions or they use intrinsics and the semantics of
> the intrinsics prevent it from moving memory accesses across in the
> dangerous direction.

Yes and no. A Posix compiler could also have magic knowledge about such
functions -- I've already used a C compiler which "knew" that printf
(and a lot of other functions) didn't access user defined variables,
even if they were global.

The key to the guarantees here has nothing to do with trying to second
guess the compiler, and what it can or cannot know. The key is that
there is a standard (perhaps a proprietary one, in the case of Windows
and most non-Posix systems) which says that it works.

It is also quite possible that some standard will state the Posix
guarantees, restricting it however to volatile objects. If a standard
does this, then volatile becomes necessary. And using C++ becomes
almost impossible, given that the this pointer in a constructor or a
destructor is *never* volatile qualified.
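
A small illustration of that last point (a sketch; this is what the
language rules say, independent of any particular compiler):

    // Inside a constructor or destructor, 'this' has type X*, never
    // volatile X*, so the stores made while constructing a volatile object
    // are ordinary, non-volatile stores.
    struct X
    {
        int flag;
        X() { flag = 0; }    // not a volatile access, even for shared_x below
    };

    volatile X shared_x;     // the object is treated as volatile only once
                             // its constructor has finished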

--
James Kanze GABI Software mailto:ka...@gabi-soft.fr
Conseils en informatique orientée objet/ http://www.gabi-soft.fr
Beratung in objektorientierter Datenverarbeitung
11 rue de Rambouillet, 78460 Chevreuse, France, +33 (0)1 30 23 45 16

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Vladimir Kouznetsov

Mar 19, 2004, 4:53:01 PM
"Doug Harrison" <d...@mvps.org> wrote in message
news:9i4g50tov6hefrpds...@4ax.com...
[snip]

> Now, a super-duper optimizing compiler could see that the Thread 2
> code doesn't change v, and it could conceivably propagate the value of
> sz1 into sz2. So, what to do? Declare v volatile to kill the
> optimizer? If you do that, forget about calling any of its member
> functions. You'd have to cast volatile away first, which would seem to
> sacrifice the benefit you were hoping to obtain from it in the first
> place. There's also that pesky "undefined behavior" bit in the C++
> Standard 7.1.5.1/7. So volatile cannot help here, and the compiler
> intended for MT programming must restrain itself around the mutex
> operations.

A compiler vendor can provide a version of the library with all respective
members declared volatile when compiling for multithreading. That would of
course mean one cannot use a drop-in replacement for the library, but they
are mostly compiler specific anyway, so whoever is writing the drop-in can
take care of that.
Alternatively, the vendor can document that in a multithreading environment
the meaning of volatile is different from what the standard says and requires
neither explicit declaration of members nor casting it away.


[snip]


> If so, what to do about x? Declare it volatile? As X::r is a
> reference, that doesn't help. There doesn't seem to be a general way
> to fix this problem except to modify the class X. Again, the compiler
> intended for MT programming must restrain itself around the mutex
> operations.

I'm not sure why that doesn't help. With volatile, the compiler cannot assume
that any bits of x remain the same, so it has to reread x.r. Now it cannot
assume that its value is referring to the same object unless it caches the
old value, which is hardly an optimization. Am I missing anything?

[snip]
> Doug Harrison

thanks,
v

Doug Harrison

Mar 20, 2004, 9:33:26 AM
ka...@gabi-soft.fr wrote:

>Given the following bit of code:
>
> extern int i ;
> // ...
> i = 0 ;
> pthread_mutex_lock( &someMutex ) ;
> i = i + 1 ;
> pthread_mutex_unlock( &someMutex ) ;
>
>according to the C standard, there is no guarantee that the compiler
>will not optimize out the read of i in the protected region. With
>volatile, there is a guarantee, provided that the "implementation
>defined" meaning of access is sufficient for what we want.

Provided the compiler cannot see into the mutex operations, and i is
reachable from those operations, it must assume they can modify i and
produce observable behavior involving i. This limits the optimizations
on i that it can perform around the operations. To be useful, a
compiler intended for multithreaded programming must not look into
these operations, discover they don't use i, and perform optimizations
on i which violate the expected semantics of the mutex operations.

On Windows, the operations reside in opaque system DLLs, so the
correct behavior pretty much comes naturally. If these operations were
implemented as compiler intrinsics, or their internals were otherwise
visible to the compiler, they would have to be marked "special" to
suppress unsafe operations involving objects such as i. If this isn't
done, and you require volatile in addition to the synchronization, you
create problems for your users such as those I discussed in another
message in this thread:

Message-ID: <9i4g50tov6hefrpds...@4ax.com>

>If we add the Posix standard, then we do have guaranteed semantics for
>pthread_mutex_lock. We also have the guarantee that memory will be
>synchronized by the call -- that a compiler *will* reread i after the
>call to pthread_mutex_lock (and that, whatever the implementation
>definition for "access" with regards to volatile).
>
>If you are working under a non-Posix compliant system, you will have to
>adapt the above text to your system. I have been unable to date to find
>anything definitive for Windows, but the example code I've seen
>(including some from Microsoft) would suggest that the guarantee is
>somewhat similar to that of Posix.

I don't think there's any other real choice for a compiler intended to
be used for multithreaded programming. Requiring everything to be
declared volatile isn't a viable option.

--
Doug Harrison
d...@mvps.org

Doug Harrison

Mar 20, 2004, 9:34:50 AM
Vladimir Kouznetsov wrote:

>"Doug Harrison" <d...@mvps.org> wrote in message
>news:9i4g50tov6hefrpds...@4ax.com...
>[snip]
>> Now, a super-duper optimizing compiler could see that the Thread 2
>> code doesn't change v, and it could conceivably propagate the value of
>> sz1 into sz2. So, what to do? Declare v volatile to kill the
>> optimizer? If you do that, forget about calling any of its member
>> functions. You'd have to cast volatile away first, which would seem to
>> sacrifice the benefit you were hoping to obtain from it in the first
>> place. There's also that pesky "undefined behavior" bit in the C++
>> Standard 7.1.5.1/7. So volatile cannot help here, and the compiler
>> intended for MT programming must restrain itself around the mutex
>> operations.
>
>A compiler vendor can provide a version of the library with all respective
>members declared volatile when compiling for multithreading. That would of
>course mean one cannot use a drop-in replacement for the library, but they
>are mostly compiler specific anyway, so whoever is writing the drop-in can
>take care of that.

I can't imagine anyone doing that. It just throws out the whole idea
that synchronization is sufficient to use normal thread-unsafe classes
in thread-safe ways. It would replace something simple and effective
with something horribly difficult for the compiler vendor to maintain
and horribly difficult and unnatural for people to use. That approach
would preclude using every codebase and library I've ever seen without
first making massive changes to make everything "volatile-correct".

>Alternatively, the vendor can document that in a multithreading environment
>the meaning of volatile is different from what the standard says and requires
>neither explicit declaration of members nor casting it away.

I'm not sure what you're getting at there.

>[snip]
>> If so, what to do about x? Declare it volatile? As X::r is a
>> reference, that doesn't help. There doesn't seem to be a general way
>> to fix this problem except to modify the class X. Again, the compiler
>> intended for MT programming must restrain itself around the mutex
>> operations.
>
>I'm not sure why that doesn't help. With volatile, the compiler cannot assume
>that any bits of x remain the same, so it has to reread x.r. Now it cannot
>assume that its value is referring to the same object unless it caches the
>old value, which is hardly an optimization. Am I missing anything?

References cannot be reseated. That is, once initialized, a reference
always refers to the same object. In my example, making x volatile
doesn't make the referent of x.r volatile, and there's no such thing
as a volatile reference. That is, you can't write:

int& volatile r; // Illegal

In my example, that's more or less what happens to x.r when you
declare x volatile, except it's ignored in this context. (See 3.9.3 in
the C++ Standard.)
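
A small sketch of what I mean (the names are made up):

    struct X
    {
        X(int& i) : r(i) {}
        int& r;
    };

    int target = 1;
    volatile X x(target);    // x itself is volatile-qualified, but the int
                             // that x.r refers to (target) is not, and
                             // "int& volatile" isn't a valid type, so reads
                             // made through x.r are ordinary reads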

--
Doug Harrison
d...@mvps.org

Vladimir Kouznetsov

Mar 23, 2004, 7:29:16 AM
"Doug Harrison" <d...@mvps.org> wrote in message
news:im8n50tvpbe8ro6lj...@4ax.com...

> Vladimir Kouznetsov wrote:
> >A compiler vendor can provide a version of the library with all respective
> >members declared volatile when compiling for multithreading. That would
> >of course mean one cannot use a drop-in replacement for the library, but
> >they are mostly compiler specific anyway, so whoever is writing the
> >drop-in can take care of that.
>
> I can't imagine anyone doing that. It just throws out the whole idea
> that synchronization is sufficient to use normal thread-unsafe classes
> in thread-safe ways. It would replace something simple and effective
> with something horribly difficult for the compiler vendor to maintain
> and horribly difficult and unnatural for people to use. That approach
> would preclude using every codebase and library I've ever seen without
> first making massive changes to make everything "volatile-correct".

Or put some restrictions on what can be used in synchronized sections of
code. I gave the example for completeness only. I agree with you - my point
is that though it's silly, it's not impossible. Well, that probably doesn't
add much to the discussion.

> >I'm not sure why that doesn't help. With volatile, the compiler cannot
> >assume that any bits of x remain the same, so it has to reread x.r. Now it
> >cannot assume that its value is referring to the same object unless it
> >caches the old value, which is hardly an optimization. Am I missing
> >anything?
>
> References cannot be reseated. That is, once initialized, a reference
> always refers to the same object. In my example, making x volatile
> doesn't make the referent of x.r volatile, and there's no such thing
> as a volatile reference. That is, you can't write:
>
> int& volatile r; // Illegal
>
> In my example, that's more or less what happens to x.r when you
> declare x volatile, except it's ignored in this context. (See 3.9.3 in
> the C++ Standard.)

Oh, so you are saying that it's legal for a standard compliant compiler to
assume that the bits designating the reference are not changing no matter if
the object is volatile or not, correct? That is due to the fact that there
is no standard compliant way to change a reference in a volatile object,
right?

> Doug Harrison

thanks,
v

t...@cs.ucr.edu

Mar 23, 2004, 5:27:44 PM
ka...@gabi-soft.fr wrote:

[snip]

+ In sum, if you only consider the C or the C++ standard, then you have no
+ guarantee of mutual exclusion with pthread_mutex_lock, so with or
+ without volatile, your code is undefined. And if you add the guarantees
+ of the Posix standard, you not only have mutual exclusion, you have
+ memory synchronization.

So, volatility is neither necessary nor sufficient for the coherent
handling of thread-shared variables; e.g., under Pthreads, declaring
thread-shared variables to be volatile seriously and unnecessarily
degrades performance.
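As a minimal illustrative sketch of that conclusion, assuming a Pthreads
environment (the names counter, mtx and worker are invented for
illustration, not taken from any post): the shared counter is deliberately
not volatile, and the lock/unlock calls supply both the mutual exclusion
and the memory synchronization that Posix guarantees.

#include <pthread.h>
#include <iostream>

static int counter = 0;                                   // plain int, no volatile
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

void* worker(void*)
{
    for (int i = 0; i < 100000; ++i) {
        pthread_mutex_lock(&mtx);     // acquire: synchronizes memory on entry
        ++counter;
        pthread_mutex_unlock(&mtx);   // release: publishes the update
    }
    return 0;
}

int main()
{
    pthread_t t1, t2;
    pthread_create(&t1, 0, worker, 0);
    pthread_create(&t2, 0, worker, 0);
    pthread_join(t1, 0);
    pthread_join(t2, 0);
    std::cout << counter << '\n';     // reliably prints 200000
    return 0;
}

Making counter volatile here would add nothing but extra loads and stores
inside the critical section.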

Tom Payne

ka...@gabi-soft.fr

unread,
Mar 24, 2004, 10:20:03 PM3/24/04
to
t...@cs.ucr.edu wrote in message news:<c3ofnd$8k5$2...@glue.ucr.edu>...
> ka...@gabi-soft.fr wrote:

> > In sum, if you only consider the C or the C++ standard, then you

> > have no guarantee of mutual exclusion with pthread_mutex_lock, so
> > with or without volatile, your code is undefined. And if you add
> > the guarantees of the Posix standard, you not only have mutual
> > exclusion, you have memory synchronization.

> So, volatility is neither necessary nor sufficient for the coherent
> handling of thread-shared variables; e.g., under Pthreads, declaring
> thread-shared variables to be volatile seriously and unnecessarily
> degrades performance.

That's the way I read it.

The problem with volatile is even greater in fact -- in C++, an object
only becomes volatile *after* its constructor has finished, so any
accesses (including writes) in the constructor are not considered
volatile. (And of course, there's also the fact that a number of
compilers don't even give volatile the minimum semantic intended.)
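A small illustrative sketch of that constructor point (not from the
original post; the type and member names are invented): inside the
constructor, *this is not yet a volatile-qualified object, so the store is
an ordinary one.

struct Gadget
{
    int ready;
    Gadget() { ready = 0; }   // *this is not yet a "volatile Gadget" here;
                              // this store is an ordinary, non-volatile access
};

volatile Gadget g;            // the volatile semantics apply only once the
                              // constructor has finished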

(BTW: apparently your use of '+' as a citing character doesn't only
confuse my reformatting code -- it also confuses the moderation software
enough that it no longer recognizes excessive quoting:-).)

--
James Kanze GABI Software mailto:ka...@gabi-soft.fr
Conseils en informatique orientée objet/ http://www.gabi-soft.fr
Beratung in objektorientierter Datenverarbeitung
11 rue de Rambouillet, 78460 Chevreuse, France, +33 (0)1 30 23 45 16

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

ka...@gabi-soft.fr

unread,
Mar 24, 2004, 10:20:58 PM3/24/04
to
"Vladimir Kouznetsov" <vladimir....@ngrain.com> wrote in message
news:<105urqg...@corp.supernews.com>...

> > int& volatile r; // Illegal

> > In my example, that's more or less what happens to x.r when you
> > declare x volatile, except it's ignored in this context. (See 3.9.3
> > in the C++ Standard.)

> Oh, so you are saying that it's legal for a standard compliant
> compiler to assume that the bits designating the reference are not
> changing no matter if the object is volatile or not, correct?

Supposing that there are bits in the actual reference. A compiler can
always assume a reference refers to what it was initialized to refer
to. If I write something like:

int const& dim = 5 ;    // Attention, this is a global...

void
f()
{
    std::cout << dim << '\n' ;
    // Compiler may output the constant 5,
    // no memory accesses necessary.
}

> That is due to the fact that there is no any standard compliant way to
> change a reference in a volatile object, right?

More or less.

--
James Kanze GABI Software mailto:ka...@gabi-soft.fr
Conseils en informatique orientée objet/ http://www.gabi-soft.fr
Beratung in objektorientierter Datenverarbeitung
11 rue de Rambouillet, 78460 Chevreuse, France, +33 (0)1 30 23 45 16

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

Doug Harrison

unread,
Mar 25, 2004, 8:22:51 AM3/25/04
to
Vladimir Kouznetsov wrote:

>Or put some restrictions on what can be used in synchronized sections of
>code. I gave the example for completeness only. I agree with you - my point
>is though it's silly it's not impossible. Well, probably that doesn't add
>much to the discussion.

I appreciate that you clarified that. If you can name a compiler
intended for multithreaded programming that requires volatile on top
of synchronization, that I'd be interested in. The person who wrote
the web page I referred to in my first message has refused every
request to provide examples to support what he claims, even though he
has recently claimed to have been shown multiple examples by another
of his experts (see my reply and its follow-ups):

http://groups.google.com/groups?selm=h2evrvkjb2a0r69pp4j0280ibegttvvm9u%404ax.com
http://groups.google.com/groups?selm=cc01sv0e23uiipg3u6evm511lpoe0inq9d%404ax.com

So, I have to ask anyone willing to entertain this notion to back it
up with an example. It's not useful to talk about hypothetical systems
which don't exist and which no one would deliberately create. On the
other hand, if specific problems in real compilers do exist, it would
be good to know about them, so that the compiler vendor could be
informed about a bug in their compiler.

>Oh, so you are saying that it's legal for a standard compliant compiler to
>assume that the bits designating the reference are not changing no matter if
>the object is volatile or not, correct? That is due to the fact that there
>is no any standard compliant way to change a reference in a volatile object,
>right?

Yep. Here's another example to help demonstrate why it would be absurd
for a C++ compiler to require volatile on top of synchronization.

// Assume this was written for a real compiler, which respects
// mutex operations. Then assume a volatile-thinker got his hands
// on it and "fixed" it by declaring x volatile.

struct X
{
    X();
    int* const p;
};

volatile X x;

void f()
{
    // Use but don't modify *p
    mx.lock();
    int* const p = x.p;
    mx.unlock();
    // ...
    mx.lock();
    if (*p)
        whatever;
    mx.unlock();
    // ...
    mx.lock();
    if (*p)
        whatever;
    mx.unlock();
}

Under this hypothetical volatile-requiring compiler, x was declared
volatile. However, declaring x volatile only makes x.p volatile, but
not *x.p. The compiler allows copying x.p to p, which does not
represent a type safety violation. But we wanted to synchronize access
to the object x controls, *x.p. Oops. This hypothetical compiler which
doesn't respect synchronization operations in the absence of volatile
might be able to optimize the sequence into:

void f()
{
    // Use but don't modify *p
    mx.lock();
    int* const p = x.p;
    int const v = *p;
    mx.unlock();
    // ...
    mx.lock();
    if (v)
        whatever;
    mx.unlock();
    // ...
    mx.lock();
    if (v)
        whatever;
    mx.unlock();
}

Again, such a compiler would be useless for multithreaded programming.
To get the volatiles right, you'd literally have to examine every
single line of code you're using, and as above, the compiler would not
always be able to help you with the type checking. The proper approach
is for the compiler vendor to honor the expected semantics of the
mutex operations, in which case, volatile is neither sufficient nor
necessary.
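An illustrative type-level sketch of the distinction the example turns on
(not part of the original code; the names are invented): volatile on the
pointer says nothing about the int it points to.

int data = 0;

volatile int* pv = &data;   // pointer to volatile int: accesses to *pv are
                            // volatile accesses, accesses to pv are not
int* volatile vp = &data;   // volatile pointer to plain int: reads of vp are
                            // volatile accesses, but *vp is an ordinary access
                            // the optimizer may cache or fold away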

--
Doug Harrison
d...@mvps.org

Vladimir Kouznetsov

unread,
Mar 25, 2004, 7:10:29 PM3/25/04
to
<ka...@gabi-soft.fr> wrote in message
news:d6652001.04032...@posting.google.com...

> "Vladimir Kouznetsov" <vladimir....@ngrain.com> wrote in message
> news:<105urqg...@corp.supernews.com>...
> > Oh, so you are saying that it's legal for a standard compliant
> > compiler to assume that the bits designating the reference are not
> > changing no matter if the object is volatile or not, correct?
>
> Supposing that there are bits in the actual reference. A compiler can
> always assume a reference refers to what it was initialized to refer
> to. If I write something like:
>
> int const& dim = 5 ; // Attention, this is a global...
>
> void
> f()
> {
> std::cout << dim << '\n' ;
> // Compiler may output the constant 5,
> // no memory accesses necessary.
> }

This is a good point, but I don't think it is relevant in this context, as
we were talking about classes. I was thinking about Doug's example in terms
of potential changes:

bool b = true;

struct X {
    int const& r;
    X(): r(b? 0: 1)
    {
    }
};

X x;

void f()
{
    std::cout << x.r;  // compiler cannot assume x.r is 0 unless it can
                       // prove g() is never called before f()
}

void g()
{
    b = false;
    new(&x) X();
}

> > That is due to the fact that there is no any standard compliant way to
> > change a reference in a volatile object, right?
>
> More or less.

I assume the "less" part relates to your example. If by any chance you meant
that such a way exists, please show an example.

> James Kanze GABI Software mailto:ka...@gabi-soft.fr

thanks,
v

Vladimir Kouznetsov

unread,
Mar 26, 2004, 5:42:02 AM3/26/04
to
"Doug Harrison" <d...@mvps.org> wrote in message
news:4hc46094248f3t1kj...@4ax.com...
> [snip] hypothetical systems [snip] which no one would deliberately create.

You are too kind :).

> >Oh, so you are saying that it's legal for a standard compliant compiler to
> >assume that the bits designating the reference are not changing no matter if
> >the object is volatile or not, correct? That is due to the fact that there
> >is no any standard compliant way to change a reference in a volatile object,
> >right?
>
> Yep. Here's another example to help demonstrate why it would be absurd
> for a C++ compiler to require volatile on top of synchronization.

I agree that synchronization primitives that are opaque to optimization and
include memory barriers make volatile useless and even unwanted; I agree this
is the best way of implementing an MT compiler; I kind of agree that declaring
an object volatile requires volatile methods to access it - when we are
talking about multi-threading we are outside of the standard, so 7.1.5.1/7
doesn't scare us any more.
I don't understand why you think that the compiler is allowed to make
optimizations like the ones in your examples even considering the standard.
The C standard says (6.7.3/6) "An object that has volatile-qualified type may
be modified in ways unknown to the implementation" (C(6.7.3/5) and 7.1.5.1/7
are irrelevant here). To me that looks like the compiler can make absolutely
no assumptions about what the content of a volatile object can be at any
point. That includes James' example as well - it cannot even assume that a
volatile const reference refers to the same object in order to skip the
access to memory. Can anybody please comment on that?

> Doug Harrison

thanks,
v

Doug Harrison

unread,
Mar 29, 2004, 2:51:06 PM3/29/04
to
Vladimir Kouznetsov wrote:

>I agree that synchronization primitives that are opaque to optimization and
>include memory barriers make volatile useless and even unwanted; I agree this
>is the best way of implementing an MT compiler; I kind of agree that declaring
>an object volatile requires volatile methods to access it - when we are
>talking about multi-threading we are outside of the standard, so 7.1.5.1/7
>doesn't scare us any more.
>I don't understand why you think that the compiler is allowed to make
>optimizations like the ones in your examples even considering the standard.
>The C standard says (6.7.3/6) "An object that has volatile-qualified type may
>be modified in ways unknown to the implementation" (C(6.7.3/5) and 7.1.5.1/7
>are irrelevant here). To me that looks like the compiler can make absolutely
>no assumptions about what the content of a volatile object can be at any
>point. That includes James' example as well - it cannot even assume that a
>volatile const reference refers to the same object in order to skip the
>access to memory. Can anybody please comment on that?

I really don't know how to answer that, because I'm not sure what
you're objecting to. If you could show a specific example, and then
point out exactly what you think the problem is with it, then I might
be able to comment.

--
Doug Harrison
d...@mvps.org

ka...@gabi-soft.fr

unread,
Mar 29, 2004, 5:19:57 PM3/29/04
to
"Vladimir Kouznetsov" <vladimir....@ngrain.com> wrote in message
news:<1066m08...@corp.supernews.com>...

> <ka...@gabi-soft.fr> wrote in message
> news:d6652001.04032...@posting.google.com...
> > "Vladimir Kouznetsov" <vladimir....@ngrain.com> wrote in message
> > news:<105urqg...@corp.supernews.com>...
> > > Oh, so you are saying that it's legal for a standard compliant
> > > compiler to assume that the bits designating the reference are not
> > > changing no matter if the object is volatile or not, correct?

> > Supposing that there are bits in the actual reference. A compiler
> > can always assume a reference refers to what it was initialized to
> > refer to. If I write something like:

> > int const& dim = 5 ; // Attention, this is a global...

> > void
> > f()
> > {
> > std::cout << dim << '\n' ;
> > // Compiler may output the constant 5,
> > // no memory accesses necessary.
> > }

> This is a good point but I don't think it is relevant in the context
> as we were talking about classes. I was thinking about the Doug's
> example in terms of potential changes:

> bool b = true;

> struct X {
> int const& r;
> X(): r(b? 0: 1)

Just for the record, this is undefined behavior. The temporary used to
initialize the reference ceases to exist at the end of the constructor.

For the purposes of demonstration, I will assume rather that you meant:

// Globals:
int const zero = 0 ;
int const one = 1 ;

// Constructor:
X() : r( b ? zero : one )

> {
> }
> };

> X x;

> void f()
> {
> std::cout << x.r; // compiler cannot assume s.i_ is 0 unless it can prove
> g() is never called before f()
> }

> void g()
> {
> b = false;
> new(&x) X();
> }

> > > That is due to the fact that there is no any standard compliant
> > > way to change a reference in a volatile object, right?

> > More or less.

> I assume the "less" part relates to your example. If you by a chance
> meant that such the way existed please show an example.

It's an interesting example. I think you are right. The consequences
with regards to multi-threading and volatile are very interesting,
however -- only objects can be volatile, and a reference is not an
object. In addition, the standard explicitly says that references
need not occupy memory, although it is hard to imagine an implementation
using anything but pointers for them. Given, however, that they aren't
required to occupy memory, and are not considered to have a value, what
does it mean if we declare X, above, volatile? Normally, declaring a
struct volatile means that all of its elements are volatile. Which
means in turn that the value of the element may change in ways not
visible to the compiler. Which, when applied to a reference, means?

Volatile was added to C to solve a certain number of problems. In the
context where it was originally defined, it really only has a semantic
when applied to fundamental types, and not always then. (Declaring a
double volatile on a PDP-11 or on an original Intel 8086 didn't have
any real meaning either.)

IMHO, the meaning of volatile doesn't extend well to what we understand
by threading today. It wasn't meant to. This doesn't mean that some
implementation couldn't try to bend it to fit, but particularly in C++,
with references, constructors, etc., I don't think the fit would ever be
very good. Posix doesn't try to make it fit.

This doesn't mean that volatile should be without meaning on a modern
system. I would still expect it to be useful (both necessary and
sufficient) for memory mapped IO, for example, or spin-locking to
synchronize with a hardware interrupt -- these are the jobs it was
designed to do, and I see nothing which should suggest an intent on the
part of the standards committee that it no longer do them. (On the
other hand, the implementation of volatile in Sun CC or g++, on a Sparc,
or arguably in VC++ on a Windows PC, doesn't do them.)
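An illustrative sketch of the memory-mapped I/O job described above (not
from the original post; the register address and bit layout are invented):

// Hypothetical device status register at a made-up address.
volatile unsigned* const status =
    reinterpret_cast<volatile unsigned*>(0xFFFF0040u);

void wait_until_ready()
{
    // Each test performs a real load; without volatile the compiler could
    // hoist the read out of the loop and spin forever.
    while ((*status & 0x1u) == 0u) {
    }
}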

--


James Kanze GABI Software mailto:ka...@gabi-soft.fr

Conseils en informatique orientée objet/ http://www.gabi-soft.fr
Beratung in objektorientierter Datenverarbeitung
11 rue de Rambouillet, 78460 Chevreuse, France, +33 (0)1 30 23 45 16

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

Vladimir Kouznetsov

unread,
Mar 29, 2004, 6:28:04 PM3/29/04
to
> I really don't know how to answer that, because I'm not sure what
> you're objecting to. If you could show a specific example, and then
> point out exactly what you think the problem is with it, then I might
> be able to comment.

Sorry for my poor presentation. If I understand correctly, you and James
claim that the compiler can apply certain kinds of optimization to volatile
objects:

struct X
{
    X();
    int const& r;
};

volatile X x;

void f()
{
    int r = x.r; // a
    r = x.r;     // b
}

According to you, (b) can be removed by the compiler because standard C++
doesn't allow changing the value of a reference in a volatile object.
According to James, (a) can even be replaced with an assignment of a
constant. Is my understanding correct so far?

My concern is that according to C (6.7.3/6), which I believe is also
applicable to C++, "An object that has volatile-qualified type may be
modified in ways unknown to the implementation". Therefore neither of the
optimizations is allowed in standard C/C++. The compiler has to generate
code that re-reads all the relevant parts of the object - the bit
representation of the reference in that case. Therefore using volatile
objects doesn't affect the semantics of optimized programs; it can only
affect performance.

I hope it's clearer now. If it is, I'd appreciate your comments.

> Doug Harrison

thanks,
v

Vladimir Kouznetsov

unread,
Mar 30, 2004, 7:11:45 AM3/30/04
to
<ka...@gabi-soft.fr> wrote in message
news:d6652001.04032...@posting.google.com...
> > bool b = true;
>
> > struct X {
> > int const& r;
> > X(): r(b? 0: 1)
>
> Just for the record, this is undefined behavior. The temporary used to
> initialized the reference ceases to exist at the end of the constructor.
>
> For the purposes of demonstration, I will assume rather that you meant:
>
> // Globales:
> int const zero = 0 ;
> int const ont = 1 ;
>
> // Constructor :
> X() : r( b ? zero : one ) ;
>
> > {
> > }
> > };

You are right of course. Thanks for the correction. Originally I had them as
globals but then carelessly shortened the example.

> Given, however, that they aren't
> required to occupy memory, and are not considered to have a value, what
> does it mean if we declare X, above, volatile? Normally, declaring a
> struct volatile means that all of its elements are volatile. Which
> means in turn that the value of the element may change in ways not
> visible to the compiler. Which, when applied to a reference, means?

We won't be able to play that placement new trick on volatile objects, at
least not under the standard. My point was, as I explained in the other
thread, that something unknown could have the same effect, so the compiler
cannot apply any optimizations to volatile objects. This, by the way,
probably means that references inside objects must occupy memory.

> IMHO, the meaning of volatile doesn't extend well to what we understand
> by threading today. It wasn't meant to. This doesn't mean that some
> implementation couldn't try to bend it to fit, but particularly in C++,
> with references, constructors, etc., I don't think the fit would ever be
> very good. Posix doesn't try to make it fit.
>
> This doesn't mean that volatile should be without meaning on a modern
> system. I would still expect it to be useful (both necessary and
> sufficient) for memory mapped IO, for example, or spin-locking to
> synchronize with a hardware interrupt -- these are the jobs it was
> designed to do, and I see nothing which should suggest an intent on the
> part of the standards committee that it no longer do them. (On the
> other hand, the implementation of volatile in Sun CC or g++, on a Sparc,
> or arguably in VC++ on a Windows PC, doesn't do them.)

I agree and thank you for your comments.

> James Kanze GABI Software mailto:ka...@gabi-soft.fr

thanks,
v

ka...@gabi-soft.fr

unread,
Mar 30, 2004, 2:18:51 PM3/30/04
to
"Vladimir Kouznetsov" <vladimir....@ngrain.com> wrote in message
news:<106h9o2...@corp.supernews.com>...

> > I really don't know how to answer that, because I'm not sure what
> > you're objecting to. If you could show a specific example, and then
> > point out exactly what you think the problem is with it, then I
> > might be able to comment.

> Sorry for my poor presentment. If I understand correctly you and James
> claim that compiler can apply certain kinds of optimization to
> volatile objects:

Not to volatile objects. References aren't objects, and cannot be made
volatile.

> struct X
> {
> X();
> int const& r;
> }

> volatile X x;

Note well: the meaning of volatile here is that all of the sub-objects
of x are volatile, applied recursively until we arrive at the lowest
level objects. In this case, however, x doesn't contain any
sub-objects, so the volatile has no meaning.

> void f()
> {
> int r = x.r; // a
> r = x.r; // b
> }

> According to you (b) can be removed by compiler because standard C++
> doesn't allow changing value of a reference in a volatile object.
> According to James (a) even can be replaced with assignment of a
> constant. Is my understanding correct so far?

The compiler is allowed to suppose that the reference refers to whatever
was used to initialize it. A reference isn't an object, and doesn't
(conceptually, at least) occupy space. A reference itself cannot be
volatile (nor const). With regards to the semantics of volatile, it
cannot be modified externally (in a way not knowable to the compiler)
because conceptually it doesn't have any existence externally.

See in particular §3.9.3/3: "[...] each non-static, non-reference data
member of a volatile-qualified class object is volatile-qualified
[...]" and §8.3.2/3: "it is unspecified whether or not a reference
requires storage."

More generally, the intention (clearly stated) of the C++ committee was
that volatile have the same meaning as it does in C. All discussions of
volatile and const in C refer to objects, however. The C++ standard
says explicitly that references are not objects.

> My concern is that according to C(6.7.3/6) which I believe also is
> applicable to C++ "An object that has volatile-qualified type may be
> modified in ways unknown to the implementation".

Quite. But it doesn't apply to references, because they are not
objects, and because they cannot be cv-qualified. (If a
cv-qualification is introduced via a typedef or a template type
argument, the code is legal, but the cv-qualifier is ignored.)

> Therefore neither of the optimizations is allowed in standard C/C++.
> Compiler has to generate code that re-reads all the relevant parts of
> the object - the bit representation of the reference in that
> case.

There is no such thing as a bit representation of a reference. A
reference is not an object, and need not occupy space (bits).
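An illustrative sketch of that reading of §3.9.3/3 (not from the original
post; the type Y and its members are invented):

int shared = 0;

struct Y
{
    int i;                        // becomes "volatile int" in a volatile Y
    int& r;                       // a reference: not an object, never volatile
    Y(int& t) : i(0), r(t) {}
};

volatile Y y(shared);

// Accesses to y.i are volatile accesses; y.r simply refers to 'shared', and
// the compiler may continue to assume it refers to what it was initialized
// with.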

--


James Kanze GABI Software mailto:ka...@gabi-soft.fr

Conseils en informatique orientée objet/ http://www.gabi-soft.fr
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Motti

unread,
Mar 30, 2004, 2:19:56 PM3/30/04
to
ka...@gabi-soft.fr wrote in message news:<d6652001.04032...@posting.google.com>...
> "Vladimir Kouznetsov" <vladimir....@ngrain.com> wrote in message
> news:<1066m08...@corp.supernews.com>...
>
> > bool b = true;
>
> > struct X {
> > int const& r;
> > X(): r(b? 0: 1)
>
> Just for the record, this is undefined behavior. The temporary used to
> initialized the reference ceases to exist at the end of the constructor.

Doesn't binding a temporary to a const reference keep the temporary
alive until the reference goes out of scope?
In this case when the instance of X is destroyed.

Doug Harrison

unread,
Mar 30, 2004, 2:20:41 PM3/30/04
to
Vladimir Kouznetsov wrote:

>Sorry for my poor presentment. If I understand correctly you and James claim
>that compiler can apply certain kinds of optimization to volatile objects:
>
>struct X
>{
> X();
> int const& r;
>}
>
>volatile X x;
>
>void f()
>{
> int r = x.r; // a
> r = x.r; // b
>}
>
>According to you (b) can be removed by compiler because standard C++ doesn't
>allow changing value of a reference in a volatile object. According to James
>(a) even can be replaced with assignment of a constant. Is my understanding
>correct so far?

I'd say yes.

>My concern is that according to C(6.7.3/6) which I believe also is
>applicable to C++ "An object that has volatile-qualified type may be
>modified in ways unknown to the implementation". Therefore neither of the
>optimizations is allowed in standard C/C++. Compiler has to generate code
>that re-reads all the relevant parts of the object - the bit representation
>of the reference in that case. Therefore using volatile objects doesn't
>affect semantics of optimized programs, it only can affect performance.
>
>I hope now it's more clear. If it is I appreciate your comments.

Can you find a well-defined way to reseat a reference? If not, then
you can throw the standard out anyway, as you get to make up your own
rules as soon as you invoke undefined behavior. Consider the (mostly)
analogous pointer case:

struct X
{
    X();
    int* const p;
};

volatile X x;

Above, x.p is an object, and there is a type "int* const volatile", so
presumably the compiler is obligated to load x.p every time it's used.
However, per 7.1.5.1/4, modification of x.p is undefined, so how
obligated is it, really? Per 1.3.12, it can do whatever it wants,
including ignoring volatile and treating x.p as a constant.

--
Doug Harrison
d...@mvps.org

Hyman Rosen

unread,
Mar 30, 2004, 7:29:19 PM3/30/04
to
Vladimir Kouznetsov wrote:
> the bit representation of the reference

References don't have a "bit representation". They are
not objects. They can't be reseated, even if they are
in volatile objects.

Vladimir Kouznetsov

unread,
Mar 31, 2004, 6:23:00 PM3/31/04
to
"Doug Harrison" <d...@mvps.org> wrote in message
news:frth60l57uqds8861...@4ax.com...

> >My concern is that according to C(7.6.3/6) which I believe also is
> >applicable to C++ "An object that has volatile-qualified type may be
> >modified in ways unknown to the implementation".
>
> Can you find a well-defined way to reseat a reference? If not, then
> you can throw the standard out anyway, as you get to make up your own
> rules as soon as you invoke undefined behavior. Consider the (mostly)
> analogous pointer case:

This is where the contradiction is (for me anyway): there is no standard
compliant way to reseat a reference, and yet something can modify a volatile
object with absolutely no respect for the rest of the standard in a standard
compliant way (?!). Am I misinterpreting what the standard says? Maybe the
"ways unknown to the implementation" actually involve undefined behavior?

>
> struct X
> {
> X();
> int* const p;
> };
>
> volatile X x;
>
> Above, x.p is an object, and there is a type "int* const volatile", so
> presumably the compiler is obligated to load x.p every time it's used.
> However, per 7.1.5.1/4, modification of x.p is undefined, so how
> obligated is it, really? Per 1.3.12, it can do whatever it wants,
> including ignoring volatile and treating x.p as a constant.

Interesting. But still there is something that bothers me. Volatile aside
for a moment:

new(&x) X();

doesn't change the const pointer; it terminates the old object and constructs
a new one, as per 3.8/1.

So if something can change a volatile object in a standard
disrespectful/compliant way, it won't violate 7.1.5.1/4.

> Doug Harrison

Thank you for your comments - I'm learning something new.

thanks,
v

Vladimir Kouznetsov

unread,
Mar 31, 2004, 6:51:52 PM3/31/04
to
"Hyman Rosen" <hyr...@mail.com> wrote in message
news:10806587...@master.nyc.kbcfp.com...

> Vladimir Kouznetsov wrote:
> > the bit representation of the reference
>
> Rferences don't have a "bit representation". They are
> not objects. They can't be reseated, even if they are
> in volatile objects.

The standard says it's not specified, which in my opinion means that the
implementation should decide depending on the situation, and that using a
reference as a data member of a class whose instance is declared volatile
calls for storage.
I may be misinterpreting the standard, so feel free to disprove my
superstition.

thanks,
v

Vladimir Kouznetsov

unread,
Mar 31, 2004, 6:54:26 PM3/31/04
to
Thank you very much, James, I think I understand that better now. I still
have some doubts, though, so if you haven't been bored already let me
continue. Not that it is particularly important - the last time I used
volatile was about 10 years ago, when I was programming for DOS - so if you
don't feel like continuing, that's fine.

<ka...@gabi-soft.fr> wrote in message
news:d6652001.04032...@posting.google.com...

> > struct X
> > {
> > X();
> > int const& r;
> > }
>
> > volatile X x;
>
> Note well: the meaning of volatile here is that all of the sub-objects
> of x are volatile, applied recursively until we arrive at the lowest
> level objects. In this case, however, x doesn't contain any
> sub-objects, so the volatile has no meaning.

I'm not sure I understand why that is true. Why do you think that the object
itself is not affected? What if the object has virtual functions?

> The compiler is allowed to suppose that the reference refers to whatever
> was used to initialize it. A reference isn't an object, and doesn't
> (conceptually, at least) occupy space. A reference itself cannot be
> volatile (nor const). With regards to the semantics of volatile, it
> cannot be modified externally (in a way not knowable to the compiler)
> because conceptually it doesn't have any existance externally.
>
> See in particular §3.9.3/3: "[...] each non-static, non-reference data
> member of a volatile-qualified class object is volatile-qualified
> [...]" and §8.3.2/3: "it is unspecified whether or not a reference
> requires storage."


Whether or not a reference occupies memory is not specified, so I take it it
can if circumstances call for it.

> More generall, the intention (clearly stated) of the C++ committee was
> that volatile have the same meaning as it does in C. All discussions of
> volatile and const in C refer to objects, however. The C++ standard
> says explicitely that references are not objects.

I failed to find the place in the standard. Could you please direct me?

> James Kanze GABI Software mailto:ka...@gabi-soft.fr

thanks,
v

Maciej Sobczak

unread,
Mar 31, 2004, 7:05:19 PM3/31/04
to
Hi,

Doug Harrison wrote:

> struct X
> {
> X();
> int* const p;
> };
>
> volatile X x;
>
> Above, x.p is an object, and there is a type "int* const volatile", so
> presumably the compiler is obligated to load x.p every time it's used.
> However, per 7.1.5.1/4, modification of x.p is undefined, so how
> obligated is it, really? Per 1.3.12, it can do whatever it wants,
> including ignoring volatile and treating x.p as a constant.

I'm not so sure about it.
According to 7.1.5.1/4, attempting to modify a const object yields
undefined behavior, but it applies to *your* attempts, expressed in
code. volatile means that the value can be changed by means undetectable
by an implementation.
If you combine const with volatile:

const volatile int i;

you get the effect of not being able to change i by yourself, but at the
same time being able to see changes made by means external to
implementation.
A hardware input device is a common example.

Now:

struct X
{
    X();
    int* const p;
};

p is a pointer value that you cannot modify (or you get UB if you try
hard enough), but (after adding volatile) that can be modified by
external means, like a hardware input device.

Does it make any sense for the member?
With placement new - constructing X in the device-specific address space
- why not?

Am I missing something?

(of course, a reference is a different beast - it is not an object)
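An illustrative sketch of the const volatile case described above (not from
the original post; the register address is invented): the program may only
read the value, yet each read must really go to memory, because the
hardware can change it.

// Hypothetical read-only input register at a made-up address.
const volatile unsigned& input =
    *reinterpret_cast<const volatile unsigned*>(0xFFFF0080u);

unsigned sample_twice()
{
    unsigned a = input;   // first load
    unsigned b = input;   // second load -- may legitimately differ
    // input = 0;         // would be ill-formed: the lvalue is const
    return a ^ b;
}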

--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/

ka...@gabi-soft.fr

unread,
Apr 1, 2004, 7:15:00 AM4/1/04
to
Doug Harrison <d...@mvps.org> wrote in message
news:<frth60l57uqds8861...@4ax.com>...

> Can you find a well-defined way to reseat a reference?

He did just that. His reference was a member of an object with a user
defined constructor (but with a trivial destructor). He reconstructed
the object. According to §3.8/4 "A program may end the lifetime of any
object by reusing the storage which the object occupies [...]."

His example does raise an interesting point with regards to volatile
(and const): the volatile attribute affects the object; when you reuse
the storage, the object ceases to exist, and another object replaces it.
(And volatile and const do not apply during construction and
destruction.)

Forget about volatile and references for an instant, and consider:

#include <iostream>
#include <new>

struct C
{
    int value ;
    C( int i ) : value( i ) {}
} ;

C const aC( 1 ) ;

int
main()
{
    std::cout << aC.value << '\n' ;
    new ( const_cast< C* >( &aC ) ) C( 2 ) ;
    std::cout << aC.value << '\n' ;
    return 0 ;
}

Legal? If not, why not? The const_cast, itself, is legal. Modifying
a const object is illegal, of course, but in this case, as soon as we
invoke the constructor, the previously existing const object ceases to
exist, and we start constructing (in raw memory) a new object, which
will be const as soon as it is constructed. But in the constructor (or
the destructor), an object is never const (nor volatile).

> If not, then you can throw the standard out anyway, as you get to make
> up your own rules as soon as you invoke undefined behavior. Consider
> the (mostly) analogous pointer case:

> struct X
> {
> X();
> int* const p;
> };

> volatile X x;

> Above, x.p is an object, and there is a type "int* const volatile", so
> presumably the compiler is obligated to load x.p every time it's used.
> However, per 7.1.5.1/4, modification of x.p is undefined, so how
> obligated is it, really?

The volatile obliges it. How is this case any different from the
"extern const volatile int real_time_clock;" used as an example in
§6.7.3/10 of the C standard? In fact, the presence of a non-trivial
constructor also obliges it, according to §3.8/4 of the C++ standard.
Except when applied to a POD, const is pretty much only an indication of
intent in C++; in particular, const doesn't guarantee that the apparent
value of the object can't change, since the object itself can change.

Now that I think of it: is this restriction limited to non-POD types?
Suppose in my example above, I replace the class C with a simple int?
Can I use "new (&object) int( 42 )" to modify the object? I have a
sneaking suspicion that the C++ standard says yes, although I doubt that
that was the intent; consider the implications with regards to using a
"const int" in an integral constant expression, for example:

#include <iostream>
#include <new>

int const dim = 42 ;

void
dontDoThis( void const* p )
{
    new ( const_cast< void* >( p ) ) int( 24 ) ;
}

int
main()
{
    dontDoThis( &dim ) ;
    char c[ dim ] ;
    std::cout << sizeof( c )
              << " == "
              << dim
              << "?\n" ;
    return 0 ;
}

(Maybe somebody would like to put in a defect report? IMHO, changing
the first sentence of §3.8/4 to start "A program may end the lifetime of
any non-const object ..." would be an adequate fix, but I've not
considered the implications of volatile.)

--


James Kanze GABI Software mailto:ka...@gabi-soft.fr

Conseils en informatique orientée objet/ http://www.gabi-soft.fr
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

ka...@gabi-soft.fr

unread,
Apr 1, 2004, 7:19:31 AM4/1/04
to
kz...@myrealbox.com (Motti) wrote in message
news:<173891e8.04032...@posting.google.com>...

> ka...@gabi-soft.fr wrote in message
> news:<d6652001.04032...@posting.google.com>...
> > "Vladimir Kouznetsov" <vladimir....@ngrain.com> wrote in
> > message news:<1066m08...@corp.supernews.com>...

> > > bool b = true;

> > > struct X {
> > > int const& r;
> > > X(): r(b? 0: 1)

> > Just for the record, this is undefined behavior. The temporary used
> > to initialized the reference ceases to exist at the end of the
> > constructor.

> Doesn't binding a temporary to a const reference keep the temporary
> alive until the reference goes out of scope?

It does, except when it doesn't. In this case, we have an exception to
the exception.

> In this case when the instance of X is destroyed.

§12.2/5:
The second context is when a reference is bound to a temporary. The
temporary to which the reference is bound or the temporary that is
the complete object to a subobject of which the temporary is bound
persists for the lifetime of the reference except as specified
below. A temporary bound to a reference member in a constructor's
ctor-initializer persists until the constructor exits. [...]

While I'm at it, since I'm reading the standard particularly carefully
at the moment, there is something strange just before the example in the
same paragraph:

In addition, the destruction of temporaries bound to references
shall take into account the ordering of destruction of objects with
static or automatic storage duration; that is, if obj1 is an object
with static or automatic storage duration created before the
temporary is created, the temporary shall be destroyed before obj1
is destructed; if obj2 is an object with static or automatic storage
duration created after the temporary is created, the temporary shall
be destroyed after obj2 is destroyed.

Literally, this means that:

void
f()
{
    C c1 ;                  // #0
    C const& c2 = C( 32 ) ; // #1
    static C c3 ;           // #2
}

The temporary bound to c2 at #1 cannot be destroyed before the static c3
is destroyed. I'm pretty sure that this is not what was intended: no
compiler implements it; in fact, it isn't possible to implement it,
since the same sentence requires that the temporary in #1 be destroyed
before c1 is destroyed. (There's also the fact that the temporary
should be constructed each time the function is executed. Only the
first temporary is constructed before c3, however, and should last until
c3 is destructed.)

I don't have an easy correction for the text, however. About the
simplest solution I see is to duplicate the text, once for references
and variables with static duration, and once for those with automatic.

--
James Kanze GABI Software mailto:ka...@gabi-soft.fr
Conseils en informatique orientée objet/ http://www.gabi-soft.fr
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

ka...@gabi-soft.fr

unread,
Apr 1, 2004, 6:39:52 PM4/1/04
to
"Vladimir Kouznetsov" <vladimir....@ngrain.com> wrote in message
news:<106kegl...@corp.supernews.com>...

> <ka...@gabi-soft.fr> wrote in message
> news:d6652001.04032...@posting.google.com...
> > > struct X
> > > {
> > > X();
> > > int const& r;
> > > }

> > > volatile X x;

> > Note well: the meaning of volatile here is that all of the
> > sub-objects of x are volatile, applied recursively until we arrive
> > at the lowest level objects. In this case, however, x doesn't
> > contain any sub-objects, so the volatile has no meaning.

> I'm not sure I understand why is that true. Why do you think that the
> object itself is not affected? What if the object has virtual
> functions?

I should have been clearer. Volatile does apply to the object with
regards to the type system -- you can't assign an X volatile* to an X*,
for example, without using a const_cast. Volatile doesn't have any real
semantics applied to a class type, however, except that the non-static
non-reference members are volatile. In the end, any semantics of
volatile involve something called "access", and at the lowest level, the
level at which volatile operates, you cannot directly access a class
type.

On some machines, this may also be true for other types, like double,
and on such machines, volatile probably doesn't have any significant
semantics for double either. Nevertheless, the language treats double
as an "atomic" type -- you cannot portably access parts of a double.

At the language level, one could argue that since I can assign classes,
there is such a thing as accessing an object with class type. However,
the standard defines such assignment (in the case of the implicit
assignment operator) as member by member copy; again, the semantic
effect of volatile applies to the members, and not to the object as a
whole.

I think that this partially explains why the volatile doesn't affect
reference members -- the semantics of volatile concern "access" (for
some implementation-defined definition of access). By definition, you
cannot access the reference itself, so volatile can have no meaning for
it. (I don't know if this is the real justification or not. It is just
an idea which occurs to me. Food for thought, so to speak.)

> > The compiler is allowed to suppose that the reference refers to
> > whatever was used to initialize it. A reference isn't an object,
> > and doesn't (conceptually, at least) occupy space. A reference
> > itself cannot be volatile (nor const). With regards to the
> > semantics of volatile, it cannot be modified externally (in a way
> > not knowable to the compiler) because conceptually it doesn't have
> > any existance externally.

> > See in particular §3.9.3/3: "[...] each non-static, non-reference
> > data member of a volatile-qualified class object is
> > volatile-qualified [...]" and §8.3.2/3: "it is unspecified whether
> > or not a reference requires storage."

> Does reference occupy memory or does not is not specified, so I take
> it it can if circumstances call.

In practice, when used as a function parameter or as a member of a
class, a reference will always be implemented as a pointer, and will
occupy memory. But while the standard does try to recognize existing
practice, and define a language that is in fact implementable, the
formal specifications are independent of implementation aspects.
Formally, a reference need not occupy memory. Ever. (Formally, of
course, the compiler can use whatever memory it needs. The return
address of a function isn't an object in C++, and there is certainly no
requirement that it occupy memory. But I can't imagine an
implementation where it doesn't really exist and occupy memory.)

The difference is subtle. Too subtle, perhaps. Under the as if rule,
nothing ever needs to occupy memory, unless I do something that the
implementation doesn't know how to implement without having the object
occupy memory. An object of class type has an address, and so must
occupy memory. But if I never take its address, and the compiler is
able to optimize away all of my uses of the object, it is in no way
required to actually allocate memory for the object. And in fact, if
the compiler is incapable of implementing the semantics of reference
without allocating memory, it must allocate memory for the reference.
The only real difference is that the semantics of reference are
intentionally limited in such a way as to make it impossible for a
conforming program to determine whether it occupies memory.
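An illustrative, purely observational sketch of that practical point (not
from the original post; the struct names are invented): on typical
implementations a reference member is stored as a hidden pointer, although
the standard does not promise it.

#include <iostream>

struct WithRef { int& r; };
struct WithPtr { int* p; };

int main()
{
    // Commonly prints the same size twice, because the reference is stored
    // as a hidden pointer; the standard leaves the representation
    // unspecified.
    std::cout << sizeof(WithRef) << ' ' << sizeof(WithPtr) << '\n';
    return 0;
}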

> > More generall, the intention (clearly stated) of the C++ committee
> > was that volatile have the same meaning as it does in C. All
> > discussions of volatile and const in C refer to objects, however.
> > The C++ standard says explicitely that references are not objects.

> I failed to find the place in the standard. Could you please direct me?

It seems to be "common and accepted knowledge" amongst the standard
specialists; I've heard it very often, and have always believed it.
However, I can't find a clear and explicite statement of it either,
although there are a number of places where it is strongly implied. To
begin with, the definition of an object in §1.8 says that "An object is
a region of storage." Which implies that it must occupy memory (modulo
optimizations under the as if rule, of course); the fact that a
reference is not required to occupy memory would seem to exclude it from
being an object. Other than that, every time that the standard means
for something to apply to both objects and references, it says "objects
and references" -- if a reference were an object, it would just say
"objects". Consider for example §3.7/4: after talking about the storage
duration of "objects", the standard says that what it is saying applies
to references as well. Also §3.5/2: "A name is said to have linkage
when it might denote the same object, reference, [...]" No need to
mention references if they were objects.

On the other hand, there are places where it would seem that references
are objects. In particular, §8.5/5:

To zero-initialize storage for an object of type T means:
[...]
-- if T is a reference type, no initialization is performed.

In this case, the standard rather obviously does consider references
objects.

Every time I've actually discussed the issue with someone who was active
in the writing of the standard, they have said that references aren't
objects. So I will continue to suppose that this is the intent.

Note that with regards to my previous posting, there are some open
issues concerning it. In particular, it is proposed that reusing the
storage of a class type containing a reference invalidates all pointers or
references to the original object, so that using such a pointer or
reference becomes undefined behavior. (Core issue 89. Personally, I'd
go one step farther, and extend §3.8/9 to say that "Creating a new
object at the storage location that a const object, or an object of
class type which contains a non-static reference member, occupies,
[...]". I see no imperative for supporting such coding tricks.

(I don't remember exactly, but I think some of my examples infringed on
§3.8/9.)

--


James Kanze GABI Software mailto:ka...@gabi-soft.fr

Conseils en informatique orientée objet/ http://www.gabi-soft.fr
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

John Potter

unread,
Apr 1, 2004, 6:41:59 PM4/1/04
to
On 1 Apr 2004 07:15:00 -0500, ka...@gabi-soft.fr wrote:

> Forget about volatile and references for an instance, and consider:

> struct C
> {
> int value ;
> C( int i ) : value( i ) {}
> } ;

> C const aC( 1 ) ;

> int
> main()
> {
> std::cout << aC.value << '\n' ;
> new ( const_cast< C* >( &aC ) ) C( 2 ) ;
> std::cout << aC.value << '\n' ;
> return 0 ;
> }

> Legal?

No.

> If not, why not?

3.8/9. Creating a new object where a const object is or was is
undefined behavior.

This makes the rest of the post meaningless.

Since references are not objects and are not required to use storage,
3.8/9 does not apply to them.

John

John Potter

unread,
Apr 1, 2004, 6:49:15 PM4/1/04
to
On 1 Apr 2004 07:15:00 -0500, ka...@gabi-soft.fr wrote:

> Doug Harrison <d...@mvps.org> wrote in message
> news:<frth60l57uqds8861...@4ax.com>...

> > Can you find a well-defined way to reseat a reference?

> He did just that. His reference was a member of an object with a user
> defined constructor (but with a trivial destructor). He reconstructed
> the object. According to §3.8/4 "A program may end the lifetime of any
> object by reusing the storage which the object occupies [...]."

I looked back through the thread but could not find any place where he
reconstructed the object. Did I miss it or did you mean that he could
reconstruct the object reseating the reference?

I eventually found his reconstruction of an object with an int*const
member.

In any case, TC1 modified 3.8/7 to prohibit reconstructing an object
with a const or reference member.

John

Hyman Rosen

unread,
Apr 1, 2004, 6:53:22 PM4/1/04
to
ka...@gabi-soft.fr wrote:
> Legal?
No.

> If not, why not?
Because 3.8/9 says so.

Hyman Rosen

unread,
Apr 1, 2004, 6:54:45 PM4/1/04
to
Vladimir Kouznetsov wrote:
> The standard says it's not specified, which in my opinion means that
> implementation should decide that depending on situation and using a
> reference as a data member of a class whose instance is declared volatile
> calls for a storage.
> I may be misinterpreting the standard so feel free to disprove my
> superstition.

I would say that given the following
struct A { int m, &r; A() : r(m) { } };
the implementation is perfectly free to use no representation at all
for A::r, and simply turn all uses of A::r into A::m.

Doug Harrison

unread,
Apr 2, 2004, 3:16:56 AM4/2/04
to
Maciej Sobczak wrote:

>I'm not so sure about it.
>According to 7.1.5.1/4, attempting to modify a const object yields
>undefined behavior, but it applies to *your* attempts, expressed in
>code. volatile means that the value can be changed by means undetectable
>by an implementation.
>If you combine const with volatile:
>
>const volatile int i;
>
>you get the effect of not being able to change i by yourself, but at the
>same time being able to see changes made by means external to
>implementation.
>A hardware input device is a common example.
>
>Now:
>
>struct X
>{
> X();
> int* const p;
>};
>
>p is a pointer value that you cannot modify (or you get UB if you try
>hard enough), but (after adding volatile) that can be modified by
>external means, like hardware input device.

I guess if that wasn't the case, "const volatile" would be pretty
useless. However, as this thread concerns volatile's relationship with
multithreading, I'd like to clarify my post. If you take the position
that the standard doesn't talk about threads, and thus everything is
undefined in a multithreaded C++ program, then there's no point in
talking about any of this. Otherwise, there's the question, how would
a multithreaded program modify x.p? One way would be to cast const
away, and that would be subject to 7.1.5.1/4. Now, for the sake of
argument, take multithreading out of the picture and consider the
single-threaded case:

struct X
{
    X();
    int* const p;
};

volatile X x;

void f()
{
    const_cast<volatile int*&>(x.p) = 0;
}

According to 7.1.5.1/4, this is undefined. Surely, if it's undefined
in the ST case, it can't magically become well-defined in the MT case.

What about recreating x with placement new, as has been suggested in
other messages? That would appear to be undefined in many cases per
3.8/9 and in general by:

http://www.comeaucomputing.com/iso/cwg_defects.html#89

--
Doug Harrison
d...@mvps.org

Doug Harrison

unread,
Apr 2, 2004, 3:17:22 AM4/2/04
to
ka...@gabi-soft.fr wrote:

>He did just that. His reference was a member of an object with a user
>defined constructor (but with a trivial destructor). He reconstructed
>the object. According to §3.8/4 "A program may end the lifetime of any
>object by reusing the storage which the object occupies [...]."

Have you seen this:

http://www.comeaucomputing.com/iso/cwg_defects.html#89

<q>
89. Object lifetime does not account for reference rebinding
Section: 3.8 basic.life Status: TC1 Submitter: AFNOR
Date: 27 Oct 1998
...
Proposed Resolution (10/00):

Add a new bullet to the list of restrictions in 3.8 basic.life
paragraph 7, following the second bullet ("the new object is of the
same type..."):

the type of the original object is not const-qualified, and, if a
class type, does not contain any non-static data member whose type is
const-qualified or a reference type, and
</q>

>His example does raise an interesting point with regards to volatile
>(and const): the volatile attribute affects the object; when you reuse
>the storage, the object ceases to exist, and another object replaces it.
>(And volatile and const to not apply during construction and
>destruction.)
>
>Forget about volatile and references for an instance, and consider:
>
> struct C
> {
> int value ;
> C( int i ) : value( i ) {}
> } ;
>
> C const aC( 1 ) ;
>
> int
> main()
> {
> std::cout << aC.value << '\n' ;
> new ( const_cast< C* >( &aC ) ) C( 2 ) ;
> std::cout << aC.value << '\n' ;
> return 0 ;
> }
>
>Legal? If not, why not? The const_cast, itself, is legal. Modifying
>a const object is illegal, of course, but in this case, as soon as we
>invoke the constructor, the previously existing const object ceases to
>exist, and we start constructing (in raw memory) a new object, which
>will be const as soon as it is constructed. But in the constructor (or
>the destructor), an object is never const (nor volatile).

The object aC has static storage duration and is declared const.
Therefore, the program is undefined per 3.8/9.

>> struct X
>> {
>> X();
>> int* const p;
>> };
>
>> volatile X x;
>
>> Above, x.p is an object, and there is a type "int* const volatile", so
>> presumably the compiler is obligated to load x.p every time it's used.
>> However, per 7.1.5.1/4, modification of x.p is undefined, so how
>> obligated is it, really?
>
>The volatile obliges it. How is this case any different from the
>"extern const volatile int real_time_clock;" used as an example in
>§6.7.3/10 of the C standard? In fact, the presence of a non-trivial
>constructor also obliges it, according to §3.8/4 of the C++ standard.
>Except when applied to a POD, const are pretty much only indications of
>intent in C++; in particular, const doesn't guarantee that the apparent
>value of the object can't change, since the object itself can change.

I still think modification of x.p is undefined. Please see my reply to
Maciej Sobczak.

--
Doug Harrison
d...@mvps.org

Vladimir Kouznetsov

unread,
Apr 2, 2004, 4:34:41 PM4/2/04
to
"Hyman Rosen" <hyr...@mail.com> wrote in message
news:10808341...@master.nyc.kbcfp.com...

> I would say that given the following
> struct A { int m, &r; A() : r(m) { } };
> the implementation is perfectly free to use no representation at all
> for A::r, and simply turn all uses of A::r into A::m.

That was not possible under the version of the standard before TC1 because
it didn't forbid reseating references using placement new. Thanks to John
Potter for pointing out that TC1 closed that hole.

thanks,
v

Vladimir Kouznetsov

unread,
Apr 4, 2004, 8:53:30 AM4/4/04
to
"Doug Harrison" <d...@mvps.org> wrote in message
news:7eno601q61jclfedb...@4ax.com...

> single-threaded case:
>
> struct X
> {
> X();
> int* const p;
> };
>
> volatile X x;
>
> void f()
> {
> const_cast<volatile int*&>(x.p) = 0;
> }
>
> According to 7.1.5.1/4, this is undefined. Surely, if it's undefined
> in the ST case, it can't magically become well-defined in the MT case.

Yes, it's undefined according to the standard, and that doesn't matter when
we are talking about it wrt MT, because MT is not standard. I guess the
point Maciej and I are trying to make is this: f() is a way known to the
implementation, and we don't consider it at all; there is some way "unknown
to the implementation" which is not described in standard C++ but which
presumably can do all sorts of things to the variable. So even if changing a
const object is forbidden by the standard, and the hole allowing a reference
to be reseated is now closed by TC1 (thanks to you and to John Potter for
pointing that out), I think there should be additional restrictions on what
can happen to volatile objects, or else optimizations don't seem possible.

> Doug Harrison


thanks,
v

Vladimir Kouznetsov

unread,
Apr 4, 2004, 8:56:06 AM4/4/04
to
Thanks a lot, James. I adjusted my understanding of certain things in C++. I
guess I was under a wrong impression wrt references due to the pre-TC1
standard. Now I think that the intention was to strip any traces of identity
from references to disallow reseating them. The example I used was just
exploiting a hole in the earlier version of the Standard.
A reference is information about a link between two objects, so it has to be
stored somewhere; of course references always occupy some memory, but not in
the sense the C++ standard is concerned about - this memory is not accessible
by a program. I thought that "is not specified" was kind of a variant of the
"as if" rule - if the implementation can do without storage, that's fine. Now
I think that the difference between references and objects is fundamental.
Objects require storage, and therefore identity, and the content of the
storage can be changed using that identity, but the implementation can remove
certain redundancies under the "as if" rule. References, on the other hand,
don't have identity and therefore don't require storage (in the "normal"
sense), but the implementation can use some storage if that is convenient.

thanks,
v

Motti

unread,
Apr 4, 2004, 7:05:22 PM4/4/04
to
ka...@gabi-soft.fr wrote in message news:<d6652001.04033...@posting.google.com>...

> kz...@myrealbox.com (Motti) wrote in message
> news:<173891e8.04032...@posting.google.com>...

> > Doesn't binding a temporary to a const reference keep the temporary


> > alive until the reference goes out of scope?
>
> It does, except when it doesn't. In this case, we have an exception to
> the exception."
>

> $S12.2/5:
> The second context is when a reference is bound to a temporary. The
> temporary to which the reference is bound or the temporary that is
> the complete object to a subobject of which the temporary is bound
> persists for the lifetime of the reference except as specified
> below. A temporaty bound to a reference member in a constructor's
> ctor-initializer persists until the constructor exits. [...]


This seems rather arbitrary. Is there a reason for making an
initialization list a special case?

A reason other than introducing a new gotcha that is ;o)

David Olsen

unread,
Apr 5, 2004, 5:34:56 AM4/5/04
to
Motti wrote:
> ka...@gabi-soft.fr wrote in message news:<d6652001.04033...@posting.google.com>...
>>
>>$S12.2/5:
>> The second context is when a reference is bound to a temporary. The
>> temporary to which the reference is bound or the temporary that is
>> the complete object to a subobject of which the temporary is bound
>> persists for the lifetime of the reference except as specified
>> below. A temporary bound to a reference member in a constructor's
>> ctor-initializer persists until the constructor exits. [...]
>
> This seems rather arbitrary. Is there a reason for making an
> initialization list a special case?

In this case the temporary object is constructed within the context of
the constructor. Since temporary objects normally live on the stack,
having it survive beyond the end of the constructor would be difficult.
But the reference member to which the temporary is bound lives as long
as the object that contains it is alive, which is (almost) always longer
than the constructor.

This is best demonstrated with some code (which is designed to
demonstrate a point in as little code as possible and is not intended to
be an example of good style):

struct A {
    ~A() { /* do something important */ }
};
struct B {
    B();
    const A &r;       // reference member, bound in the ctor-initializer
};
B::B() : r(A()) { }   // r is bound to a temporary A
B *f() {
    return new B();
}
void g() {
    B *b = f();
    delete b;         // where would the temporary's destructor have to run?
}

In B's constructor, the reference member B.r is bound to a temporary A
object. If this temporary were to survive as long as B.r, then it
would have to be destroyed after the "delete b;" in function 'g'. But
'g' can't know anything about the temporary because 'g' and B's
constructor may be in different translation units. Information about
the temporary can't be stored inside the B object itself because the
definition of B may be seen by translation units that don't see B's
constructor and therefore don't know about the temporary. I can't think
of any reasonable way for the compiler to arrange for the temporary to
be destroyed at the "correct" time.

So the initialization list is a special case because anything else would
put an undue burden on the compiler.
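
For contrast, here is a sketch of what the general rule does guarantee
outside a ctor-initializer (this snippet is mine, not part of the original
example):

#include <iostream>

struct A {
    ~A() { std::cout << "~A\n"; }
};
struct B {
    B() : r(A()) { }        // the temporary dies when this constructor exits
    const A &r;
};
void h() {
    const A &local = A();   // general rule: this temporary lives until the
    B b;                    //   closing brace of h
    // here b.r already dangles, while 'local' is still valid
}                           // the temporary bound to 'local' is destroyed here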

--
David Olsen
qg4h9...@yahoo.com

ka...@gabi-soft.fr

unread,
Apr 5, 2004, 9:39:29 AM
to
John Potter <jpo...@falcon.lhup.edu> wrote in message
news:<ccUac.6512$NL4....@newsread3.news.atl.earthlink.net>...

> On 1 Apr 2004 07:15:00 -0500, ka...@gabi-soft.fr wrote:

> > Forget about volatile and references for an instance, and consider:

> > struct C
> > {
> > int value ;
> > C( int i ) : value( i ) {}
> > } ;

> > C const aC( 1 ) ;

> > int
> > main()
> > {
> > std::cout << aC.value << '\n' ;
> > new ( const_cast< C* >( &aC ) ) C( 2 ) ;
> > std::cout << aC.value << '\n' ;
> > return 0 ;
> > }

> > Legal?

> No.

> > If not, why not?

> 3.8/9. Creating a new object where a const object is or was is
> undefined behavior.

Yes. I didn't notice this until after my posting. I think I treated it
in a later posting.

Move the const to the int, instead of having it apply to the entire
object, and the question remains (or does it -- do sub-objects count as
objects in 3.8/9?).
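
Spelled out, the variant in question looks something like this (my
transcription of the suggestion, not code taken from the thread):

#include <new>

struct C
{
    int const value ;               // const moved onto the member
    C( int i ) : value( i ) {}
} ;

C aC( 1 ) ;                         // the complete object itself is not const

void f()
{
    new ( &aC ) C( 2 ) ;            // placement new over an object containing
}                                   // a const sub-object -- covered by 3.8/9?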

As Doug Harrison points out, there is a DR which would solve this.

--
James Kanze GABI Software mailto:ka...@gabi-soft.fr
Conseils en informatique orientée objet/ http://www.gabi-soft.fr
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

ka...@gabi-soft.fr

unread,
Apr 5, 2004, 9:50:51 AM
to
kz...@myrealbox.com (Motti) wrote in message
news:<173891e8.04040...@posting.google.com>...

> ka...@gabi-soft.fr wrote in message
> news:<d6652001.04033...@posting.google.com>...
> > kz...@myrealbox.com (Motti) wrote in message
> > news:<173891e8.04032...@posting.google.com>...

> > > Doesn't binding a temporary to a const reference keep the
> > > temporary alive until the reference goes out of scope?

> > It does, except when it doesn't. In this case, we have an exception
> > to the exception.

> > $S12.2/5:
> > The second context is when a reference is bound to a temporary. The
> > temporary to which the reference is bound or the temporary that is
> > the complete object to a subobject of which the temporary is bound
> > persists for the lifetime of the reference except as specified
> > below. A temporary bound to a reference member in a constructor's
> > ctor-initializer persists until the constructor exits. [...]

> This seems rather arbitrary. Is there a reason for making an
> initialization list a special case?

Yes. Consider:

class T { public: ~T() ; T( int ) ; } ;

class C
{
public:
C() : r( 5 ) {}
C( T const& i ) : r( i ) {}

private:
T const& r ;
} ;

Two questions:

1. Where should the compiler allocate the temporary bound to r in the
default constructor?

2. How should the compiler determine whether what is bound to r needs
to be destructed in the destructor?

If you have good answers to these questions, I'm sure that the committee
would be interested, and would be willing to remove the exception.
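
To make the second question concrete (a sketch of my own, with the members
made accessible and the class renamed C2 to avoid clashing with the example
above):

struct T
{
    T( int ) {}
    ~T() {}
} ;

class C2
{
public:
    C2() : r( 5 ) {}               // r bound to a compiler-created temporary
    C2( T const& i ) : r( i ) {}   // r bound to an object owned by the caller

private:
    T const& r ;
} ;

void h( T const& t )
{
    C2 a ;          // if the temporary had to live as long as a.r, something
    C2 b( t ) ;     //   would have to destroy it when 'a' dies -- yet 'a'
}                   //   and 'b' have exactly the same layout, so a destructor
                    //   for C2 could not tell the two cases apart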

--
James Kanze GABI Software mailto:ka...@gabi-soft.fr
Conseils en informatique orientée objet/ http://www.gabi-soft.fr
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
