I understand that in C++, volatile objects (those non-primitive type
instances qualified with 'volatile') can actually call only those
member functions qualified as volatile.
But there is also a construct - const_cast< T* >( p ) - that can
convert a pointer to a volatile object into a non-volatile pointer,
which can then be used to invoke non-volatile methods on the class.
That being the case, what would be the significance of the
'volatile' qualifier on C++ member functions in particular,
if we are able to invoke the non-volatile functions by casting? Am I
missing part of the design picture here?
Please don't multi-post on Usenet; this probably belongs in the other
group you posted to.
--
Ian Collins.
> I am actually trying to get my feet wet in multi-threaded C++
> programming. While I am aware that the C++ standard does not talk
> about threads (at least, for now - in C++03), my question is more
> about the language and its usage than about anything thread-specific.
> Sorry if I posted by mistake.
> I understand that in C++, volatile objects (those non-primitive type
> instances qualified with 'volatile') can actually call only those
> member functions qualified as volatile.
More precisely: given a volatile-qualified lvalue, you can only
call volatile member functions on it.
Similar rules affect reference binding and implicit pointer
conversions. A volatile lvalue can only be bound to a reference
to volatile, and a pointer to volatile will not convert implicitly
to a pointer to non-volatile.
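For example (a minimal sketch; Widget and its member functions are
just made-up names to illustrate the rules):

struct Widget
{
    void plain() {}
    void tagged() volatile {}
} ;

void demo()
{
    Widget volatile w ;
    w.tagged() ;                    // OK: tagged() is volatile-qualified
    // w.plain() ;                  // error: plain() discards the volatile qualifier
    Widget volatile* vp = &w ;      // OK
    // Widget*      p  = vp ;       // error: no implicit volatile -> non-volatile conversion
    Widget volatile& r = w ;        // OK: binds to reference to volatile
    (void) vp ; (void) r ;
}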
> But there is also a construct - const_cast< T* >( p ) - that can
> convert a pointer to a volatile object into a non-volatile pointer,
> which can then be used to invoke non-volatile methods on the class.
The same rules apply as for const. If the actual object is
volatile, and it is accessed through a non-volatile lvalue, the
behavior is undefined.
Thus, for example (Toto being any class with a non-volatile
member function someFunction()):

class Toto { public: void someFunction() ; } ;

Toto volatile t1 ;
Toto t2 ;

void f( Toto volatile& t )
{
    const_cast< Toto& >( t ).someFunction() ;
}

// ...
f( t1 ) ;   // causes undefined behavior
f( t2 ) ;   // no problem, according to the standard.
> That being the case, what would be the significance of the
> 'volatile' qualifier on C++ member functions in particular,
> if we are able to invoke the non-volatile functions by casting? Am I
> missing part of the design picture here?
What is the significance of const, given that you can invoke
non-const functions by casting? The rules are the same.
More to the point, since you started out by talking about
threading: what is the significance of volatile with regard to
threading? Posix makes no guarantees concerning volatile and
threading; I don't know what Windows "guarantees", but the
current VC++ compiler doesn't generate anything that could
possibly be useful for threading. (Andrei once wrote an article
in which he used volatile to ensure correctly locked accesses,
but his code worked not because it used any of volatile's
semantics, but because it exploited the way volatile works in
the type system.)
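Very roughly, the idea was something like the following sketch
(the names here are mine, not from the article, and the mutex is
Posix only to make it concrete):

#include <pthread.h>

// The shared object is declared volatile, so its non-volatile members
// cannot be called directly; the only way in is through LockedPtr,
// which takes the lock and then casts the volatile away.
template< typename T >
class LockedPtr
{
public:
    LockedPtr( T volatile& obj, pthread_mutex_t& mutex )
        : myObj( const_cast< T& >( obj ) )
        , myMutex( &mutex )
    {
        pthread_mutex_lock( myMutex ) ;
    }
    ~LockedPtr()
    {
        pthread_mutex_unlock( myMutex ) ;
    }
    T* operator->() const
    {
        return &myObj ;
    }
private:
    T&               myObj ;
    pthread_mutex_t* myMutex ;
} ;

// Usage (SharedQueue and theMutex are hypothetical):
//     SharedQueue volatile theQueue ;
//     ...
//     LockedPtr< SharedQueue >( theQueue, theMutex )->push( x ) ;

The synchronization still comes entirely from the mutex; the
volatile qualifier just makes it a compile-time error to touch the
object without going through LockedPtr.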
--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Actually I think that VC++ generates proper memory barriers for
accesses to atomic volatile objects. I've never used this feature,
but IIRC it is documented.
> (Andrei once wrote an article
> in which he used volatile to ensure correctly locked accesses,
> but his code worked not because it used any of volatiles
> semantics, but because it exploited the way volatile works in
> the type system.)
More correctly, it happened to work in the specific compiler he was
using (and IIRC he explicitly noted it). There is no (portable)
guarantee that volatile has any meaning for multithreaded code or is
in any way useful for multithreading.
BTW, C++0x will likely have a detailed memory model for threading, but
it gives no new meanings to volatile.
HTH,
gpd
Thanks for the replies. So, if I understand correctly, can we
consider volatile to be more of a hint to the compiler, something
the programmer should never make any assumptions about?
>
> Thanks for the replies. So, if I understand correctly, can we
> consider volatile to be more of a hint to the compiler, something
> the programmer should never make any assumptions about?
A competent programmer never makes any assumptions. As to the meaning
of volatile, you can probably trust what your compiler's documentation
tells you. But keep in mind that its original purpose was embedded
programming, where reading from a hardware register twice could return
two different values, both of which were important. That, of course,
has nothing to do with multi-threaded programs.
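In C++ terms, a minimal sketch of that original use (the address and
names here are invented):

unsigned char volatile* const
                    uartStatus = reinterpret_cast< unsigned char volatile* >( 0xFFFF0000 ) ;

void pollTwice()
{
    unsigned char first  = *uartStatus ;    // e.g. "no character yet"
    unsigned char second = *uartStatus ;    // a character may have arrived in between
    // Without the volatile, the compiler would be free to assume
    // first == second and drop the second read.
    (void) first ; (void) second ;
}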
--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: A Tutorial and Reference"
(www.petebecker.com/tr1book)
> Actually I think that VC++ generates proper memory barriers
> for accesses to atomic volatile objects.
I've looked at the generated code from VC++ 8, and they weren't
there. There may be some option to cause the compiler to create
them; I've not studied the issue further.
Microsoft did tell the standards committee once that this was
part of its (then future) specifications, so perhaps future
versions of the compiler will implement it.
> I've never used this feature, but IIRC it is documented.
As I said, I looked at the generated code.
> > (Andrei once wrote an article
> > in which he used volatile to ensure correctly locked accesses,
> > but his code worked not because it used any of volatiles
> > semantics, but because it exploited the way volatile works in
> > the type system.)
> More correctly, it happened to work in the specific compiler
> he was using (and IIRC he explicitly noted it). There is no
> (portable) guarantee that volatile has any meaning for
> multithreaded code or is anyway useful for multithreading.
There is a guarantee that it is taken into account by the type
system, and that's all his code (or at least the parts I looked
at) depended on. Memory synchronization was still done by mutex
locks.
> BTW, C++0x will likely have a detailed memory model for
> threading, but it gives no new meanings to volatile.
I know. Microsoft did vaguely propose defining volatile more
strictly, but it met with absolutely no support, and as far as I
know, they dropped it. (It was never more than an informal
suggestion made in an oral presentation anyway.)
> > Thanks for the replies. So, if I understand correctly, can we
> > consider volatile to be more of a hint to the compiler, something
> > the programmer should never make any assumptions about?
> A competent programmer never makes any assumptions. As to the meaning
> of volatile, you can probably trust what your compiler's documentation
> tells you.
And of course, the standard explicitly says that most of
volatile's semantics are implementation-defined anyway.
> But keep in mind that its original purpose was embedded
> programming, where reading from a hardware register twice
> could return two different values, both of which were
> important. That, of course, has nothing to do with
> multi-threaded programs.
At least on a Sparc, most compilers don't even support that much
with volatile. (I'm not up to date on how modern Intel
handles I/O, but the Sparc architecture specification states
explicitly that you need additional membar instructions for it
to work.)
> > Actually I think that VC++ generates proper memory barriers
> > for accesses to atomic volatile objects.
> I've looked at the generated code from VC++ 8, and they weren't
> there.
[...]
http://groups.google.com/group/comp.lang.c++.moderated/msg/e161a53de7c2290e
Humm...
> > > Thanks for the replies. So, if I understand correctly, can we
> > > consider volatile to be more of a hint to the compiler, something
> > > the programmer should never make any assumptions about?
> > A competent programmer never makes any assumptions. As to the meaning
> > of volatile, you can probably trust what your compiler's documentation
> > tells you.
> And of course, the standard explicitly says that most of
> volatiles semantics are implementation defined anyway.
> > But keep in mind that its original purpose was embedded
> > programming, where reading from a hardware register twice
> > could return two different values, both of which were
> > important. That, of course, has nothing to do with
> > multi-threaded programs.
> At least on a Sparc, most compilers don't even support that much
> with volatile. (I'm not up to date with how modern Intel
> handles I/O, but the Sparc architecture specifications states
> explicitly that you need additional membar instructions for it
> to work.)
Here is the Intel documentation:
http://groups.google.com/group/comp.arch/browse_frm/thread/2b848b8547373161
I have to admit that SPARC's 'membar XXX' instruction is way more versatile.
Humm, one more point... SPARC allows you to do naked interlocked operations.
For instance, notice that the memory barrier is an explicit instruction in the
CASX loop outlined in the code at the end of the following post:
http://groups.google.com/group/comp.arch/msg/04cb5e2ca2a7e19a
This construct is superior to the Intel model wrt the LOCK prefix... See, the
Intel instruction analog of CASX is CMPXCHG8B, and the naked version would be
CMPXCHG8B without the LOCK prefix. SPARC supports naked compare-and-swap...
Intel cannot do this. The Intel equivalent is LOCK CMPXCHG8B in a loop; in
other words, the full-blown #StoreLoad barrier is executed for every
compare-and-swap instruction executed...
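In portable terms, the distinction should map fairly directly onto the
atomics planned for C++0x. Assuming the std::atomic interface with
compare_exchange_weak and explicit fences (this is only a sketch of the
idea, not code for any shipping compiler):

#include <atomic>

std::atomic< long > counter( 0 ) ;

long fetchAndIncrement()
{
    long old = counter.load( std::memory_order_relaxed ) ;
    // Relaxed CAS loop: on SPARC this can be a naked CASX with no
    // implied barrier; on x86 the LOCK'd CMPXCHG carries the full
    // barrier whether we asked for it or not.
    while ( ! counter.compare_exchange_weak(
                old, old + 1,
                std::memory_order_relaxed,
                std::memory_order_relaxed ) ) {
        // 'old' has been reloaded with the current value; just retry.
    }
    // The #StoreLoad barrier is a separate, explicit statement, issued
    // only if the surrounding algorithm actually needs it.
    std::atomic_thread_fence( std::memory_order_seq_cst ) ;
    return old ;
}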
However, SPARC has its problems indeed:
http://groups.google.com/group/comp.arch/browse_frm/thread/071f8e0094e353e5
OUCH!
In my recent experience the volatile keyword is next to useless on most
compilers, embedded or not, if you want something portable. I find that
today's microprocessor designs have introduced a variety of contexts
that must be synchronized in order to achieve "true" visibility
(caches, pipelines, multiple core access to shared cache/RAM, etc).
On single-core PowerPC, for example, each access to a volatile variable
must be followed by a data pipeline flush in order for it to "take" to
hardware.
I've given up on the use of volatile and have defined a compiler- and
platform-specific library interface instead. It makes the main code
much easier to port.
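Roughly along these lines (all the names are invented for illustration;
each port supplies whatever the hardware actually needs -- a volatile
access, a compiler intrinsic, or inline assembler):

namespace hw {
    // The main code only ever calls this.
    void writeRegister( unsigned volatile* reg, unsigned value ) ;
}

#if defined( __GNUC__ ) && defined( __powerpc__ )
// PowerPC port: the store itself, then an eieio to push it out of the
// pipeline/store queue before anything else touches the device.
void hw::writeRegister( unsigned volatile* reg, unsigned value )
{
    *reg = value ;
    __asm__ __volatile__ ( "eieio" ::: "memory" ) ;
}
#else
// Default port: a plain volatile store, for platforms where that is enough.
void hw::writeRegister( unsigned volatile* reg, unsigned value )
{
    *reg = value ;
}
#endif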
-dr
Could you explain the purpose of volatile in embedded with an
example program?
I'm afraid I didn't get its purpose from your statement, which seems
to be based on practical experience...
> >> But keep in mind that its original purpose was embedded
> >> programming, where reading from a hardware register twice
> >> could return two different values, both of which were
> >> important. That, of course, has nothing to do with
> >> multi-threaded programs.
> > At least on a Sparc, most compilers don't even support that much
> > with volatile. (I'm not up to date with how modern Intel
> > handles I/O, but the Sparc architecture specifications states
> > explicitly that you need additional membar instructions for it
> > to work.)
> In my recent experience the volatile keyword is next to
> useless on most compilers, embedded or not, if you want
> something portable.
Almost by definition, anything involving volatile isn't
portable. There are a few exceptions where longjmp is
involved, but there are intentionally no useful semantics
defined by the standard for it. The *intent* is that the
compiler writers provide something that is useful in the context
targeted by that compiler. And thus, anything that counts on the
exact semantics will not be portable.
It's important to realize that the initial motivation for
volatile was memory mapped IO, which, of course, will never be
portable anyway.
> I find that today's microprocessor designs have introduced a
> variety of contexts that must be synchronized in order to
> achieve "true" visibility (caches, pipelines, multiple core
> access to shared cache/RAM, etc).
Embedded microprocessors as well? I know that this is a problem
on general purpose computers, from low-level PCs on up, but
I would have thought that the low end of the embedded processor
range wouldn't have such sophistication.
> On single-core PowerPC, for example, each access to a volatile
> variable must be followed by a data pipeline flush in order
> for it to "take" to hardware.
The PowerPC is a bit more than what I was thinking of. When I
last did such work, the predominant processor was the Intel
8051: 128 bytes of RAM, 2K of ROM. And a store instruction that went
directly to memory (and didn't "terminate", so that the next
instruction couldn't begin, until the write was finished).
> I've given up on the use of volatile and have defined a
> compiler- and platform-specific library interface instead. It
> makes the main code much easier to port.
I think you'd want to do this anyway. Where I would imagine
(hope?) that volatile would be useful is in the
platform-specific library---as an alternative, in certain cases,
to inline assembler (or writing the entire function in
assembler).
OK. I'll take a very concrete, albeit dated example. (Dated,
because I've not worked in this field for a long time. Note too
that this is all from memory---the exact details are probably
inaccurate, even though the general picture is correct.)
Consider the Intel 8259 serial interface controller. It's
accessed by means of two "registers", a data register, and a
command/status register; the command/status register is treated
as a command when it is written, and returns the status when it
is read. Imagine that on a certain machine, this controller is
installed with the registers memory mapped to the addresses
0xFFF00 and 0xFFF02 (or __FAR_PTR( 0xFFF0, 0 ) and __FAR_PTR(
0xFFF0, 2 )), with the command/status register as the first. (On
a typical compiler for the Intel 8086, the macro __FAR_PTR
converted a segment/offset pair to a far pointer.)
To initialize the chip, it is necessary to start by writing 0
three times to the command/status register, followed by a
configuration command (two bytes, I think). In the status
register, there was a bit which indicated that a character had
been received, and another which indicated that the chip was
ready to send. The code for handling this might look something
like:
unsigned char volatile* const
                    commandReg = __FAR_PTR( 0xFFF0, 0 ) ;
unsigned char volatile* const
                    statusReg  = __FAR_PTR( 0xFFF0, 0 ) ;  // same register: write = command, read = status
unsigned char volatile* const
                    dataReg    = __FAR_PTR( 0xFFF0, 2 ) ;

// ...

// initialization.
// (firstInitByte and secondInitByte are the device configuration
// values, defined elsewhere.)
void
initialize8259()
{
    *commandReg = 0 ;
    *commandReg = 0 ;
    *commandReg = 0 ;
    *commandReg = firstInitByte ;
    *commandReg = secondInitByte ;
}

unsigned char
read8259()
{
    while ( (*statusReg & 0x01) == 0 ) {
        // busy wait until the "character received" bit is set
    }
    return *dataReg ;
}

void
write8259(
    unsigned char ch )
{
    while ( (*statusReg & 0x04) == 0 ) {
        // busy wait until the "ready to send" bit is set
    }
    *dataReg = ch ;
}
Note that without the volatile, any decent compiler will notice
that in the initialize8259 routine, you overwrite the same
memory several times without ever reading the value, and so
suppress all of the writes except the last. Similarly, in the
wait loops in read8259 and write8259, the compiler will see that
there is nothing in the loop which can change the value read
from "memory", and so will simply move it into a CPU register
before executing the loop, and test the value there each time
through the loop, rather than rereading the value from "memory"
(which is in fact the register on the 8259 chip) each time in
the loop.
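In other words, if statusReg were declared without the volatile,
the optimizer would be entitled to rewrite read8259 into something
equivalent to the following (a sketch of the effective
transformation, not actual compiler output):

unsigned char
read8259_as_optimized()
{
    unsigned char status = *statusReg ;     // single read, hoisted out of the loop
    while ( (status & 0x01) == 0 ) {
        // nothing in here can change 'status', so if the bit wasn't
        // already set, this loop never terminates
    }
    return *dataReg ;
}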
Such busy loops would never be used in a modern OS, of course,
but device controllers have become much more complicated as
well, and it's not at all rare for a byte oriented device to
require outputting (writing) several bytes to the same address
for a single command, or inputting (reading) several bytes from
the same address to get the complete status.
Thanks James Kanze, I also found this article, explaining the same
concept, very useful:
http://www.netrino.com/Embedded-Systems/How-To/C-Volatile-Keyword
Unfortunately, a programmer that actually gets paid makes assumptions
all the time. A competent programmer that actually gets paid makes
assumptions but hates it :)
The reason is that most useful APIs are not documented well enough to
both use them and avoid making assumptions about them (such as "I assume
that this function always does what it did today").
--
Tristan Wibberley
Any opinion expressed is mine (or else I'm playing devil's advocate for
the sake of a good argument). My employer had nothing to do with this
communication.
>
> On Fri, 2007-11-30 at 15:16 -0500, Pete Becker wrote:
>> On 2007-11-30 12:59:54 -0500, Rakesh Kumar <rakesh...@gmail.com> said:
>>
>>>
>>> Thanks for the replies. So, if I understand correctly, can we
>>> consider volatile to be more of a hint to the compiler, something
>>> the programmer should never make any assumptions about?
>>
>> A competent programmer never makes any assumptions. As to the meaning
>> of volatile, you can probably trust what your compiler's documentation
>> tells you. But keep in mind that its original purpose was embedded
>> programming, where reading from a hardware register twice could return
>> two different values, both of which were important. That, of course,
>> has nothing to do with multi-threaded programs.
>
> Unfortunately, a programmer that actually gets paid makes assumptions
> all the time. A competent programmer that actually gets paid makes
> assumptions but hates it :)
>
> The reason is that most useful APIs are not documented well enough to
> both use them and avoid making assumptions about them (such as "I assume
> that this function always does what it did today").
If you don't know what it does, don't use it. There's always another way.
> > Unfortunately, a programmer that actually gets paid makes assumptions
> > all the time. A competent programmer that actually gets paid makes
> > assumptions but hates it :)
> >
> > The reason is that most useful APIs are not documented well enough to
> > both use them and avoid making assumptions about them (such as "I assume
> > that this function always does what it did today").
>
> If you don't know what it does, don't use it. There's always another way.
>
"Another way" that makes the product cost five times more than the
market will pay for it. If the market wants slightly less confidence for
greatly reduced price a smart programmer, competent or not, gives it to
them. I offer stakeholders the choice, I'll take longer but be damned
sure it's right, or do it quickly with some uncertainty over what
happens in bizarre corner cases.
Unless you have the source of the OS, compiler and library routines then
prove its behaviour formally, you will be assuming stuff. The question
is what cost-confidence tradeoff your customers want.
> > > The reason is that most useful APIs are not documented
> > > well enough to both use them and avoid making assumptions
> > > about them (such as "I assume that this function always
> > > does what it did today").
> > If you don't know what it does, don't use it. There's always
> > another way.
Is there? If your software is going to run under Windows and do
anything useful, you're going to have to use the Windows API.
There's no other way. And I find that I'm always making
assumptions about some of its details. (I've yet to find any
really detailed specification of the threading guarantees, for
example, and to date, have just assumed "more or less like
Posix".) That's a special case, however, since...
> "Another way" that makes the product cost five times more than
> the market will pay for it.
Poorly specified software almost certainly means poorly written
software. Which in the end is more expensive than implementing
it yourself.
There are definitely tradeoffs involved, and Pete's statement
may be a bit too categorical. But from experience, using poorly
documented and poorly specified software is rarely a good
tradeoff: it leads to lower quality, and is usually more
expensive in the end anyway.
--
James Kanze (GABI Software) email:james...@gmail.com
> Poorly specified software almost certainly means poorly written
> software.
I once saw a quote, can't remember where:
"If a program has not been specified, it cannot be wrong, it can only be
surprising."
Yes, Windows is exactly the case I was thinking of. But even with POSIX,
the implementation may not be correct and you can't palm your customers
off with "It's the OS author's fault" because the customers might have
been using the platform for ages without a problem and say it's all your
fault, etc. It will depend on the customer.
> > "Another way" that makes the product cost five times more than
> > the market will pay for it.
>
> Poorly specified software almost certainly means poorly written
> software. Which in the end is more expensive than implementing
> it yourself.
Oh yes, but specs should only include "must be perfect" if they're
willing to pay for it.
> There are definitely tradeoffs involved, and Pete's statement
> may be a bit too categoric. But from experience, using poorly
> documented and poorly specified software is rarely a good
> tradeoff: it leads to lower quality, and is usually more
> expensive in the end anyway.
I agree, things should be specified (i.e. agreed), and that includes the
tolerances and their associated prices. But when the platform is a given
and the platform is poorly specified, you just have to use it and make
sure your customer will understand the implications for the
cost-confidence tradeoff.