
memcpy() and volatiles...


Peter Seebach

Dec 17, 1998
So, does anyone have any opinions about expected or surprising behavior
from using memcpy to access volatile data? Assume you have a volatile
object which is able to count accesses to it; does anything imply a number
of accesses made by memcpy?

-s
--
Copyright 1998, All rights reserved. Peter Seebach / se...@plethora.net
C/Unix wizard, Pro-commerce radical, Spam fighter. Boycott Spamazon!
Send me money - get cool programs and hardware! No commuting, please.
Visit my new ISP <URL:http://www.plethora.net/> --- More Net, Less Spam!

Dave Hansen

Dec 17, 1998
On Thu, 17 Dec 1998 22:16:34 GMT, se...@plethora.net (Peter Seebach)
wrote:

>So, does anyone have any opinions about expected or surprising behavior
>from using memcpy to access volatile data? Assume you have a volatile
>object which is able to count accesses to it; does anything imply a number
>of accesses made by memcpy?
>

Well, let's see what the standard says:

7.11.2.1 ... The memcpy function copies n characters from the
object pointed to by s2 into the object pointed to by s1. If
copying takes place between objects that overlap, the behavior is
undefined.

Well, nothing there. Nothing anywhere that I can see. I don't think
I'd be surprised by anything, except maybe locations that were skipped
entirely.

If I needed to declare an object volatile, I don't think I'd pass the
handling of that object off to a library function...
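For example, if you want every byte read to go through a volatile-qualified lvalue instead of through memcpy, a loop like the following sketch would do it (the function name is illustrative, not from any standard):

```c
#include <stddef.h>

/* A sketch of a copy whose source is read through a
 * volatile-qualified lvalue, one byte per access, so the
 * number of reads of the source is exactly n. */
void copy_from_volatile(void *dst, const volatile void *src, size_t n)
{
    unsigned char *d = dst;
    const volatile unsigned char *s = src;

    while (n-- > 0)
        *d++ = *s++;    /* each read here is a volatile access */
}
```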

Regards,

-=Dave
Just my (10-010) cents
I can barely speak for myself, so I certainly can't speak for B-Tree.
Change is inevitable. Progress is not.

Francis Glassborow

Dec 18, 1998
In article <6pfe2.3282$WZ6.7...@ptah.visi.com>, Peter Seebach
<se...@plethora.net> writes

>So, does anyone have any opinions about expected or surprising behavior
>from using memcpy to access volatile data? Assume you have a volatile
>object which is able to count accesses to it; does anything imply a number
>of accesses made by memcpy?

I believe that most volatile behaviour is implementation dependent. I
would be surprised if such code never produced any surprises.

Francis Glassborow Chair of Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation

Norman Diamond

Dec 18, 1998
In article <w5gJCXC9...@robinton.demon.co.uk>, Francis Glassborow <fra...@robinton.demon.co.uk> writes:
>In article <6pfe2.3282$WZ6.7...@ptah.visi.com>, Peter Seebach
><se...@plethora.net> writes
>>So, does anyone have any opinions about expected or surprising behavior
>>from using memcpy to access volatile data? Assume you have a volatile
>>object which is able to count accesses to it; does anything imply a number
>>of accesses made by memcpy?
>
>I believe that most volatile behaviour is implementation dependent.

Of course. There is nothing special about library functions in this
respect. Even in this code:
volatile int i, j = 2;
i = j;
there is no way to know the number of read-modify-write cycles on i
other than implementation documentation. For example, the implementation
might make int 32 bits for some reason, even though the data bus transfers
128 bits, so in order to modify i, it is necessary to fetch i and 96 other
bits and then store the modified version. This might even do a redundant
fetch and store on j.

More problematic is whether a library function uses a non-volatile-qualified
lvalue to access an object defined with a volatile-qualified type. I think
the standard says that such an access yields undefined behaviour. But it
isn't clear on whether a call to a library function involves this kind of
access or not, and whether it's using an lvalue this way or not. From the
standard's point of view, even when a library function can be coded in C,
it still isn't the same as being part of the C program, at least not always.

>I would be surprised if such code never produced any surprises.

Uh, right. If you were telling the truth then your statement is indeed
true. Not quite self-fulfilling, but, ummmm. Please direct followups to
alt.fan.hofstadter :-)

--
<< If this were the company's opinion, I would not be allowed to post it. >>
"I paid money for this car, I pay taxes for vehicle registration and a driver's
license, so I can drive in any lane I want, and no innocent victim gets to call
the cops just 'cause the lane's not goin' the same direction as me" - J Spammer

Lawrence Kirby

Dec 18, 1998
In article <367A4558...@usa.net>
psh...@cs.auckland.ac.nz "Parker Shaw" writes:

>Just one more thing to notice:
>
>Not only the behavior of memcpy matters,
>but also that of the cache.

The cache is essentially irrelevant since it is the job of the compiler
to produce the appropriate code that meets the required semantics,
cache or no cache. As far as memcpy is concerned it is *not* defined
to access objects using volatile semantics. C90 7.11.1 talks about
"arrays of character type", not "arrays of volatile character type"
and the first two parameters of memcpy() are not pointers to volatile types.

The question is whether something like:

volatile int x;
int y = 42;

memcpy((int *)&x, &y, sizeof x);

results in undefined behaviour just as this does

*(int *)&x = y; /* Undefined */

C90 6.5.3 says "If an attempt is made to refer to an object defined with
a volatile-qualified type through use of an lvalue with non-volatile-
qualified type, the behaviour is undefined".

The problem is that standard library functions aren't defined in terms
of C code and lvalues exist purely in C source code as far as the
standard is concerned. So it depends on whether you interpret the
description of memcpy() as accessing the objects as if through
lvalues, or whether this case is undefined due to the lack of any
appropriate definition for it in the standard.

--
-----------------------------------------
Lawrence Kirby | fr...@genesis.demon.co.uk
Wilts, England | 7073...@compuserve.com
-----------------------------------------


Parker Shaw

Dec 19, 1998
Just one more thing to notice:

Not only the behavior of memcpy matters,
but also that of the cache.

--
________________________________________________
Parker Shaw | Make it correct
psh...@cs.auckland.ac.nz | Make it elegant
________________________________________________

Parker Shaw

Dec 19, 1998
I still believe the behaviour of the cache matters. Peter has a
volatile object which is able to count accesses to it, so, for
example:

int foo(void) {
    volatile int obj;
    int a;

    a = obj;
    a = obj;

    return a;
}

With a cache, only one read reaches obj; without a cache, two reads do.
Only the C compiler knows that obj is volatile; the cache doesn't know
that. So, if we have an object that changes on read, we must make it
non-cacheable. This is not the same as sharing a variable on a
multi-processor system.

Francis Glassborow

Dec 19, 1998
In article <367B3C50...@usa.net>, Parker Shaw <parke...@usa.net>
writes

>With a cache, only one read reaches obj; without a cache, two reads do.
>Only the C compiler knows that obj is volatile; the cache doesn't.
>So, if we have an object that changes on read, we must make it
>non-cacheable. This is not the same as sharing a variable on a
>multi-processor system.

No, the compiler must do that. Caching values is only a form of
optimisation and accesses (whatever they may be) to volatile variables
must not be optimised away.

Parker Shaw

Dec 20, 1998
> No, the compiler must do that. Caching values is only a form of
> optimisation and accesses (whatever they may be) to volatile variables
> must not be optimised away.

No, you misunderstood what I meant.
Let's have a simple example:

volatile int obj;

int foo(void) {
    int a;

    a = obj;
    a = obj;

    return a;
}

The compiled x86 code is something like:
...
mov ax, [obj]
mov ax, [obj]
...

Only the first mov instruction will drive the CPU to
access main memory. For the second mov, the
CPU will NOT initiate another memory read cycle if
caching is enabled for obj.

You are all discussing this at a higher level. We don't
mean the same thing.

Francis Glassborow

Dec 20, 1998
In article <367C8533...@usa.net>, Parker Shaw <parke...@usa.net>
writes

>int foo(void) {
> int a;
> a=obj;
> a=obj;
>}
>
>The compiled x86 code is something like:
>...
>mov ax, [obj]
>mov ax, [obj]

But a change to obj's value should have invalidated the cache so
presumably this is OK. If it isn't then the compiler must not compile
the code that way.

>...
>
>Only the first mov instruction will drive the CPU to
>access the main memory. For the second mov, the
>CPU will NOT initiate another memory read cycle if
>cache is enabled on obj.
>
>Discussions from all of you are on the higher level. We don't
>mean the same thing.

Francis Glassborow Chair of Association of C & C++ Users

Geoffrey KEATING

Dec 21, 1998
Parker Shaw <parke...@usa.net> writes:

> > No, the compiler must do that. Caching values is only a form of
> > optimisation and accesses (whatever they may be) to volatile variables
> > must not be optimised away.
>
> No, you misunderstood what I mean.
> Let's have a simple example:
>
> volatile int obj;
>
> int foo(void) {
> int a;
> a=obj;
> a=obj;
> }
>
> The compiled x86 code is something like:
> ...
> mov ax, [obj]
> mov ax, [obj]
> ...
>
> Only the first mov instruction will drive the CPU to
> access the main memory. For the second mov, the
> CPU will NOT initiate another memory read cycle if
> cache is enabled on obj.
>
> Discussions from all of you are on the higher level. We don't
> mean the same thing.

It's implementation defined what an "access" is. If the compiler
wants to say that a store instruction is an access, it can do that and
still comply with the C standard.

In practice, this is what most compilers do. If you don't want the
cache to be involved, you should set that piece of memory to be
write-through, or uncached, or whatever suits. You'll have to do
something OS-dependent to get the right piece of physical memory
involved, anyway.

--
Geoff Keating <Geoff....@anu.edu.au>

Parker Shaw

Dec 22, 1998
> But a change to obj's value should have invalidated the cache so
> presumably this is OK. If it isn't then the compiler must not compile
> the code that way.

Yes. Since the object changes by itself, it should inform the CPU to
invalidate the cache, or we can just tell the CPU to make the object
un-cacheable. The compiler usually can't do much in either situation.

"Change on read" doesn't comply with the specification of memory;
it is memory-mapped I/O. So this is in fact not a problem of the language.

Parker Shaw

Dec 22, 1998
> You'll have to do something OS-dependent to get the right piece
> of physical memory involved, anyway.

100% agree.
