--
Copyright 1998, All rights reserved. Peter Seebach / se...@plethora.net
C/Unix wizard, Pro-commerce radical, Spam fighter. Boycott Spamazon!
Send me money - get cool programs and hardware! No commuting, please.
Visit my new ISP <URL:http://www.plethora.net/> --- More Net, Less Spam!
>So, does anyone have any opinions about expected or surprising behavior
>from using memcpy to access volatile data? Assume you have a volatile
>object which is able to count accesses to it; does anything imply a number
>of accesses made by memcpy?
>
Well, let's see what the standard says:
7.11.2.1 ... The memcpy function copies n characters from the
object pointed to by s2 into the object pointed to by s1. If
copying takes place between objects that overlap, the behavior is
undefined.
Well, nothing there. Nothing anywhere that I can see. I don't think
I'd be surprised by anything, except maybe locations that were skipped
entirely.
If I needed to declare an object volatile, I don't think I'd pass the
handling of that object off to a library function...
Regards,
-=Dave
Just my (10-010) cents
I can barely speak for myself, so I certainly can't speak for B-Tree.
Change is inevitable. Progress is not.
I believe that most volatile behaviour is implementation dependent. I
would be surprised if such code never produced any surprises.
Francis Glassborow Chair of Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
Of course. There is nothing special about library functions in this
respect. Even in this code:
volatile int i, j = 2;
i = j;
there is no way to know the number of read-modify-write cycles on i
other than implementation documentation. For example, the implementation
might make int 32 bits for some reason, even though the data bus transfers
128 bits, so in order to modify i, it is necessary to fetch i and 96 other
bits and then store the modified version. This might even do a redundant
fetch and store on j.
More problematic is whether a library function uses a non-volatile-qualified
lvalue to access an object defined with a volatile-qualified type. I think
the standard says that such an access yields undefined behaviour. But it
isn't clear on whether a call to a library function involves this kind of
access or not, and whether it's using an lvalue this way or not. From the
standard's point of view, even when a library function can be coded in C,
it still isn't the same as being part of the C program, at least not always.
>I would be surprised if such code never produced any surprises.
Uh, right. If you were telling the truth then your statement is indeed
true. Not quite self-fulfilling, but, ummmm. Please direct followups to
alt.fan.hofstadter :-)
--
<< If this were the company's opinion, I would not be allowed to post it. >>
"I paid money for this car, I pay taxes for vehicle registration and a driver's
license, so I can drive in any lane I want, and no innocent victim gets to call
the cops just 'cause the lane's not goin' the same direction as me" - J Spammer
>Just one more thing to notice:
>
>Not only the behavior of memcpy matters,
>but also that of the cache.
The cache is essentially irrelevant since it is the job of the compiler
to produce the appropriate code that meets the required semantics,
cache or no cache. As far as memcpy is concerned it is *not* defined
to access objects using volatile semantics. C90 7.11.1 talks about
"arrays of character type", not "arrays of volatile character type"
and the first two parameters of memcpy() are not pointers to volatile types.
The question is whether something like:
volatile int x;
int y = 42;
memcpy((int *)&x, &y, sizeof x);
results in undefined behaviour just as this does
*(int *)&x = y; /* Undefined */
C90 6.5.3 says "If an attempt is made to refer to an object defined with
a volatile-qualified type through use of an lvalue with non-volatile-
qualified type, the behaviour is undefined".
The problem is that standard library functions aren't defined in terms
of C code and lvalues exist purely in C source code as far as the
standard is concerned. So it depends on whether you interpret the
description of memcpy() as accessing the objects as if through
lvalues, or whether this case is undefined simply because the
standard provides no appropriate definition for it.
--
-----------------------------------------
Lawrence Kirby | fr...@genesis.demon.co.uk
Wilts, England | 7073...@compuserve.com
-----------------------------------------
Not only the behavior of memcpy matters,
but also that of the cache.
--
________________________________________________
Parker Shaw | Make it correct
psh...@cs.auckland.ac.nz | Make it elegant
________________________________________________
void foo(void) {
    volatile int obj;
    int a;
    a = obj;
    a = obj;
}
With a cache, only one read reaches the object obj; without a cache,
two reads do. Only the C compiler knows that obj is volatile; the
cache doesn't. So if we have an object that changes on read, we must
make it non-cacheable, which is not the same thing as sharing a
variable on a multi-processor system.
No, the compiler must do that. Caching values is only a form of
optimisation and accesses (whatever they may be) to volatile variables
must not be optimised away.
> must not be optimised away.
No, you misunderstood what I mean.
Let's have a simple example:
volatile int obj;

void foo(void) {
    int a;
    a = obj;
    a = obj;
}
The compiled x86 code is something like:
...
mov ax, [obj]
mov ax, [obj]
...
Only the first mov instruction will drive the CPU to
access the main memory. For the second mov, the
CPU will NOT initiate another memory read cycle if
cache is enabled on obj.
Discussions from all of you are on the higher level. We don't
mean the same thing.
But a change to obj's value should have invalidated the cache so
presumably this is OK. If it isn't then the compiler must not compile
the code that way.
Francis Glassborow Chair of Association of C & C++ Users
> > No, the compiler must do that. Caching values is only a form of
> > optimisation and accesses (whatever they may be) to volatile variables
>
> > must not be optimised away.
>
> No, you misunderstood what I mean.
> Let's have a simple example:
>
> volatile int obj;
>
> void foo(void) {
> int a;
> a=obj;
> a=obj;
> }
>
> The compiled x86 code is something like:
> ...
> mov ax, [obj]
> mov ax, [obj]
> ...
>
> Only the first mov instruction will drive the CPU to
> access the main memory. For the second mov, the
> CPU will NOT initiate another memory read cycle if
> cache is enabled on obj.
>
> Discussions from all of you are on the higher level. We don't
> mean the same thing.
It's implementation defined what an "access" is. If the compiler
wants to say that a store instruction is an access, it can do that and
still comply with the C standard.
In practice, this is what most compilers do. If you don't want the
cache to be involved, you should set that piece of memory to be
write-through, or uncached, or whatever suits. You'll have to do
something OS-dependent to get the right piece of physical memory
involved, anyway.
--
Geoff Keating <Geoff....@anu.edu.au>
> the code that way.
Yes. Since the object changes itself, it should inform the CPU to
invalidate the cache, or we can just tell the CPU to make the object
uncacheable. The compiler usually can't do much in either situation.
"Change on read" doesn't comply with the specification of memory;
it is memory-mapped I/O. So this is in fact not a problem of the
language.
100% agree.