I'm sure there is a logical explanation for this, I have a VC 2005
Project and I have a method that returns a "double" to represent the size of
a file...
-------------
double getFileSize(HANDLE iHandle)
{
    ULARGE_INTEGER pULIStartPos;
    pULIStartPos.HighPart = 0;
    // GetFileSize returns the low 32 bits and writes the high 32 bits into HighPart.
    pULIStartPos.LowPart = GetFileSize(iHandle, &pULIStartPos.HighPart);
    return(pULIStartPos.QuadPart);    // QuadPart (a ULONGLONG) is implicitly converted to double
}
-------------
Up until now, this method has worked fine, never failed. Now for some
reason the following happens...
This method is returning the following value to me
360049216
This value is incorrect, it should actually be...
360049224
In fact, if I step through the debugger,
pULIStartPos.QuadPart = 360049224
But when it gets type-cast into a double, it loses 8. Any idea why?
I changed the following code just as a test...
-------------
...
    pULIStartPos.LowPart = GetFileSize(iHandle, &pULIStartPos.HighPart);
    long pLngStartPos = pULIStartPos.QuadPart;
    double pDblStartPos = pLngStartPos;
    return(pDblStartPos);
...
-------------
If I step through the debugger, pDblStartPos = 360049224!!! Great!!
Except that when I come to take 1 from the value, it jumps down by 8 rather
than by 1.
I understand that doubles should not be relied upon for floating point
arithmetic, but I am only dealing with whole numbers, so what on earth is going
on?
Many thanks for any help in advance!
Nick.
I can't immediately see what is wrong but...
>...
> I understand that doubles should not be relied upon for floating point
> arithmetic, but I am only dealing with whole numbers, so what on earth is
> going on?
...this is a complete misconception. If you are using a double, then you
are doing floating point arithmetic, and there *are* no "whole numbers".
3.00000000000000 and 3.00000100000000 are treated on exactly the same basis
and subject to exactly the same truncation errors. [Think of them as
0.300000000000000 × 10^1 and 0.300000100000000 × 10^1, with a possible error
around the final zero, and it may be easier to appreciate why there are no
whole numbers.]
If you want to manipulate integers then don't use floating point. If 32
bits aren't enough, then there's always __int64.
This is by far the best way to approach solving the problem.
Dave
--
David Webber
Author MOZART the music processor for Windows -
http://www.mozart.co.uk
For discussion/support see
http://www.mozart.co.uk/mzusers/mailinglist.htm
Many thanks for your help.
Changing from double to __int64 is going to result in quite a few
changes, which I will do if it is 100% necessary.
But I still don't understand what is happening in order for this error
to occur. Also, QuadPart is a ULONGLONG, and according to the
documentation...
#if !defined(_M_IX86)
typedef unsigned __int64 ULONGLONG;
#else
typedef double ULONGLONG;
#endif

So at some point ULONGLONG can be defined as a double anyway? So surely
doubles should work for this...
Anyways, I guess that converting to __int64 is, as you say, the easiest
solution.
Nick.
"David Webber" <da...@musical.demon.co.uk> wrote in message
news:eSI1%23OJZH...@TK2MSFTNGP04.phx.gbl...
> #if !defined(_M_IX86)
> typedef unsigned __int64 ULONGLONG;
> #else
> typedef double ULONGLONG;
> #endif
>
> So at some point ULONGLONG can be defined as a double anyway? So surely
> doubles should work for this...
I have never seen that bit of code before and am not sure under what
circumstances _M_IX86 might be undefined. But it *feels* like a sort of
get-out to me - a way of having a ULONGLONG even if __int64 is not
available (or something like that). My guess would be that on a modern PC
it will be __int64.
> Anyways, I guess that converting to __int64 is as you say, the easiest
> solution.
I have used unsigned __int64 all over the place for ages. I have a number
of collections of 64 objects, and I use the 64 bits of an unsigned __int64 as
a filter to define arbitrary subsets.
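Just to illustrate what I mean, here is a minimal sketch of that sort of bit
filtering (the names and the particular bits are invented for the example, not
taken from any real code):

#include <stdio.h>

int main()
{
    // One bit per object: bit n set means object n belongs to the subset.
    unsigned __int64 subsetA = 0, subsetB = 0;

    subsetA |= (unsigned __int64)1 << 0;     // objects 0, 1 and 40 in subset A
    subsetA |= (unsigned __int64)1 << 1;
    subsetA |= (unsigned __int64)1 << 40;

    subsetB |= (unsigned __int64)1 << 40;    // objects 40 and 63 in subset B
    subsetB |= (unsigned __int64)1 << 63;

    unsigned __int64 both = subsetA & subsetB;   // intersection in one AND
    printf("object 40 in both subsets: %s\n", ((both >> 40) & 1) ? "yes" : "no");
    return 0;
}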
So __int64 works and I suspect that's what your ULONGLONG is. And in fact
if ULONGLONG were defined as double, then casting it to one would be
unlikely to affect it!
If your function is returning a ULONGLONG, then why define the function type
as double instead of ULONGLONG?
Casting ULONGLONG to long (as in your second example) looks like it could be
disastrous - you can get serious truncation errors. So I'd do:
ULONGLONG getFileSize(HANDLE iHandle)
{
    ULARGE_INTEGER pULIStartPos;
    pULIStartPos.HighPart = 0;
    pULIStartPos.LowPart = GetFileSize(iHandle, &pULIStartPos.HighPart);
    return(pULIStartPos.QuadPart);
}
and just work with the result in that form.
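To see why the cast to long in the second example worries me, here is a rough
sketch with a purely hypothetical 5 GB file size (the value is made up for the
illustration):

#include <stdio.h>

int main()
{
    // Hypothetical size of a 5 GB file - more than any 32-bit long can hold.
    unsigned __int64 size = 5368709120ui64;    // 5 * 2^30 bytes

    long asLong = (long)size;                  // only the low 32 bits survive
    printf("%I64u bytes\n", size);             // 5368709120
    printf("%ld after the cast to long\n", asLong);   // 1073741824 - badly wrong
    return 0;
}

So for anything over 2 GB the long version silently reports the wrong size,
which is why I would keep the value in a 64-bit type.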
But if you really want to cast it, then do it absolutely explicitly. I
don't like it one bit, but
double getFileSize(HANDLE iHandle)
{
    ULARGE_INTEGER pULIStartPos;
    pULIStartPos.HighPart = 0;
    pULIStartPos.LowPart = GetFileSize(iHandle, &pULIStartPos.HighPart);
    double x = double(pULIStartPos.QuadPart);
    return x;
}
might help.
Floating point can exactly represent small integers, but what you have is
not a small integer. The number of significant digits of a floating point
is limited, and you've exceeded that limit. Think of it like scientific
notation:
3 = 3.000 * 10^0 (exactly representable)
3000 = 3.000 * 10^3 (exactly representable)
3456 = 3.456 * 10^3 (exactly representable)
3000000 = 3.000 * 10^6 (exactly representable)
3456789 ~ 3.457 * 10^6 (rounded)
Floating point works exactly like that, except that the exponent is base-2,
and the number of significant digits is in binary digits.
What you're doing should work. Are you observing this problem outside the
debugger? If not, you're debugging the debugger. Try to reproduce the
problem in a short console program. For example, the following works fine
for me:
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <stdio.h>
int main()
{
    ULARGE_INTEGER x;
    x.QuadPart = 360049224;     // the exact size the OP expects
    double d = x.QuadPart;      // implicit conversion to double
    printf("%f\n", d);          // should print 360049224.000000
    --d;
    printf("%f\n", d);          // should print 360049223.000000
}
Can you make this fail?
--
Doug Harrison
Visual C++ MVP
>
>"NickP" <a...@a.com> wrote in message
>news:%23obOQBJ...@TK2MSFTNGP05.phx.gbl...
>
>I can't immediately see what is wrong but...
>
>>...
>> I understand that doubles should not be relied upon for floating point
>> arithmetic, but I am only dealing with whole numbers, so what on earth is
>> going on?
>
>...this is a complete misconception. If you are using a double, then you
>are doing floating point arithmetic, and there *are* no "whole numbers".
Sure there are. There are lots of them, and they remain integers for simple
arithmetic as long as the arithmetic doesn't involve fractional parts. The
typical IEEE 64-bit double type has an integer range of +/- 2**53, which is
considerably more capacious than a 32-bit integer. In the absence of a
larger integer type, an IEEE double can be used to work with integers
larger than can fit into int.
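As a rough illustration (my own sketch, nothing to do with the OP's project),
here is where that exact-integer range actually runs out - a long way beyond
the file size in question:

#include <stdio.h>

int main()
{
    double limit = 9007199254740992.0;   // 2^53: up to here every integer is exact

    printf("%.0f\n", limit);             // 9007199254740992
    printf("%.0f\n", limit + 1.0);       // still 9007199254740992 - the spacing is now 2
    printf("%.0f\n", limit + 2.0);       // 9007199254740994

    // 360049224 is far below 2^53, so it is held exactly and arithmetic on it
    // behaves like integer arithmetic.
    printf("%.0f\n", 360049224.0 - 1.0); // 360049223
    return 0;
}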
>3.00000000000000 and 3.00000100000000 are treated on exactly the same basis
>and subject to exactly the same truncation errors. [Think of them as
>0.300000000000000 × 10^1 and 0.300000100000000 × 10^1, with a possible error
>around the final zero, and it may be easier to appreciate why there are no
>whole numbers.]
>
>If you want to manipulate integers then don't use floating point. If 32
>bits aren't enough, then there's always __int64.
>
>This is by far the best way to approach solving the problem.
Certainly, 64-bit integers are the best way to handle 64-bit integers.
After all, 2**64 > 2**53. I'd highly recommend using __int64 instead of
double to hold file sizes.
The value he's working with is well within the range of VC's double type.
Something else is going on.
"NickP" <a...@a.com> wrote in message
news:%23obOQBJ...@TK2MSFTNGP05.phx.gbl...
Many thanks for your help again, I shall remember all of this, that is
for sure.
I have converted all double references to __int64 and it all works great
now. Thanks for your time it's much appreciated and your explanations are
helpful too.
Nick.
"David Webber" <da...@musical.demon.co.uk> wrote in message
news:%23Ur92FL...@TK2MSFTNGP06.phx.gbl...
"Check what is in FP Ñontrol Word."
Sorry, what do you mean?
Nick.
"Alexander Grigoriev" <al...@earthlink.net> wrote in message
news:%236hD7QM...@TK2MSFTNGP06.phx.gbl...
> My guess is that internal FP precision is for some reason set to 24 bits
> ('float'). Check what is in FP Control Word.
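For what it's worth, the control word Alexander is talking about can be read
(and put back) with _controlfp from <float.h> on x86. A minimal sketch - the
comment about a possible culprit is only a guess:

#include <float.h>
#include <stdio.h>

int main()
{
    // Read the current floating-point control word without changing it.
    unsigned int cw = _controlfp(0, 0);

    switch (cw & _MCW_PC)
    {
    case _PC_24: printf("precision control: 24 bits (float)\n");       break;
    case _PC_53: printf("precision control: 53 bits (double)\n");      break;
    case _PC_64: printf("precision control: 64 bits (long double)\n"); break;
    }

    // If something else in the process (a Direct3D device created without
    // D3DCREATE_FPU_PRESERVE is a classic example) has dropped it to 24 bits,
    // the default 53-bit precision can be restored like this:
    _controlfp(_PC_53, _MCW_PC);
    return 0;
}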
In isolation it worked fine for me too. I have now changed my double
references to __int64, so hopefully this won't happen again.
Nick.
"Doug Harrison [MVP]" <d...@mvps.org> wrote in message
news:b7rav212pn2agh88r...@4ax.com...
I know the OP has moved on from this issue, but...
A double maintains 16 or so significant digits, so what he's doing *should*
work, as noted.
Floats, OTOH, only have 7 or so significant digits, and the results seen by
the OP suggest that he's losing significance around there.
So, there probably is no direct cast available from the ULONGLONG of
.QuadPart to a double. The compiler might be casting it first to a float,
and then to a double, hence resulting in loss of significant digits.
Maybe the original code would work if there were explicit casts on each
part of the ULARGE_INTEGER:

int main()
{
    ULARGE_INTEGER x;
    x.QuadPart = 360049224;
    // Each 32-bit half converted separately; 4.294967296E9 is 2^32.
    double d = (double)x.HighPart * 4.294967296E9 + (double)x.LowPart;
    printf("%f\n", d);
}
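As a quick check of that theory (just a sketch - whether the compiler really
goes via float is only a guess), forcing the value through a float reproduces
exactly the number the OP reported:

#include <stdio.h>

int main()
{
    double exact = 360049224.0;    // held exactly in a double (well below 2^53)
    float  f     = (float)exact;   // a float keeps only ~24 bits of mantissa

    printf("%.0f\n", exact);       // 360049224
    printf("%.0f\n", (double)f);   // 360049216 - precisely the value the OP saw
    return 0;
}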
"NickP" <a...@a.com> wrote in message
news:uwqqalMZ...@TK2MSFTNGP03.phx.gbl...
>>...this is a complete misconception. If you are using a double, then
>>you
>>are doing floating point arithmetic, and there *are* no "whole numbers".
>
> Sure there are. There are lots of them, and they remain integers for
> simple
> arithmetic as long as the arithmetic doesn't involve fractional parts.
I am afraid I view this very differently. (I don't derive this view from
anyone else in particular but with a PhD in applied maths I am licensed to
make up my own stories <g>.) So let me expound at a little greater length.
I'm sure you know everything I'm saying - but I just look at it differently,
and I think this has big advantages.
In real life, ie mathematics, there is the set of integers and the set of
real numbers. There are operations one can perform on those sets,
considering one's universe to consist only of the elements of those sets
(i.e. without being allowed numbers outside those sets). The integer is one
kind of animal, the real number is another, and they live in different zoos.
Integers are good for counting things (like the number of bits or bytes in
a file). Real numbers are good for things which can vary continuously
along a line.
Now you *can* make a 1-1 correspondence between the integers and an
infinitesimal subset of the real numbers so that 3 (from the integers)
corresponds with 3.000000000000 (recurring infinitely) from the reals, etc.
and that is fine. But thinking of them as being in all respects the same
can be the first step towards a big trap: they are utterly different
creatures and live in different zoos.
I find it immensely helpful to think of decimal representations of reals as
recurring infinitely. For example 5.000..../3.0000.... is 1.666666666
recurring infinitely. 6.0000..../3.00000... is 2.0000000 recurring
infinitely. Forgetting all the trailing zeroes in the latter case and just
writing "2" is a convention - but it lies on the road to the big trap.
Coming now to computers:
A given integer type can represent a subset of the integers within a certain
range (which depends on the type) for example [0,255] or [-128,127] or ....
Manipulating them is straightforward because the set is contiguous even
though it is of limited range.
A floating point type can represent a much more awkward subset of the reals.
In 8 bytes one can represent at most 2^64 different values scattered
miscellaneously (non-contiguously) along the real line. They are designed so
that if you
consider what you write to be 3.7 as 3.700000000 (recurring infinitely) then
there will be a representable one in the range defined from this number and
the one with the 16th or so significant digit raised or lowered by 1. And
that's the one you get when you write double x = 3.7.
Now the IEEE people have designed the system so that when you write double
x=3; you do in fact get the number 3.00000 (recurring infinitely) which was
the target of our 1-1 mapping from the integers. But still I find it
helpful at all times to think of this as a real number, and NOT as an
integer. Even if I am planning not to turn down the road to the trap, it
makes it easier not to do so. And entrances to this road are everywhere,
the easiest being of course
int a, b;
double x, y;
a = 3; b = a / 2;    // integer division: b is 1
x = a; y = x / 2;    // floating point division: y is 1.5

and

if( x == y )

and so on.
As you say, if we only had 32-bit integers, then double has a larger span of
values. But now that we have __int64 there is absolutely no reason to consider
using double to represent integers (which, when you look at it from my
viewpoint, is a kludge at best).
M.K.O'N's explanation of the error looks extremely plausible. If he is
correct, then it is an interesting new (to me) entrance to the trap.
If casting ULONGLONG to a double actually does cast it to a float first,
then at first sight there is no problem up to 10^38 as floats can go that
high.
But of course the set of nearest integers (however one stores them) to
every distinct value which can be stored by a float in the range [0,10^38]
does not include *all* the integers in that range because with 7 sig figs in
a float and 38 positive exponents you only have far fewer than 10^38 ways of
constructing different integers. A lot fewer than 38*10^7 in fact. [You
know it has to be less than the 2^31 ~ 2*10^9 different positive integers
you can store in 31 bits.]
So if you have integers in the range [0,10^38], cast to float and back to
integer then there will be big gaps in the set you end up with. The
non-contiguous nature of the representable subsets of reals has kicked in
and given you a non-contiguous set of integers.
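A tiny illustration of how big those gaps get (my own sketch, using a range
near 10^9):

#include <stdio.h>

int main()
{
    // Around 10^9 the spacing between adjacent floats is 2^(29-23) = 64,
    // so consecutive integers collapse onto the same float value.
    unsigned int survivors = 0;
    for (unsigned int i = 1000000000; i < 1000000064; ++i)
    {
        if ((unsigned int)(float)i == i)
            ++survivors;            // unchanged by the round trip through float
    }
    printf("%u of 64 consecutive integers survive the float round trip\n",
           survivors);
    return 0;
}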
I haven't fallen down this one myself, but then my philosophy that reals are
reals, and integers are integers keeps me well away from it. :-) I
recommend it!