
Constant expressions and mathematical functions


Arjen Markus
Oct 12, 2006, 3:48:23 AM

Hello,

I just ran into an odd portability problem:

program printpi
real, parameter :: pi = 4.0*atan(1.0)
write(*,*) pi
end program

This simple program fails with the CVF and Intel compilers, but g95
allows the use of atan() here.

This raises two questions:
- Are functions like atan() not allowed?
- Which functions _are_ allowed?

It seems counter-intuitive, as the expression above has a perfectly
simple interpretation and can be evaluated at compile time. Though I
realise I may be naive in these matters ;).

Regards,

Arjen

Brooks Moses
Oct 12, 2006, 4:04:32 AM

Arjen Markus wrote:
> I just ran into an odd portability problem:
>
> program printpi
> real, parameter :: pi = 4.0*atan(1.0)
> write(*,*) pi
> end program
>
> This simple program fails with the CVF and Intel compilers, but g95
> allows the use of atan() here.
>
> This raises two questions:
> - Are functions like atan() not allowed?
> - Which functions _are_ allowed?

Nope, functions like atan() are not allowed. At least, not in Fortran
95 -- the only elemental intrinsic functions allowed in initialization
expressions are the ones that are of type integer or character (and
where each argument is of type integer or character). No floating-point.

Fortran 2003 removes this restriction; this is presumably why g95 allows it.
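
A minimal sketch of where Fortran 95 draws the line (illustrative only;
a strict F95 compiler should reject just the commented-out line):

program init_expr
  implicit none
  integer, parameter :: n = max(3, 5)       ! integer elemental intrinsic: OK in F95
  integer, parameter :: l = len_trim('pi ') ! character argument, integer result: OK
  real, parameter :: half = 1.0/2.0         ! intrinsic operations on constants: OK
! real, parameter :: pi = 4.0*atan(1.0)     ! real elemental intrinsic: F2003 only
  print *, n, l, half
end program init_expr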

> It seems counter-intuitive, as the expression above has a perfectly simple
> interpretation and can be evaluated at compile time. Though I realise I may
> be naive in these matters ;).

What if you're using a compiler that's running on an i686 machine, but
cross-compiling an executable that will run on some IBM workstation that
supports 128-bit reals? Do you use the native i686 version of ATAN(),
which won't get nearly the precision that the workstation's version of
ATAN() will get? Or are you required to emulate the target's
floating-point hardware?

So, at least in that case, it does get tricky. The runtime environment
isn't guaranteed to be available at compile time, and the expression
really only has a simple interpretation at runtime.

- Brooks


--
The "bmoses-nospam" address is valid; no unmunging needed.

Arjen Markus
Oct 12, 2006, 4:22:18 AM

Brooks Moses wrote:

>
> Fortran 2003 removes this restriction; this is presumably why g95 allows it.
>
> > It seems counter-intuitive, as the expression above has a perfectly simple
> > interpretation and can be evaluated at compile time. Though I realise I may
> > be naive in these matters ;).
>
> What if you're using a compiler that's running on an i686 machine, but
> cross-compiling an executable that will run on some IBM workstation that
> supports 128-bit reals? Do you use the native i686 version of ATAN(),
> which won't get nearly the precision that the workstation's version of
> ATAN() will get? Or are you required to emulate the target's
> floating-point hardware?
>
> So, at least in that case, it does get tricky. The runtime environment
> isn't guaranteed to be available at compile time, and the expression
> really only has a simple interpretation at runtime.
>

I live a sheltered life, I guess; I have never done any
cross-compilation, as far as I remember. Yes, that does make sense.
Although:
- The cross-compiler might decide this computation should actually
be done at start-up of the program
- How does the F2003 standard solve that issue of cross-compilation?

Regards,

Arjen

glen herrmannsfeldt
Oct 12, 2006, 5:18:17 AM

Brooks Moses wrote:

(snip)

> What if you're using a compiler that's running on an i686 machine, but
> cross-compiling an executable that will run on some IBM workstation that
> supports 128-bit reals? Do you use the native i686 version of ATAN(),
> which won't get nearly the precision that the workstation's version of
> ATAN() will get? Or are you required to emulate the target's
> floating-point hardware?

Many compilers do software floating-point constant expression
evaluation. This was interesting when the Pentium FDIV bug came
out, and a test program using constants showed no bug.

I don't know how many do atan() separate from the run time atan()
routine.

-- glen

FX
Oct 12, 2006, 5:24:19 AM

> Many compilers do software floating-point constant expression
> evaluation. This was interesting when the Pentium FDIV bug came out,
> and a test program using constants showed no bug.
>
> I don't know how many do atan() separate from the run time atan()
> routine.

I think Brooks didn't say it was impossible, but only stated a reason why
it is more difficult to implement than meets the eye.

I think any compiler that is capable of cross-compiling has to do it in a
host-independent way. For example, gfortran uses MPFR for many such
compile-time evaluations of real constants.

--
FX

Jan Vorbrüggen
Oct 12, 2006, 5:38:50 AM

>> Or are you required to emulate the target's floating-point hardware?

Nothing else makes sense, anyway. What about initialization expressions that
don't rely on intrinsics, such as exponentiation? It also isn't restricted
to FP - the target could support integer or character kinds not natively
supported on the host.
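
For instance, a minimal sketch of that point (exponentiation by an
integer power is legal in an initialization expression even in Fortran
95, yet it still requires compile-time arithmetic in the target's
floating-point format):

program expo
  implicit none
  real, parameter :: tenth10 = 0.1**10   ! no intrinsic function in sight,
                                         ! but still target floating-point
                                         ! arithmetic at compile time
  print *, tenth10
end program expo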

> - How does the F2003 standard solve that issue of cross-compilation?

People realised that it was onerous not to be able to use these intrinsics
in these places, and that it was (perhaps no longer) onerous to support them
properly in the compiler.

I'll betcha that the restriction was already contentious when F90/F95 were
being defined. I'm surprised that CVF didn't support this as an extension.
Steve?

Jan

robert....@sun.com
Oct 12, 2006, 7:11:34 AM

One reason the earlier Fortran standards did not require
processors to support compile-time evaluation of all
the intrinsic functions is that the amount of code required
to do so is substantial. Recall that in 1991, a PC with
4 MB of RAM was considered big. Workstations usually
had 16 MB of RAM or less.

Also, there used to be a reluctance to evaluate floating-point
expressions at compile-time because different models of a
given machine might deliver different results. Intel's CPUs
were famous for delivering different results for the
hardware-implemented intrinsic functions with each generation of
chips. I recall seeing claims that different steppings of the
same generation of chips delivered different results, but I
have no personal experience of that happening. Integer
functions and operations tend to deliver consistent results,
regardless of the platform.

Bob Corbett

Richard E Maine
Oct 12, 2006, 11:33:59 AM

<robert....@sun.com> wrote:

> Also, there used to be a reluctance to evaluate floating-point
> expressions at compile-time because different models of a
> given machine might deliver different results.

Or, more obviously, there is the case of cross-compilation. The target
machine might not even have the same floating-point representation as
the compilation machine. In that case, you'd have had to emulate not
only the math library, but even fundamental floating-point operations.
That would have increased the required size of the compiler even more.
Yes, cross-compilation was a significant issue.

--
Richard Maine | Good judgment comes from experience;
email: my first.last at org.domain| experience comes from bad judgment.
org: nasa, domain: gov | -- Mark Twain

Richard E Maine
Oct 12, 2006, 11:47:51 AM

Arjen Markus <arjen....@wldelft.nl> wrote:

> I live a sheltered life, I guess; I have never done any
> cross-compilation, as far as I remember. Yes, that does make sense.
> Although:
> - The cross-compiler might decide this computation should actually
> be done at start-up of the program

That basically doesn't "work". Too many things can depend on
initialization expressions. If you start saying that initialization
expressions can be put off until start-up of the program, before long
you've basically moved the whole compiler into run-time. You have
basically destroyed the concept of compilation as a separate step. You
have something more like.... I think it is often called just-in-time
compilation or some such thing. While that can be done in principle, and
is done in some implementations of some languages (Java?), that's *NOT* a
way to keep the target execution environment small. It simply was not an
option in the environments and days in question.

> - How does the F2003 standard solve that issue of cross-compilation?

By being half a century later. Technology has changed. It actually does
make a difference. The Fortran standard didn't have to solve the issue;
it was solved elsewhere. Even if the target system might still be one
with very limited resources, the system you are compiling on won't be.
Cross-compilation is still of major importance. Almost all embedded
systems are done that way. (And that's actually quite a lot of systems;
I don't know the numbers, but I recall that the huge majority of
processors go into embedded systems, even if most of the people here
don't work on programming them.) But it doesn't have the same problems
any more. It is practical, and indeed common, to emulate the whole
target architecture on the workstation that hosts the development
environment.

glen herrmannsfeldt
Oct 12, 2006, 1:42:45 PM

Arjen Markus <arjen....@wldelft.nl> wrote:

> program printpi
> real, parameter :: pi = 4.0*atan(1.0)
> write(*,*) pi
> end program

I used to see, though not for a while now:

DATA PI/0./
IF(PI.EQ.0) PI=4.*ATAN(1.0)

It would then be initialized once (assuming a static
variable). The result would be the same even if it
was initialized each time.
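
In free form, the same trick might be packaged as a module (a sketch;
the names pi_mod and init_pi are made up):

module pi_mod
   implicit none
   real, save :: pi = 0.0
contains
   subroutine init_pi()
      ! Compute pi once at run time; harmless to call more than once.
      if (pi == 0.0) pi = 4.0*atan(1.0)
   end subroutine init_pi
end module pi_mod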

-- glen

Kevin G. Rhoads
Oct 12, 2006, 3:40:04 PM

real pi
double precision pid
real (kind=ridiculous) pi_ridiculous
C-------
pi = 3.14159265
pid = 3.141592653589793238d0
pi_ridiculous = 3.141592653589793238462643383279502884197169399375_ridiculous ! sorry, forgot the digits after that

glen herrmannsfeldt
Oct 12, 2006, 4:12:13 PM

Kevin G. Rhoads <kgrh...@alum.mit.edu> wrote:
> real pi
> double precision pid
> real (kind=ridiculous) pi_ridiculous
> C-------
> pi = 3.14159265
> pid = 3.141592653589793238d0
> pi_ridiculous = 3.141592653589793238462643383279502884197169399375_ridiculous

I once asked in comp.lang.c if it would be legal to have a floating point
type that would expand in precision as needed. I believe it was decided
that it wouldn't be legal.

I did once in a Mathematica test try

N[Pi,100000000]

the calculation isn't so bad, but formatting for display takes
a long time and a lot of memory.

(That is, 100,000,000 digits of pi.)

-- glen

John Harper
Oct 12, 2006, 5:26:24 PM

In article <1160639303.8...@c28g2000cwb.googlegroups.com>,

Arjen Markus <arjen....@wldelft.nl> wrote:
>
> real, parameter :: pi = 4.0*atan(1.0)

Why do that? The following gives the same answer and needs less typing:

real, parameter :: pi = acos(-1.0)

(Although that's standard-conforming f2003 but not f95, g95 allows it
even with the -std=f95 option.)

-- John Harper, School of Mathematics, Statistics and Computer Science,
Victoria University, PO Box 600, Wellington 6140, New Zealand
e-mail john....@vuw.ac.nz phone (+64)(4)463 5341 fax (+64)(4)463 5045

Richard E Maine
Oct 12, 2006, 5:46:25 PM

John Harper <har...@mcs.vuw.ac.nz> wrote:

> In article <1160639303.8...@c28g2000cwb.googlegroups.com>,
> Arjen Markus <arjen....@wldelft.nl> wrote:
> >
> > real, parameter :: pi = 4.0*atan(1.0)
>
> Why do that? The following gives the same answer and needs less typing:
>
> real, parameter :: pi = acos(-1.0)

Why do you say it gives the same answer? I know of nothing in the
Fortran standard that guarantees that. The pure mathematical answer is
the same, but that doesn't mean the computational answer is the same for
all math libraries. Ideally, it would be nice if math libraries gave
answers accurate to the last bit. Probably a lot more of them do today
than used to. But it isn't guaranteed.

Off-hand, I'd be a lot more confident of the answer with atan, just
because of the slope of the curve there. Seems to me like it would be a
lot easier for the acos answer to be off by more. Sure, it should be
possible for a math library to nail that one pretty well, but are you
going to count on it?

For my own part, I write out the digits instead of using any such
formula. I'm a lot more confident of the decimal-to-binary conversion
being accurate to the last bit than I am of the trig functions. And it
isn't as though I have to hand-type and check the darned thing
repeatedly. Once in pretty much forever is adequate. You could look it
up online and cut&paste if you're that unconfident of your typing.
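
A sketch of that approach (the digits here can be checked against any
published table; extra digits are harmless, since the decimal-to-binary
conversion just rounds them):

module pi_const
   implicit none
   integer, parameter :: dp = kind(1.0d0)
   real(dp), parameter :: pi = 3.14159265358979323846264338327950288_dp
end module pi_const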

Besides, in the 5th grade I thought it cool for some long forgotten
reason to memorize pi to 50 places. Alas for all the wasted brain cells.
That still sticks with me a lot better than other things that I'd rather
remember. :-( Might as well put it to some constructive use. :-)

glen herrmannsfeldt
Oct 12, 2006, 8:48:52 PM

Richard E Maine <nos...@see.signature> wrote:
(someone wrote)

>> > real, parameter :: pi = 4.0*atan(1.0)
(someone else wrote)
>> real, parameter :: pi = acos(-1.0)

> Why do you say it gives the same answer? I know of nothing in the
> Fortran standard that guarantees that. The pure mathematical answer is
> the same, but that doesn't mean the computational answer is the same for
> all math libraries. Ideally, it would be nice if math libraries gave
> answers accurate to the last bit. Probably a lot more of them do today
> than used to. But it isn't guaranteed.

For software routines, they always use identities to get the
argument in range for the series expansion.

One library I know of uses:

acos(x) = pi/2 - asin(x)                          for all x
asin(x) = pi/2 - 2*asin(sqrt((1-x)/2))            for 0.5 < x <= 1
and
atan(x) = pi/6 + atan((x*sqrt(3)-1)/(x+sqrt(3)))  for tan(pi/12) < x <= 1

another library uses

acos(x) = atan2(sqrt(1-x*x), x)                   for all acos()
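
A quick numerical spot-check of those identities (a sketch in double
precision; any x in the valid range will do, and each printed pair
should agree to roundoff):

program check_identities
   implicit none
   integer, parameter :: dp = kind(1.0d0)
   real(dp) :: x, halfpi
   x = 0.9_dp
   halfpi = 2.0_dp*atan(1.0_dp)
   ! asin via the half-angle reduction (note the factor of 2):
   print *, asin(x), halfpi - 2.0_dp*asin(sqrt((1.0_dp - x)/2.0_dp))
   ! atan via the pi/6 shift, valid for tan(pi/12) < x <= 1:
   print *, atan(x), halfpi/3.0_dp + &
            atan((x*sqrt(3.0_dp) - 1.0_dp)/(x + sqrt(3.0_dp)))
   ! acos via atan2 (note the sqrt):
   print *, acos(x), atan2(sqrt(1.0_dp - x*x), x)
end program check_identities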

> Off-hand, I'd be a lot more confident of the answer with atan, just
> because of the slope of the curve there. Seems to me like it would be a
> lot easier for the acos answer to be off by more. Sure, it should be
> possible for a math library to nail that one pretty well, but are you
> going to count on it?

From the above, I don't have a lot of confidence in either one
giving last-bit-accurate answers for pi.

> For my own part, I write out the digits instead of using any such
> formula. I'm a lot more confident of the decimal-to-binary conversion
> being accurate to the last bit than I am of the trig functions. And it
> isn't as though I have to hand-type and check the darned thing
> repeatedly. Once in pretty much forever is adequate. One could look it
> up online and cut&paste if that unconfident of your typing.

I agree. Though the x87 processor has pi built into it, I don't
know of a convenient way to get it out.

-- glen

John Harper
Oct 12, 2006, 10:49:06 PM

In article <1hn3n6e.syj0yzz4k9kjN%nos...@see.signature>,

Richard E Maine <nos...@see.signature> wrote:
>John Harper <har...@mcs.vuw.ac.nz> wrote:
>
>> In article <1160639303.8...@c28g2000cwb.googlegroups.com>,
>> Arjen Markus <arjen....@wldelft.nl> wrote:
>> >
>> > real, parameter :: pi = 4.0*atan(1.0)
>>
>> Why do that? The following gives the same answer and needs less typing:
>>
>> real, parameter :: pi = acos(-1.0)
>
>Why do you say it gives the same answer? I know of nothing in the
>Fortran standard that guarantees that.

True. But I would be very unhappy with a Fortran compiler that made
them differ by more than epsilon(1.0)*pi.

>Off-hand, I'd be a lot more confident of the answer with atan, just
>because of the slope of the curve there. Seems to me like it would be a
>lot easier for the acos answer to be off by more.

Richard's argument for 4.0*atan(1.0) being better than acos(-1.0)
suggests that 2.0*atan(huge(1.0)) might be better still.

So I put in my compiler-testing collection a little program that
prints, in single, double and, if available, quad precision:
4.0_p*atan(1.0_p) , 2.0_p*atan(huge(1.0_p)), acos(-1.0_p),
1.5_p*acos(-0.5_p), 3.0_p*acos(0.5_p) , 2.0_p*asin(1.0_p),
6.0_p*asin(0.5_p) , 2.0_p*acos(0.0_p) where p is the relevant kind
parameter. It was comforting to find that with all 4 compilers I can
use none of the results were off by as much as epsilon(1.0_p)*pi,
and intriguing to find that they were all good to the last bit in
single precision, but that in higher precisions some of those using
0.5 or -0.5 were not.
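
A sketch of what such a test program can look like (one kind shown here;
the real thing would repeat this for each available kind p):

program pi_formulas
   implicit none
   integer, parameter :: p = kind(1.0d0)
   real(p) :: v(8)
   integer :: i
   v = (/ 4.0_p*atan(1.0_p),  2.0_p*atan(huge(1.0_p)), &
          acos(-1.0_p),       1.5_p*acos(-0.5_p),      &
          3.0_p*acos(0.5_p),  2.0_p*asin(1.0_p),       &
          6.0_p*asin(0.5_p),  2.0_p*acos(0.0_p) /)
   do i = 1, 8
      print '(f23.20)', v(i)   ! enough digits to show last-bit differences
   end do
end program pi_formulas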

Brooks Moses
Oct 13, 2006, 12:33:58 AM

FX wrote:
>>Many compilers do software floating-point constant expression
>>evaluation. This was interesting when the Pentium FDIV bug came out,
>>and a test program using constants showed no bug.
>>
>>I don't know how many do atan() separate from the run time atan()
>>routine.
>
> I think Brooks didn't say it was impossible, but only stated a reason why
> it is more difficult to implement than meets the eye.

That was what I meant to say, at least. :)

glen herrmannsfeldt
Oct 13, 2006, 1:55:21 PM

John Harper <har...@mcs.vuw.ac.nz> wrote:

> So I put in my compiler-testing collection a little program that
> prints, in single, double and, if available, quad precision:
> 4.0_p*atan(1.0_p) , 2.0_p*atan(huge(1.0_p)), acos(-1.0_p),
> 1.5_p*acos(-0.5_p), 3.0_p*acos(0.5_p) , 2.0_p*asin(1.0_p),
> 6.0_p*asin(0.5_p) , 2.0_p*acos(0.0_p) where p is the relevant kind
> parameter. It was comforting to find that with all 4 compilers I can
> use none of the results were off by as much as epsilon(1.0_p)*pi,
> and intriguing to find that they were all good to the last bit in
> single precision, but that in higher precisions some of those using
> 0.5 or -0.5 were not.

Is this all on Intel x86 machines?

In the days before the IEEE floating-point standard, and before hardware
(or microcode) implementations, I would be somewhat less sure of the result.

Especially as, at least for x87, they can be done in extended precision
and rounded appropriately.

-- glen

John Harper
Oct 15, 2006, 6:10:04 PM

In article <egoju9$hoe$2...@naig.caltech.edu>,

glen herrmannsfeldt <g...@seniti.ugcs.caltech.edu> wrote:
>John Harper <har...@mcs.vuw.ac.nz> wrote:
>
>> So I put in my compiler-testing collection a little program that
>> prints, in single, double and, if available, quad precision:
>> 4.0_p*atan(1.0_p) , 2.0_p*atan(huge(1.0_p)), acos(-1.0_p),
>> 1.5_p*acos(-0.5_p), 3.0_p*acos(0.5_p) , 2.0_p*asin(1.0_p),
>> 6.0_p*asin(0.5_p) , 2.0_p*acos(0.0_p) where p is the relevant kind
>> parameter. It was comforting to find that with all 4 compilers I can
>> use none of the results were off by as much as epsilon(1.0_p)*pi,
>> and intriguing to find that they were all good to the last bit in
>> single precision, but that in higher precisions some of those using
>> 0.5 or -0.5 were not.
>
>Is this all on Intel x86 machines?

No. The 4 compilers were g95 and Sun f95 on a machine on which printenv
says HOSTTYPE=sun4, VENDOR=sun, OSTYPE=solaris, MACHTYPE=sparc
and NAG and Compaq f95 on a machine on which printenv says
HOSTTYPE=alpha, VENDOR=dec, OSTYPE=osf1, MACHTYPE=alpha.

Walter Spector
Oct 16, 2006, 9:50:35 AM

John Harper wrote:
>
> In article <1160639303.8...@c28g2000cwb.googlegroups.com>,
> Arjen Markus <arjen....@wldelft.nl> wrote:
> >
> > real, parameter :: pi = 4.0*atan(1.0)
>
> Why do that? The following gives the same answer and needs less typing:
>
> real, parameter :: pi = acos(-1.0)

I typically use ACOS (-1.0) too. However there are those who will
quibble that there is no guarantee they will return 'good to the last
bit' approximations to pi.

So it is also worth noting that the Standard requires that a couple
of other intrinsics must return the exact approximation of pi, so
to speak. Specifically:

ATAN2 (0.0, some negative non-zero value)
Also:
AIMAG (LOG ((some negative value,0.0)))

IMHO, if a math lib gets any of these wrong, it is a bug.

Walt

$ cat pie.f90
program pie
implicit none

integer, parameter :: dp_k = kind (1.0d0)
integer, parameter :: idp_k = selected_int_kind (12)

call print_pie (4.0*atan (1.0d0))
call print_pie (acos (-1.0d0))
call print_pie (atan2 (0.0d0, -1.0d0))
call print_pie (aimag (log ((-1.0d0, 0.0d0))))

contains

subroutine print_pie (pie)
real(dp_k) :: pie

real(dp_k) :: rchng
integer(idp_k) :: ichng
! View the REAL's bits through a same-size INTEGER - an old
! type-punning idiom (TRANSFER would do the same job).
equivalence (rchng, ichng)

rchng = pie
print '(a, f14.12, a, z16)', 'value: ', pie, ', in hex: ', ichng

end subroutine

end program

$ ftn95 pie.f90 /link
[FTN95/Win32 Ver. 4.6.0 Copyright (C) Salford Software Ltd 1993-2004]
Licensed to: Personal Edition User
Organisation: www.silverfrost.com

NO ERRORS [<PRINT_PIE> FTN95/Win32 v4.6.0]
NO ERRORS [<PIE> FTN95/Win32 v4.6.0]
Creating executable: pie.EXE
$ pie
value: 3.14159265359, in hex: 400921FB54442D18
value: 3.14159265359, in hex: 400921FB54442D18
value: 3.14159265359, in hex: 400921FB54442D18
value: 3.14159265359, in hex: 400921FB54442D18

$

Richard E Maine
Oct 16, 2006, 6:13:40 PM

Walter Spector <w6ws_xt...@earthlink.net> wrote:

> So it is also worth noting that the Standard requires that a couple
> of other intrinsics must return the exact approximation of pi, so
> to speak.

I assume you are being facetious or otherwise pointing out a wording
deficiency of the standard. You will find no justification anywhere for
"exact approximation" or any even vaguely equivalent wording. The
applicable and traditional wording is "processor-dependent
approximation", which is a very different thing. I wish, and have heard
others express the same wish, that the standard just said something
global about all real computations giving processor-dependent
approximations. That's really what it means, but instead of saying it
that way, the "processor-dependent approximation" wording gets
replicated in a zillion places... and is missed in a zillion others, but
that's still what it means. They tend to get fixed one at a time, though
I'd have preferred the global one-time fix in wording. It has been noted
many times that the standard doesn't even require that 2.0+2.0 give
exactly 4.0 as a result, but could give a darned poor
processor-dependent approximation of 4.0.

If one takes the wording of the cited intrinsics literally, then they
say that the result must be pi - not an approximation at all, but exactly
pi. Of course, that isn't likely to happen no matter what the standard
says. (Yes, I know that one can imagine representations that include
notions of exact multiples of pi, but I'll still claim that it isn't
likely to happen.) So you have to apply some interpretation. An
interpretation of it as meaning "exact approximation" or even such more
precisely defined concepts as "nearest representable number" won't hold
up.

Of course, one is free to expect it of a vendor and bitch if you don't
get it. Such last-bit accuracy is probably a plausible expectation in
many environments today. But a claim that the Fortran standard requires
it won't hold up. (Other standards such as the IEEE one are another
matter that I don't address here; if your compiler claims conformance
to those other standards, you might have some grounds there, though I
don't think IEEE will guarantee anything about those particular functions
either.)

Walter Spector
Oct 17, 2006, 1:01:49 AM

Richard E Maine wrote:
>
> Walter Spector <w6ws_xt...@earthlink.net> wrote:
>
> > So it is also worth noting that the Standard requires that a couple
> > of other intrinsics must return the exact approximation of pi, so
> > to speak.
>
> I assume you are being facetious or otherwise pointing out a wording
> deficiency of the standard....

Facetious or not...

> If one takes the wording of the cited intrinsics literally, then they
> say that the result must be pi - not approximation at all, but exactly
> pi. Of course, that isn't likely to happen no matter what the standard
> says.

Yup.

> Of course, one is free to expect it of a vendor and bitch if you don't
> get it. Such last-bit accuracy is probably a plausible expectation in
> many environments today. But a claim that the Fortran standard requires
> it won't hold up.

The definitions of those intrinsics, in the above cases, seem (to my
reading at least) to be exceptional. The Standard really
does use the little Greek pi symbol. And as you point out, there is
an uncharacteristic lack of weasel-wording. Intended or not, it
states "the result is pi".

So it seems to me that any "processor dependent approximation" that is
even one bit off of what that processor is capable of representing would
be non-Standard.

W.

Richard Maine
Oct 17, 2006, 1:33:53 AM

Walter Spector <w6ws_xt...@earthlink.net> wrote:

> The Standard really
> does use the little Greek pi symbol. And as you point out, there is
> an uncharacteristic lack of weasel-wording. Intended or not, it
> states "the result is pi".
>
> So it seems to me that any "processor dependent approximation" that is
> even one bit off of what that processor is capable of representing would
> be non-Standard.

Where do you get the "what the processor is capable of representing"
part? I don't see that in the standard. I don't see any similar
condition anywhere in the standard. Seems to me you have a problem with
a camel's nose here. The standard doesn't say *ANYTHING* about
approximations - including anything about what the processor is capable
of representing. If you want to hold to the literal words of the
standard, then I don't see where this approximation comes in. As soon as
you allow that it doesn't mean literally, exactly pi, then you have an
interpretation... and one which I don't see any basis for.

--
Richard Maine | Good judgement comes from experience;
email: last name at domain . net | experience comes from bad judgement.
domain: summertriangle | -- Mark Twain

Walter Spector
Oct 17, 2006, 9:12:58 AM

Richard Maine wrote:
>
> Walter Spector <w6ws_xt...@earthlink.net> wrote:
>
> > ... Intended or not, it

> > states "the result is pi".
> ...

> Seems to me you have a problem with
> a camel's nose here. The standard doesn't say *ANYTHING* about
> approximations - including anything about what the processor is capable
> of representing. If you want to hold to the literal words of the
> standard, then I don't see where this approximation comes in. As soon as
> you allow that it doesn't mean literally, exactly pi, then you have an
> interpretation... and one which I don't see any basis for.

It seems simple to me. The Standard states that ATAN2 and LOG must return
pi for those cases. If you ask a processor architect what the bit
sequence is for pi on his or her processor, he or she should be able
to provide it.

It could be argued that the Standard also requires those return values to be
between -pi and pi - so it may require a return value that is rounded one bit
towards zero compared to the above bit sequence. But other than that,
I can't see much wiggle room.

Walt

Dick Hendrickson
Oct 17, 2006, 11:30:39 AM

I think you've forgotten the first line in the ATAN2 result value
section. "The result has a value equal to a processor-dependent
approximation...." That's how all of the real functions that do
actual computations start out. I believe the intent is that the
topic sentence propagates throughout the rest of the paragraph.
That's how topic sentences often work ;). The rest of the paragraph
merely describes special cases and branch cuts. At least, that's how
I read it.

Dick Hendrickson
