
Help with some basic questions about Log[] and x^y


Terry

Dec 8, 2003, 12:17:15 AM
Hi,

I'm a programmer who finds himself in need of some guidance in
navigating the treacherous world of error analysis. Basically I have a
calculation that involves powering, additions, multiplications and
logs that I need to do an error analysis for. The additions and
multiplications are easy enough, but I'm at a loss when it comes to
the powering and logarithms.

So far my research has led me to the following understanding:

- IEEE 754 is the predominant standard that governs the accuracy of
arithmetic operations in modern machines
- The 754 standard stipulates that basic arithmetic operations like
additions and multiplications must be exactly rounded but it says
nothing about transcendentals
- For transcendentals and powering there seem to be many algorithms

So here are some of the questions I'm struggling with:
- Given that I'm currently only using WIntel hardware, is there a
dominant transcendental and powering algorithm in use?
- Is there a paper that describes these algorithms along with an error
analysis?
- If not in the above paper, where might I find out what the absolute
worst error is for a Log[x] and x^y operation where both x and y are
floating point numbers?

Finally, is there a good introductory text on error analysis
specifically focusing on powering and logs suitable for a math
hobbyist such as myself? Or is that a stupid question?

Any and all help will be very much appreciated!

terry
(terryisnow at yahoo.com)

Terje Mathisen

Dec 8, 2003, 4:47:36 AM
Terry wrote:
> - IEEE 754 is the predominant standard that governs the accuracy of
> arithmetic operations in modern machines

Right.

> - The 754 standard stipulates that basic arithmetic operations like
> additions and multiplications must be exactly rounded but it says
> nothing about transcendentals
> - For transcendentals and powering there seem to be many algorithms

Right.

>
> So here are some of the questions I'm struggling with:
> - Given that I'm currently only using WIntel hardware, is there a
> dominant transcendental and powering algorithm in use?

x86 makes this quite easy, since it uses 80-bit floats internally, and
for all transcendental calculations.

In particular, for in-range inputs, all transcendental functions on
Pentium (P5 and later) Intel CPUs will deliver results that are within
1.0 ulp in the 80-bit format.

This practically (i.e. normally) means that as long as you can do your
calculations in this format, the results will be within 0.5 ulp after
rounding to double precision.
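
A minimal sketch of that approach (only an illustration, not anything from
Intel's libraries; it assumes a compiler where long double really is the
80-bit x87 format, e.g. gcc on x86):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1.2345678901234567;

    /* do the work in the 80-bit format, round once at the end */
    long double ext = logl((long double)x);
    double via_extended = (double)ext;

    double direct = log(x);                /* plain double-precision log() */

    /* express the difference in double-precision ulps */
    double ulp = nextafter(via_extended, INFINITY) - via_extended;
    printf("extended, then rounded: %.17g\n", via_extended);
    printf("double log() directly:  %.17g\n", direct);
    printf("difference: %g ulp\n", (direct - via_extended) / ulp);
    return 0;
}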

> - Is there a paper that describes these algorithms along with an error
> analysis?

There's an Intel paper (pdf) written around the time the Pentium was
released, most probably by Peter Tang (who designed the algorithms).

Terje

--
- <Terje.M...@hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"

glen herrmannsfeldt

Dec 8, 2003, 3:40:08 PM
Terry wrote:

> I'm a programmer who finds himself in need of some guidance in
> navigating the treacherous world of error analysis. Basically I have a
> calculation that involves powering, additions, multiplications and
> logs that I need to do an error analysis for. The additions and
> multiplications are easy enough, but I'm at a loss when it comes to
> the powering and logarithms.

(snip)

> Finally, is there a good introductory text on error analysis
> specifically focusing on powering and logs suitable for a math
> hobbyist such as myself? Or is that a stupid question?

The books I know have titles like "Statistical Treatment of Experimental
Data." Mostly the idea is that all measured quantities have an
uncertainty to them, and you need to take that into account when working
with such data.

Now, you ask a different question, related to the effect of computer
arithmetic on such data. The first answer is that your computer
arithmetic should be accurate enough such that experimental error (in
the input data) is larger than the computational error.

Consider some basic operations on data with uncertainties:

(a +/- da) + (b +/- db) = (a + b) +/- sqrt( da**2 + db**2 )

assuming that da and db are statistically independent, in other words
uncorrelated; it also assumes they are Gaussian distributed

(a +/- da) * (b +/- db) = (a*b) +/- sqrt( (a*db)**2 + (b*da)**2)

For any function f(x), f(x +/- dx) = f(x) +/- dx * f'(x)

For functions of more than one variable, the final uncertainty is the
sqrt() of the sum of the squares of the individual uncertainties.

sin(x +/- dx) = sin(x) +/- dx * cos(x)

log(x +/- dx) = log(x) +/- dx/x

exp(x +/- dx) = exp(x) +/- dx * exp(x)

x**y=exp(y*log(x))

(x +/- dx) ** (y) = exp(y*log(x +/- dx)) = exp(y*log(x) +/- y*dx/x)
= x**y * exp( +/- y*dx/x) = x**y +/- x**y*y*dx/x

x ** (y +/- dy) = exp((y +/- dy) * log(x)) = x**y +/- dy*log(x)*x**y

(x +/- dx) ** (y +/- dy) = x**y
+/- x**y*sqrt((y*dx/x)**2+(dy*log(x))**2)

All of these were done while typing them in, so I could have made mistakes.
You can check them yourself to be sure they are right.
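
For example, the last formula goes straight into a few lines of C (just a
sketch; the helper name is made up):

#include <math.h>
#include <stdio.h>

/* propagate the uncertainties dx and dy through x**y, per the formula above */
static double pow_uncertainty(double x, double dx, double y, double dy)
{
    double dfdx = y * pow(x, y) / x;      /* partial derivative wrt x */
    double dfdy = log(x) * pow(x, y);     /* partial derivative wrt y */
    return sqrt(pow(dfdx * dx, 2) + pow(dfdy * dy, 2));
}

int main(void)
{
    double x = 2.0, dx = 0.01, y = 3.0, dy = 0.02;
    printf("x**y = %g +/- %g\n", pow(x, y), pow_uncertainty(x, dx, y, dy));
    return 0;
}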

-- glen

Nick Maclaren

Dec 8, 2003, 4:13:29 PM
In article <Ik5Bb.61524$_M.294386@attbi_s54>,

glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
>Terry wrote:
>
>> I'm a programmer who finds himself in need of some guidance in
>> navigating the treacherous world of error analysis. Basically I have a
>> calculation that involves powering, additions, multiplications and
>> logs that I need to do an error analysis for. The additions and
>> multiplications are easy enough, but I'm at a loss when it comes to
>> the powering and logarithms.
>
>> Finally, is there a good introductory text on error analysis
>> specifically focusing on powering and logs suitable for a math
>> hobbyist such as myself? Or is that a stupid question?
>
>The books I know have titles like "Statistical Treatment of Experimental
>Data." Mostly the idea is that all measured quantities have an
>uncertainty to them, and you need to take that into account when working
>with such data.
>
>Now, you ask a different question, related to the effect of computer
>arithmetic on such data. The first answer is that your computer
>arithmetic should be accurate enough such that experimental error (in
>the input data) is larger than the computational error.

I think that he is asking about a different sort of error analysis,
i.e. the sort that is done (for linear algebra) in Wilkinson and
Reinsch. He needs to find a good numerical analysis textbook, of
the type that attempts to explain the mathematics. Ask on
sci.math.num-analysis.

>Consider some basic operations on data with uncertainties:
>
>(a +/- da) + (b +/- db) = (a + b) +/- sqrt( da**2 + db**2 )
>
>assuming that da and db are statistically independent, in other words,
>uncorelated, and it also assumes they are gaussian distributed

Actually, no, it doesn't. That formula applies to ANY independent
distributions with variances - and the distributions don't even have
to be the same. Not all distributions have variances, of course.

>(a +/- da) * (b +/- db) = (a*b) +/- sqrt( (a*db)**2 + (b*da)**2)
>
>For any function f(x), f(x +/- dx) = f(x) +/- dx * f'(x)
>

> . . .

Approximately, and only when da and db are small relative to |a| and
|b|.


Regards,
Nick Maclaren.

Dik T. Winter

Dec 8, 2003, 7:45:24 PM
In article <br2php$n5$1...@pegasus.csx.cam.ac.uk> nm...@cus.cam.ac.uk (Nick Maclaren) writes:
> In article <Ik5Bb.61524$_M.294386@attbi_s54>,
> glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> >Terry wrote:
> >> Finally, is there a good introductory text on error analysis
> >> specifically focusing on powering and logs suitable for a math
> >> hobbyist such as myself? Or is that a stupid question?
> >
> >The books I know have titles like "Statistical Treatment of Experimental
> >Data." Mostly the idea is that all measured quantities have an
> >uncertainty to them, and you need to take that into account when working
> >with such data.
> >
> >Now, you ask a different question, related to the effect of computer
> >arithmetic on such data. The first answer is that your computer
> >arithmetic should be accurate enough such that experimental error (in
> >the input data) is larger than the computational error.
>
> I think that he is asking about a different sort of error analysis.
> I.e. the sort that if done (for linear algebra) in Wilkinson and
> Reinsch. He needs to find a good numerical analysis textbook, of
> the type that attempts to explain the mathematics. Ask on
> sci.math.num-analysis.

Indeed, statistical error analysis leads to overestimates of the error
to the point where it makes no sense to do the calculation at all.
With such an analysis, solutions of linear systems with 40 unknowns
or more are too unreliable. Also numerical mathematics in general
uses "black-box" analysis, i.e. it is assumed that the input is exact
and the results are based on that assumption. And finally, statistical
error analysis tells you nothing about the additional errors you get
due to the finite precision operations you are performing. To re-quote:


> > The first answer is that your computer
> >arithmetic should be accurate enough such that experimental error (in
> >the input data) is larger than the computational error.

Numerical error analysis will tell you whether that is the case or not.
You can *not* state: my experimental error is in the 5th digit, so when
I perform the arithmetic in 15 digits, that will be sufficient. There
are cases where it is *not* sufficient.

One final point, in numerical error analysis you can go two ways.
Forward and backward. To illustrate, suppose you have a set of
input data I and the algorithm results in a set of output data O.
In forward analysis you assume exact I and find that the computed
O is in some bound of the exact O (with some way to measure). With
backward analysis you show that the computed O is the exact answer
for some I within some bound of the exact I. It is this latter form
that made numerical algebra on large systems possible.
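
A toy illustration of the backward view, for a single addition (only an
illustration, not from any library: the computed sum of a and b is the
exact sum of slightly perturbed inputs):

#include <stdio.h>

int main(void)
{
    double a = 0.1, b = 0.2;
    double computed = a + b;                       /* rounded result */
    long double exact = (long double)a + (long double)b;
    long double e = (computed - exact) / exact;    /* relative rounding error */

    /* perturb a and b by that same tiny relative amount... */
    long double a2 = a * (1.0L + e), b2 = b * (1.0L + e);

    printf("computed a+b    = %.21Lg\n", (long double)computed);
    printf("exact a2+b2     = %.21Lg\n", a2 + b2);  /* essentially the same */
    printf("backward error  = %.3Le\n", e);
    return 0;
}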
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/

Terry

Dec 8, 2003, 10:14:56 PM
Wow, thanks for the replies so far!

Actually Terje had it right, I'm only looking for error analysis in
the context of machine arithmetic. In other words I'm only interested
in the errors that are introduced by my computer, all input numbers
are assumed to be perfect once generated within the computer (you
might have guessed this is a simulation of sorts and all data are
contrived to meet specific criteria).

So far I've gone through "What Every Computer Scientist Should Know About
Floating-Point Arithmetic" (http://docs.sun.com/db/doc/800-7895) and
I'm going through Peter Tang's "Table-driven implementation of the
logarithm function in IEEE floating-point arithmetic" (albeit slowly),
but nowhere am I able to find the absolute worst errors that can
result from x^y calculations. Peter's article does specify an error
for the Log function but I don't know where his algorithm is
implemented. Ideally, if I could just find a survey paper that talks
about which algorithm is implemented in which hardware, that would go
a long way towards answering my questions.

All the other 'how to implement powering function' papers I'm able to
find don't seem to give an error analysis (maybe I'm not looking hard
enough?)

Peter does mention in his "Table-driven implementation of the Expm1
function in IEEE floating-point arithmetic" article that he's planning
on writing another paper on the POW() function but I have yet to find
such a paper.

As always any and all responses are much appreciated!

terry

P.S. I did find a paper, also by Peter, talking about how arithmetic is
implemented in the Itanium 64-bit CPU, but the hardware I'm using is
all pre-P5 :(

Nick Maclaren

Dec 9, 2003, 3:20:53 AM

In article <HpLrF...@cwi.nl>,

"Dik T. Winter" <Dik.W...@cwi.nl> writes:
|>
|> Indeed, statistical error analysis leads to overestimates of the error
|> to the point where it makes no sense to do the calculation at all.

Grrk. That is an oversimplification. It can do the converse.
Consider the powering example!

|> With such an analysis, solutions of linear systems with 40 unknowns
|> or more are too unreliable. Also numerical mathematics in general
|> uses "black-box" analysis, i.e. it is assumed that the input is exact
|> and the results are based on that assumption.

That is shoddy - get better books! Wilkinson and Reinsch doesn't.

|> And finally, statistical
|> error analysis tells you nothing about the additional errors you get
|> due to the finite precision operations you are performing.

It depends on how you use it. I have used it for that.


Regards,
Nick Maclaren.

Nick Maclaren

Dec 9, 2003, 3:26:46 AM

In article <4a8939d7.03120...@posting.google.com>,

terry...@yahoo.com (Terry) writes:
|>
|> All the other 'how to implement powering function' papers I'm able to
|> find don't seem to give an error analysis (maybe I'm not looking hard
|> enough?)

Your problem is that such analyses date from the days when we
"Just Did It" - i.e. before computer scientists got involved
(or even existed). So there are few coherent descriptions, as
such examples were regarded as not worth writing up.

The analysis might be tricky, but the principles are trivial.
Logically, what you do is to work out the most each calculation
could increase the error on its input, and work through. For
most forms of powering, it is straightforward.
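
As a deliberately crude example of that bookkeeping for pow(x,y) computed
as exp(y*log(x)) - the ulp bounds plugged in below are placeholders, not
anyone's documented figures:

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 123.456, y = 7.89;
    double eps = DBL_EPSILON;                /* 2^-52 */

    double log_err_ulps = 1.0;               /* assumed bound for log() */
    double exp_err_ulps = 1.0;               /* assumed bound for exp() */

    double t = y * log(x);
    /* error in t: the log error scaled by |y|, plus the multiply's rounding */
    double abs_err_t = fabs(y) * log_err_ulps * eps * fabs(log(x))
                     + 0.5 * eps * fabs(t);
    /* exp() turns an absolute error in its argument into a relative error
       of the result, and then adds its own rounding error */
    double rel_err = abs_err_t + exp_err_ulps * eps;

    printf("pow(x,y) ~ %g, rough worst-case relative error ~ %g (%.1f ulp)\n",
           pow(x, y), rel_err, rel_err / eps);
    return 0;
}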


Regards,
Nick Maclaren.

glen herrmannsfeldt

Dec 9, 2003, 4:56:25 AM
Nick Maclaren wrote:

> In article <Ik5Bb.61524$_M.294386@attbi_s54>,
> glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

>>Terry wrote:

(snip)

>>>Finally, is there a good introductory text on error analysis
>>>specifically focusing on powering and logs suitable for a math
>>>hobbyist such as myself? Or is that a stupid question?

>>The books I know have titles like "Statistical Treatment of Experimental
>>Data." Mostly the idea is that all measured quantities have an
>>uncertainty to them, and you need to take that into account when working
>>with such data.

>>Now, you ask a different question, related to the effect of computer
>>arithmetic on such data. The first answer is that your computer
>>arithmetic should be accurate enough such that experimental error (in
>>the input data) is larger than the computational error.

> I think that he is asking about a different sort of error analysis.
> I.e. the sort that if done (for linear algebra) in Wilkinson and
> Reinsch. He needs to find a good numerical analysis textbook, of
> the type that attempts to explain the mathematics. Ask on
> sci.math.num-analysis.

Well, it isn't so different. In a computation with multiple steps,
the effect of the error generated on previous steps needs to be
taken into account.

>>Consider some basic operations on data with uncertainties:

>>(a +/- da) + (b +/- db) = (a + b) +/- sqrt( da**2 + db**2 )

>>assuming that da and db are statistically independent, in other words,
>>uncorelated, and it also assumes they are gaussian distributed

> Actually, no, it doesn't. That formula applies to ANY independent
> distributions with variances - and the distributions don't even have
> to be the same. Not all distributions have variances, of course.

Well, the independence is the important part. I had thought that
there were some distributions that it didn't apply to, but yes.

>>(a +/- da) * (b +/- db) = (a*b) +/- sqrt( (a*db)**2 + (b*da)**2)

>>For any function f(x), f(x +/- dx) = f(x) +/- dx * f'(x)

>>. . .

> Approximately, and only when da and db are small relative to |a| and
> |b|.

When you get to the point where that isn't true, you are in bad shape,
though with some algorithms it isn't hard to end up there.

-- glen

Nick Maclaren

Dec 9, 2003, 5:12:50 AM

In article <d%gBb.345094$275.1140696@attbi_s53>,

glen herrmannsfeldt <g...@ugcs.caltech.edu> writes:
|>
|> > I think that he is asking about a different sort of error analysis.
|> > I.e. the sort that if done (for linear algebra) in Wilkinson and
|> > Reinsch. He needs to find a good numerical analysis textbook, of
|> > the type that attempts to explain the mathematics. Ask on
|> > sci.math.num-analysis.
|>
|> Well, it isn't so different. In a computation with multiple steps,
|> the effect of the error generated on previous steps needs to be
|> taken into account.

It is fairly different for things like powering, where the errors
are definitely NOT independent. Statistics doesn't gain a lot with
that one.

|> >>Consider some basic operations on data with uncertainties:
|>
|> >>(a +/- da) + (b +/- db) = (a + b) +/- sqrt( da**2 + db**2 )
|>
|> >>assuming that da and db are statistically independent, in other words,
|> >>uncorelated, and it also assumes they are gaussian distributed
|>
|> > Actually, no, it doesn't. That formula applies to ANY independent
|> > distributions with variances - and the distributions don't even have
|> > to be the same. Not all distributions have variances, of course.
|>
|> Well, the independent is the important part. I had thought that
|> there were some distributions that it didn't apply to, but yes.

There are, but they are the ones without variances. Such as Cauchy
(a.k.a. Student's t with one degree of freedom).

|> >>(a +/- da) * (b +/- db) = (a*b) +/- sqrt( (a*db)**2 + (b*da)**2)
|>
|> >>For any function f(x), f(x +/- dx) = f(x) +/- dx * f'(x)
|>
|> >>. . .
|>
|> > Approximately, and only when da and db are small relative to |a| and
|> > |b|.
|>
|> When you get to the point where that isn't true you are in bad shape.
|> Though with some algorithms it isn't hard to do.

You can get there even with simple multiplication where the error
is comparable to the mean. That is a notoriously hairy area.


Regards,
Nick Maclaren.

Dik T. Winter

Dec 9, 2003, 8:45:23 AM
In article <br40l5$sm5$1...@pegasus.csx.cam.ac.uk> nm...@cus.cam.ac.uk (Nick Maclaren) writes:
>
> In article <HpLrF...@cwi.nl>,
> "Dik T. Winter" <Dik.W...@cwi.nl> writes:
> |>
> |> Indeed, statistical error analysis leads to overestimates of the error
> |> to the point where it makes no sense to do the calculation at all.
>
> Grrk. That is an oversimplification. It can do the converse.
> Consider the powering example!

Yes, indeed, it is an oversimplification. "leads" should have been
"can lead".

>
> |> With such an analysis, solutions of linear systems with 40 unknowns
> |> or more are too unreliable. Also numerical mathematics in general
> |> uses "black-box" analysis, i.e. it is assumed that the input is exact
> |> and the results are based on that assumption.
>
> That is shoddy - get better books! Wilkinson and Reinsch doesn't.

Is Wilkinson's The Algebraic Eigenvalue Problem good enough? Even
the backward error analysis there is in terms of exact input.

>
> |> And finally, statistical
> |> error analysis tells you nothing about the additional errors you get
> |> due to the finite precision operations you are performing.
>
> It depends on how you use it. I have used it for that.

Of course, you *can* do it, but you have to know pretty well what you
are doing. You can also subtract nearly equal values (that have errors
in them) when you know exactly what you are doing. (I once did it in
a routine for the calculation of the arcsin. The high relative error
that resulted had only a small effect on the final result, due to other things.)

Fred J. Tydeman

Dec 9, 2003, 12:29:17 PM
Terry wrote:
>
> So here are some of the questions I'm struggling with:
> - Given that I'm currently only using WIntel hardware, is there a
> dominant transcendental and powering algorithm in use?

Based upon my testing of the math libraries shipped with C compilers,
NO.

> - Is there a paper that describes these algorithms along with an error
> analysis?
> - If not in the above paper, where might I find out what the absolute
> worst error is for a Log[x] and x^y operation where both x and y are
> floating point numbers?

Please look at the program tsin.c on the public FTP site mentioned
below in my signature. In the comments section at the top is a
table of errors on sin(355.0) for many compilers. That should give
you an idea on which compiler and library vendors care about good
(nearly correct) results, and which don't know (or don't care) how
bad their math libraries are. Many Intel x87 based libraries use
the fsin hardware instruction, which for the machines I have results
for, gets 7.09 bits wrong, or 135 ULP error.

I just did a scan of my ULP errors of log() and pow().
For log(), most are 1.0 or better. But the worst is 6.1e12 ULPs.
For pow(), some may be 1.0 ULP (after excluding gross error cases),
but most are in the thousands of ULPs, with the worst being 4.2e18 ULPs.
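
For anyone who wants to reproduce this kind of measurement, one way to
count ulps (a made-up helper, nothing to do with tsin.c; note the caveat
in the comment):

#include <math.h>
#include <stdio.h>

/* error of a computed double, in ulps, against a higher-precision reference */
static double ulp_error(double computed, long double reference)
{
    double r = (double)reference;              /* reference rounded to double */
    double ulp = nextafter(r, INFINITY) - r;   /* spacing of doubles at r */
    if (ulp == 0.0 || isinf(r))
        return INFINITY;
    return (double)(((long double)computed - reference) / (long double)ulp);
}

int main(void)
{
    /* Caveat: on x87 hardware sinl() may use the very same FSIN instruction,
       so this only demonstrates the bookkeeping, not an independent check. */
    printf("sin(355.0): %.2f ulp\n", ulp_error(sin(355.0), sinl(355.0L)));
    return 0;
}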

---
Fred J. Tydeman Tydeman Consulting
tyd...@tybor.com Programming, testing, numerics
+1 (775) 287-5904 Vice-chair of J11 (ANSI "C")
Sample C99+FPCE tests: ftp://jump.net/pub/tybor/
Savers sleep well, investors eat well, spenders work forever.

Terje Mathisen

Dec 9, 2003, 1:52:31 PM
Fred J. Tydeman wrote:

> Terry wrote:
>
>>So here are some of the questions I'm struggling with:
>>- Given that I'm currently only using WIntel hardware, is there a
>>dominant transcendental and powering algorithm in use?
>
>
> Based upon my testing of the math libraries shipped with C compilers,
> NO.
>
>
>>- Is there a paper that describes these algorithms along with an error
>>analysis?
>>- If not in the above paper, where might I find out what the absolute
>>worst error is for a Log[x] and x^y operation where both x and y are
>>floating point numbers?
>
>
> Please look at the program tsin.c on the public FTP site mentioned
> below in my signature. In the comments section at the top is a
> table of errors on sin(355.0) for many compilers. That should give
> you an idea on which compiler and library vendors care about good
> (nearly correct) results, and which don't know (or don't care) how
> bad their math libraries are. Many Intel x87 based libraries use
> the fsin hardware instruction, which for the machines I have results
> for, gets 7.09 bits wrong, or 135 ULP error.

This is a bit harsh imho:

The Pentium FSIN is clearly documented (at least in Peter Tang's papers)
to be used on arguments that have been reduced to the +/- pi range.

Due to the 80-bit extended format, it will stay within 0.5 ulp for double
arguments over a somewhat larger range, but here it is being used on an
exact integer value that just happens to be _very_ close to N * pi.

Extending the x86 library to use a simple approximate range test up
front, and then a reduction if needed, would add very little to the
average runtime, i.e. something like this:

double sin(double x)
{
    double n;

    n = round(x * one_over_pi);

    if (fabs(n) > RANGE_LIMIT) {
        /* Use an extended precision value for pi, defined via an
           array of float values (pi_array[]); subtracting the partial
           products one at a time keeps each operation exact. */
        x -= n * pi_array[0];
        x -= n * pi_array[1];
        /* ... and so on for the remaining elements ... */
    }
    return FSIN(x);    /* the hardware instruction */
}

The main problem here is that I really don't like inputting large (i.e.
out-of-range) values to functions like sin(), and then pretending that
said values are exact. :-(

Fred J. Tydeman

Dec 9, 2003, 11:58:55 PM
Terje Mathisen wrote:
>
> > Please look at the program tsin.c on the public FTP site mentioned
> > below in my signature. In the comments section at the top is a
> > table of errors on sin(355.0) for many compilers. That should give
> > you an idea on which compiler and library vendors care about good
> > (nearly correct) results, and which don't know (or don't care) how
> > bad their math libraries are. Many Intel x87 based libraries use
> > the fsin hardware instruction, which for the machines I have results
> > for, gets 7.09 bits wrong, or 135 ULP error.
>
> This is a bit harsh imho:
>
> The Pentium FSIN is clearly documented (at least in Peter Tang's papers)
> to be used on arguments that have been reduced to the +/- pi range.

Here are the results I get from testing an Intel Pentium 4.
In the following, FUT means Function Under Test.

Using 80-bit long doubles gets:
Test vector 128: FUT not close enough: +3.440707728423067690000e+17 ulp error
Input arg=4000c90fdaa22168c234=+3.141592653589793238300e+00
Expected =3fc0c4c6628b80dc1cd1=+1.666748583704175665640e-19
Computed =3fc0c000000000000000=+1.626303258728256651010e-19

Test vector 129: FUT not close enough: +1.376283091369227077000e+18 ulp error
Input arg=4000c90fdaa22168c235=+3.141592653589793238510e+00
Expected =bfbeece675d1fc8f8cbb=-5.016557612668332023450e-20
Computed =bfbf8000000000000000=-5.421010862427522170040e-20

Using 64-bit doubles gets:
Test vector 64: FUT not close enough: +1.64065729543000000e+11 ulp error
Input arg=400921fb54442d18=+3.14159265358979312e+00
Expected =3ca1a62633145c07=+1.22464679914735321e-16
Computed =3ca1a60000000000=+1.22460635382237726e-16

Test vector 65: FUT not close enough: +8.20328647710000000e+10 ulp error
Input arg=400921fb54442d19=+3.14159265358979356e+00
Expected =bcb72cece675d1fd=-3.21624529935327320e-16
Computed =bcb72d0000000000=-3.21628574467824890e-16

Here is the Intel documentation on accuracy of FSIN:

IA-32 Intel® Architecture Software Developer's Manual
Volume 1: Basic Architecture, Order Number 245470-006

PROGRAMMING WITH THE X87 FPU

8.3.10. Transcendental Instruction Accuracy

New transcendental instruction algorithms were incorporated into the
IA-32 architecture beginning with the Pentium processors. These new
algorithms (used in transcendental instructions (FSIN, FCOS, FSINCOS,
FPTAN, FPATAN, F2XM1, FYL2X, and FYL2XP1) allow a higher level of
accuracy than was possible in earlier IA-32 processors and x87 math
coprocessors. The accuracy of these instructions is measured in terms
of units in the last place (ulp). For a given argument x, let f(x) and
F(x) be the correct and computed (approximate) function values,
respectively. The error in ulps is defined to be:
... formula would not cut and paste from PDF file ...

With the Pentium and later IA-32 processors, the worst case error on
transcendental functions is less than 1 ulp when rounding to the
nearest (even) and less than 1.5 ulps when rounding in other
modes. The functions are guaranteed to be monotonic, with respect to
the input operands, throughout the domain supported by the
instruction.

The instructions FYL2X and FYL2XP1 are two operand instructions and
are guaranteed to be within 1 ulp only when y equals 1. When y is not
equal to 1, the maximum ulp error is always within 1.35 ulps in round
to nearest mode. (For the two operand functions, monotonicity was
proved by holding one of the operands constant.)


The only reference I can find to the domain of FSIN is:

8.1.2.2 Condition Code Flags

The FPTAN, FSIN, FCOS, and FSINCOS instructions set the C2 flag to 1
to indicate that the source operand is beyond the allowable range of
±2**63 and clear the C2 flag if the source operand is within the
allowable range.

> Due to the 80-bit extended format, it will stay within 0.5 for double
> arguments of a somewhat larger range, but using on an exact integer
> value that just happens to be _very_ close to N * pi.

The results I am getting do not look like 0.5 ULP for doubles near pi.

The only FPU I know (from personally testing) that gets around 0.5 ULP
accuracy for the full input domain of -2**63 to +2**63 for FSIN, is the
AMD K5, done in 1995, designed by Tom Lynch. I believe that it takes
around 190 bits of pi to do correct argument reduction for those values.

Terje Mathisen

Dec 10, 2003, 3:07:13 AM
Fred J. Tydeman wrote:

> Terje Mathisen wrote:
>>Due to the 80-bit extended format, it will stay within 0.5 for double
>>arguments of a somewhat larger range, but using on an exact integer
>>value that just happens to be _very_ close to N * pi.
>
>
> The results I am getting do not look like 0.5 ULP for doubles near pi.

I agree, they don't. 'doubles near pi' _are_ covered by my N*pi
argument though, allowing N == 1. :-)


>
> The only FPU I know (from personally testing) that gets around 0.5 ULP
> accuracy for the full input domain of -2**63 to +2**63 for FSIN, is the
> AMD K5, done in 1995, designed by Tom Lynch. I believe that it takes
> around 190 bits of pi to do correct argument reduction for those values.

Last I checked, the Pentium uses an 83-bit (67-bit mantissa) version of pi
for argument reduction, so this also becomes the effective limit as soon
as you have to do any kind of argument reduction.

I've read about at least one implementation that has a 1024-bit pi just
to allow exact reduction for the entire +/- 10-bit exponent range of
double precision inputs. Sun?

I think we'll have to agree to disagree about the actual validity of
such calculations.

I.e. if I get an angle (by measurement or some other method) which just
happens to be within a small fraction of a degree from +/- pi, I'd
suspect that maybe the real angle should have been exactly pi.

Still, as I wrote yesterday, it is OK for a sw lib to care about these
things, but I still wouldn't saddle a hw implementation with such requirements.

Dik T. Winter

Dec 10, 2003, 5:35:58 AM
In article <br6k8t$oar$1...@osl016lin.hda.hydro.com> Terje Mathisen <terje.m...@hda.hydro.com> writes:
...

> I've read about at least one implementation that have a 1024-bit pi just
> to allow exact reduction for the entire +/- 10-bit exponent range of
> double precision inputs. Sun?

DEC on the VAX. I do not know whether it was indeed 1024 bits. There
are articles about it in the Signum newsletter.

Raymond Toy

Dec 10, 2003, 12:10:01 PM
>>>>> "Terje" == Terje Mathisen <terje.m...@hda.hydro.com> writes:

Terje> I've read about at least one implementation that have a 1024-bit pi
Terje> just to allow exact reduction for the entire +/- 10-bit exponent range
Terje> of double precision inputs. Sun?

Can't say for sure, but glibc has (had?) an implementation of sin that
includes some 1500 (?) bits of pi. The comments say it's from Sun.

Ray

glen herrmannsfeldt

Dec 10, 2003, 3:18:43 PM
Terje Mathisen wrote:

(snip)

> Last I checked, Pentium use an 83-bit (67-bit mantissa) version of pi
> for argument reduction, so this also becomes the effective limit as soon
> as you have to do any kind of argument reduction.
>
> I've read about at least one implementation that have a 1024-bit pi just
> to allow exact reduction for the entire +/- 10-bit exponent range of
> double precision inputs. Sun?
>
> I think we'll have to agree to disagree about the actual validity of
> such calculations.

I learned about this when the IBM Fortran library routines would refuse
to do trigonometric functions when the argument was too big.

Then I knew someone in my high school with a brand new HP-55 calculator,
who found the sine of 9.999999999e99 degrees, and assured me that it
was right, though it was not zero.

As 9.999999999e99 is a multiple of 180, I was not convinced.

-- glen

Dik T. Winter

Dec 10, 2003, 8:40:16 PM
In article <DcLBb.357108$275.1165325@attbi_s53> glen herrmannsfeldt <g...@ugcs.caltech.edu> writes:
...

> Then I knew someone in my high school with a brand new HP-55 calculator,
> who found the sine of 9.999999999e99 degrees, and assured me that it
> was right, though it was not zero.

Worse are people who enter pi, multiply by 1000, calculate the sine, and
then complain when the answer is not zero.
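
In C the whole complaint fits in a couple of lines (the size of the result
is only indicative; it depends on the library):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* 1000 * (pi rounded to double) is not a multiple of the real pi,
       so its sine is small but definitely not zero */
    double thousand_pi = 1000.0 * 3.14159265358979323846;
    printf("sin(1000*pi, in double) = %.3e\n", sin(thousand_pi));
    return 0;
}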

Terje Mathisen

Dec 11, 2003, 9:39:38 AM
Raymond Toy wrote:
> Can't say for sure, but glibc has (had?) an implementation of sin that
> includes some 1500 (?) bits of pi. The comments say it's from Sun.

OK, that's almost certainly the one I was thinking of.

Why would you have more than max_exponent + mantissa bits? I.e. 1023 +
53 + fudge factor, say 1080?

Raymond Toy

Dec 11, 2003, 1:20:05 PM
>>>>> "Terje" == Terje Mathisen <terje.m...@hda.hydro.com> writes:

Terje> Raymond Toy wrote:

>>>>>>> "Terje" == Terje Mathisen <terje.m...@hda.hydro.com> writes:
Terje> I've read about at least one implementation that have a
>> 1024-bit pi
Terje> just to allow exact reduction for the entire +/- 10-bit exponent range
Terje> of double precision inputs. Sun?
>> Can't say for sure, but glibc has (had?) an implementation of sin
>> that
>> includes some 1500 (?) bits of pi. The comments say it's from Sun.

Terje> OK, that's almost certainly the one I was thinking of.

Terje> Why would you have more than max_exponent + mantissa bits? I.e. 1023 +
Terje> 53 + fudge factor, say 1080?

You are probably right. I don't have glibc sources around anymore.

But I thought it was rather neat. It allowed Sun's libraries to compute
cos(2^120) correctly. (Well, assuming 2^120 is exact, which is what I
think these kinds of routines should assume. Only the programmer really
knows how accurate the numbers are. The routines should do the best
reasonable job to produce the desired answer.)

Ray

Terry

Dec 14, 2003, 2:43:47 AM
Ok this has definitely been an interesting learning experience. Thanks
again to all those who have replied. I guess my choices at this point
are:

a) carefully select a compiler/math library that will deliver a
known accuracy
b) barring the above option (if for example I'm compiling to Java
byte code, where I'm at the mercy of whatever hardware/os/software
library happen to be supporting my code), incorporate my own library
to do the math that I need (which sounds like something that would be
a bit beyond my abilities unless there was an open source version of
the function that I could adapt).

One thing I'm still fuzzy on is when a math coprocessor comes into
play. I would have thought that most modern CPUs have coprocessors
to calculate log[] and x^y, but it's now obvious that that is not the
case. Why? What determines which math functions are done by a
coprocessor and which are done by a software library? It seems to me
that if it were done in hardware there would at least be a bit more
standardization.

terry

glen herrmannsfeldt

Dec 14, 2003, 4:14:04 AM
Terry wrote:

> Ok this has definitely been an interesting learning experience. Thanks
> again to all those who have replied. I guess my choices at this point
> are:
>
> a) carefully select a compiler/math library that will deliver a
> known accuracy

> b) barring the above option (if for example I'm compiling to Java
> byte code, where I'm at the mercy of whatever hardware/os/software
> library happen to be supporting my code), incorporate my own library
> to do the math that I need (which sounds like something that would be
> a bit beyond my abilities unless there was an open source version of
> the function that I could adapt).

Well, with Java you are guaranteed IEEE, though with some features
missing. There is the java.lang.Math class with most of the needed
math library functions.

> One thing I'm still fuzzy on is when does a math coprocessor come into
> play? I would have thought that most modern CPU's have coprocessor's
> to calculate log[] and x^y but it's now obvious that that is not the
> case. Why? What determines what math functions are done by a
> coprocessor and what are done by a software library? Seems to me that
> if it was done in hardware at least there would be a bit more
> standardization.

Well, the 8087 had at least some support for most of the trig and log
type functions. There is always a trade off between hardware and
software.

-- glen

Mike Cowlishaw

Dec 14, 2003, 5:16:26 AM
> One thing I'm still fuzzy on is when does a math coprocessor come into
> play? I would have thought that most modern CPU's have coprocessor's
> to calculate log[] and x^y but it's now obvious that that is not the
> case. Why? What determines what math functions are done by a
> coprocessor and what are done by a software library? Seems to me that
> if it was done in hardware at least there would be a bit more
> standardization.

From a standards-writer's point of view the difficulty is that the only
'reasonable' result to standardize is that the result be correctly
rounded. That is relatively easy to achieve for +-*/ and sqrt, but
is very much harder for log and power, and is generally considered
to be too hard to implement in hardware.

I forget whether the 8087 gave correctly rounded results for
these (probably not -- a quick glance over the 387 data sheet
didn't find any mention of this) -- but they were undoubtedly
implemented in the 8087's microcode rather than as 'pure
hardware' operations.

Mike Cowlishaw


Nick Maclaren

Dec 14, 2003, 6:32:54 AM
In article <brhd9v$lmi$1...@news.btv.ibm.com>,

Mike Cowlishaw <mfc...@attglobal.net> wrote:
>> One thing I'm still fuzzy on is when does a math coprocessor come into
>> play? I would have thought that most modern CPU's have coprocessor's
>> to calculate log[] and x^y but it's now obvious that that is not the
>> case. Why? What determines what math functions are done by a
>> coprocessor and what are done by a software library? Seems to me that
>> if it was done in hardware at least there would be a bit more
>> standardization.
>
>From a standards-writer's point of view the difficulty is that the only
>'reasonable' result to standardize is that the result be correctly
>rounded. That is relatively easy to achieve for +-*/ and sqrt, but
>is very much harder for log and power, and is generally considered
>to be too hard to implement in hardware.

I am sorry, but that is completely wrong. There are many other
reasonable things to standardise, and that erroneous claim has done
immense harm to the cause of getting reliable and accurate results.
The way it has done that is by demanding an unreasonable target,
and so vendors who cannot or don't want to deliver it (a) quietly
gloss over the fact and (b) feel that they may as well be hanged
for a sheep as a lamb and don't do even as well as they could.

For example, a standard could easily specify that there has to be a
precise, documented formula for the maximum error and leave it at
that.

It is almost equally easy to define reasonable accuracy requirements,
either absolutely or relative to implementation-defined constants.
Yes, this needs more design, which is a political issue, but so is
demanding the unreasonable.


Regards,
Nick Maclaren.

Mike Cowlishaw

Dec 14, 2003, 12:35:35 PM
Nick Maclaren wrote:
> I am sorry, but that is completely wrong.

Nick, go read what the poster was asking for. It was
a perfectly reasonable request that he or she will get
the same result regardless of the platform, hardware,
or software which provides that answer.

What is 'wrong' with asking for the same result
for a call to the same function with the same argument
and the same result constraints? And what is 'wrong' with
suggesting that that same result should be the correctly
rounded result?

'Idealistic', maybe. But not Wrong.

Mike


Nick Maclaren

Dec 14, 2003, 2:34:37 PM

In article <bri71c$bnu$1...@news.btv.ibm.com>,
"Mike Cowlishaw" <mfc...@attglobal.net> wrote:

I was responding to your posting, and I included the appropriate
context. To repeat your statement:

>From a standards-writer's point of view the difficulty is that the only
>'reasonable' result to standardize is that the result be correctly
>rounded. That is relatively easy to achieve for +-*/ and sqrt, but
>is very much harder for log and power, and is generally considered
>to be too hard to implement in hardware.

That is wrong. Simply, factually, wrong. As I pointed out, there
are many other reasonable results to standardise.


The original context was asking about coprocessors, and how they are
involved. The answer to that is quite complex but, as I am sure you
know, a great many such coprocessors have specified results that are
NOT necessarily the "correctly rounded" result. In particular, most
vector units (which are coprocessors) did not.

To answer your new assertion, there is nothing wrong with asking for
that, nor is there anything wrong with suggesting it. What is wrong
is to state that it is the only reasonable thing to standardise,
which is what you said.


Regards,
Nick Maclaren.

Terje Mathisen

Dec 14, 2003, 3:22:23 PM
Nick Maclaren wrote:

> In article <bri71c$bnu$1...@news.btv.ibm.com>,
> Mike Cowlishaw <mfc...@attglobal.net> wrote:
>
>>Nick Maclaren wrote:
>>
>>>I am sorry, but that is completely wrong.
>>
>>Nick, go read what the poster was asking for. It was
>>a perfectly reasonable request that he or she will get
>>the same result regardless of the platform, hardware,
>>or software which provides that answer.
>>
>>What is 'wrong' with asking for the same result
>>for a call to the same function with the same argument
>>and the same result constraints? And what is 'wrong' with
>>suggesting that that same result should be the correctly
>>rounded result?
>>
>>'Idealistic', maybe. But not Wrong.
>
>
> I was responding to your posting, and I included the appropriate
> context. To repeat your statement:
>
>>From a standards-writer's point of view the difficulty is that the only
>
>>'reasonable' result to standardize is that the result be correctly
>>rounded. That is relatively easy to achieve for +-*/ and sqrt, but
>>is very much harder for log and power, and is generally considered
>>to be too hard to implement in hardware.
>
>
> That is wrong. Simply, factually, wrong. As I pointed out, there
> are many other reasonable results to standardise.

Standardizing on 'correctly rounded' transcendental functions is a Really
Bad Idea (TM), since it is in general impossible, i.e. you can have a
transcendental function that delivers a true result which is arbitrarily
close to the 0.5 ulp rounding point.

(It is of course possible that a given function (say sin()) turns out to
have no such results for any possible double precision input, but the
odds are good that you'd have to calculate a _lot_ of bits to be sure. :-()
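
A brute-force way to get a feel for the problem (a sketch only: sinl() is
just a stand-in for a real extended-precision reference, so this proves
nothing, it merely shows what "close to the 0.5 ulp point" means):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    long double worst = 1.0L;    /* smallest observed distance from a midpoint */
    double worst_x = 0.0;

    for (int i = 0; i < 1000000; i++) {
        double x = 1.0 + (double)rand() / RAND_MAX;    /* arbitrary test range */
        long double s = sinl((long double)x);

        double lo = (double)s;                         /* a nearby double */
        if ((long double)lo > s)
            lo = nextafter(lo, -INFINITY);             /* make sure lo <= s */
        double hi = nextafter(lo, INFINITY);

        long double frac = (s - lo) / ((long double)hi - lo);  /* in [0,1) */
        long double d = fabsl(frac - 0.5L);            /* 0 == exactly halfway */
        if (d < worst) { worst = d; worst_x = x; }
    }
    printf("hardest case found: x = %.17g, %.3Lg ulp from the midpoint\n",
           worst_x, worst);
    return 0;
}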

Nick Maclaren

Dec 14, 2003, 3:40:06 PM
In article <brigq0$r2f$1...@osl016lin.hda.hydro.com>,

Terje Mathisen <terje.m...@hda.hydro.com> wrote:
>
>Standardizing on 'correctly rounded' trancendental functions is a Really
>Bad Idea (TM), since it is in general impossible, i.e. you can have a
>trancendental function that delivers a true result which is arbitrarily
>close to the 0.5 ulp rounding point.

That is very similar to the reason that it is not done for vector
coprocessors. If you do that, and don't constrain your inputs, you
effectively have to serialise the vector operations.

>(It is of course possible that a given function (say sin()) turns out to
>have no such results for any possible double precision input, but the
>odds are good that you'd have to calculate a _lot_ of bits to be sure. :-()

Sin is a bad example, as it is too civilised. Try erf and inverse
erf :-)


Regards,
Nick Maclaren.

Dik T. Winter

Dec 14, 2003, 8:28:31 PM
In article <brigq0$r2f$1...@osl016lin.hda.hydro.com> Terje Mathisen <terje.m...@hda.hydro.com> writes:
> Standardizing on 'correctly rounded' trancendental functions is a Really
> Bad Idea (TM), since it is in general impossible, i.e. you can have a
> trancendental function that delivers a true result which is arbitrarily
> close to the 0.5 ulp rounding point.
>
> (It is of course possible that a given function (say sin()) turns out to
> have no such results for any possible double precision input, but the
> odds are good that you'd have to calculate a _lot_ of bits to be sure. :-()

The latter is the point. When constrained to floating point you can *not*
come arbitrarily close to the 0.5 ulp rounding point. However it will
take a lot of time to find the closest function value for a transcendental
function. For algebraic functions (like square root and cube root) it
is easier because Liouville's theorem specifies lower bounds on the
closeness. Note however that with transcendental functions you will never
get the 0.5 ulp rounding point exactly (unless you have some special
argument). For instance, when x is a floating point number, sin(x) is
rational if and only if x == 0.

Nick Maclaren

Dec 15, 2003, 3:26:08 AM

In article <HpwxF...@cwi.nl>,

"Dik T. Winter" <Dik.W...@cwi.nl> writes:
|>
|> The latter is the point. When constrained to floating point you can *not*
|> come arbitrarily close to the 0.5 ulp rounding point. However it will
|> take a lot of time to find the closest function value for a transcendental
|> function. For algebraic functions (like square root and cube root) it
|> is easier because Liouville's theorem specifies lower bounds on the
|> closeness. Note however that with transcendental functions you will never
|> get the 0.5 ulp rounding point exactly (unless you have some special
|> argument). For instance, when x is a floating point number, sin(x) is
|> rational if and only if x == 0.

Yes and no. All of what you and Terje say is true, but it is possible
to do the same analyses for trigonometric functions with more effort,
and even log and exp. I am sure that you both know how, but here is
one method.

IEEE double floating point values can be expressed in terms of
a couple of thousand discrete uniform ranges (i.e. one for each
exponent).

pi has a known continued fraction with civilised properties,
so you can reduce those (purely algebraically) to known formulae
for the residual in the range 0-pi/2.

sin has a civilised Taylor series, so you can reduce that (again
purely algebraically) to a floating-point representation plus a
correction. Q.E.D.

This isn't pretty, but DOES enable a full analysis of the problem
cases, and is how the people (like Fred Gustafson) produced their
'perfect' implementations. While writing the algebra (even using
an advanced package like Axiom) would take skill and effort, I
doubt that it would need all that much computer time to do the
above.

It is clearly possible for power, but we are probably talking about
another order of magnitude complication again (i.e. as much harder
than sin as sin is than sqrt). I wouldn't know how to start with
erf and inverse erf short of brute force, which is way beyond
current technology.

I don't think that it is reasonable for a standard to require it
even for sin, let alone other functions. But it ISN'T impossible
at least for the trigonometric functions.


Regards,
Nick Maclaren.

Mike Cowlishaw

Dec 15, 2003, 4:35:01 AM
> I was responding to your posting, and I included the appropriate
> context. To repeat your statement:
>
>> From a standards-writer's point of view the difficulty is that the
>> only 'reasonable' result to standardize is that the result be
>> correctly rounded. That is relatively easy to achieve for +-*/ and
>> sqrt, but
>> is very much harder for log and power, and is generally considered
>> to be too hard to implement in hardware.
>
> That is wrong. Simply, factually, wrong. As I pointed out, there
> are many other reasonable results to standardise.

Yes, that's exactly why I put 'reasonable' in quotes...

Mike Cowlishaw


Nick Maclaren

Dec 15, 2003, 5:03:15 AM

In article <brjv8b$jui$1...@news.btv.ibm.com>,
"Mike Cowlishaw" <mfc...@attglobal.net> writes:

Well, I apologise for misunderstanding you, and contradicting what
you didn't mean, but I am totally baffled by what you did mean.

The problem is that there is a currently widespread dogma that the
'correctly rounded' is the only reasonable thing to standardise,
and that viewpoint has caused and is causing incalculable harm.
I thought that you were promoting it.

[ I put it in quotes to say that I do not necessarily agree that
there is a single best type of rounding, but that aspect is not
relevant to my point. ]

I have many times been told that no alternative forms of accuracy
standardisation (other than 'correct rounding' and 'you get what
you get') would be considered, because it is 'known' that no other
possibilities are 'reasonable'.


Regards,
Nick Maclaren.

Nick Maclaren

Dec 15, 2003, 5:11:00 AM

In article <brjv8b$jui$1...@news.btv.ibm.com>,
"Mike Cowlishaw" <mfc...@attglobal.net> writes:

Well, I apologise for misunderstanding you, and contradicting what
you didn't mean, but I am totally baffled by what you did mean.

The problem is that there is a currently widespread dogma that the
'correctly rounded' is the only reasonable thing to standardise,
and that viewpoint has caused and is causing incalculable harm.

I thought that you were promoting it in that response.

glen herrmannsfeldt

Dec 15, 2003, 3:47:41 PM
Dik T. Winter wrote:

(snip)

> The latter is the point. When constrained to floating point you can *not*
> come arbitrarily close to the 0.5 ulp rounding point. However it will
> take a lot of time to find the closest function value for a transcendental
> function. For algebraic functions (like square root and cube root) it
> is easier because Liouville's theorem specifies lower bounds on the
> closeness. Note however that with transcendental functions you will never
> get the 0.5 ulp rounding point exactly (unless you have some special
> argument). For instance, when x is a floating point number, sin(x) is
> rational if and only if x == 0.

How about for base pi floating point numbers? Yes, I know there is no
base pi hardware, and never will be. Or maybe base pi/6 or so?

I have heard suggestions for base e, but never for base pi.

-- glen

Nick Maclaren

Dec 16, 2003, 3:19:46 AM

In article <N5pDb.58336$8y1.229723@attbi_s52>,
glen herrmannsfeldt <g...@ugcs.caltech.edu> writes:
|> How about for base pi floating point numbers? Yes, I know there is no
|> base pi hardware, and never will be. Or maybe base pi/6 or so?
|>
|> I have heard suggestions for base e, but never for base pi.

I invented it some 50 years back, to the surprise of my teacher.
Much later, I discovered how often it had been invented before :-)
It offers virtually no advantages, and all non-integer bases are
a real pain to handle in discrete arithmetic.

More plausibly, trigonometric functions do not necessarily need
their arguments in radians; if the arguments are degrees, grads
or revolutions, then there ARE exact results. And, despite what
people are taught at school, there is nothing more mathematically
natural about using radians than using revolutions - you just make
different things more natural.
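
To make that concrete, a sketch of a degree-argument sine in C (sind() is
a made-up name here, not a standard function):

#include <math.h>
#include <stdio.h>

static double sind(double degrees)
{
    static const double pi = 3.14159265358979323846;
    double r = fmod(degrees, 360.0);     /* fmod is exact, no rounding here */
    if (r < 0.0)
        r += 360.0;
    if (r == 0.0 || r == 180.0)          /* the "easy" angles come out exact */
        return 0.0;
    if (r == 90.0)
        return 1.0;
    if (r == 270.0)
        return -1.0;
    return sin(r * (pi / 180.0));        /* general case: the usual rounding */
}

int main(void)
{
    printf("sind(180)      = %g\n", sind(180.0));   /* exactly 0 */
    printf("sin(pi approx) = %g\n", sin(3.14159265358979323846));
    return 0;
}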


Regards,
Nick Maclaren.

Terje Mathisen

Dec 16, 2003, 4:26:06 AM
Nick Maclaren wrote:
> |> How about for base pi floating point numbers? Yes, I know there is no
> |> base pi hardware, and never will be. Or maybe base pi/6 or so?
> |>
> |> I have heard suggestions for base e, but never for base pi.
>
> I invented it some 50 years back, to the surprise of my teacher.
> Much later, I discovered how often it had been invented before :-)
> It offers virtually no advantages, and all non-integer bases are
> a real pain to handle in discrete arithmetic.
>
> More plausibly, trigonometric functions do not necessarily need
> their arguments in radians; if the arguments are degrees, grads
> or revolutions, then there ARE exact results. And, despite what
> people are taught at school, there is nothing more mathematically
> natural about using radians than using revolutions - you just make
> different things more natural.

Still used every day:

All Garmin GPSs use scaled revolutions: 32-bit signed integers for a
full circle. Range reduction becomes quite trivial. :-)

I.e. base 2pi
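
In C terms, the idea looks something like this (a sketch of the
representation, certainly not Garmin's code):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* "binary angles": a 32-bit integer where 2^32 units == one full circle,
   so range reduction is just the natural wraparound of the addition */
static double bam32_to_radians(uint32_t a)
{
    return (double)a * (2.0 * 3.14159265358979323846 / 4294967296.0);
}

int main(void)
{
    uint32_t deg90 = 1073741824u;    /* 2^30 units == a quarter circle */
    uint32_t a = 3u * deg90;         /* 270 degrees */
    a += 2u * deg90;                 /* add 180 degrees: wraps around to 90 */
    printf("angle = %g rad, sin = %g\n",
           bam32_to_radians(a), sin(bam32_to_radians(a)));
    return 0;
}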

Dik T. Winter

Dec 16, 2003, 5:39:08 AM
In article <brmf72$731$1...@pegasus.csx.cam.ac.uk> nm...@cus.cam.ac.uk (Nick Maclaren) writes:
...

> More plausibly, trigonometric functions do not necessarily need
> their arguments in radians; if the arguments are degrees, grads
> or revolutions, then there ARE exact results.

Indeed, see the Ada standard, which also puts constraints on the
accuracy of the functions.

glen herrmannsfeldt

Dec 16, 2003, 3:17:16 PM
Nick Maclaren wrote:

(snip)

> More plausibly, trigonometric functions do not necessarily need
> their arguments in radians; if the arguments are degrees, grads
> or revolutions, then there ARE exact results. And, despite what
> people are taught at school, there is nothing more mathematically
> natural about using radians than using revolutions - you just make
> different things more natural.

PL/I has the sind(), cosd(), and I believe other functions with
arguments in degrees. There might be other languages that
do, but it doesn't seem to be as common as it should be.

With radians, it is difficult to get exact results for the easy cases.

It might be that it is fixed now, but I used to hear about problems
in doing a 90-degree or 180-degree rotation in PostScript. The
rotation matrix for 90 degrees should come out

0 1
-1 0

but those 0's didn't come out as exactly 0, because converting the
degrees (supplied to the rotate command) to radians and then feeding
them to the cos function doesn't give 0 out.
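
The effect is easy to reproduce (a tiny demo; the exact residue depends on
the library):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* 90 degrees converted to radians is not exactly pi/2, and cos() of
       that is tiny but nonzero */
    double rad = 90.0 * (3.14159265358979323846 / 180.0);
    printf("cos(90 degrees, via radians) = %.17g\n", cos(rad));
    return 0;
}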

-- glen

Dik T. Winter

Dec 16, 2003, 7:12:34 PM
In article <gLJDb.129859$_M.669815@attbi_s54> glen herrmannsfeldt <g...@ugcs.caltech.edu> writes:

> Nick Maclaren wrote:
> > More plausibly, trigonometric functions do not necessarily need
> > their arguments in radians; if the arguments are degrees, grads
> > or revolutions, then there ARE exact results. And, despite what
> > people are taught at school, there is nothing more mathematically
> > natural about using radians than using revolutions - you just make
> > different things more natural.
>
> PL/I has the sind(), cosd(), and I believe other functions with
> arguments in degrees. There might be other languages that
> do, but it doesn't seem to be as common as it should be.

Ada has two forms of those functions: with a single argument (in that
case it is the standard function) and with two arguments. The second
argument is the measure of the complete circle, so sin(x, 360) would
be in degrees.

Terje Mathisen

Dec 17, 2003, 1:55:23 AM
Dik T. Winter wrote:

> In article <gLJDb.129859$_M.669815@attbi_s54> glen herrmannsfeldt <g...@ugcs.caltech.edu> writes:
> > Nick Maclaren wrote:
> > > More plausibly, trigonometric functions do not necessarily need
> > > their arguments in radians; if the arguments are degrees, grads
> > > or revolutions, then there ARE exact results. And, despite what
> > > people are taught at school, there is nothing more mathematically
> > > natural about using radians than using revolutions - you just make
> > > different things more natural.
> >
> > PL/I has the sind(), cosd(), and I believe other functions with
> > arguments in degrees. There might be other languages that
> > do, but it doesn't seem to be as common as it should be.
>
> Ada has two forms of those functions. With a single argument (in that
> case it is the standard function) and with two arguments. The second
> arcument is the measure of the complete circle. So sin(x, 360) would
> be in degrees.

"Add everything _and_ the kitchen sink!"
:-)

So the regular function version (for radians) becomes a two-argument
call with a default second parameter of 2pi?

Dik T. Winter

Dec 17, 2003, 6:16:09 AM
In article <brouks$61h$3...@osl016lin.hda.hydro.com> Terje Mathisen <terje.m...@hda.hydro.com> writes:
> Dik T. Winter wrote:
...

> > Ada has two forms of those functions. With a single argument (in that
> > case it is the standard function) and with two arguments. The second
> > arcument is the measure of the complete circle. So sin(x, 360) would
> > be in degrees.
>
> "Add everything _and_ the kitchen sink!"
> :-)
>
> So the regular function version (for radians) becomes a two-argument
> call with a default second parameter of 2pi?

No. On purpose they are different functions. The two-argument functions
must do exact range reduction mod the second argument. The one-argument
function is allowed to do range reduction with an extended-precision
value of 2pi.
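
In C the two-argument form might look roughly like this (a sketch of the
idea, not the full Ada semantics; sin_cycle is a made-up name):

#include <math.h>
#include <stdio.h>

static double sin_cycle(double x, double cycle)
{
    static const double pi = 3.14159265358979323846;
    double r = fmod(x, cycle);            /* exact, however large x is */
    if (r < 0.0)
        r += cycle;
    if (r == 0.0 || r == 0.5 * cycle)     /* the half-cycle points are exact */
        return 0.0;
    return sin(r / cycle * (2.0 * pi));   /* only this step rounds */
}

int main(void)
{
    /* a huge argument: the reduction mod 360 is still exact */
    printf("%g\n", sin_cycle(36000000180.0, 360.0));               /* 0 */
    printf("%g\n", sin(36000000180.0 * 3.14159265358979323846 / 180.0));
    return 0;
}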

Mike Cowlishaw

Dec 24, 2003, 11:27:25 AM
[And my apologies for slow response to this one .. somehow it got 'marked as
read' when it wasn't...]

Nick Maclaren wrote:
> In article <brjv8b$jui$1...@news.btv.ibm.com>,
> "Mike Cowlishaw" <mfc...@attglobal.net> writes:
>>>> I was responding to your posting, and I included the appropriate
>>>> context. To repeat your statement:
>>>>
>>>>> From a standards-writer's point of view the difficulty is that the
>>>>> only 'reasonable' result to standardize is that the result be
>>>>> correctly rounded. That is relatively easy to achieve for +-*/
>>>>> and sqrt, but
>>>>> is very much harder for log and power, and is generally considered
>>>>> to be too hard to implement in hardware.
>>>>
>>>> That is wrong. Simply, factually, wrong. As I pointed out, there
>>>> are many other reasonable results to standardise.
>>>
>>> Yes, that's exactly why I put 'reasonable' in quotes...
>
> Well, I apologise for misunderstanding you, and contradicting what
> you didn't mean, but I am totally baffled by what you did mean.
>
> The problem is that there is a currently widespread dogma that the
> 'correctly rounded' is the only reasonable thing to standardise,
> and that viewpoint has caused and is causing incalculable harm.
> I thought that you were promoting it in that response.
>
> [ I put it in quotes to say that I do not necessarily agree that
> there is a single best type of rounding, but that aspect is not
> relevant to my point. ]

OK. I'm referring here not to what is 'technically plausible' but
more 'what a committee can agree to' (which is the basis of
just about all formal standards). For a function such as multiply,
or add, or square root, it has been shown that 'correctly rounded'
is attainable efficiently. It is therefore relatively easy ('reasonable')
for a standards committee to standardize these functions.

For other functions (such as power, x^y) the correctly-rounded
answer is hard to derive. Therefore one ends up in a debate as
to how 'loose' an answer is acceptable (and for which rounding
modes this applies, etc.). And others point out the difficulty of
testing, generating testcases, etc.

An individual can rapidly come to a conclusion as to the
'right' answer in any such case. But in practice (in a
committee) there will be many varied views and none
of the individual suggestions will be clearly the 'best'.

And in this situation, the only 'reasonable' thing to do is
to not standardize.

Mike Cowlishaw

Nick Maclaren

Dec 30, 2003, 11:28:42 AM
In article <bsceph$fj0$1...@news.btv.ibm.com>,

Mike Cowlishaw <mfc...@attglobal.net> wrote:
>
>OK. I'm referring here not to what is 'technically plausible' but
>more 'what a committee can agree to' (which is the basis of
>just about all formal standards). For a function such as multiply,
>or add, or square root, it has been shown that 'correctly rounded'
>is attainable efficiently. It is therefore relatively easy ('reasonable')
>for a standards committee to standardize these functions.

One can argue whether the performance penalty is worth the benefit,
but I agree that is debating what should be standardised. It is
certainly reasonable to standardise 'perfect' accuracy in those cases,
though it is also reasonable not to.

>For other functions (such as power, x^y) the correctly-rounded
>answer is hard to derive. Therefore one ends up in a debate as
>to how 'loose' an answer is acceptable (and for which rounding
>modes this applies, etc.). And others point out the difficulty of
>testing, generating testcases, etc.

Um. That IS so, but I think that it is overstating the case. Such
debates are normal for any language facility.

>An individual can rapidly come to a conclusion as to the
>'right' answer in any such case. But in practice (in a
>committee) there will be many varied views and none
>of the individual suggestions will be clearly the 'best'.

Ditto.

>And in this situation, the only 'reasonable' thing to do is
>to not standardize.

Definitely NOT. Inter alia, that approach would eliminate all known
language standards, as well as most of the IEEE standard. There are
AT LEAST two reasonable things to standardise in this case:

One is to standardise a known achievable bound, and say that an
implementation must achieve at least that. For example, you could
require 1 ULP accuracy - where that accuracy could be taken up by
any argument as well as the results. This would work for power,
sin, and so on.

Another is to standardise an accuracy metric (or at least a form of
description) and state that the achieved bound is implementation
defined. This would provide users with the information they need
to make a reasonable 'contract' with the implementor.


Regards,
Nick Maclaren.
