
Jul 2, 2022, 4:37:38 PM

Forth code for calculation over a large set of spherical Bessel and

spherical Neumann functions, with standard normalization, is available

in the kForth repositories -- see, for example,

https://github.com/mynenik/kForth-64/blob/master/forth-src/fsl/extras/sph_bes_neu.4th

While the FSL provides a spherical Bessel function calculator for the

first ten functions, j_0(x) -- j_9(x), the present code extends the

calculation range to beyond j_100(x). The spherical Neumann functions,

n_l(x) (also often denoted by y_l(x)) are also computed over the same

range of l.
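For readers unfamiliar with how such an extension to high order works: the standard technique is Miller's downward recurrence, normalized against the closed form j_0(x) = sin(x)/x. The sketch below is a Python illustration of that general idea, not the kForth module's actual code; in particular, the pad of 20 extra starting orders is an arbitrary illustrative choice, where a robust implementation derives the starting order from lmax and x.

```python
import math

def sph_jn(lmax, x):
    """Spherical Bessel j_0(x)..j_lmax(x) by Miller's downward recurrence.

    Minimal sketch: seed the recurrence at an order safely above lmax
    with arbitrary tiny values, recur downward, then rescale so that
    the l=0 entry equals the exact j_0(x) = sin(x)/x.
    """
    L = lmax + 20                 # illustrative pad above lmax
    fp, f = 0.0, 1.0e-30          # seeds for f_{L+1}, f_L
    out = [0.0] * (lmax + 1)
    for l in range(L, 0, -1):
        # f_{l-1} = ((2l+1)/x) f_l - f_{l+1}
        fp, f = f, (2 * l + 1) / x * f - fp
        if l - 1 <= lmax:
            out[l - 1] = f
    scale = (math.sin(x) / x) / out[0]   # normalize with exact j_0
    return [v * scale for v in out]
```

Downward recurrence is used because j_l is the minimal solution of the recurrence: recurring upward amplifies rounding error, while downward recurrence suppresses it.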

Test code is included, with reference values obtained from Wolfram
Alpha. The original source for this code is a paper in Computers in
Physics vol. 2, pp. 62--72 (1988), linked below, in which the authors
provide an accuracy analysis. The test code appears to show that this
version, running under kForth, exceeds the accuracy reported in the
paper for double precision calculations. One should note that kForth's
FSIN and FCOS make use of the C math library routines for computing
sin(x) and cos(x), instead of the f.p.u. instruction FSINCOS -- the
latter has a well-known degradation of accuracy for large arguments,
x, while the C math library functions reduce the argument into range
with full accuracy.

The original 1988 paper is available at

https://aip.scitation.org/doi/pdf/10.1063/1.168296

--

Krishna Myneni


Jul 3, 2022, 7:17:03 AM

On Saturday, July 2, 2022 at 10:37:38 PM UTC+2, Krishna Myneni wrote:

> The test code appears to show this version,

> running under kForth, exceeds the accuracy reported in the paper for

> double precision calculations. One should note that kForth's FSIN and

> FCOS make use of the C math library routines for computing sin(x) and

> cos(x), instead of the f.p.u. processor instruction FSINCOS -- the

> latter has a well-known degradation of performance for large arguments,

> x, while the C math library functions modulo the argument into a range

> with full accuracy.

Thanks for publishing this. It was a breeze to port to 4tH and ran straight
away. That having been said, the accuracy I achieved was MUCH lower - and probably

due to the Remez algorithm of my high level FSIN + FCOS functions.

But if these ranges are that large, why not use your own "fsin near pi" function?

: fsin_near_npi ( rdelta n -- r )
  2 mod if fnegate then
  fdup 7 fpow 5040 s>f f/
  fover 5 fpow 120 s>f f/ f-
  fover 3 fpow 6 s>f f/ f+
  f- ;

Hans Bezemer

Jul 3, 2022, 7:40:18 AM

On 7/3/22 06:17, Hans Bezemer wrote:

> On Saturday, July 2, 2022 at 10:37:38 PM UTC+2, Krishna Myneni wrote:

>> The test code appears to show this version,

>> running under kForth, exceeds the accuracy reported in the paper for

>> double precision calculations. One should note that kForth's FSIN and

>> FCOS make use of the C math library routines for computing sin(x) and

>> cos(x), instead of the f.p.u. processor instruction FSINCOS -- the

>> latter has a well-known degradation of performance for large arguments,

>> x, while the C math library functions modulo the argument into a range

>> with full accuracy.

>

> Thanks for publishing this. It was a breeze to port to 4tH and ran straight

> away. That having said, the accuracy I achieved was MUCH lower - and probably

> due to the Remez algorithm of my high level FSIN + FCOS functions.

>

I'm mistaken about the accuracy being exceeded under kForth over what is

reported in the original paper. I was comparing to the estimated

accuracies for the unnormalized version (shown in Table V) which were in

the 10^-15 range. The accuracies reported in Table III for the

normalized functions are consistent with what my test code finds against

the reference values from Wolfram Alpha.

> But if these ranges are that large, why not use your own "fsin near pi" function?

>

> : fsin_near_npi ( rdelta n -- r )

> 2 mod if fnegate then

> fdup 7 fpow 5040 s>f f/

> fover 5 fpow 120 s>f f/ f-

> fover 3 fpow 6 s>f f/ f+

> f-

> ;

The range of the argument to the different j_l(x) and n_l(x) does not
exceed 300 in the test code, which is likely too small to see a

difference between using the native FSINCOS fpu instruction vs the C

library FSIN and FCOS functions. A comparison against values for large x

should show the problem.

I'm not familiar with the algorithm above. Doesn't range reduction

require higher precision arithmetic?

--

KM

Jul 3, 2022, 8:21:58 AM

On Sunday, July 3, 2022 at 1:40:18 PM UTC+2, Krishna Myneni wrote:

> The range of the argument to the different j_l(x) and n_l(x) does not

> exceed 300 in the test code, which is likely too small to see a

> difference between using the native FSINCOS fpu instruction vs the C

> library FSIN and FCOS functions. A comparison against values for large x

> should show the problem.

>

> I'm not familiar with the algorithm above. Doesn't range reduction

> require higher precision arithmetic?

I dunno - YOU tell ME ;-)

HB

\ High accuracy calculation of double-precision sin(x),

\ using Taylor Series approximation when x is near

\ a multiple of pi.

\ Due to finite precision in the representation of x,

\ the calculation of sin(x) can suffer a loss of

\ significant digits when x is near pi.

\ The Intel and AMD FPUs also suffer a loss of accuracy in

\ the FSIN instruction, for an argument, x, which is

\ near a multiple of pi, n*pi, for n>1. This function uses

\ a 7th-order Taylor series approximation to compute sin(x)

\ for those cases:

\ sin(pi + delta) = -delta + delta^3/6 - delta^5/120

\ + delta^7/5040 + ...

\ Higher order terms are not used.

\ For |delta| <= 1e-5, the accuracy in double precision

\ will be 16 significant digits.

\ Krishna Myneni, <krishn...@ccreweb.org>

\ Compute sin(n*pi + rdelta) to high accuracy, where

\ rdelta is an offset and n = 0,1,2, ...

[UNDEFINED] fsin_near_npi [IF]

[UNDEFINED] fpow [IF] include lib/fpow.4th [THEN]

: fsin_near_npi ( rdelta n -- r )
  2 mod if fnegate then
  fdup 7 fpow 5040 s>f f/
  fover 5 fpow 120 s>f f/ f-
  fover 3 fpow 6 s>f f/ f+
  f- ;

[THEN]
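For those who don't read Forth: the word above computes sin(n*pi + delta) as (-1)^n * sin(delta), with sin(delta) taken from the truncated Taylor series. A direct Python transliteration (the function name is mine, for illustration):

```python
import math

def sin_near_npi(delta, n):
    # sin(n*pi + delta) = (-1)^n * sin(delta); the series is truncated
    # after the delta^7 term, matching the Forth word above
    s = delta - delta**3 / 6.0 + delta**5 / 120.0 - delta**7 / 5040.0
    return -s if n % 2 else s
```

For |delta| <= 1e-5 the first neglected term, delta^9/362880, is below 1e-50, so the truncation error is far beneath double-precision rounding.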

Jul 3, 2022, 10:37:20 AM

The above can be considered a special case of range reduction only when
the argument is very close to pi for sin(x) (delta <= 1e-5). Something

similar can be done for cos(x) when the argument is near integer

multiples of pi/2, but, in general, when x is arbitrary and greater than

pi, the accuracy of the fpu FSIN and FCOS instructions is diminished as

|x| increases. So, the algorithm above wouldn't be useful for general

arguments to the spherical Bessel function calculations when |x| is

large. Range reduction of a general argument into the interval -pi,pi

requires higher precision to avoid degrading the precision of the

original argument.

--

KM


Jul 3, 2022, 11:40:05 AM

On Sunday, July 3, 2022 at 4:37:20 PM UTC+2, Krishna Myneni wrote:

> The above can be considered a special case of range reduction only when

> the argument is very close to pi for sin(x) (delta <= 1e-5).

Oh! I didn't know that! I thought it was +/-0.5 pi ;-)

> Range reduction of a general argument into the interval -pi,pi

> requires higher precision to avoid degrading the precision of the

> original argument.

So, SIN(1000 PI) is always a problem (when given as an FP number).
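This point is easy to demonstrate in any language whose libm does full argument reduction (Python here, purely as an illustration): the double nearest 1000*pi misses the mathematical value by roughly 1e-13, so even a perfectly reduced sine of the stored argument is tiny but nonzero.

```python
import math

# 1000.0 * math.pi is not the real number 1000*pi: the stored double
# carries an error on the order of 1e-13, and a correctly range-reduced
# sine faithfully returns sin() of what was stored, not of 1000*pi.
x = 1000.0 * math.pi
print(math.sin(x))  # tiny, but not 0.0
```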

But many thanks for your clarifications! I learned something!

Hans Bezemer

Jul 4, 2022, 8:13:59 AM

illustrative for others who are not aware of the issue we are discussing.

In kForth, the standard floating point word FSINCOS uses the native fpu
instruction for computing the sin(x) and cos(x) functions -- this
provides a fast way of computing these functions when it is known in
advance that the range of x will be limited. In contrast, the standard
FSIN and FCOS words call GCC's math library functions, which use proper
range reduction before calling the native fpu instructions. These words
are useful for obtaining the best accuracy over an unknown range of
arguments.

Below is a comparison of using the library version of FSIN vs the fpu

instruction for FSIN in kForth:

17 set-precision

ok

\ x = 10,000.

1.0e4 fsin fs.

-3.0561438888825215e-01 ok \ with range reduction

1.0e4 fsincos fdrop fs.

-3.0561438888825215e-01 ok \ native fpu instruction

\ x = 100,000.

1.0e5 fsin fs.

3.5748797972016508e-02 ok \ with range reduction

1.0e5 fsincos fdrop fs.

3.5748797972016383e-02 ok \ native fpu instruction

\ x = 1,000,000.

1.0e6 fsin fs.

-3.4999350217129294e-01 ok \ with range reduction

1.0e6 fsincos fdrop fs.

-3.4999350217129177e-01 ok \ native fpu instruction

The above results may be compared with high precision reference values

for sin(x), e.g. using maxima, Wolfram Alpha, etc.
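The same check can be reproduced outside Forth. Python's math.sin goes through the platform libm, which does proper argument reduction, so it should agree with the "with range reduction" values above (a quick illustration, not part of the kForth test code):

```python
import math

# libm performs full argument reduction, so large arguments are safe
print(f"{math.sin(1.0e4):.16e}")  # compare with -3.0561438888825215e-01 above
```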

It occurred to me that we should be able to implement proper range

reduction in Forth using Julian Noble's double-double arithmetic words,

for cases where it is important, without resorting to C math library

functions. I have this issue in kForth-Win32 -- the Digital Mars C++

compiler's math library does not do proper range reduction for the trig

functions.
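To give a flavor of what such a reduction looks like, here is a Python sketch of Cody-Waite-style two-constant reduction -- an illustration of the general idea, not Noble's double-double words; the constants and names are mine. Splitting 2*pi into a leading double plus its tail recovers bits that subtracting a single-double 2*pi would lose.

```python
import math

TWO_PI_HI = 6.283185307179586       # 2*pi rounded to a double
TWO_PI_LO = 2.4492935982947064e-16  # the part of 2*pi the double drops

def sin_reduced(x):
    # two-stage subtraction: the tail term restores bits lost when
    # n * TWO_PI_HI is rounded; production reducers split the constant
    # further so that the first product is exact
    n = round(x / (2.0 * math.pi))
    r = (x - n * TWO_PI_HI) - n * TWO_PI_LO
    return math.sin(r)
```

Even this crude two-piece split keeps sin(1e4) within roughly 1e-12 of the fully reduced libm value; a further split of the constant (or genuine double-double arithmetic) pushes the agreement to full precision.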

--

Krishna

Jul 4, 2022, 8:25:58 AM

In article <t9uli5$3ces2$1...@dont-email.me>,
Krishna Myneni <krishna...@ccreweb.org> wrote:
<SNIP>
>The above results may be compared with high precision reference values
>for sin(x), e.g. using maxima, Wolfram Alpha, etc.
>
>It occurred to me that we should be able to implement proper range
>reduction in Forth using Julian Noble's double-double arithmetic words,
>for cases where it is important, without resorting to C math library
>functions. I have this issue in kForth-Win32 -- the Digital Mars C++
>compiler's math library does not do proper range reduction for the trig
>functions.

I looked at range reduction working on tforth's floating point
packet.
For a sufficiently large x the uncertainty in x is +/-pi, and the
sin and cosine may be all over the place in the range [-1,+1].
How can range reduction help, unless by accident it is known that
x is mathematically exact (which I guess would be rare for physics
calculations)?

Groetjes Albert
--
"in our communism country Viet Nam, people are forced to be
alive and in the western country like US, people are free to
die from Covid 19 lol" duc ha
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

Jul 4, 2022, 9:25:17 AM

On 7/4/22 07:25, albert wrote:

> In article <t9uli5$3ces2$1...@dont-email.me>,

> Krishna Myneni <krishna...@ccreweb.org> wrote:

> <SNIP>

>>

>> The above results may be compared with high precision reference values

>> for sin(x), e.g. using maxima, Wolfram Alpha, etc.

>>

>> It occurred to me that we should be able to implement proper range

>> reduction in Forth using Julian Noble's double-double arithmetic words,

>> for cases where it is important, without resorting to C math library

>> functions. I have this issue in kForth-Win32 -- the Digital Mars C++

>> compiler's math library does not do proper range reduction for the trig

>> functions.

>

> I looked at range reduction working on tforth's floating point

> packet.

> For a sufficient large x the uncertainly in x is +/-pi , and the

> sin and cosine may be all over the place. in the range [-1,+1].

> How can range reduction help, unless by accident it is known that

> x is mathematically exact (which I guess would be rare for physics

> calculations) ?

>

All of the arguments used in the above examples are exactly representable

numbers in IEEE floating point. If your calculations can afford the

intermediate loss of accuracy from the native trig function

instructions, then you should use them.

--

Krishna

Jul 4, 2022, 1:36:38 PM

: TEST
  CR 1e4 fdup +e. fsin -0.3056143888882521413609100352325069742318500438618062391101551450e f- space +e.
  CR 1e5 fdup +e. fsin 0.0357487979720165093164705006958088290090456925781088968546167365e f- space +e.
  CR 1e6 fdup +e. fsin -0.3499935021712929521176524867807714690614066053287162738570590540e f- space +e. ;

FORTH> TEST

1.0000000000000000000e+0004 -1.2251484549086200104e-0017

1.0000000000000000000e+0005 -1.2865752842435018710e-0016

1.0000000000000000000e+0006 1.2060122865642508572e-0015 ok

For up to 1e5 we are still ok with double precision. Above that, accuracy

decreases linearly.

It might be more important that the approximation is continuous and smooth

than that it is highly accurate.

-marcel

Jul 4, 2022, 4:02:23 PM

when modelling heat propagation for thick-walled steam generator

components. FWIW there were always physically measured initial

conditions along with their naturally limited low measuring accuracies,

like pressures and temperatures at component walls. Measurement

error would always outweigh theoretical calculation precisions.

So what practical applications really require the enormous calculation
accuracies for the Bessel functions discussed here?

Jul 4, 2022, 4:20:30 PM

On 7/4/22 12:36, Marcel Hendrix wrote:

> On Monday, July 4, 2022 at 2:13:59 PM UTC+2, Krishna Myneni wrote:

...

"Biased Exponent of Extended Precision input Argument" indicates that

the log-log plot of error vs magnitude is linear, not the error vs

magnitude.

> It might be more important that the approximation is continuous and smooth

> than that it is highly accurate.

--

Krishna

Jul 4, 2022, 4:53:21 PM

Spherical Bessel and Neumann functions appear, for example, in scattering
calculations from a finite interaction region in which the interaction

with a scattering center is spherically symmetric. I haven't used it for

such a numerical calculation, so I can't comment on practical required

accuracies, but it is likely to be highly problem-dependent, e.g. on the

energy/momentum distribution of the incident particle and on the

strength of the interaction.

A more general comment is that for a library module providing special

function values for use in calculations at some standard precision, the

ideal case is to have computed values of the function be accurate to

that level of precision (double precision f.p. in this case). Of course

it's not usually possible to have 1 ulp accuracy in the calculation of

special functions, particularly across a wide range of arguments, and

the present module is no exception.

The best accuracy within the precision format, over the widest range of

arguments, allows for the widest range of applications. But tradeoffs to

improve computational efficiency may be available if the argument range

is further restricted or the required accuracy is lower. It's important

to establish the accuracy of the calculation over an argument range to

enable further analysis of calculation errors.

--

KM

