
Recommend an alternative to Numerical Recipes?


Daniel Carrera

May 28, 2011, 4:29:22 AM
Hello,

I have heard often in this forum about the perils of Numerical Recipes.
Today I saw a web page with more criticism along similar lines to what
has been said on this list:

http://www.lysator.liu.se/c/num-recipes-in-c.html


Now my question: Can anyone recommend an alternative to NR?


I'm hoping that someone can suggest a good book or other reference on
numerical algorithms that I could use instead. Here is my wish list:


1) Trustworthy algorithms. Efficient and correct. This is of course the
most important, otherwise I'd just use NR which I already own.

2) Not too expensive. I am a grad student.

3) Suitable for a non-specialist. I am an astrophysicist with an above
average computers + math background, but I am *not* a numerical analyst.
Also, I cannot take too much time out of my work to learn a new algorithm.

4) Preferably a broad range of topics. I understand it can't be as broad
as NR, since part of the problem is that nobody can be an expert on
every type of problem domain.

5) If possible, I would prefer code examples either in Modern Fortran or
pseudo-code. I prefer to not have to parse either C++ or F77 to learn
how an algorithm works.


Any suggestions?

Thanks.

Cheers,
Daniel.

Daniel H

May 28, 2011, 4:58:54 AM
On 05/28/2011 10:29 AM, Daniel Carrera wrote:
> Now my question: Can anyone recommend an alternative to NR?
>
>
> I'm hoping that someone can suggest a good book or other reference on
> numerical algorithms that I could use instead. Here is my wish list:
>
>
> 1) Trustworthy algorithms. Efficient and correct. This is of course the
> most important, otherwise I'd just use NR which I already own.
>
> 2) Not too expensive. I am a grad student.
>
> 3) Suitable for a non-specialist. I am an astrophysicist with an above
> average computers + math background, but I am *not* a numerical analyst.
> Also, I cannot take too much time out of my work to learn a new algorithm.
>
> 4) Preferably a broad range of topics. I understand it can't be as broad
> as NR, since part of the problem is that nobody can be an expert on
> every type of problem domain.
>
> 5) If possible, I would prefer code examples either in Modern Fortran or
> pseudo-code. I prefer to not have to parse either C++ or F77 to learn
> how an algorithm works.
>

Basically the same question has been discussed here about three months
ago, search for the subject line 'Numerical algorithms books'. I had a
look at a couple of suggestions made there. They are all good books
(with readable pseudo-code and/or Fortran references), in particular I
liked Higham 'Accuracy and Stability of Numerical Algorithms', but none
covers the variety of algorithms that I regularly need - they are much
more specialized than NR.

Daniel

Daniel Carrera

May 28, 2011, 5:19:20 AM
On 05/28/2011 10:58 AM, Daniel H wrote:
> Basically the same question has been discussed here about three months
> ago, search for the subject line 'Numerical algorithms books'. I had a
> look at a couple of suggestions made there. They are all good books
> (with readable pseudo-code and/or Fortran references), in particular I
> liked Higham 'Accuracy and Stability of Numerical Algorithms', but none
> covers the variety of algorithms that I regularly need - they are much
> more specialized than NR.

Found the thread. Thank you.

Daniel.

Phillip Helbig---undress to reply

May 28, 2011, 7:11:42 AM
In article <irqbp2$4mn$1...@dont-email.me>, Daniel Carrera
<dan...@gmail.com> writes:

> Hello,
>
> I have heard often in this forum about the perils of Numerical Recipes.
> Today I saw a web page with more criticism along similar lines to what
> has been said on this list:
>
> http://www.lysator.liu.se/c/num-recipes-in-c.html

Note that this also has favourable reviews.

If your prime motivation is the criticism of Numerical Recipes, you need
to keep 4 things in mind:

o Some of the criticism is unfounded.

o Some of the criticism is no longer applicable due to corrections.

o Alternatives might also have problems.

o NR was never meant to be an alternative to, say, the NAG
libraries, but rather an explicatory book with working examples
in source code.

> 1) Trustworthy algorithms. Efficient and correct. This is of course the
> most important, otherwise I'd just use NR which I already own.
>
> 2) Not too expensive. I am a grad student.
>
> 3) Suitable for a non-specialist. I am an astrophysicist with an above
> average computers + math background, but I am *not* a numerical analyst.
> Also, I cannot take too much time out of my work to learn a new algorithm.
>
> 4) Preferably a broad range of topics. I understand it can't be as broad
> as NR, since part of the problem is that nobody can be an expert on
> every type of problem domain.
>
> 5) If possible, I would prefer code examples either in Modern Fortran or
> pseudo-code. I prefer to not have to parse either C++ or F77 to learn
> how an algorithm works.

Note that there are also Numerical Recipes in Fortran 90.

I have used routines from almost all of the chapters in NR, mainly from
the Fortran 90 version and the second-edition version for Fortran 77.
I've never run into any problems. Yes, they might not be the best
algorithms in the world, but they work, have no serious errors and you
have a chance to understand what is going on, which you should in any
case. Even if, say, NAG libraries are better (they are also more
expensive), you don't have the source code and the time you spend
learning how to use them might not make up for any increase in
efficiency.

My advice: Get the NR in Fortran 90, understand the code, and get to
work.

Daniel Carrera

May 28, 2011, 7:42:17 AM
On 05/28/2011 01:11 PM, Phillip Helbig---undress to reply wrote:
> Note that this also has favourable reviews.
>
> If your prime motivation is the criticism of Numerical Recipes, you need
> to keep 4 things in mind:


Thanks. Indeed, I'd say that my prime motivation is criticism of NR. I
am afraid of relying on it for work and having problems down the line.
If it weren't for that, I would praise NR for its readability and breadth
of topics.


> Note that there are also Numerical Recipes in Fortran 90.

Only an older edition, and in any case I already own NR in C (third
edition) and find it readable enough. My only qualm is the criticism
posted in this forum and elsewhere about flaws in the algorithms.


> I have used routines from almost all of the chapters in NR, mainly from
> the Fortran 90 version and the second-edition version for Fortran 77.
> I've never run into any problems. Yes, they might not be the best
> algorithms in the world, but they work, have no serious errors and you
> have a chance to understand what is going on, which you should in any
> case.


Ok. It's hard to know what to think. Some people seem to love NR and
others seem to hate it.


Daniel.

nm...@cam.ac.uk

May 28, 2011, 7:53:48 AM
In article <irqn2q$vvn$1...@dont-email.me>,

Daniel Carrera <dan...@gmail.com> wrote:
>
>Ok. It's hard to know what to think. Some people seem to love NR and
>others seem to hate it.

There is an extremely high (indeed, almost perfect) correlation
between expertise and dislike of the book.

The first edition was an utter disgrace. Inter alia, I had
refereed random number algorithms for 25 years, and its ones
were a new low. MUCH worse than RANDU. Now, THAT'S impressive!
Damn the random walk, a simple uniformity test went wrong with
a sample of 200. The second edition was merely very bad.
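Such a test is trivial to run for yourself. Here is a minimal chi-square uniformity check (my own sketch, in Python for brevity, with a deliberately crippled LCG standing in as the bad generator; it is NOT the NR code):

```python
import random

def chi_square_uniformity(samples, bins=10):
    """Chi-square statistic for uniformity of samples drawn from [0, 1).
    For a truly uniform source it should land near bins - 1."""
    n = len(samples)
    counts = [0] * bins
    for x in samples:
        counts[min(int(x * bins), bins - 1)] += 1
    expected = n / bins
    return sum((c - expected) ** 2 / expected for c in counts)

# A decent generator passes comfortably even with only 200 samples.
rng = random.Random(12345)
good = chi_square_uniformity([rng.random() for _ in range(200)])

# A deliberately crippled LCG (it collapses onto a fixed point) fails
# the same test spectacularly at the same sample size.
def bad_lcg(seed, n, a=4, c=1, m=64):
    out = []
    for _ in range(n):
        seed = (a * seed + c) % m
        out.append(seed / m)
    return out

bad = chi_square_uniformity(bad_lcg(1, 200))
print(good, bad)
```

With 10 bins the statistic has 9 degrees of freedom, so a good generator stays in the single or low double digits; a broken one blows up by orders of magnitude even at a sample of 200.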

I looked at other areas that I am an expert on, and they were
similar, though not as impressively incompetent as the first
edition's random numbers. I have better things to do than to
track every variation and edition.

The points that are missed by the people who claim to have had
no trouble are:

Not noticing trouble is not the same as there not being any.
Even in their own analyses, why were they sure that the answers
were right? Merely not falling over is a very poor measure of
quality.

Most practical problems can be solved with a trivial, bad
solution most of the time. The reason that experts encourage
the use of good ones is that (a) they are less likely to fail
and (b) are more likely to detect when they do.


Regards,
Nick Maclaren.

robin

May 28, 2011, 9:49:01 AM
"Daniel Carrera" <dan...@gmail.com> wrote in message news:irqbp2$4mn$1...@dont-email.me...

| Hello,
|
| I have heard often in this forum about the perils of Numerical Recipes.
| Today I saw a web page with more criticism along similar lines to what
| has been said on this list:
|
| http://www.lysator.liu.se/c/num-recipes-in-c.html

Those comments are largely very dated and most are worthless,
as their authors make sweeping criticisms. A few seem valuable,
but again, are dated.


Phillip Helbig---undress to reply

May 28, 2011, 10:34:02 AM
In article <irqnoc$8ic$1...@gosset.csi.cam.ac.uk>, nm...@cam.ac.uk writes:

> The first edition was an utter disgrace. Inter alia, I had
> refereed random number algorithms for 25 years, and its ones
> were a new low. MUCH worse than RANDU. Now, THAT'S impressive!
> Damn the random walk, a simple uniformity test went wrong with
> a sample of 200. The second edition was merely very bad.

Let me say that the random-number generators were NOT among the routines
that I used; I always use RANLUX:

http://www.astro.multivax.de:8000/helbig/fortran/ranlux.html

robin

May 28, 2011, 10:52:13 AM
"Daniel Carrera" <dan...@gmail.com> wrote in message news:irqbp2$4mn$1...@dont-email.me...
|
| I have heard often in this forum about the perils of Numerical Recipes.
| Today I saw a web page with more criticism along similar lines to what
| has been said on this list:

| http://www.lysator.liu.se/c/num-recipes-in-c.html

That's for the first edition, 1985.

You should look at the third edition, published in 2007.


Daniel Carrera

May 28, 2011, 10:55:08 AM
On 05/28/2011 01:53 PM, nm...@cam.ac.uk wrote:
> There is an extremely high (indeed, almost perfect) correlation
> between expertise and dislike of the book.

The page I found made a similar comment. It said that most praise came
from scientists and engineers who like the breadth and convenience of NR
and most criticism came from numerical analysts.


> The first edition was an utter disgrace. Inter alia, I had
> refereed random number algorithms for 25 years, and its ones
> were a new low. MUCH worse than RANDU. Now, THAT'S impressive!
> Damn the random walk, a simple uniformity test went wrong with
> a sample of 200. The second edition was merely very bad.

Ugh... Even I can tell that failing a uniformity test with a sample of
200 is impressively bad. Is this the same algorithm the authors were so
confident in that they offered a $100,000 prize for any bugs?


> The points that are missed by the people who claim to have had
> no trouble are:
>
> Not noticing trouble is not the same as there not being any.
> Even in their own analyses, why were they sure that the answers
> were right? Merely not falling over is a very poor measure of
> quality.

I remember thinking something similar when my prof said that he'd never
had trouble with NR's RAND function. I didn't say anything, but I
wondered how you'd even know if the RAND function was not working well.


> Most practical problems can be solved with a trivial, bad
> solution most of the time. The reason that experts encourage
> the use of good ones is that (a) they are less likely to fail
> and (b) are more likely to detect when they do.


Ok.

One of the things I want from my code is that when it fails, it fails
noisily rather than silently giving me the wrong answer.


Daniel.

Daniel Carrera

May 28, 2011, 11:00:25 AM

Why not use Marsaglia's KISS?
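For anyone unfamiliar with it: KISS combines a linear congruential step, a three-shift xorshift register, and a multiply-with-carry, added together mod 2^32. The sketch below is my own Python transcription using the constants from Marsaglia's widely circulated 1999 post, written from memory; treat it as illustrative, not vetted:

```python
class KISS:
    """Sketch of Marsaglia's 32-bit KISS generator: a linear
    congruential step + a 3-shift xorshift + multiply-with-carry,
    combined by addition mod 2**32. Constants follow his widely
    circulated 1999 Usenet post (from memory; not vetted here)."""
    M32 = 0xFFFFFFFF

    def __init__(self):
        self.x, self.y = 123456789, 362436000
        self.z, self.c = 521288629, 7654321

    def next(self):
        # congruential component
        self.x = (69069 * self.x + 12345) & self.M32
        # 3-shift register (xorshift) component
        self.y ^= (self.y << 13) & self.M32
        self.y ^= self.y >> 17
        self.y ^= (self.y << 5) & self.M32
        # multiply-with-carry component
        t = 698769069 * self.z + self.c
        self.c = t >> 32
        self.z = t & self.M32
        return (self.x + self.y + self.z) & self.M32

g = KISS()
sample = [g.next() for _ in range(5)]
print(sample)
```

The appeal is that each component is weak alone but they fail in different ways, so the combination passes far more stringent tests than any of the three parts.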

Daniel.

Daniel Carrera

May 28, 2011, 11:01:47 AM

That's the one I own, but I haven't seen a lot of comments for that
edition in particular.

Daniel.

robin

May 28, 2011, 11:01:47 AM
"Daniel Carrera" <dan...@gmail.com> wrote in message news:irqbp2$4mn$1...@dont-email.me...
|
| Now my question: Can anyone recommend an alternative to NR?

BTW, have you looked at Engeln-Mullges & Uhlig, "Numerical Algorithms with Fortran",
Springer, published in 1996?


Daniel Carrera

May 28, 2011, 11:10:05 AM
On 05/28/2011 05:01 PM, robin wrote:
> "Daniel Carrera"<dan...@gmail.com> wrote in message news:irqbp2$4mn$1...@dont-email.me...
> |
> | Now my question: Can anyone recommend an alternative to NR?
>
> BTW, have you looked at Engeln-Mullges& Uhlig, "Numerical Algorithms with Fortran",
> Springer, published in 1996?

I wasn't familiar with that book. Thanks. I just bookmarked it on Amazon.

Daniel.

robin

May 28, 2011, 11:15:50 AM
"Daniel Carrera" <dan...@gmail.com> wrote in message news:irr2or$62r$2...@dont-email.me...

As they say, "No news is good news".


nm...@cam.ac.uk

May 28, 2011, 10:46:01 AM
In article <irr2cc$30k$1...@dont-email.me>,

Daniel Carrera <dan...@gmail.com> wrote:
>
>> There is an extremely high (indeed, almost perfect) correlation
>> between expertise and dislike of the book.
>
>The page I found made a similar comment. It said that most praise came
>from scientists and engineers who like the breadth and convenience of NR
>and most criticism came from numerical analysts.

Yes.

>> The first edition was an utter disgrace. Inter alia, I had
>> refereed random number algorithms for 25 years, and its ones
>> were a new low. MUCH worse than RANDU. Now, THAT'S impressive!
>> Damn the random walk, a simple uniformity test went wrong with
>> a sample of 200. The second edition was merely very bad.
>
>Ugh... Even I can tell that failing a uniformity test with a sample of
>200 is impressively bad. Is this the same algorithm the authors were so
>confident in that they offered a $100,000 prize for any bugs?

No. They would have been trampled underfoot in the rush :-)

>> Not noticing trouble is not the same as there not being any.
>> Even in their own analyses, why were they sure that the answers
>> were right? Merely not falling over is a very poor measure of
>> quality.
>
>I remember thinking something similar when my prof said that he'd never
>had trouble with NR's RAND function. I didn't say anything, but I
>wondered how you'd even know if the RAND function was not working well.

It varies. Underestimating errors is a common effect. For an
introduction to testing the things (and, yes, I mean that), read
Knuth. Also see Marsaglia's DIEHARD. I have another test that
picks up problems he doesn't, while still being realistic!

>One of the things I want from my code is that when it fails it fails
>noisily, rather than silently give me the wrong answer.

That's good science!


Regards,
Nick Maclaren.

nm...@cam.ac.uk

May 28, 2011, 10:51:12 AM
In article <4de10e63$0$67779$c30e...@exi-reader.telstra.net>,

Thanks for the reference. But did they HAVE to use Fortran 77,
in 1996?


Regards,
Nick Maclaren.

nm...@cam.ac.uk

May 28, 2011, 11:01:02 AM

Has anyone taken a thorough look at Numerical Algorithms with Fortran
(or C), Gisela Engeln-Muellges, Frank Uhlig and M. Schon?

Any pointers to serious reviews (preferably in numerical analysis
journals) would be especially welcome.


Regards,
Nick Maclaren.

robin

May 28, 2011, 11:43:49 AM
<nm...@cam.ac.uk> wrote in message news:irr250$anr$1...@gosset.csi.cam.ac.uk...

A book of 600+ pages takes more than a few years to write.
The authors state that the book was begun about 20 years before.
Several editions have been published over the years, each edition updated.

As well as F77, they also include F90 versions.
They could well have dropped the F77 versions.


glen herrmannsfeldt

May 28, 2011, 2:36:28 PM
nm...@cam.ac.uk wrote:

(big snip on NR)


> Not noticing trouble is not the same as there not being any.
> Even in their own analyses, why were they sure that the answers
> were right? Merely not falling over is a very poor measure of
> quality.

> Most practical problems can be solved with a trivial, bad
> solution most of the time. The reason that experts encourage
> the use of good ones is that (a) they are less likely to fail
> and (b) are more likely to detect when they do.

and (c) much harder to explain in detail how they work.

At the time NR came out, the usual source of numerical routines
was IMSL, closed source. They had to be "less likely to fail"
as you wouldn't know what to do if they did fail. (Not that I
know that they were, but with closed source you couldn't look
into one to see what went wrong.)

It seems to me that there are a number of introductory books,
meant for a first course in numerical analysis, but barely
getting to any algorithms at all.

There is also the peer reviewed literature on current research,
and a small collection of detailed, highly specialized, books.

I haven't read NR so recently, but I don't believe that they
claim to replace the more specialized books. They do have a
detailed bibliography and suggestions on which other books
you should buy. Many good books are available for very
reasonable prices (less than lunch at many restaurants).

You could buy NR, just read the bibliography, and buy the
listed books.

Maybe an explanation in terms of recipes makes sense.
(That is where the title came from.) There is Betty Crocker
for someone just learning to cook. There are books for
chefs who have spent years in French cooking schools,
needing ingredients that you can't get in the usual stores,
(and probably written in French). Then there are some in
between.

Introductory NA books stop at the trapezoidal rule, or maybe
Simpson's rule, without explaining either. NR is the
only one that I know of that explains the extended Simpson's
rule, accounting for the magic coefficients. They go
a little way into the better algorithms, and then refer the
reader to books that explain the rest.
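To make that concrete, here is a minimal sketch of the extended (composite) Simpson's rule; this is my own Python, not NR's code, but it shows where the 1, 4, 2, 4, ..., 2, 4, 1 weights come from:

```python
import math

def extended_simpson(f, a, b, n):
    """Extended (composite) Simpson's rule over n subintervals (n even).
    The weights work out to h/3 * [1, 4, 2, 4, ..., 2, 4, 1]: each
    two-subinterval Simpson panel contributes 1, 4, 1, and the shared
    endpoints of adjacent panels merge their 1 + 1 into a 2."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# integral of sin(x) over [0, pi] is exactly 2; the error shrinks as h**4
approx = extended_simpson(math.sin, 0.0, math.pi, 100)
print(approx)
```

Once you see the panels, the coefficients stop being magic: they are just the per-panel 1, 4, 1 pattern with interior endpoints counted twice.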

-- glen

David W Noon

May 28, 2011, 2:44:16 PM

On Sat, 28 May 2011 16:01:02 +0100 (BST), nm...@cam.ac.uk wrote about
Re: Recommend an alternative to Numerical Recipes?:

>Has anyone taken a thorough look at Numerical Algorithms with Fortran
>(or C), Gisela Engeln-Muellges, Frank Uhlig and M. Schon?

I bought a copy almost 15 years ago, which listed only Engeln-Müllges
and Uhlig as the authors. It reminded me of the text I used for first
year numerical analysis in 1973: Conte & de Boor.

It covered all the usual areas of numerical analysis: interpolation,
iterative methods, numerical linear algebra, quadrature and differential
equations. I saw no glaring errors in the text, but I have never used
any of the code, which was entirely in F77. Since I still have both
the book and the CD-ROM, I suppose I could give their code a try, but
I'm not sure what the licensing arrangements are on the code, so I
might not be allowed to re-publish it.
--
Regards,

Dave [RLU #314465]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
dwn...@spamtrap.ntlworld.com (David W Noon)
Remove spam trap to reply by e-mail.
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*

Richard Maine

May 28, 2011, 3:17:45 PM
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> At the time NR came out, the usual source of numerical routines
> was IMSL, closed source.

I suppose that was probably right by that date (I had to check the date
that NR first came out). Note that not too many years previously, before
IMSL was sold to Visual Numerics, IMSL had one of the most liberal
source licenses around - considerably more liberal in particular than
the NR license (which is not an open source license by the usual
definition). IMSL was distributed with source code and you could
redistribute that source code with any program of yours that used it. I
well remember that because I did so (and only after carefully reading
the parts of the license that allowed it).

The IMSL license changed hugely when Visual Numerics took over. I forget
all the details, but I recall thinking it was a major contrast. You
couldn't even run your executable program on a different machine at your
same site unless you had an IMSL license for both machines.

By the way, the NR license has some even more draconian restrictions for
free use. I bet that a lot of people don't abide by it (or even read
it), but I seem to recall something about restricting it to a single
machine and actually requiring you to type it in by hand or some such
thing. I'm too lazy to go grab my copy to recheck right now.

--
Richard Maine | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle | -- Mark Twain

nm...@cam.ac.uk

May 28, 2011, 3:53:20 PM
In article <irrfbc$lj3$1...@dont-email.me>,

glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
>
>> Not noticing trouble is not the same as there not being any.
>> Even in their own analyses, why were they sure that the answers
>> were right? Merely not falling over is a very poor measure of
>> quality.
>
>> Most practical problems can be solved with a trivial, bad
>> solution most of the time. The reason that experts encourage
>> the use of good ones is that (a) they are less likely to fail
>> and (b) are more likely to detect when they do.
>
>and (c) much harder to explain in detail how they work.

In the case of several of the areas I looked at, that was NOT
true. If anything, their code was MORE complicated than the
simplest good algorithm - both in design and programming.
That certainly applied to both the random numbers and the sorting.

>At the time NR came out, the usual source of numerical routines
>was IMSL, closed source. ...

NAG would dispute that. So would its users.

>I haven't read NR so recently, but I don't believe that they
>claim to replace the more specialized books. They do have a
>detailed bibliography and suggestions on which other books
>you should buy. Many good books are available for very
>reasonable prices (less than lunch at many restaurants).
>
>You could buy NR, just read the bibliography, and buy the
>listed books.

In which case, God help you. The advice wasn't much good,
either, and may still not be, and the bibliography wasn't much
help unless you already knew where to look. You could easily
end up spending thousands or tens of thousands, and some of
those books were outdated.


Regards,
Nick Maclaren.

timprince

May 28, 2011, 4:40:47 PM
On 5/28/2011 3:17 PM, Richard Maine wrote:
> glen herrmannsfeldt<g...@ugcs.caltech.edu> wrote:
>
>> At the time NR came out, the usual source of numerical routines
>> was IMSL, closed source.
>
> I suppose that was probably right by that date (I had to check the date
> that NR first came out). Note that not too many years previously, before
> IMSL was sold to Visual Numerics, IMSL had one of the most liberal
> source licenses around - considerably more liberal in particular than
> the NR license (which is not an open source license by the usual
> definition). IMSL was distributed with source code and you could
> redistribute that source code with any program of yours that used it. I
> well remember that because I did so (and only after carefully reading
> the parts of the license that allowed it).
>
> The IMSL license changed hugely when Visual Numerics took over. I forget
> all the details, but I recall thinking it was a major contrast. You
> couldn't even run your executable program on a different machine at your
> same site unless you had an IMSL license for both machines.
>
> By the way, the NR license has some even more draconian restrictions for
> free use.
>

Strange, the Wikipedia article avoids the question of when the F77
version of NR came out (1985, I have it here).
I first used IMSL in 1970. According to Wikipedia, it was brand new
then. Customer sites had to buy a source license to build and install
locally.
Again according to Wikipedia, netlib BLAS has been out since 1979, and
SLATEC since 1977. These open source libraries have been used in plenty
of investigations, including published academic ones, yet it seems
difficult to find useful discussions of their merits and demerits, or
any full updates for current language standards or platforms. Still,
they (and commercially supported variants) are in much wider use than NR
or IMSL.
The licensing terms for netlib code are so indefinite that we sometimes
have to get email permission from the original authors for derivative
use (to ensure that we don't violate any rights). In my experience,
those authors have been entirely supportive.

David W Noon

May 28, 2011, 4:27:10 PM

On Sat, 28 May 2011 17:10:05 +0200, Daniel Carrera wrote about Re:
Recommend an alternative to Numerical Recipes?:

>On 05/28/2011 05:01 PM, robin wrote:
[snip]
>> BTW, have you looked at Engeln-Mullges& Uhlig, "Numerical
>> Algorithms with Fortran", Springer, published in 1996?
>
>I wasn't familiar with that book. Thanks. I just bookmarked it on
>Amazon.

Just be aware that that is a textbook on mathematics. It contains no
FORTRAN source code, except an index in the back that relates
algorithms derived in the text to subroutines and functions on the
accompanying CD-ROM. The book does contain quite a lot of algebra and
calculus, though.
--
Regards,

Dave [RLU #314465]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
dwn...@spamtrap.ntlworld.com (David W Noon)
Remove spam trap to reply by e-mail.
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*

timprince

May 28, 2011, 4:48:37 PM
On 5/28/2011 4:40 PM, timprince wrote:
> On 5/28/2011 3:17 PM, Richard Maine wrote:
>
>>
>> The IMSL license changed hugely when Visual Numerics took over. I forget
>> all the details, but I recall thinking it was a major contrast. You
>> couldn't even run your executable program on a different machine at your
>> same site unless you had an IMSL license for both machines.
>>
Rogue Wave is the current owner of IMSL. The licensing terms and
availability of updates seem to have changed yet again since the time of
which Richard speaks.
--
Tim Prince

Tim Prince

May 28, 2011, 5:26:31 PM
On 5/28/2011 4:40 PM, timprince wrote:
> On 5/28/2011 3:17 PM, Richard Maine wrote:
>> glen herrmannsfeldt<g...@ugcs.caltech.edu> wrote:
>>
>>> At the time NR came out, the usual source of numerical routines
>>> was IMSL, closed source.

> the Wikipedia article avoids the question of when the F77
> version of NR came out (1985, I have it here).

Correction: companion exercise book 1985, main text 1986

--
Tim Prince

glen herrmannsfeldt

May 29, 2011, 1:11:36 AM
Richard Maine <nos...@see.signature> wrote:

(I wrote)


>> At the time NR came out, the usual source of numerical routines
>> was IMSL, closed source.

> I suppose that was probably right by that date (I had to check the date
> that NR first came out). Note that not too many years previously, before
> IMSL was sold to Visual Numerics, IMSL had one of the most liberal
> source licenses around - considerably more liberal in particular than
> the NR license (which is not an open source license by the usual
> definition).

I might have been remembering NR in C, which is the first one
that I actually bought.

> IMSL was distributed with source code and you could
> redistribute that source code with any program of yours that used it. I
> well remember that because I did so (and only after carefully reading
> the parts of the license that allowed it).

(snip)

> By the way, the NR license has some even more draconian restrictions for
> free use. I bet that a lot of people don't abide by it (or even read
> it), but I seem to recall something about restricting it to a single
> machine and actually requiring you to type it in by hand or some such
> thing. I'm too lazy to go grab my copy to recheck right now.

Well, as I understand it, though I could be wrong, the idea is
that you use the supplied code to understand the algorithm,
then rewrite it for your particular case. There is an explanation
somewhere near the beginning explaining that many of the programs
have been adapted from older programs and/or algorithms, along with
the rule that you can't copyright an algorithm, but only an
implmentation of one.

The exact amount of change needed such that you aren't copying,
but reimplementing, I don't know. I do suppose that way too
often the programs are copied directly and redistibuted.
I probably shouldn't say more without rereading the license.
(and, for that matter, hiring a copyright lawyer.)

-- glen

glen herrmannsfeldt

May 29, 2011, 1:23:01 AM
nm...@cam.ac.uk wrote:

(snip)


>>> Most practical problems can be solved with a trivial, bad
>>> solution most of the time. The reason that experts encourage
>>> the use of good ones is that (a) they are less likely to fail
>>> and (b) are more likely to detect when they do.

>>and (c) much harder to explain in detail how they work.

> In the case of several of the areas I looked at, that was NOT
> true. If anything, their code was MORE complicated than the
> simplest good algorithm - both in design and programming.
> That certainly applied to both the random numbers and the sorting.

Hmmm. I usually read Knuth for random numbers and sorting.

And there is also the strange way that they did the conversion
to C with 1 origin arrays which makes them much harder to read.



>>At the time NR came out, the usual source of numerical routines
>>was IMSL, closed source. ...

> NAG would dispute that. So would its users.

Well, how about in the sense that one can get the source
and just study it. As Richard mentioned, you are supposed
to be able to include a routine unmodified (I believe) in
a program that you distribute. (Or, at least at one time could.)

I used to sometimes read programs and try to understand how
they worked. That was before I ever saw a real NA book.
Some that I remember were FFT algorithms and the IBM OS/360
Fortran library.


>>I haven't read NR so recently, but I don't believe that they
>>claim to replace the more specialized books. They do have a
>>detailed bibliography and suggestions on which other books
>>you should buy. Many good books are available for very
>>reasonable prices (less than lunch at many restaurants).

>>You could buy NR, just read the bibliography, and buy the
>>listed books.

> In which case, God help you. The advice wasn't much good,
> either, and may still not be, and the biblography wasn't much
> help unless you already knew where to look. You could easily
> end up spending thousands or tens of thousands, and some of
> those books were outdated.

-- glen

Clive Page

May 29, 2011, 5:30:56 AM
On 28/05/2011 12:53, nm...@cam.ac.uk wrote:
> Most practical problems can be solved with a trivial, bad
> solution most of the time.

Yes, I think that describes what most of us (as mere scientists) are
seeking to do most of the time, i.e. find a trivial bad solution that
just about works, so we can get on with the science. :-)

What I like about NR is that it provides explanations and discussions of
the various algorithms, at a level that (as a mere scientist, who last
took a maths course many years ago) can understand. I don't find it
anything like as easy to understand other texts on numerical methods.

But as for the code in NR, I also have reservations, and never use them
exactly as provided. In particular the error-detection and handling
methods need a lot of attention. In some cases, as you say, the
algorithms are also poor.

--
Clive Page

nm...@cam.ac.uk

May 29, 2011, 6:02:43 AM
In article <94ei2l...@mid.individual.net>,

Clive Page <use...@page2.eu> wrote:
>
>> Most practical problems can be solved with a trivial, bad
>> solution most of the time.
>
>Yes, I think that describes what most of us (as mere scientists) are
>seeking to do most of the time, i.e. find a trivial bad solution that
>just about works, so we can get on with the science. :-)

And damn the fact that the paper you publish is at best unreliable
and is often simply wrong :-( Sorry, but that is a SERIOUS problem,
and is getting worse - it ain't what you don't know that causes the
trouble, it's what you know for sure that ain't so.

God help me, there are 'respectable' journals where over half the
papers are self-evidently wrong! Not, thank God, in this area,
but it does show what can happen.

The reason that good books and courses are complicated are because
they tell people where the boundary to the safe path is, and where
the grues are known to lurk. Very few people want to do PRECISELY
what the book's examples do, so they need to know how much they
can modify it.

>What I like about NR is that it provides explanations and discussions of
>the various algorithms, at a level that (as a mere scientist, who last
>took a maths course many years ago) can understand. I don't find it
>anything like as easy to understand other texts on numerical methods.

H.L. Mencken said "For every complex problem there is an answer that
is clear, simple, and wrong." NR isn't that bad, but it DOES lead
non-experts to code such solutions, when they unwittingly step over
the boundary to a problematic case.

This isn't theoretical, incidentally - I have seen several users
caught by recommendations to do things that simply would not work,
and were well-known not to.


Regards,
Nick Maclaren.

robin

May 29, 2011, 7:30:11 AM
Older ones include :

* IBM's Scientific Subroutine Package, Version II (FORTRAN). H20-0205
There's an edition about 1969, and there may have been later editions.

* IBM's Scientific Subroutine Package PL/I, 1969. GH20-0586

* ACM's collection of algorithms.


Ron Shepard

May 29, 2011, 1:33:11 PM
In article <irqbp2$4mn$1...@dont-email.me>,
Daniel Carrera <dan...@gmail.com> wrote:

> Now my question: Can anyone recommend an alternative to NR?

It is telling that no one has answered your question. I think the
reason is that there really isn't a better introductory reference
than NR for the broad range of topics that it covers. There are, of
course, more complete specialized discussions of any of the
individual topics covered by NR.

For linear algebra topics, for example, after reading the
appropriate discussion in NR, I would recommend reading the LAPACK
documentation (which is free online). From there you can trace back
to either the LAPACK working notes or to the published literature
references.

I have wondered why there has been no authoritative replacement for
NR. It has been over two decades now since the first version, and
despite all the complaints by various critics, none of them, either
individually or in groups, have stepped up and written a
replacement. With facilities like Wikipedia, it would seem that
such a task would be fairly straightforward now.

Speaking of Wikipedia, that isn't a bad starting point for numerical
information either; you can't really trust anything there at face
value, of course, but the discussions and references are sometimes
quite good.

My main complaint about NR is that the authors have not written a
fortran version of the latest edition. Scanning quickly through
www.nr.com, I see no indication that they plan to do so. And
looking at the latest C++ version is not much help for a fortran
programmer. The mathematical equations have themselves been
corrupted into a form suitable for C and C++, so even the clarity of
the underlying presentation, which was one of the best features of
the earlier versions, has lost much of its appeal. After looking at
a library version of the latest edition a couple of years ago, I
decided not to purchase my own copy; I simply don't think it would
be sufficiently useful to justify the cost of having that reference
on my bookshelf. And that is despite the fact that there are
interesting and useful new topics and algorithms in the latest
edition that are not in the earlier editions.

$.02 -Ron Shepard

steve

May 29, 2011, 1:49:02 PM
On May 29, 10:33 am, Ron Shepard <ron-shep...@NOSPAM.comcast.net>
wrote:
> In article <irqbp2$4m...@dont-email.me>,

>  Daniel Carrera <dan...@gmail.com> wrote:
>
> > Now my question: Can anyone recommend an alternative to NR?
>
> It is telling that no one has answered your question.  I think the
> reason is that there really isn't a better introductory reference
> than NR for the broad range of topics that it covers.  There are, of
> course, more complete specialized discussions of any of the
> individual topics covered by NR.

I disagree with this assertion. One example is "Introduction to
Numerical Analysis" by F. B. Hildebrand. This book covers much of
what NR does and covers topics not included in NR. The book does
not include code; only the mathematics and algorithms upon which
one can base their own code. One can get a copy from Dover for
$20.

--
steve

Dan Nagle

May 29, 2011, 4:00:39 PM
Hello,

I'll disagree as well.

Try _The Nature of Mathematical Modeling_
by Neil Gershenfeld

You might get more suggestions in sci.math.num-analysis

--
Cheers!

Dan Nagle

glen herrmannsfeldt

May 29, 2011, 5:05:37 PM
Ron Shepard <ron-s...@nospam.comcast.net> wrote:
> In article <irqbp2$4mn$1...@dont-email.me>,
> Daniel Carrera <dan...@gmail.com> wrote:

>> Now my question: Can anyone recommend an alternative to NR?

> It is telling that no one has answered your question. I think the
> reason is that there really isn't a better introductory reference
> than NR for the broad range of topics that it covers. There are, of
> course, more complete specialized discussions of any of the
> individual topics covered by NR.

(snip)
> I have wondered why there has been no authoritative replacement for
> NR. It has been over two decades now since the first version, and
> despite all the complaints by various critics, none of them, either
> individually or in groups, have stepped up and written a
> replacement. With facilities like Wikipedia, it would seem that
> such a task would be fairly straightforward now.

> Speaking of Wikipedia, that isn't a bad starting point for numerical
> information either; you can't really trust anything there at face
> value, of course, but the discussions and references are sometimes
> quite good.

There is also mathworld, though maybe not so much for the applied
(numerical) information.



> My main complaint about NR is that the authors have not written a
> fortran version of the latest edition. Scanning quickly through
> www.nr.com, I see no indication that they plan to do so.

I suppose I wouldn't mind a Java version.

I might believe that one is not expected to use the programs
as given, but to understand the algorithms and rewrite them
as appropriate.

Knuth writes his examples in MIXAL, the assembly language for
a machine that doesn't exist. (There might be emulators around,
but it is not in common use.) It does seem that he expects
one to rewrite the routines in a more common language.

> And
> looking at the latest C++ version is not much help for a fortran
> programmer. The mathematical equations have themselves been
> corrupted into a form suitable for C and C++, so even the clarity of
> the underlying presentation, which was one of the best features of
> the earlier versions, has lost much of its appeal.

I have only looked at the first and second editions.

> After looking at
> a library version of the latest edition a couple of years ago, I
> decided not to purchase my own copy; I simply don't think it would
> be sufficiently useful to justify the cost of having that reference
> on my bookshelf. And that is despite the fact that there are
> interesting and useful new topics and algorithms in the latest
> edition that are not in the earlier editions.

-- glen

Terence

May 29, 2011, 7:20:14 PM
On May 29, 1:43 am, "robin" wrote:
...

> | Thanks for the reference.  But did they HAVE to use Fortran 77,
> | in 1996?
>
> A book of 600+ pages takes more than a few years to write.
> The authors state that the book was begun about 20 years before.
> Several editions have been published over the years, each edition updated.
>
> As well as F77, they also include F90 versions.
> They could well have dropped the F77 versions.

The point about using F77 in algorithms is that most non-specialists
can better understand the processes being used, if you have at least a
good knowledge of algebra (which was the whole thrust of the Backus
team in designing Fortran; although Algol comes to mind as a familiar
alternative), or have some familiarity with Basic, if not Fortran.

Yes, you can stuff some of the basic matrix operations behind the
curtain of a later and more compact Fortran representation, with
matrix built-in operations and fewer labels to trace, but part of the
reason for recipes is to show the how and why (the cookery book theme
is appropriate).

David Duffy

May 29, 2011, 7:29:54 PM
In comp.lang.fortran nm...@cam.ac.uk wrote:

The review in Computer Physics Communications 103 (1997) 100-101 was not too
flattering:

"The majority of the chapters stop short of describing (and therefore
providing algorithms and code for) state of the art algorithms."

"In other cases the advise section appears to be badly out of date. The
section on stiff ordinary differential equations does not mention any
modern codes or techniques, indeed the majority of algorithms discussed
and the recommended bibliography are almost entirely pre 1980."

robin

May 29, 2011, 9:43:25 PM
"David Duffy" <dav...@orpheus.qimr.edu.au> wrote in message news:irukti$cvh$1...@air.soe.uq.edu.au...

| In comp.lang.fortran nm...@cam.ac.uk wrote:
|
| > Has anyone taken a thorough look at Numerical Algorithms with Fortran
| > (or C), Gisela Engeln-Muellges, Frank Uhlig and M. Schon?
|
| > Any pointers to serious reviews (preferably in numerical analysis
| > journals) would be especially welcome.
|
| The review in Computer Physics Communications 103 (1997)

Of what?
Would it be of the second edition, that was published in 1992?
Or the first?

| 100-101 was not too flattering:
|
| "The majority of the chapters stop short of describing (and therefore
| providing algorithms and code for) state of the art algorithms."
|
| "In other cases the advise section appears to be badly out of date. The
| section on stiff ordinary differential equations does not mention any
| modern codes or techniques, indeed the majority of algorithms discussed
| and the recommended bibliography are almost entirely pre 1980."

That would be appropriate if the review was of the first edition.

In any case, the review is obsolete. The third edition was published
in 2007.


robin

May 29, 2011, 9:50:54 PM
"Terence" <tbwr...@cantv.net> wrote in message
news:c598b987-d569-4ff2...@y27g2000prb.googlegroups.com...

>On May 29, 1:43 am, "robin" wrote:
>...
>> | Thanks for the reference. But did they HAVE to use Fortran 77,
>> | in 1996?
>>
>> A book of 600+ pages takes more than a few years to write.
>> The authors state that the book was begun about 20 years before.
>> Several editions have been published over the years, each edition updated.
>>
>> As well as F77, they also include F90 versions.
>> They could well have dropped the F77 versions.

>The point about using F77 in algorithms is that most non-specialits
>can better understand the processes being used,

That's nonsense.
It's actually clearer in F90+. (You don't have to use everything available in F90+.)

robin

May 30, 2011, 1:30:24 AM
"robin" <rob...@dodo.mapson.com.au> wrote in message news:4de2f648$0$89996$c30e...@exi-reader.telstra.net...

| "David Duffy" <dav...@orpheus.qimr.edu.au> wrote in message news:irukti$cvh$1...@air.soe.uq.edu.au...
|| In comp.lang.fortran nm...@cam.ac.uk wrote:
||
|| > Has anyone taken a thorough look at Numerical Algorithms with Fortran
|| > (or C), Gisela Engeln-Muellges, Frank Uhlig and M. Schon?
||
|| > Any pointers to serious reviews (preferably in numerical analysis
|| > journals) would be especially welcome.
||
|| The review in Computer Physics Communications 103 (1997)
|
| Of what?
| Would it be of the second edition, that was published in 1992?

Oops ! Please ignore my remarks in my previous post.
I thought this was still about NR (as in the subject heading).


Daniel Carrera

May 30, 2011, 4:23:22 AM
On 05/29/2011 07:49 PM, steve wrote:
> I disagree with this assertion. One example is "Introduction to
> Numerical Analysis" by F. B. Hildebrand. This book covers much of
> what NR does and covers topics not included NR. The book does
> not include code; only the mathematics and algorithms upon which
> one can base their own code. One can get a copy from Dover for
> $20.

Thanks! I think this is the first low-cost book that I've seen
mentioned in this list. I found it on Amazon too.

All other things being equal, I think I prefer a book with no code than
a book with C++ code, because I find C++ harder to read.

Daniel.

Eli Osherovich

May 30, 2011, 4:23:29 AM
On 05/28/2011 11:29 AM, Daniel Carrera wrote:
> Hello,
>
> I have heard often in this forum about the perils of Numerical Recipes.
> Today I saw a web page with more criticism along similar lines to what
> has been said on this list:
>
> http://www.lysator.liu.se/c/num-recipes-in-c.html

>
>
> Now my question: Can anyone recommend an alternative to NR?
>
> [wish list snipped]
"Have a look at Numerical Methods in Scientific Computing" by Dahlquist
and Bjork.

Their first book entitled "Numerical Methods" must be dirt cheap now.

The new edition (with a new title) comes in two volumes (the second
volume should be published soon). Moreover, during the work-in-progress
era both volumes were available for download from Bjork's website (with
some chapters missing).

It seems that Volume 1 is not for download anymore, but the second
volume is still available here http://www.mai.liu.se/~akbjo/dqbjVol2.pdf

Daniel Carrera

May 30, 2011, 4:31:41 AM
I'm a bit concerned that all the books mentioned seem to be a bit old.
"Intro to Numerical Analysis" is from 1988, and David Duffy just posted
a review of "Numerical Algorithms with Fortran" saying that the
algorithms there are all pre-1980.


Daniel.

Daniel Carrera

May 30, 2011, 4:39:56 AM
On 05/30/2011 10:23 AM, Eli Osherovich wrote:
> "Have a look at Numerical Methods in Scientific Computing" by Dahlquist
> and Bjork.
>
> Their first book entitled "Numerical Methods" must be dirt cheap now.
>
> The new edition (with a new title) comes in two volumes (the second
> volume should be published soon). Moreover, during the work-in-progress
> era both volumes were available for download from Bjork's website (with
> some chapters missing).
>
> It seems that Volume 1 is not for download anymore, but the second
> volume is still available here http://www.mai.liu.se/~akbjo/dqbjVol2.pdf


Thanks! I've downloaded the second volume. I'll read some of it before
making a buying decision. This book is a major buying decision, but it
looks very modern and very thorough.

Daniel.

Eli Osherovich

May 30, 2011, 4:51:34 AM
Daniel, I believe you can find the draft of the first volume also.

Also, check the SIAM page for purchasing. They have serious discounts
for their members (membership is free for students, by the way).

Eli Osherovich

May 30, 2011, 5:01:32 AM
On 05/30/2011 11:39 AM, Daniel Carrera wrote:

There is another introductory book that I like:

"An introduction to numerical analysis" by Suli and Mayers

nm...@cam.ac.uk

May 30, 2011, 4:58:03 AM
In article <irukti$cvh$1...@air.soe.uq.edu.au>,

Thanks very much. Well, that's better than most of the traditional
ones, which were one or more decades before that :-(


Regards,
Nick Maclaren.

nm...@cam.ac.uk

May 30, 2011, 5:23:22 AM
In article <irvl5h$bs9$1...@dont-email.me>,

Daniel Carrera <dan...@gmail.com> wrote:
>On 05/30/2011 10:23 AM, Eli Osherovich wrote:
>> "Have a look at Numerical Methods in Scientific Computing" by Dahlquist
>> and Bjork.
>>
>> It seems that Volume 1 is not for download anymore, but the second
>> volume is still available here http://www.mai.liu.se/~akbjo/dqbjVol2.pdf
>
>Thanks! I've downloaded the second volume. I'll read some of it before
>making a buying decision. This book is a major buying decision, but it
>looks very modern and very thorough.

Grrk. Not thorough, unfortunately, but perhaps I'm asking too
much. I looked at a few aspects that I know about, and it was
very much the same old, blinkered, traditional numerical analysis
approach. HOWEVER, those aspects are a bit subtle.

Don't read too much into these remarks, as I can make them about
virtually every numerical analysis book I have ever read! Two of
the three are important to statisticians, but most physicists
wouldn't recognise the requirements if they were bitten in the
backside by them!

It was very poor on the LL' factorisation of positive semi-definite
matrices, which is actually a soluble problem. As far as I know, I
was the first person to extract the technique from Wilkinson and
Reinsch, but it's now fairly widely used (probably by reinvention).
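[For readers who have not seen it: the key to the semi-definite case is
diagonal pivoting, which yields P A P' = L L' with the numerical rank
falling out as a by-product. The sketch below is an illustrative
pure-Python rendering of that general technique, with a simple absolute
tolerance and no scaling; it is NOT the Wilkinson and Reinsch routine
itself, and real library code is more careful about the stopping test.]

```python
import math

def psd_cholesky(A, tol=1e-12):
    """Rank-revealing Cholesky of a symmetric positive SEMI-definite
    matrix.  Returns (L, piv, rank) such that
        A[piv[i]][piv[j]] ~= sum_k L[i][k] * L[j][k].
    Diagonal pivoting keeps the elimination stable when A is singular."""
    n = len(A)
    a = [row[:] for row in A]            # work on a copy
    piv = list(range(n))
    rank = 0
    for k in range(n):
        # choose the largest remaining diagonal entry as pivot
        p = max(range(k, n), key=lambda i: a[i][i])
        if a[p][p] <= tol:
            break                        # trailing block is numerically zero
        # symmetric row/column interchange k <-> p
        a[k], a[p] = a[p], a[k]
        for row in a:
            row[k], row[p] = row[p], row[k]
        piv[k], piv[p] = piv[p], piv[k]
        # one step of outer-product Cholesky on the trailing block
        a[k][k] = math.sqrt(a[k][k])
        for i in range(k + 1, n):
            a[i][k] /= a[k][k]
        for j in range(k + 1, n):
            for i in range(j, n):
                a[i][j] -= a[i][k] * a[j][k]
                a[j][i] = a[i][j]        # keep the trailing block symmetric
        rank += 1
    # L is the lower triangle of the first `rank` columns
    L = [[a[i][j] if j <= i and j < rank else 0.0 for j in range(n)]
         for i in range(n)]
    return L, piv, rank
```

On an exactly rank-deficient matrix the loop stops early and `rank` comes
out correctly; LAPACK's pivoted Cholesky (`xPSTRF`) follows the same idea
with a more careful termination criterion.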

Its description on the reasons for blocking and hence the analysis
of why to do it is just plain misleading. I am sorry, but the
number of arithmetic operations versus data accesses ceased to be
the issue about 1980. It's caching, and that affects the tradeoffs.
It mentions caching, briefly, only under the BLAS.

It uses the term "Generalized" in the context of least squares in
a sense that is so restrictive as to make me raise my eyebrows!
In terms of least squares problems as encountered in practice,
that ain't generalised - no way, Jose!

So, overall, whether it is excellent or not, it IS specialised.


Regards,
Nick Maclaren.

Daniel Carrera

May 30, 2011, 6:19:47 AM
On 05/30/2011 11:23 AM, nm...@cam.ac.uk wrote:
> Its description on the reasons for blocking and hence the analysis
> of why to do it is just plain misleading. I am sorry, but the
> number of arithmetic operations versus data accesses ceased to be
> the issue about 1980. It's caching, and that affects the tradeoffs.
> It mentions caching, briefly, only under the BLAS.


This is probably a dumb question, but isn't caching part of what we mean
by data access? The point being that accessing something from main
memory is extremely expensive so you want to minimize cache misses. No?


> So, overall, whether it is excellent or not, it IS specialised.

Is "specialised" a good thing or a bad thing? In which way is it
specialised? If it is specialised for the typical problems that a
physicist will want to solve, I'm happy :-)

Daniel.

nm...@cam.ac.uk

May 30, 2011, 5:53:43 AM
In article <irvr05$c70$1...@dont-email.me>,

Daniel Carrera <dan...@gmail.com> wrote:
>
>> Its description on the reasons for blocking and hence the analysis
>> of why to do it is just plain misleading. I am sorry, but the
>> number of arithmetic operations versus data accesses ceased to be
>> the issue about 1980. It's caching, and that affects the tradeoffs.
>> It mentions caching, briefly, only under the BLAS.
>
>This is probably a dumb question, but isn't caching part of what we mean
>by data access? The point being that accessing something from main
>memory is extremely expensive so you want to minimize cache misses. No?

Yes, but the real issues are at levels beyond that. The point is
that his criterion and best performance lead to different design
approaches. But, from your postings, I doubt that you are ready
to start using the difference. No offence intended.

>> So, overall, whether it is excellent or not, it IS specialised.
>
>Is "specialised" a good thing or a bad thing? In which way is it
>specialised? If it is specialised for the typical problems that a
>physicist will want to solve, I'm happy :-)

Dunno. I am not, nor have I ever been, a physicist. I wasn't
totally impressed at the amount of time it spent on details of
the traditional approach, without even mentioning more modern
ones - which are rarely used by physicists, though sometimes
they should be.


Regards,
Nick Maclaren.

gmail-unlp

May 30, 2011, 6:59:12 AM
On May 30, 6:53 am, n...@cam.ac.uk wrote:
> In article <irvr05$c7...@dont-email.me>,

> Daniel Carrera  <dan...@gmail.com> wrote:
>
>
>
> >> Its description on the reasons for blocking and hence the analysis
> >> of why to do it is just plain misleading.  I am sorry, but the
> >> number of arithmetic operations versus data accesses ceased to be
> >> the issue about 1980.  It's caching, and that affects the tradeoffs.
> >> It mentions caching, briefly, only under the BLAS.
>
> >This is probably a dumb question, but isn't caching part of what we mean
> >by data access? The point being that accessing something from main
> >memory is extremely expensive so you want to minimize cache misses. No?
>
> Yes, but the real issues are at levels beyond that.  The point is
> that his criterion and best performance lead to different design
> approaches.  But, from your postings, I doubt that you are ready
> to start using the difference.  No offence intended.

Please assume we are ready, and explain something about

> The point is
> that his criterion and best performance lead to different design
> approaches.

Thanks in advance,

Fernando.

Daniel Carrera

May 30, 2011, 7:19:38 AM
On 05/30/2011 11:53 AM, nm...@cam.ac.uk wrote:
> But, from your postings, I doubt that you are ready
> to start using the difference. No offence intended.


Nobody is an expert on everything. I know more about computers and
algorithms than most in my department, but I am very far from a
numerical analyst. In turn, I suspect I know more GR than the typical
numerical analyst. Even in my department there is a wide range of
skills. I have a friend who is an expert on fluid mechanics, a subject I
know almost nothing about.

I do try to learn. If you think you can explain the cache and
performance issues in a language that an interested, competent but
non-specialist reader can understand, I would like to hear it.


> Dunno. I am not, nor have I ever been, a physicist. I wasn't
> totally impressed at the amount of time it spent on details of
> the traditional approach, without even mentioning more modern
> ones - which are rarely used by physicists, though sometimes
> they should be.

That's unfortunate. The book is recent, so I hoped it would include
modern algorithms. That's a problem I see in basically every book
recommended. They are all old, or use old algorithms. I don't want yet
another book that teaches Runge-Kutta 4.

Daniel.
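[For readers following along: the "Runge-Kutta 4" being complained about
is the classical fixed-step scheme below, which is exactly why every
introductory book can fit it in. The complaint is that books often stop
here instead of covering adaptive step-size control or stiff solvers.
A minimal sketch, written as pseudo-code-like Python:]

```python
def rk4(f, t0, y0, t1, n):
    """Classical 4th-order Runge-Kutta for the scalar ODE y' = f(t, y),
    integrating from t0 to t1 in n fixed steps.  Global error is O(h^4);
    there is no step-size control, and it is unsuitable for stiff
    problems -- which is precisely the limitation being discussed."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y
```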

Gordon Sande

May 30, 2011, 7:20:16 AM
On 2011-05-30 05:31:41 -0300, Daniel Carrera said:

> I'm a bit concerned that all the books mentioned seem to be a bit old.
> "Intro to Numerical Analysis" is from 1988, and David Duffy just posted
> a review of "Numerical Algorithms with Fortran" saying that the
> algorithms there are all pre-1980.

And not only that, much of the mathematics is from the 1800s! The mathematics
remains true, so it is not an issue. Much of the basic numerical analysis
was first described well in the 1970s. Later descriptions have become more
abstract, which is not always easier to understand. More specialized topics
have newer references.

nm...@cam.ac.uk

May 30, 2011, 7:07:11 AM
In article <1ffdb09a-7928-46ec...@f2g2000yqh.googlegroups.com>,

gmail-unlp <ftin...@gmail.com> wrote:
>>
>> >> Its description on the reasons for blocking and hence the analysis
>> >> of why to do it is just plain misleading. I am sorry, but the
>> >> number of arithmetic operations versus data accesses ceased to be
>> >> the issue about 1980. It's caching, and that affects the tradeoffs.
>> >> It mentions caching, briefly, only under the BLAS.
>>
>> >This is probably a dumb question, but isn't caching part of what we mean
>> >by data access? The point being that accessing something from main
>> >memory is extremely expensive so you want to minimize cache misses. No?
>>
>> Yes, but the real issues are at levels beyond that. The point is
>> that his criterion and best performance lead to different design
>> approaches. But, from your postings, I doubt that you are ready
>> to starting using the difference. No offence intended.
>
>Please assume we are ready, and explain something about

It refers to the number of floating-point operations relative to
the number of data accesses; that is a very outdated way of
thinking about it. The key is how much of the work can be done
independently on separated (or copied) sections of data with a
tightly-bounded working set.

Specifically, one needs to think in terms of data transfers,
where any two accesses to 'close' locations in different blocks
need a data transfer if either is an update. Similarly, when
transferring control from one block to another, one needs to
consider how much of the data needs transferring, and how much
of that will cause serialisation.

In the pre-caching models, separate locations are independent.
With caching, that is true only if they are well separated or
all accesses are read-only. Also, using data in control
statements (e.g. as a pivot) that is not immediately available
forces serialisation.

So, to a great extent, the key is to 'think parallel independent',
even when coding for serial execution.


Regards,
Nick Maclaren.
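[To make the "separated (or copied) sections of data with a
tightly-bounded working set" point concrete, here is the standard
cache-blocking (tiling) pattern for matrix multiplication. Python is
used only to show the loop structure and to check that the blocked
order computes the same product; the actual cache benefit only appears
in a compiled language, with a block size tuned to the cache.]

```python
def matmul_naive(A, B):
    """Plain triple loop: C = A * B for dense row-major lists of lists."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = A[i][k]
            for j in range(p):
                C[i][j] += aik * B[k][j]
    return C

def matmul_blocked(A, B, bs=32):
    """Same product, iterated over bs-by-bs tiles.  Each tile update
    touches only one block of A, one of B and one of C, so the working
    set stays bounded regardless of the matrix size."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, m, bs):
            for jj in range(0, p, bs):
                # one independent tile update; the tiles are the
                # "separated sections of data" referred to above
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, m)):
                        aik = A[i][k]
                        for j in range(jj, min(jj + bs, p)):
                            C[i][j] += aik * B[k][j]
    return C
```

Because each tile update is independent of the others apart from the
accumulation into C, the same decomposition is what one would hand to
threads; that is the sense in which one "thinks parallel independent"
even when coding for serial execution.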

Eli Osherovich

May 30, 2011, 8:25:07 AM
>
> Grrk. Not thorough, unfortunately, but perhaps I'm asking too
> much. I looked at a few aspects that I know about, and it was
> very much the same old, blinkered, traditional numerical analysis
> approach. HOWEVER, those aspects are a bit subtle.
>
...
>
> Regards,
> Nick Maclaren.


Nick, can you recommend some *good* books.

Thanks.


nm...@cam.ac.uk

May 30, 2011, 8:06:29 AM
In article <irvugb$vmj$1...@dont-email.me>,

Daniel Carrera <dan...@gmail.com> wrote:
>
>I do try to learn. If you think you can explain the cache and
>performance issues in a language that an interested, competent but
>non-specialist reader can understand, I would like to hear it.

Hmm. That's a thought. It would make a useful annex to some of
my courses. I mention the issues, and general principles, but
don't show how they map to code. I can't do it now, however.

>> Dunno. I am not, nor have I ever been, a physicist. I wasn't
>> totally impressed at the amount of time it spent on details of
>> the traditional approach, without even mentioning more modern
>> ones - which are rarely used by physicists, though sometimes
>> they should be.
>
>That's unfortunate. The book is recent, so I hoped it would include
>modern algorithms. That's a problem I see in basically every book
>recommended. They are all old, or use old algorithms. I don't want yet
>another book that teaches Runge-Kutta 4.

Oh, yes. But it's better than that, and may even be modern in
the areas of the problems it is thinking of.

It certainly isn't in terms of computing performance, but that's
another can of worms!


Regards,
Nick Maclaren.

Daniel Carrera

May 30, 2011, 8:44:51 AM
On 05/30/2011 10:51 AM, Eli Osherovich wrote:
> On 05/30/2011 11:39 AM, Daniel Carrera wrote:
> Daniel, I believe you can find the draft of the first volume also.

I found an earlier draft. It's about 200 pages shorter than the final
book, but it's still good.

> Also, check the SIAM page for purchasing. They have serious discounts
> for their members (membership is free for students, by the way).

Thanks.

Daniel.

nm...@cam.ac.uk

May 30, 2011, 8:09:51 AM
In article <is02b4$la4$1...@dont-email.me>,

Eli Osherovich <eli.osh...@gmail.com> wrote:
>>
>> Grrk. Not thorough, unfortunately, but perhaps I'm asking too
>> much. I looked at a few aspects that I know about, and it was
>> very much the same old, blinkered, traditional numerical analysis
>> approach. HOWEVER, those aspects are a bit subtle.
>
>Nick, can you recommend some *good* books.

In terms of numerical algorithms and the application of numerical
methods outside the traditional areas (e.g. to advanced statistics),
no. I wish I could, and have been looking for a good many years.

I am not damning the authors of the books that are good in their
way, because you can't do everything, but it IS important to know
where the expertise of a book stops.


Regards,
Nick Maclaren.

Daniel H

May 30, 2011, 10:33:52 AM
On 05/30/2011 02:44 PM, Daniel Carrera wrote:
> On 05/30/2011 10:51 AM, Eli Osherovich wrote:
>> On 05/30/2011 11:39 AM, Daniel Carrera wrote:
>> Daniel, I believe you can find the draft of the first volume also.
>
> I found an earlier draft. It's about 200 pages shorter than the final
> book, but it's still good.
>

Would it be possible to post the link here?

Daniel Carrera

May 30, 2011, 10:43:17 AM


It was on one of those websites that try to annoy you into buying a
subscription by delaying the download and directing you to the
"register" page every other click.

I put a copy of the file here:

http://pc292.astro.lu.se/~daniel/files/Numerical.Methods.in.Scientific.Computation.Volume.1.DRAFT.pdf


HTH,
Daniel.

Daniel H

May 30, 2011, 10:52:24 AM
Thanks!!!

On first glance, I really like the books.

David W Noon

May 30, 2011, 11:07:23 AM
On Mon, 30 May 2011 10:31:41 +0200, Daniel Carrera wrote about Re:
Recommend an alternative to Numerical Recipes?:

>I'm a bit concerned that all the books mentioned seem to be a bit old.
>"Intro to Numerical Analysis" is from 1988, and David Duffy just
>posted a review of "Numerical Algorithms with Fortran" saying that the
>algorithms there are all pre-1980.

I own a copy of each. Hildebrand's book was first written in the
1950's, and Engeln-Müllges and Uhlig's book was published in 1996,
after about 20 years of preparation. I bought the former back in the
mid 1970s and the latter just after it was published. In the iterative
methods chapters of each of those books, they offer the Newton-Raphson
algorithm, which dates back to the 17th Century!

Yes, the books and their algorithms are old by the standards of
computing, but the applicable laws of algebra have not changed much in
all of that time.

If you want a book on numerical analysis, either of these will be
fine -- they will teach you the underpinnings of numerical methods. If
you simply want a compendium of "canned algorithms" then stick with
N.R. and take your chances.
--
Regards,

Dave [RLU #314465]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
dwn...@spamtrap.ntlworld.com (David W Noon)
Remove spam trap to reply by e-mail.
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*

none

May 31, 2011, 2:04:49 PM

I bought it.

I've never used the algorithms; it seemed no better than Numerical
recipes, at least in my problem area - solution of ODEs.

none

May 31, 2011, 2:17:40 PM
On Sat, 28 May 2011 12:53:48 +0100, nmm1 wrote:

> In article <irqn2q$vvn$1...@dont-email.me>,
> Daniel Carrera <dan...@gmail.com> wrote:
>>
>>Ok. It's hard to know what to think. Some people seem to love NR and
>>others seem to hate it.
>
> There is an extremely high (indeed, almost perfect) correlation
> between expertise and dislike of the book.
>
> The first edition was an utter disgrace. Inter alia, I had
> refereed random number algorithms for 25 years, and its ones
> were a new low. MUCH worse than RANDU. Now, THAT'S impressive!
> Damn the random walk, a simple uniformity test went wrong with
> a sample of 200. The second edition was merely very bad.
>

Could you give details, or a reference to where I might find details? The
1st edition took pains to discuss RANDU and how their method was better
than it.

There is an environmental standards-setting program that uses the 1st
edition routines. It does Monte-Carlo analysis to assist in setting
95th-percentile discharge limits. I had recommended upgrading the random
number generator to the NR 2nd edition versions.

Not being a random number expert, I read their algorithm as doing a good
job; but at that time I did not need to implement their work. I used the
2nd edition routines last year, in a Monte Carlo simulation for plant
pathogen infectivity risks - how badly wrong might the 'random' numbers
be? I was generating c. 100,000 deviates for each scenario.

> I looked at other areas that I am an expert on, and they were similar,
> though not as impressively incompetent as the first edition's random
> numbers. I have better things to do than to track every variation and
> edition.
>
> The points that are missed by the people who claim to have had no
> trouble are:
>
> Not noticing trouble is not the same as there not being any.
> Even in their own analyses, why were they sure that the answers were
> right? Merely not falling over is a very poor measure of quality.
>
> Most practical problems can be solved with a trivial, bad
> solution most of the time. The reason that experts encourage the use of
> good ones is that (a) they are less likely to fail and (b) are more
> likely to detect when they do.
>
>
> Regards,
> Nick Maclaren.

glen herrmannsfeldt

May 31, 2011, 3:29:52 PM
none <no...@none.net> wrote:

(snip on random numbers and NR)


> Could you give details, or a reference to where I might find details? The
> 1st edition took pains to discuss RANDU and how their method was better
> than it.

> There is an environmental standards-setting program that uses the 1st
> Edition routines. Does Monte-Carlo analysis to assist in setting
> 95-percentile discharge limits. I had recomended upgrading the random
> number generator to the NR 2nd edition versions.

Good and bad for random numbers is a science. What is good in some
places might be horrible in others. The favorite reference is
always Knuth's "The Art of Computer Programming" volume 2.

The favorite story about Monte-Carlo and random numbers has to do
with the way many linear congruential generators generate poor groups
of three. That turned out badly for three-dimensional Monte-Carlo,
but should not be a problem for other dimensions (not multiples of
three).

The problem with RANDU, and also with many LC generators, is that
the low bits aren't so random. You don't want to use the result
modulo a small (or even not so small) number to generate numbers
between zero and N-1. RANDU is especially bad with an even seed,
and, with an odd seed, always generates odd results. (Even
though I haven't looked at it for about 30 years I still remember:
multiply by 65539, take the result modulo 2**31. On 32 bit machines
where multiply returns the low 32 bits, test for a negative value
and add 2**30 twice.)
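That recollection can be made concrete (a sketch of our own, not IBM's code). RANDU is x_{n+1} = 65539 * x_n mod 2**31, and because 65539 = 2**16 + 3 while 2**32 = 0 (mod 2**31), every three consecutive outputs satisfy x_{n+2} = 6*x_{n+1} - 9*x_n (mod 2**31) -- the identity that confines triples to a handful of planes:

```python
def randu(seed):
    """RANDU: x_{n+1} = 65539 * x_n mod 2**31, yielded indefinitely."""
    m = 2**31
    x = seed
    while True:
        x = (65539 * x) % m
        yield x

# With an odd seed every output is odd, and consecutive triples obey
# x3 = 6*x2 - 9*x1 (mod 2**31), since 65539**2 = 2**32 + 6*2**16 + 9.
gen = randu(1)
xs = [next(gen) for _ in range(1000)]
```

From seed 1 the first outputs are 65539, 393225, 1769499 -- all odd, as the multiplier argument above predicts.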


> Not being a random number expert my reading of their algorithm made it
> look as if they were doing a good job; but at that time I did not require
> implementing their work. I used the 2nd edition routines last year, in a
> Monte Carlo simulation for plant pathogen infectivity risks - how badly
> wrong might the 'random' numbers be? I was generating c. 100,000 deviates
> for each scenario.

How did you use the numbers in your algorithm? That is often as
important as the generator used.

-- glen

Arjan

May 31, 2011, 5:03:56 PM
> Good and bad for random numbers is a science. What is good in some


I remember that once in 1980 my teacher wanted to make a distribution
function of the results of all simulations from my classmates for a
particular homework assignment. A random number generator was the
basis for the simulation. To his surprise, the resulting probability
function had a single bin: we had all used the same seed for the random
number generator and consequently found the same result...
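The pitfall generalizes to any library generator: identical seeds give identical streams. A minimal sketch (Python's `random` here stands in for whatever generator the class used):

```python
import random

# Two "independent" simulations that seed identically...
a = random.Random(1980)
b = random.Random(1980)
run_a = [a.random() for _ in range(5)]
run_b = [b.random() for _ in range(5)]

# ...produce exactly the same deviates, so every student's result lands
# in the same histogram bin.  A distinct seed restores the spread.
c = random.Random(1981)
run_c = [c.random() for _ in range(5)]
```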

A.

none

May 31, 2011, 5:32:02 PM
On Tue, 31 May 2011 19:29:52 +0000, glen herrmannsfeldt wrote:

> none <no...@none.net> wrote:

>> Not being a random number expert my reading of their algorithm made it
>> look as if they were doing a good job; but at that time I did not require
>> implementing their work. I used the 2nd edition routines last year, in a
>> Monte Carlo simulation for plant pathogen infectivity risks - how badly
>> wrong might the 'random' numbers be? I was generating c. 100,000 deviates
>> for each scenario.
>
> How did you use the numbers in your algorithm? That is often as
> important as the generator used.
>
> -- glen

Fairly simplistically. I had probabilities for green waste being
contaminated, green waste being lost during transport, green waste being
lost during transfer, and escaping from filters in the air system at the
treatment location. So we generated the random numbers and just checked
whether they were less than the relevant probability. If so, we assumed
that the parcel of green waste in question was contaminated/escaped.
Finally, we summed up what could be expected to escape near various
sites of special scientific interest over a twelve-month period, and
over the hatching period for the plant pathogen.
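The sampling scheme described amounts to comparing one uniform deviate per parcel against the event probability (a sketch with hypothetical numbers, not the published code):

```python
import random

def count_events(p_event, n_parcels, rng):
    """Flag a parcel whenever its uniform deviate falls below the
    event probability; return the number of flagged parcels."""
    return sum(1 for _ in range(n_parcels) if rng.random() < p_event)

rng = random.Random(42)   # fixed seed for reproducibility
p_contaminated = 0.05     # hypothetical contamination probability
hits = count_events(p_contaminated, 100_000, rng)
```

Over 100,000 parcels the observed rate should sit within a small fraction of a percent of p; chaining the independent stages (transport loss, transfer loss, filter escape) just adds further comparisons per parcel.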

I could dig out the relevant generator that I used, but from memory it was
the replacement for the previous $100,000 bet version. I chose the NR
version over the built-in Intel Fortran one as being more accessible
for checking, as the results would be scrutinised at a public inquiry.

All the source code was published for the other side to review, along
with the algorithms, the data used, and the references for where we
acquired our probabilities. They didn't come back with any queries, but
since their approach was effectively 'if a single spore is released
then it is a major contamination', they may not really have had the
background to judge a more mathematical assessment of the risk. Our
views on the risks were accepted by the judge over the other side ;-)

robin

May 31, 2011, 7:44:37 PM
"glen herrmannsfeldt" <g...@ugcs.caltech.edu> wrote in message news:is3fjg$j6e$1...@dont-email.me...

| The problem with RANDU, and also with many LC generators, is that
| the low bits aren't so random. You don't want to use the result
| modulo a small (or even not so small) number to generate numbers
| between zero and N-1. RANDU is especially bad with an even seed,
| and,

IBM's comment documentation with RANDU says that the starter value
MUST be odd-valued.

| with an odd seed, always generates odd results.

As expected, as the multiplier is odd-valued.

| (Even
| though I haven't looked at it for about 30 years I still remember:
| multiply by 65539, take the result modulo 2**31. On 32 bit machines
| where multiply returns the low 32 bits, test for a negative value
| and add 2**30 twice.)

IBM's comment documentation with RANDU says that it is specific
to S/360.

IBM's RANDU adds 2147483647 + 1


robin

Jun 1, 2011, 1:20:55 AM
"glen herrmannsfeldt" <g...@ugcs.caltech.edu> wrote in message news:irrfbc$lj3$1...@dont-email.me...

| Introductory NR books stop at trapezoidal rule, or maybe
| Simpson's rule, without explaining either.

Introductory Numerical Methods books cover all the basic methods,
and they explain them well, including the Newton-Cotes rules, etc.
See, for example, Conte and de Boor, and Burden and Faires (Numerical Analysis).
The latter book, incidentally, gives algorithms for many of the
topics.

| NR is the
| only one that I know of that explains extended Simpson's
| rule, explaining away the magic coefficients. They go
| a little into the better algorithms, and then refer one
| to books that explain the rest.

Any introductory book on numerical methods will explain
those methods.


robin

Jun 1, 2011, 1:27:33 AM
"glen herrmannsfeldt" <g...@ugcs.caltech.edu> wrote in message news:is3fjg$j6e$1...@dont-email.me...

| The problem with RANDU, and also with many LC generators, is that
| the low bits aren't so random.

You are thinking of the integer result. RANDU also gives a floating-point
result, and for that the low-order bits are dropped. For the S/360
(for which IBM's RANDU was available), the least-significant 8 bits
were typically dropped.

| You don't want to use the result
| modulo a small (or even not so small) number to generate numbers
| between zero and N-1.

Again, for that, you'd use the floating-point result.


glen herrmannsfeldt

Jun 1, 2011, 2:03:38 AM
robin <rob...@dodo.mapson.com.au> wrote:

(snip)


> | NR is the
> | only one that I know of that explains extended Simpson's
> | rule, explaining away the magic coefficients. They go
> | a little into the better algorithms, and then refer one
> | to books that explain the rest.

> Any introductory book on numerical methods will explain
> those methods.

A lot of books explain the magic coefficients. NR explains
the magic away. It is the only book I know of that does that.
If you haven't read the explanation then you won't know what
it is that they explain away.

OK, here is the usual explanation. Simpson's rule gives the
best value for an integral given the two ends and the midpoint.
That is, a weight of 1/3 for each end and 4/3 for the midpoint.

The extended Simpson's rule evaluates the integral as a series
of such, with alternating weights of 2/3 and 4/3. Many books
explain that.


Now, consider that you are integrating over hundreds of points.
Why should every other point be weighted twice what the points
are in between? Near the ends, there is an end effect.
Far from the end, the weights should all be one. NR explains it.

Since you say that all books explain it, can you find a book
that does and post the rest of the explanation?

Most likely not.

-- glen
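To make the weights above concrete, here is the extended (composite) Simpson's rule with the alternating interior weights just described -- a sketch of ours, not NR's code. NR's "explained away" alternative instead keeps the interior weights at 1 and moves the correction to the ends:

```python
def composite_simpson(f, a, b, n):
    """Extended Simpson's rule on n (even) equal intervals:
    weights h/3 * [1, 4, 2, 4, ..., 2, 4, 1]."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)                                     # the "1/3" ends
    total += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # the "4/3" points
    total += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # the "2/3" points
    return total * h / 3

# Simpson's rule is exact for cubics, so this reproduces the
# integral of x**3 over [0, 1] (= 1/4) to rounding error.
result = composite_simpson(lambda x: x**3, 0.0, 1.0, 10)
```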

nm...@cam.ac.uk

Jun 1, 2011, 2:15:57 AM
In article <is3fjg$j6e$1...@dont-email.me>,
glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

>none <no...@none.net> wrote:
>
>>> The first edition was an utter disgrace. Inter alia, I had
>>> refereed random number algorithms for 25 years, and its ones
>>> were a new low. MUCH worse than RANDU. Now, THAT'S impressive!
>>> Damn the random walk, a simple uniformity test went wrong with
>>> a sample of 200. The second edition was merely very bad.
>>
>> Could you give details, or a reference to where I might find details? The
>> 1st edition took pains to discuss RANDU and how their method was better
>> than it.

Regrettably, no. You would spot problems easily enough by using
DIEHARD or the methods described in Knuth, but I don't think that
anyone wrote up the defects in detail. They were blindingly
obvious to anyone with even passing expertise.
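A "simple uniformity test" of the kind mentioned can be sketched in a few lines (our illustration; DIEHARD and Knuth's test battery are far more thorough):

```python
def chi_square_uniformity(samples, k=10):
    """k-bin chi-square statistic for samples in [0, 1).  For a good
    generator this is roughly chi-square with k-1 degrees of freedom,
    so values far above ~2*(k-1) are suspicious."""
    n = len(samples)
    counts = [0] * k
    for x in samples:
        counts[min(int(x * k), k - 1)] += 1
    expected = n / k
    return sum((c - expected) ** 2 / expected for c in counts)

# Perfectly even synthetic data scores 0; a degenerate "generator"
# that always returns 0.5 scores enormously.
even = chi_square_uniformity([i / 1000 for i in range(1000)])
stuck = chi_square_uniformity([0.5] * 1000)
```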

>Good and bad for random numbers is a science. What is good in some
>places might be horrible in others. The favorite reference is
>always Knuth's "The Art of Computer Programming" volume 2.

Yes. It has a few minor defects and is dated, but is sound.
I recommend using one of Marsaglia's modern 64-bit ones, or I
have one that I am happy for people to use.

>The favorite story about Monte-Carlo and random numbers has to do
>with the way many linear congruential generators generate poor groups
>of three. That turned out badly for three-dimensional Monte-Carlo,
>but should not be a problem for other dimensions (not multiples of
>three).

Er, no. It's bad for at least some simulations in any dimension
three or above.

>How did you use the numbers in your algorithm? That is often as
>important as the generator used.

Not often, usually! And THAT is one reason that it is so difficult
to persuade people that generators are flawed. The big trouble
is that it is rarely possible to know for certain whether they are
giving unreliable or incorrect results :-(


Regards,
Nick Maclaren.

gmail-unlp

Jun 1, 2011, 6:52:15 AM
On Jun 1, 3:15 am, n...@cam.ac.uk wrote:
> In article <is3fjg$j6...@dont-email.me>,
> glen herrmannsfeldt  <g...@ugcs.caltech.edu> wrote:

>
> >none <n...@none.net> wrote:
>
> >>> The first edition was an utter disgrace.  Inter alia, I had
> >>> refereed random number algorithms for 25 years, and its ones
> >>> were a new low.  MUCH worse than RANDU.  Now, THAT'S impressive!
> >>> Damn the random walk, a simple uniformity test went wrong with
> >>> a sample of 200.  The second edition was merely very bad.
>
> >> Could you give details, or a reference to where I might find details? The
> >> 1st edition took pains to discuss RANDU and how their method was better
> >> than it.
>
> Regrettably, no.  You would spot problems easily enough by using
> DIEHARD or the methods described in Knuth, but I don't think that
> anyone wrote up the defects in detail.  They were blindingly
> obvious to anyone with even passing expertise.
>
> >Good and bad for random numbers is a science.  What is good in some
> >places might be horrible in others.  The favorite reference is
> >always Knuth's "The Art of Computer Programming" volume 2.
>
> Yes.  It has a few minor defects and is dated, but is sound.
> I recommend using one of Marsaglia's modern 64-bit ones, or I
> have one that I am happy for people to use.

Would you post your code here, please?

Fernando.

nm...@cam.ac.uk

Jun 1, 2011, 6:52:08 AM
In article <17b7bf22-a12f-458a...@gv8g2000vbb.googlegroups.com>,

gmail-unlp <ftin...@gmail.com> wrote:
>>
>> >Good and bad for random numbers is a science. What is good in some
>> >places might be horrible in others. The favorite reference is
>> >always Knuth's "The Art of Computer Programming" volume 2.
>>
>> Yes. It has a few minor defects and is dated, but is sound.
>> I recommend using one of Marsaglia's modern 64-bit ones, or I
>> have one that I am happy for people to use.
>
>Would you post your code here, please?

I don't think that's appropriate, but I can send it to anyone
who sends me an Email.


Regards,
Nick Maclaren.

gmail-unlp

Jun 1, 2011, 10:35:36 AM
On Jun 1, 7:52 am, n...@cam.ac.uk wrote:
> In article <17b7bf22-a12f-458a-aa88-f5335c5c0...@gv8g2000vbb.googlegroups.com>,

>
> gmail-unlp  <ftine...@gmail.com> wrote:
>
> >> >Good and bad for random numbers is a science.  What is good in some
> >> >places might be horrible in others.  The favorite reference is
> >> >always Knuth's "The Art of Computer Programming" volume 2.
>
> >> Yes.  It has a few minor defects and is dated, but is sound.
> >> I recommend using one of Marsaglia's modern 64-bit ones, or I
> >> have one that I am happy for people to use.
>
> >Would you post your code here, please?
>
> I don't think that's appropriate, but I can send it to anyone
> who sends me an Email.

Ok, I'll send an e-mail.

Fernando.

Paul van Delst

Jun 1, 2011, 3:44:22 PM
These may have been mentioned upthread (too many responses to check).

Have a look at:
http://www.netlib.org/
http://gams.nist.gov/

The latter lists "free" as well as proprietary software, so pay attention to the results.

cheers,

paulv

Daniel Carrera wrote:
> Hello,
>
> I have heard often in this forum about the perils of Numerical Recipes.
> Today I saw a web page with more criticism along similar lines to what
> has been said on this list:
>
> http://www.lysator.liu.se/c/num-recipes-in-c.html
>
>
> Now my question: Can anyone recommend an alternative to NR?
> [rest of original post snipped]

robin

Jun 1, 2011, 8:31:53 PM
<nm...@cam.ac.uk> wrote in message news:is4let$976$1...@gosset.csi.cam.ac.uk...

| In article <is3fjg$j6e$1...@dont-email.me>,
| glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

| >Good and bad for random numbers is a science. What is good in some
| >places might be horrible in others. The favorite reference is
| >always Knuth's "The Art of Computer Programming" volume 2.
|
| Yes. It has a few minor defects and is dated, but is sound.

Defects? Knuth's Vol. 2 has stood the test of time.
Dated? Not so. Published in 1997/98, and not as dated as your
comments about a 1987 edition of NR that has had its third edition
published in 2007.
Sound? Unquestionably so.


nm...@cam.ac.uk

Jun 2, 2011, 3:58:40 AM
In article <4de6d9fd$0$89995$c30e...@exi-reader.telstra.net>,

robin <rob...@dodo.mapson.com.au> wrote:
>
>| >Good and bad for random numbers is a science. What is good in some
>| >places might be horrible in others. The favorite reference is
>| >always Knuth's "The Art of Computer Programming" volume 2.
>|
>| Yes. It has a few minor defects and is dated, but is sound.
>
>Defects? Knuth's Vol. 2 has stood the test of time.
>Dated? Not so. Published in 1997/98, and not as dated as your
>comments about an 1987 edition of NR that has had its third edition
>published in 2007.
>Sound? Unquestionably so.

Impressive. You have managed to achieve three factual errors and
only managed one correct assertion.


Regards,
Nick Maclaren.

Phillip Helbig---undress to reply

Jun 2, 2011, 5:10:15 AM
In article <is3fjg$j6e$1...@dont-email.me>, glen herrmannsfeldt
<g...@ugcs.caltech.edu> writes:

> Good and bad for random numbers is a science.

Marsaglia said that RNGs are like sex: when they're good, they're really
good, but even the bad ones are still pretty good. His words, not mine.

> What is good in some
> places might be horrible in others. The favorite reference is
> always Knuth's "The Art of Computer Programming" volume 2.

Right. Also, consider that it is possible that there are correlations
between successive numbers, perhaps a maximum or minimum difference.
This could be a problem if you are doing something with their sequence,
but wouldn't be if you consider the ensemble. It depends on what you
want to do.

> The favorite story about Monte-Carlo and random numbers has to do
> with the way many linear congruential generators generate poor groups
> of three. That turned out badly for three-dimensional Monte-Carlo,
> but should not be a problem for other dimensions (not multiples of
> three).

Another story is that IBM said that they guaranteed that any one number
generated by their RNG was random, but not that more than one was
random.

> The problem with RANDU, and also with many LC generators, is that
> the low bits aren't so random.

IIRC, at its highest "luxury level", RANLUX produces perfect random
numbers (within the limits of a 24-bit mantissa, of course), i.e. all
bits are chaotic. This was mathematically proven by Martin Lüscher.
(In this case, one could combine 2 SP RNs to get 1 DP rn.)

nm...@cam.ac.uk

Jun 2, 2011, 5:07:38 AM
In article <is7k1n$2pv$1...@online.de>,

Phillip Helbig---undress to reply <hel...@astro.multiCLOTHESvax.de> wrote:
>
>> Good and bad for random numbers is a science.
>
>Marsaglia said that RNGs are like sex: when they're good, they're really
>good, but even the bad ones are still pretty good. His words, not mine.

He wasn't thinking of the REALLY dire ones when he said that :-(

>IIRC, at its highest "luxury level", RANLUX produces perfect random
>numbers (within the limits of a 24-bit mantissa, of course), i.e. all
>bits are chaotic. This was mathematically proven by Martin Lüscher.
>(In this case, one could combine 2 SP RNs to get 1 DP rn.)

Er, no. Three, if you are using floating-point. I don't advise
RANLUX because it is either flawed or expensive. Knuth refers to
the problem and several of us studied it later (I identified it,
but completely failed with any useful analysis). Martin Luescher
was the first person who was good enough to analyse the problem
properly. However, the RANLUX solution (i.e. the 'luxury' level)
makes the generation very slow.

The question of what constitutes 'perfection' for a finite sequence
is touched on (and I mean that) by Knuth, but is an essentially
insoluble problem except in the context of Kolmogorov compressibility.
And that is not relevant to most uses!


Regards,
Nick Maclaren.

Phillip Helbig---undress to reply

Jun 2, 2011, 6:35:42 AM
In article <is7jsq$j0l$1...@gosset.csi.cam.ac.uk>, nm...@cam.ac.uk writes:

> >IIRC, at its highest "luxury level", RANLUX produces perfect random
> >numbers (within the limits of a 24-bit mantissa, of course), i.e. all
> >bits are chaotic. This was mathematically proven by Martin Lüscher.
> >(In this case, one could combine 2 SP RNs to get 1 DP rn.)
>
> Er, no. Three, if you are using floating-point.

Right.

> I don't advise
> RANLUX because it is either flawed or expensive. Knuth refers to
> the problem and several of us studied it later (I identified it,
> but completely failed with any useful analysis). Martin Luescher
> was the first person who was good enough to analyse the problem
> properly. However, the RANLUX solution (i.e. the 'luxury' level)
> makes the generation very slow.

True, but for many people (with hardware orders of magnitude faster than
when RANLUX was written) that doesn't matter.

Ron Shepard

Jun 2, 2011, 9:05:54 PM
In article <is7k1n$2pv$1...@online.de>,
hel...@astro.multiCLOTHESvax.de (Phillip Helbig---undress to reply)
wrote:

> IIRC, at its highest "luxury level", RANLUX produces perfect random
> numbers (within the limits of a 24-bit mantissa, of course), i.e. all
> bits are chaotic. This was mathematically proven by Martin Lüscher.
> (In this case, one could combine 2 SP RNs to get 1 DP rn.)

How exactly would this be done? This would require combining two random
24-bit mantissas to produce a 53-bit mantissa.

$.02 -Ron Shepard
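One way to do it with *three* 24-bit integers, as Nick suggested upthread (a sketch of our own, with a hypothetical helper name): 3 x 24 = 72 random bits, of which the top 53 fill a double's mantissa.

```python
def dp_from_three_24bit(i1, i2, i3):
    """Pack the top 53 of 72 random bits into a double in [0, 1).
    i1, i2, i3 are integers in [0, 2**24), e.g. the integer mantissas
    of three successive outputs of a 24-bit generator such as RANLUX."""
    bits53 = (i1 << 29) | (i2 << 5) | (i3 >> 19)  # 24 + 24 + 5 = 53 bits
    return bits53 / 2**53

u_min = dp_from_three_24bit(0, 0, 0)
u_max = dp_from_three_24bit(2**24 - 1, 2**24 - 1, 2**24 - 1)
```

Starting from single-precision reals in [0, 1) one would first recover the integer mantissas, e.g. `int(x * 2**24)`; truncating to exactly 53 bits keeps the result strictly below 1.0.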

robin

Jun 10, 2011, 9:31:48 AM
<nm...@cam.ac.uk> wrote in message news:is7frg$htm$1...@gosset.csi.cam.ac.uk...

Fact 1: Knuth's Vol. 2, 3/e, published in 1997/98.
Fact 2: NR first published in 1985/6.
Fact 3: NR 3/e published in 2007.
Fact 4. Your comments about the first edition of NR (1985/6) are out of date.

