
QR vs LU vs SVD decompositions.


Daniel Carrera

Dec 31, 2010, 8:45:03 AM

Hello,

I have a couple of questions that are more about linear algebra than
Fortran, but they touch on my previous questions about LAPACK.

1) Am I right that QR, LU and Singular Value decompositions are all
used to solve similar problems (mainly solving a linear system and
least squares) ?

2) What are the pros and cons of QR, LU and SVD? I've read that LU is
good for well-behaved matrices, SVD is best for "bad" matrices, and QR
lies somewhere in between. But I really don't know what the author
meant by this or why this should be the case.

3) Can someone suggest an online reference besides Wikipedia that
would help me understand what each of these decompositions is good
for?

Thanks.

Daniel.

nm...@cam.ac.uk

Dec 31, 2010, 8:11:16 AM

In article <feca4439-df2b-4e16...@c2g2000yqc.googlegroups.com>,

Daniel Carrera <dcar...@gmail.com> wrote:
>
>I have a couple of questions that are more about linear algebra than
>Fortran, but they touch on my previous questions about LAPACK.
>
>1) Am I right that QR, LU and Singular Value decompositions are all
>used to solve similar problems (mainly solving a linear system and
>least squares) ?

No. QR and SVD are usually/often used for eigensystem analysis,
and SVD for more obscure uses as well. However, all of them CAN
be used to solve systems of linear equations.
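
A minimal sketch of that sort of analysis, on a made-up matrix: LAPACK's
DGESVD returns the singular values, from which one can read off the
conditioning or the numerical rank.

program svd_analysis
  implicit none
  integer, parameter :: m = 3, n = 3, lwork = 64
  double precision :: a(m,n), s(n), u(1,1), vt(1,1), work(lwork)
  integer :: info

  ! Made-up matrix: the second column is twice the first, so the rank is 2.
  a = reshape((/ 1d0, 2d0, 3d0,   2d0, 4d0, 6d0,   1d0, 0d0, 1d0 /), (/ m, n /))

  ! 'N','N' asks for singular values only, no singular vectors.
  call dgesvd('N', 'N', m, n, a, m, s, u, 1, vt, 1, work, lwork, info)

  ! The smallest singular value is ~0, exposing the rank deficiency.
  print *, 'singular values:', s
end program svd_analysis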

I am too rusty to be able to answer your other questions, and all
of the numerical references I have are in out-of-date textbooks.
I would find a modern, reliable, general reference very useful,
and I don't care if it is on paper or the Web. You could look
on the NAG Web site, which has a lot of useful advice; look for
the Fortran library, go to the right chapter (E08, if I recall)
and look at the Chapter Introduction.

http://www.nag.co.uk


Regards,
Nick Maclaren.

Nasser M. Abbasi

Dec 31, 2010, 9:27:47 AM

On 12/31/2010 5:11 AM, nm...@cam.ac.uk wrote:
> In article<feca4439-df2b-4e16...@c2g2000yqc.googlegroups.com>,
> Daniel Carrera<dcar...@gmail.com> wrote:
>>
>> I have a couple of questions that are more about linear algebra than
>> Fortran, but they touch on my previous questions about LAPACK.
>>
>> 1) Am I right that QR, LU and Singular Value decompositions are all
>> used to solve similar problems (mainly solving a linear system and
>> least squares) ?
>
> No. QR and SVD are usually/often used for eigensystem analysis,
> and SVD for more obscure uses as well. However, all of them CAN
> be used to solve systems of linear equations.
>

QR is definitely also used to solve Ax=b when A is not a square matrix.

To solve Ax=b, there are about 8 cases to consider depending on
the shape and properties of A. One of them is A not being square, which
leads to a least-squares solution x (after the QR decomposition).
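
For instance, a minimal sketch of that case with made-up data: LAPACK's
DGELS driver does the QR factorization and the least-squares solve in
one call for an overdetermined system.

program qr_least_squares
  implicit none
  integer, parameter :: m = 4, n = 2, nrhs = 1, lwork = 64
  double precision :: a(m,n), b(m,nrhs), work(lwork)
  integer :: info

  ! Made-up overdetermined system: fit y = c0 + c1*x to four points.
  a(:,1) = 1.0d0                               ! column of ones -> intercept c0
  a(:,2) = (/ 1.0d0, 2.0d0, 3.0d0, 4.0d0 /)    ! x values       -> slope c1
  b(:,1) = (/ 2.1d0, 3.9d0, 6.2d0, 8.0d0 /)    ! observed y values

  ! DGELS factors A = Q*R and minimises ||A*x - b||_2.
  call dgels('N', m, n, nrhs, a, m, b, m, work, lwork, info)

  ! On exit, the first n entries of b hold the least-squares solution.
  print *, 'c0, c1 =', b(1:n,1)
end program qr_least_squares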

I made a small note the other day just to try to understand LAPACK
and compare its routines to Matlab's calls; here it is:

http://12000.org/my_notes/lapack_analysis/lapack/index.htm

I stopped after about 58 LAPACK functions. I got tired; too many
of them!

--Nasser

Beliavsky

Dec 31, 2010, 9:42:52 AM

You can read Chapter 2 of Numerical Recipes, which discusses these
transformations, online at http://www.nrbook.com/fortran/ .

nm...@cam.ac.uk

Dec 31, 2010, 9:15:40 AM

In article <ifkp97$87q$1...@speranza.aioe.org>,

Nasser M. Abbasi <n...@12000.org> wrote:
>>>
>>> 1) Am I right that QR, LU and Singular Value decompositions are all
>>> used to solve similar problems (mainly solving a linear system and
>>> least squares) ?
>>
>> No. QR and SVD are usually/often used for eigensystem analysis,
>> and SVD for more obscure uses as well. However, all of them CAN
>> be used to solve systems of linear equations.

True. I am not thinking properly at present and had forgotten that
case!

>To solve Ax=b, there are about 8 cases to consider depending on
>the shape and properties of A. One of them A being not square, and this
>leads to a least squares solution X (after the QR decomposition)

Don't bet on it. Matrices can be over any field or, even with some
constraints, over any Abelian ring. Ratios of polynomials are an
example of the first, and integers and plain polynomials of the
second. And, of course, both extend to multinomials .... I have
a vague idea that some people solve such equations over non-Abelian
structures, too.

Solving equations over such things reasonably efficiently is, er,
interesting.
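
A tiny illustration of that exact-arithmetic flavour, on made-up integer
data: fraction-free (Bareiss) elimination computes a determinant without
ever leaving the integers -- every division below is exact. This bare-bones
version assumes the leading pivots are nonzero.

program bareiss_demo
  implicit none
  integer, parameter :: n = 3
  integer :: a(n,n), prev, i, j, k

  ! Made-up integer matrix; its determinant is 13.
  a = reshape((/ 2, 1, 1,   1, 3, 2,   1, 2, 4 /), (/ n, n /))

  prev = 1
  do k = 1, n - 1
     do i = k + 1, n
        do j = k + 1, n
           ! 2x2 minor update; the division by the previous pivot is exact.
           a(i,j) = (a(i,j) * a(k,k) - a(i,k) * a(k,j)) / prev
        end do
     end do
     prev = a(k,k)
  end do

  print *, 'det =', a(n,n)   ! prints 13 for this matrix
end program bareiss_demo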


Regards,
Nick Maclaren.

nm...@cam.ac.uk

Dec 31, 2010, 9:21:35 AM

In article <20408bf1-2cc9-458c...@z19g2000yqb.googlegroups.com>,
Beliavsky <beli...@aol.com> wrote:

>On Dec 31, 7:45 am, Daniel Carrera <dcarr...@gmail.com> wrote:
>>
>> 3) Can someone suggest an online reference besides Wikipedia that
>> would help me understand what each of these decompositions is good
>> for?
>
>You can read Chapter 2 of Numerical Recipes, which discusses these
>transformations,
>online at http://www.nrbook.com/fortran/ .

However, you are advised not to. Much of what it says is sound,
but enough is seriously misleading or even just wrong to make it
unsuitable for inexperienced people.


Regards,
Nick Maclaren.

Daniel Carrera

Dec 31, 2010, 11:35:17 AM

On Dec 31, 3:21 pm, n...@cam.ac.uk wrote:
> >You can read Chapter 2 of Numerical Recipes, which discusses these
> >transformations,
> >online at http://www.nrbook.com/fortran/ .
>
> However, you are advised not to.  Much of what it says is sound,
> but enough is seriously misleading or even just wrong to make it
> unsuitable for inexperienced people.

I remember seeing someone in this forum say this before. I'm very
curious, since NR is a popular book. Can you give me an example of
something that NR says that is seriously misleading, and an example of
something that is wrong?

Daniel.

Daniel Carrera

Dec 31, 2010, 11:39:21 AM

On Dec 31, 3:27 pm, "Nasser M. Abbasi" <n...@12000.org> wrote:
> I made a small note the other day just to try to understand lapack
> and compare them to Matlab's calls, here it is:
>
> http://12000.org/my_notes/lapack_analysis/lapack/index.htm
>
> I stopped after about 58 lapack functions. got tired, Too many
> of them !

Thanks! That's very useful. I've bookmarked this page for future
reference.

Daniel.

nm...@cam.ac.uk

Dec 31, 2010, 11:25:34 AM

In article <efecb302-2407-4541...@f8g2000yqd.googlegroups.com>,
Daniel Carrera <dcar...@gmail.com> wrote:

>On Dec 31, 3:21 pm, n...@cam.ac.uk wrote:
>> >You can read Chapter 2 of Numerical Recipes, which discusses these
>> >transformations,
>> >online at http://www.nrbook.com/fortran/ .
>>
>> However, you are advised not to. Much of what it says is sound,

>> but enough is seriously misleading or even just wrong to make it
>> unsuitable for inexperienced people.
>
>I remember seeing someone in this forum say this before. I'm very
>curious, since NR is a popular book. Can you give me an example of
>something that NR says that is seriously misleading, and an example of
>something that is wrong?

Not without looking at it again. But anyone who knows the relevant
areas will be able to spot such things in a few minutes. I can
remember a few things only, and not the details.


Regards,
Nick Maclaren.

e p chandler

Dec 31, 2010, 12:29:26 PM

"Daniel Carrera" <dcar...@gmail.com> wrote in message
news:feca4439-df2b-4e16...@c2g2000yqc.googlegroups.com...

There is a **long** series of lectures by Prof. Boyd from Stanford. Search
for "Introduction to Linear Dynamical Systems" on YouTube. More than I ever
wanted to know about least squares. I've only gotten through about half the
lectures so far.

Ron Shepard

Dec 31, 2010, 1:43:17 PM

In article
<feca4439-df2b-4e16...@c2g2000yqc.googlegroups.com>,
Daniel Carrera <dcar...@gmail.com> wrote:

> Hello,
>
> I have a couple of questions that are more about linear algebra than
> Fortran, but they touch on my previous questions about LAPACK.
>
> 1) Am I right that QR, LU and Singular Value decompositions are all
> used to solve similar problems (mainly solving a linear system and
> least squares) ?

These are all related to factorization of matrices in linear algebra
problems (the "LA" in LAPACK), so they are all similar in that
respect. But they are different kinds of factorizations. For
example, LU does not apply to singular matrices, while QR and SVD can
be used for singular systems. Sometimes QR and SVD are used to
determine the numerical rank of a problem, sometimes they are used
when the rank is known ahead of time to be less than the dimension.
The methods have different operation counts and memory requirements,
and sometimes when there is a choice of which method to use that is
the determining factor. There is another important kind of
factorization too, the eigenvalues and eigenvectors, which is
similar to but distinct from the three that you mention above.
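
A concrete sketch of that difference in LAPACK terms, on made-up data:
DGESV solves a square system via LU, while DGELSS solves the same system
via the SVD and additionally reports the singular values and the effective
numerical rank.

program lu_vs_svd
  implicit none
  integer, parameter :: n = 3, nrhs = 1, lwork = 128
  double precision :: a(n,n), a2(n,n), b(n,nrhs), b2(n,nrhs)
  double precision :: s(n), work(lwork), rcond
  integer :: ipiv(n), rank, info

  ! Made-up, well-conditioned square system.
  a = reshape((/ 4d0, 1d0, 0d0,   1d0, 3d0, 1d0,   0d0, 1d0, 2d0 /), (/ n, n /))
  b(:,1) = (/ 1d0, 2d0, 3d0 /)
  a2 = a                      ! both drivers overwrite their inputs
  b2 = b

  ! LU route: DGESV factors A = P*L*U and solves A*x = b.
  call dgesv(n, nrhs, a, n, ipiv, b, n, info)
  print *, 'LU  solution:', b(:,1)

  ! SVD route: DGELSS treats the same system as a least-squares problem and
  ! also returns the singular values s and the effective numerical rank.
  rcond = 1d-12               ! singular values below rcond*s(1) count as zero
  call dgelss(n, n, nrhs, a2, n, b2, n, s, rcond, rank, work, lwork, info)
  print *, 'SVD solution:', b2(:,1), '  rank =', rank
end program lu_vs_svd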



> 2) What are the pros and cons of QR, LU and SVD? I've read that LU is
> good for well-behaved matrices, SVD is best for "bad" matrices, and QR
> lies somewhere in between. But I really don't know what the author
> meant by this or why this should be the case.

This sounds like a discussion for a specific problem (maybe a
solution to a linear equation system), not a general way to
characterize the three factorization methods. You get different
information, and at different costs, depending on the kind of
application.
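
(Rough costs for dense matrices, from the standard texts, give a feel for
that trade-off: LU of a square n x n matrix takes about (2/3)n^3 flops,
Householder QR of an m x n matrix about 2mn^2 - (2/3)n^3, and the SVD
several times the cost of QR -- one reason the cheaper factorization is
preferred whenever it is numerically adequate.)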

>
> 3) Can someone suggest an online reference besides Wikipedia that
> would help me understand what each of these decompositions is good
> for?

I would suggest looking at the LAPACK documentation itself. Here is
one site

http://www.netlib.org/lapack/lug/

There are also pdf versions available. If you are interested in
some of the details, there are also separate working notes of the
LAPACK developers online.

$.02 -Ron Shepard

Ron Shepard

Dec 31, 2010, 1:53:01 PM

In article <ifkotf$h8e$1...@gosset.csi.cam.ac.uk>, nm...@cam.ac.uk
wrote:

I know that many people say this, but I disagree. I think the
Numerical Recipes books are excellent introductions to many
computational problems, including linear algebra. I do agree that
the code given in the book is sometimes not the most
robust, but on the other hand, something like LAPACK code is robust
but too complicated for a beginner to understand easily.

$.02 -Ron Shepard

Daniel Carrera

Dec 31, 2010, 2:40:10 PM

On Dec 31, 6:29 pm, "e p chandler" <e...@juno.com> wrote:
> There is a **long** series of lectures by Prof. Boyd from Stanford. Search
> for
> "Introduction to Linear Dynamic Systems" on YouTube. More than I ever wanted
> to know about least squares. I've only gotten through about half the
> lectures so far.

Found them. Very nice of Stanford to make those available.

Daniel Carrera

Dec 31, 2010, 2:47:00 PM

On Dec 31, 7:43 pm, Ron Shepard <ron-shep...@NOSPAM.comcast.net>
wrote:

> I would suggest looking at the LAPACK documentation itself.  Here is
> one site
>
>    http://www.netlib.org/lapack/lug/
>
> There are also pdf versions available.  If you are interested in
> some of the details, there are also separate working notes of the
> LAPACK developers online.

I'm blind. Where do I get the PDF? I'd actually like to have it. I've
been reading through it on and off, and sometimes my unreliable
connection keeps me from reading.

nm...@cam.ac.uk

Dec 31, 2010, 2:23:12 PM

In article <ron-shepard-D8FA...@news60.forteinc.com>,

Ron Shepard <ron-s...@NOSPAM.comcast.net> wrote:
>>
>> However, you are advised not to. Much of what it says is sound,
>> but enough is seriously misleading or even just wrong to make it
>> unsuitable for inexperienced people.
>
>I know that many people say this, but I disagree. I think the
>Numerical Recipes books are excellent introductions to many
>computational problems, including linear algebra. I do agree that
>the code itself that is given in the book is sometimes not the most
>robust, but on the other hand, something like LAPACK code is robust
>but too complicated for a beginner to understand easily.

Then clearly you aren't aware of the traps that it leads the naive
into, either because you know enough to avoid them yourself or
because the suckers haven't come to you for assistance. I have even
had to tell one that all of
the research he had done so far was worthless, because it had led him
to a totally inappropriate method for his problem. And that was over
a year's work towards a PhD.

As I said, much of it is sound, which is why it is so dangerous.
Only an expert knows when it stops being sound.


Regards,
Nick Maclaren.

glen herrmannsfeldt

Dec 31, 2010, 4:10:59 PM

nm...@cam.ac.uk wrote:
(snip, someone wrote)

>>You can read Chapter 2 of Numerical Recipes, which discusses these
>>transformations,
>>online at http://www.nrbook.com/fortran/ .

> However, you are advised not to. Much of what it says is sound,
> but enough is seriously misleading or even just wrong to make it
> unsuitable for inexperienced people.

It seems to me that they do a better job of answering some of
the simpler questions that other references simply take for granted.
That seems likely to include the OP's questions.

But yes, before you get too deep into the programming, you should
start looking at other references. That is probably especially
true for matrix related problems.

OK, one example that NR explains that (most) others don't:
For the composite Simpson's Rule, you have alternating 2/3 and 4/3
coefficients, which seem to indicate that some points are more
important than others. NR explains that one away. Now, you might
say that one shouldn't use composite Simpson's rule, but understanding
it is a good start to understanding error terms and approximation
order. (That is, a method might be exact for an Nth degree polynomial,
for appropriate N.)
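
A minimal sketch of that, on a made-up integrand: composite Simpson applied
to x**3 on [0,1], where the endpoint weight 1 and the alternating interior
weights 4 and 2 (all times h/3, hence the 2/3 and 4/3 coefficients) give
the exact answer 0.25, since the rule is exact for cubics.

program simpson_demo
  implicit none
  integer, parameter :: n = 10          ! number of subintervals (must be even)
  double precision :: h, s, x
  integer :: i

  h = 1.0d0 / n
  s = f(0.0d0) + f(1.0d0)               ! endpoints: weight 1
  do i = 1, n - 1
     x = i * h
     if (mod(i, 2) == 1) then
        s = s + 4.0d0 * f(x)            ! odd interior points: weight 4
     else
        s = s + 2.0d0 * f(x)            ! even interior points: weight 2
     end if
  end do
  s = s * h / 3.0d0                     ! overall factor h/3
  print *, 'composite Simpson:', s, '  exact:', 0.25d0

contains

  double precision function f(x)
    double precision, intent(in) :: x
    f = x**3
  end function f

end program simpson_demo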

-- glen

glen herrmannsfeldt

Dec 31, 2010, 4:22:23 PM

nm...@cam.ac.uk wrote:
(snip on the NR books)

> Then clearly you aren't aware of the traps that it leads the naive
> into, either because you know of them or because the suckers have
> come to you for assistance. I have even had to tell one that all of
> the research he had done so far was worthless, because it had led him
> to a totally inappropriate method for his problem. And that was over
> a year's work towards a PhD.

That seems usual for PhD research. Much of it is learning how
not to do problems. You can be sure that he won't make that
mistake again!



> As I said, much of it is sound, which is why it is so dangerous.
> Only an expert knows when it stops being sound.

But how does one get to be an expert? You can't just start reading
the expert references, as they usually require a lot of background
understanding first.

The alternative that NR is trying to avoid is the "black box"
solution, where you are supposed to trust the creator of the
"black box" routine to have it right. But you can't do that if
you don't know which problems the routine was designed to get
right.

-- glen

Daniel Carrera

Dec 31, 2010, 4:47:31 PM

On Dec 31, 8:23 pm, n...@cam.ac.uk wrote:
> I have even had to tell one that all of
> the research he had done so far was worthless, because it had led him
> to a totally inappropriate method for his problem.  And that was over
> a year's work towards a PhD.


Do you remember what unsound method this PhD student was using?
It would be a good example of what can go wrong with NR.


Daniel.

Nasser M. Abbasi

Dec 31, 2010, 5:08:53 PM

That is a good question, but what I was thinking when
I first read this is: how could a PhD student be
on the wrong research path for a whole year and not be told or
warned about it by their advisor?

I understand that a PhD student will always have an
assigned advisor, and they have to meet them regularly
to update them on the research and talk about how
things are going. It sounds like maybe the student was
not doing a good job of keeping their professor updated
on what they were doing?

--Nasser

Ron Shepard

Dec 31, 2010, 6:56:05 PM

In article <ron-shepard-13B7...@news60.forteinc.com>,
Ron Shepard <ron-s...@NOSPAM.comcast.net> wrote:

> > 3) Can someone suggest an online reference besides Wikipedia that
> > would help me understand what each of these decompositions is good
> > for?
>
> I would suggest looking at the LAPACK documentation itself. Here is
> one site
>
> http://www.netlib.org/lapack/lug/
>
> There are also pdf versions available. If you are interested in
> some of the details, there are also separate working notes of the
> LAPACK developers online.

After posting this, I remembered that I also often use the online
Mathematica documentation to get started on numerical problems.
Even if you don't actually use Mathematica, these usually have
citations and references to related material. Here is the one for
QR decomposition, for example:

http://mathworld.wolfram.com/QRDecomposition.html

$.02 -Ron Shepard

Daniel Carrera

Dec 31, 2010, 7:02:47 PM

On Dec 31 2010, 11:08 pm, "Nasser M. Abbasi" <n...@12000.org> wrote:
> That is good question, but what I was thinking when
> I first read this, is how could a PhD student be
> on the wrong research path for a whole year and not be told or
> warned about it by their advisor?
>
> I understand that a PhD student will always have an
> assigned advisor, and they have to meet them regularly
> to update them with the research and talk about how
> things are going.  It sounds like may be the student was
> not doing a good job in updating their professor very
> well on what they were doing?

I think it does suggest a problem with either the student or the
supervisor (or maybe they were just on different wavelengths). A year
is a lot of time. The problem should have been caught a lot sooner.

Daniel.

glen herrmannsfeldt

Dec 31, 2010, 8:32:40 PM

Daniel Carrera <dcar...@gmail.com> wrote:
> On Dec 31 2010, 11:08 pm, "Nasser M. Abbasi" <n...@12000.org> wrote:
(snip)

>> I understand that a PhD student will always have an
>> assigned advisor, and they have to meet them regularly
>> to update them with the research and talk about how
>> things are going.

> I think it does suggest a problem with either the student or the
> supervisor (or maybe they just spoke at different wavelengths). A year
> is a lot of time. The problem should have been caught a lot sooner.

As the signature of a frequent poster to this group says:

Good judgment comes from experience;
experience comes from bad judgment.
-- Mark Twain

Much of graduate school is experience. Different advisors and
students are different in how much guidance they give/need.

I would presume that the student learned much during that year,
though it seems not directly useful for his project.

If the student does exactly as the advisor says, he tends not
to learn much, following the quote above.

-- glen

nm...@cam.ac.uk

Jan 1, 2011, 6:10:01 AM

In article <2ee4d43c-19d3-45c5...@32g2000yqz.googlegroups.com>,

Daniel Carrera <dcar...@gmail.com> wrote:
>
>> I have even had to tell one that all of
>> the research he had done so far was worthless, because it had led him
>> to a totally inappropriate method for his problem. And that was over

>> a year's work towards a PhD.
>
>Do you remember what was the unsound method that this PhD student was
>using? It would be a good example of what can go wrong with NR.

It may have been quadrature for a function which went to infinity
at one boundary. The method wasn't unsound - it was merely wrong
for that problem. But that was a decade ago now, and it was just one
of many.
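
To make that kind of trap concrete, a sketch with a made-up integrand:
the integral of 1/sqrt(x) over (0,1] is exactly 2, but a composite rule
applied blindly converges very slowly, while the substitution x = t**2
removes the singularity entirely.

program singular_quadrature
  implicit none
  integer, parameter :: n = 1000          ! number of panels
  double precision :: h, s, t, x
  integer :: i

  ! Naive midpoint rule on 1/sqrt(x): about 1% low despite 1000 evaluations,
  ! because of the endpoint singularity.
  h = 1.0d0 / n
  s = 0.0d0
  do i = 1, n
     x = (i - 0.5d0) * h
     s = s + h / sqrt(x)
  end do
  print *, 'naive midpoint :', s

  ! After x = t**2 (dx = 2*t dt) the integrand is the constant 2, and the
  ! very same rule is now essentially exact.
  s = 0.0d0
  do i = 1, n
     t = (i - 0.5d0) * h
     s = s + h * 2.0d0 * t / sqrt(t**2)
  end do
  print *, 'after x = t**2 :', s
end program singular_quadrature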

Even longer ago, I and others posted a good many examples where
the advice given was seriously misleading or wrong. As I said, I
would have to recheck and am disinclined to waste more time on
that. And, of course, I am NOT a real expert on most areas of
numerical analysis, so I won't even spot many of them.


Regards,
Nick Maclaren.

robin

Jan 1, 2011, 8:01:38 AM

<nm...@cam.ac.uk> wrote in message news:ifkotf$h8e$1...@gosset.csi.cam.ac.uk...

Have you looked at any recent edition of NR?

It has gone into several editions.

Precisely what is "seriously misleading"?


Dan Nagle

Jan 1, 2011, 11:56:07 AM

Hello,

On 2011-01-01 08:01:38 -0500, robin said:

> Have you looked at any recent edition of NR?
>
> It has gone into several editions.
>
> Precisely what is "seriously misleading"?

You might want to follow the discussion at
http://www.astro.gla.ac.uk/~norman/star/sc13/sc13.htx/N-a2b3c1.html

I have been unable to load the page Van Snyder wrote discussing
his issues with NR, and he has not yet responded to an email.
I suspect the server is no more, but I don't know. I can find
no link to NR issues from Van's (apparently official) page at JPL.

--
Cheers!

Dan Nagle

Beliavsky

Jan 1, 2011, 1:12:01 PM

On Jan 1, 10:56 am, Dan Nagle <danna...@verizon.net> wrote:
> Hello,
>
> On 2011-01-01 08:01:38 -0500, robin said:
>
> > Have you looked at any recent edition of NR?
>
> > It has gone into several editions.
>
> > Precisely what is "seriously misleading"?
>
> You might want to follow the discussion at http://www.astro.gla.ac.uk/~norman/star/sc13/sc13.htx/N-a2b3c1.html

>
> I have been unable to load the page Van Snyder wrote discussing
> his issues with NR, and he has not yet responded to an email.

"Why not use Numerical Recipes?" (compiled by W. Van Snyder) is at
http://math.stanford.edu/~lekheng/courses/302/wnnr/nr.html .

nm...@cam.ac.uk

Jan 1, 2011, 12:46:40 PM

In article <61d585d5-a661-45d5...@l17g2000yqe.googlegroups.com>,

Beliavsky <beli...@aol.com> wrote:
>>
>> I have been unable to load the page Van Snyder wrote discussing
>> his issues with NR, and he has not yet responded to an email.
>
>"Why not use Numerical Recipes?" (compiled by W. Van Snyder) is at
>http://math.stanford.edu/~lekheng/courses/302/wnnr/nr.html .

The issues that I saw were mostly different, but comparable to
those mentioned. I.e. that's nowhere near a complete list.


Regards,
Nick Maclaren.

glen herrmannsfeldt

Jan 1, 2011, 6:13:23 PM

Beliavsky <beli...@aol.com> wrote:
(snip, someone wrote)

>> I have been unable to load the page Van Snyder wrote discussing
>> his issues with NR, and he has not yet responded to an email.

> "Why not use Numerical Recipes?" (compiled by W. Van Snyder) is at
> http://math.stanford.edu/~lekheng/courses/302/wnnr/nr.html .

That looks pretty good, but I will give a completely different example
to show how I consider NR.

Consider an ordinary radio receiver like most of us have in
our homes and cars. How does it work? If you open it up and look
inside, you will likely not be able to figure it out.

The Regenerative and Super-Regenerative receiver designs were developed
in the 1910's and 1920's with vacuum tubes. With one of those, you
might have been able to compare the circuit to an explanation on how
one should work, figure out what each component did, and eventually
understand it.

Modern receivers use mostly the same ideas, but the implementation
details have changed over the years. Much has been put inside
integrated circuits which you can't see inside of. If I wanted to
understand one, I would first start from the designs of the 1930's
or so, when things were simpler. After I understood those, I would
read up on how things have changed over the years. One would have
to follow the developments in sequence to have some idea how the
circuits evolved. Starting with modern circuit designs would not
help much at all.

Actually, I think I like the explanations in NR much better than
the actual code, but the code is nice, too.

If someone tried to build and sell a radio based on a book from
the 1930's (though maybe updated to transistors instead of tubes)
it isn't likely to be competitive in the market.

If one does understand that technology has changed, and that NR
isn't the final answer, then I think it isn't so bad.

OK, now another example based on the name. If you buy a recipe
book (that is, for cooking) it will be written for home cooking.
If you buy packaged food in the store, it is unlikely to be made
in the same way, and with the same ingredients. (Look on the
ingredients label of any packaged food.) Among others, there are
preservatives needed in food that will be on the shelf for a while
before being eaten, that aren't needed when it will be eaten
immediately. That doesn't reduce the market for home cook books,
but industrial food companies have their own books. One would likely
experiment with the home versions before going on to industrial
mass production.

-- glen

robin

Jan 1, 2011, 7:34:31 PM

"Dan Nagle" <dann...@verizon.net> wrote in message news:ifnmb7$1p7$1...@news.eternal-september.org...

| Hello,
|
| On 2011-01-01 08:01:38 -0500, robin said:
|
| > Have you looked at any recent edition of NR?
| >
| > It has gone into several editions.
| >
| > Precisely what is "seriously misleading"?
|
| You might want to follow the discussion at
| http://www.astro.gla.ac.uk/~norman/star/sc13/sc13.htx/N-a2b3c1.html

The opening comment refers to FORTRAN IV.

That's really really really out-of-date.

There have been at least two editions of NR since then.


robin

Jan 1, 2011, 7:42:12 PM

"Beliavsky" <beli...@aol.com> wrote in message
news:61d585d5-a661-45d5...@l17g2000yqe.googlegroups.com...

This too is out-of-date. Most of the remarks relate to the early 1990s.

AFAIK the current edition is 2007.


Ron Shepard

Jan 2, 2011, 1:42:07 AM

In article <4d1fc9ec$0$99355$c30e...@exi-reader.telstra.net>,
"robin" <rob...@dodo.mapson.com.au> wrote:

> "Why not use Numerical Recipes?" (compiled by W. Van Snyder) is at
> http://math.stanford.edu/~lekheng/courses/302/wnnr/nr.html .
>
> This too is out-of-date. Most of the remarks relate to the early 1990s.
>
> AFIK the current edition is 2007.

An undesirable "feature" of that latest version is that the sample
code is written in C++, and it uses enough of the obscure features
of C++ that it is difficult for someone who knows only C and Fortran
to open the book to some arbitrary section and follow the details.
Also, some of the simplicity is lost in the discussion sections in
the C/C++ versions of the book because of the zero-based array index
convention in those languages.

I would have hoped that a modern Fortran version would have been out
a year or so later, but it has been four years now, and still no
Fortran version. This is unfortunate because there are some
interesting new methods that are discussed in the latest version,
such as multidimensional function fitting with scattered data points.

$.02 -Ron Shepard

robin

Jan 2, 2011, 3:19:38 AM

<nm...@cam.ac.uk> wrote in message news:iflaj0$j2q$1...@gosset.csi.cam.ac.uk...

And everything else is sound and not dangerous?

For many of the packages, you get to see only the descriptions of the
interfaces, and none of the actual code. You might get an outline of the method.

With NR, you get not only a detailed description of the method
plus the mathematical derivation, but also all the code.

Thus, if anything goes wrong, you can at least examine the code
and determine what needs to be done with the data,
modify the algorithm, or maybe even choose another algorithm.

With a "black box", you're stuck with a routine that crashes somewhere --
and it isn't necessarily obvious why it crashed.

