I have the text Numerical Recipes in C. I see no need to convert the
functions from this book into MATLAB M-files or Mex-files. Most of
the functions in the book already exist in MATLAB in much more
numerically robust implementations. So why do all this work? Maybe
that's why no one has done it yet.
Duane
Navin
The Numerical Recipes books are a pretty good description of the
algorithms, for those who feel like they need to understand the algorithms.
The code in early editions was famously buggy.
If Matlab doesn't directly provide the routines (and it probably
provides most of them), use the fortran or c version of the book,
treat the printed code as pseudo code, and write it in matlab.
Scott
I think Duane's point was that you don't need to use their (some may
say sub-optimal) code (me included). Which NR functions are you
using? We at CSSM may be able to replace them with ML equivalents.
- Steve
p.s. In my experience, I've found NR (in F,f,c,...) a great reference
for the algorithms. But pretty crappy for code. Read the relevant
chapter and then write the code in ML.
I guess I have to come down on the side of the dissenters here. I've found
NR (in C/C++, FORTRAN) to be very useful over the years, and while I have
heard of the occasional bug, I have not experienced any myself. My code for
Lomb-Scargle periodogram analysis (on the FEX) is a direct translation from
NR/FORTRAN into ML. And while writing _efficient_ code in ML might not
always be trivial, translating directly (including lots of unnecessary
nested for loops, etc.) into mcode is pretty easy. So do as Scott
recommends: use the NR in your favorite non-ML language as a coding guide
and translate away!
Cheers,
Brett
>always be trivial, translating directly (including lots of unnecessary
>nested for loops, etc.) into mcode is pretty easy.
As I have mentioned here several times, Fortran 90 is an array
language, like Matlab. Currently, if you buy the Fortran version of
Numerical Recipes from http://www.nr.com/com/storefront.html , you get
Fortran 77 and Fortran 90 versions of NR. The F90 version uses array
operations and thus does not have "unnecessary [] nested loops". It
would be much better to convert that to Matlab than the F77 version.
Just out of curiosity, what exactly is it that Numerical Recipes can do
that MATLAB can't already do?
/Johan
Hmm, that's blown my morning coffee break! I'm going to have to take
my "C" copy outside, blow off the dust and then compare function for
function.
Just reiterating previous points. ML help doesn't tell you WHY or
HOW things work, just how to use them. I read NR a bit like
Schildt's ANSI C book - I ignore the code examples.
In the humble words of Homer J. Simpson... D'Oh!
I thought this whole thing was about obtaining some functionality of NR
and implementing it in ML. Now I get it... Don't waste any more coffee
breaks ;)
/Johan
I'm sure that most of the bugs are resolved. If you google up "Numerical
Recipes bugs" you'll see that the NR folks dismiss them as "distressing
rumors", but pages like http://www.uwyo.edu/buerkle/misc/wnotnr.html go
into some detail about the nature of some of the problems.
Push comes to shove, when using any sort of numerical libraries, try to
use something with a huge user base and many reviews. IMSL and netlib
come to mind, as does (of course) matlab.
I'm a big fan, though, of the idea that to use an algorithm correctly, it
really can help to have an idea of what is in the algorithm. In terms of
reviewing bunches of algorithms in a way that people can understand them,
NR is fantastic. Of course, this idea has its limits-- NL2SOL is one of
my favorite nonlinear optimizers, and it can use many different
optimization routines, choosing which one it will use for any given
iteration based on the nature of the problem. I don't really have any
interest in learning what lies under the hood of each and every
algorithm.
BTW, has anyone written a matlab wrapper for NL2?? Fortran has its
proponents (myself included), but if any update of the language actually
made I/O easy, then it wouldn't be Fortran anymore! I'd much rather work
from the matlab side.
Scott
> I'm sure that most of the bugs are resolved. If you google up
> "Numerical Recipes bugs" you'll see that the NR folks dismiss them as
> "distressing rumors", but pages like
> http://www.uwyo.edu/buerkle/misc/wnotnr.html go into some detail about
> the nature of some of the problems.
>
http://www.accu.org/bookreviews/public/reviews/n/n003134.htm contains a
very interesting, and fairly recent, perspective on NR that I have never
even considered.
Scott
NAG should also be considered. As described at
http://www.nag.com/nagware/matlab_products.asp , it is possible to call
the NAG Fortran library from Matlab.
<snip>
>Fortran has its proponents (myself included), but if any update of the
>language actually made I/O easy, then it wouldn't be Fortran anymore!
>I'd much rather work from the matlab side.
IMO Fortran I/O is easy and powerful. If x is a matrix with 10 columns
then just something like
write (some_unit,"(10f10.4)") x
writes the matrix to a file. If you don't know how many columns there
will be, it is easy to build the format string at run-time. A matrix
can be read with just
read (some_unit,*) x
What I/O do you think is awkward in Fortran?
> What I/O do you think is awkward in Fortran?
>
>
It's definitely the reads, not the writes. Perhaps things have changed
since I was a grad student, desperately trying to make sure that each
character in my input file was in the correct column, let alone in the
correct format. Mixed-format files were especially tough, not least
because cssm was not yet popular enough to post URGENT messages
for help.
Scott
>> What I/O do you think is awkward in Fortran?
>It's definitely the reads, not the writes. Perhaps things have changed
>since I was a grad student, desperately trying to make sure that each
>character in my input file was in the correct column, let alone in the
>correct format.
You were using a read with a specified format. It is easier to use a
list-directed read, present at least since Fortran 77. It lets you read
an entire matrix from a file with one read (as in Matlab), as I showed,
regardless of what character is in what column.
Fortran is still called Fortran, the OTPL :).
and then later:
> <http://www.accu.org/bookreviews/public/reviews/n/n003134.htm>
> contains a
> very interesting, and fairly recent, perspective on NR that I have
> never
> even considered.
I think there would be a market for "Numerical Recipes in Pseudo
Code" or even "Numerical Recipes Without Code".
What would be the difference between "Numerical Recipes
Without Code" and any other of the many good texts on
numerical analysis? (Personally, as a text on numerical
analysis without the code, Numerical Recipes would not
be my favorite. I believe it is the code which raises
it a few notches in value.)
I just did a search on Amazon, using the keywords
"numerical matlab", and found at least a few texts with
a bias towards matlab that looked quite nice. This
includes one by a certain notable matlab "contributor".
John
--
As someone who has played bridge many times against both
Saul and his wife Ros and who also owns books by both
of them, one day I'll remember to bring both books along
to a tournament, asking each to autograph the other's
book.
> Personally, as a text on numerical
> analysis without the code, Numerical Recipes would not
> be my favorite
What is your favorite?
Scott
Tough question. I might want to break it down into
categories. I'll restrict my comments to texts which
I think the members of the NG would like, and would
not comment on books I have not read. (I don't have
Cleve's book YET so I won't comment. It is one I will
pick up when I have an opportunity.)
My overall favorite text on general numerical analysis
might be Stewart's Afternotes series. A fun book(s)
to read. "Numerical Methods that Work" by Acton, was
another that I enjoyed reading.
On linear algebra, it would be Golub & Van Loan as
the all encompassing text. I also liked Trefethen &
Bau. For least squares problems, Lawson & Hanson
is the classic, but Åke Björck's book "Numerical
Methods for Least Squares Problems" has also been
handy. Seber & Wild's book, "Nonlinear Regression",
covers that topic very nicely.
Where splines are concerned, I learned it first from
De Boor, "A Practical Guide to Splines". Still my
favorite there.
On optimization, I always liked Fletcher's "Practical
Methods of Optimization" or Gill, Murray & Wright.
Many others are equally good.
I think that no self-respecting numerical animalist
would be without Abramowitz & Stegun - "Handbook of
Mathematical Functions". I own several copies, one
for home and one at work. I even have tabs on the
sections I go to most often. If you are really into
the approximation of functions, then a truly neat
text (if hard to find) is by J.F. Hart - "Computer
Approximations".
Finally, one that I've found handy is the Graphics
Gems series. (Think of Numerical Recipes, but on
steroids.)
That is a start. I've clearly left many deserving
titles out, as well as entire domains of numerical
analysis. If I've left out your own favorite text,
then it need not have been a deliberate omission,
more likely an example of my own limited library
and memory.
John
--
All comments above are my own personal opinions,
and need NOT reflect those of anybody else in the
world.
> Tough question. I might want to break it down into
> categories. I'll restrict my comments to texts which
> I think the members of the NG would
....
Thanks-- That's a pretty impressive list. I'll have to give that
Afternotes series a try
Scott
> I think that no self-respecting numerical animalist
^^^^^^^^^
> would be without Abramowitz & Stegun - "Handbook of
> Mathematical Functions". I own several copies, one
> for home and one at work.
Wow, John, you must be really ferocious when you attack a problem!
--
Doug Schwarz
dmschwarz&urgrad,rochester,edu
Make obvious changes to get real email address.
"Johan Carlson" <Johan.NOSP...@csee.ltu.se> skrev i meddelandet
news:41eecb83$0$176$cc7c...@news.luth.se...
Coming back to Numerical Recipes again: I still can't convert the
Numerical Recipes in C code (the Cambridge book) into matlab. Does
anyone have the Bulirsch-Stoer method? I have found other codes in
various books.
Please respond.
Navin
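For what it's worth, the core of Bulirsch-Stoer is small enough to sketch from the book's description alone. Below is a minimal C sketch for a scalar ODE y' = f(x, y) — this is NOT the NR bsstep/mmid code, and all names are my own: just a modified-midpoint integration at two substep counts, followed by one Richardson-extrapolation step in h^2 (the real method extrapolates over a whole sequence of substep counts and controls the step size).

```c
/* Sketch only: one extrapolated Bulirsch-Stoer-style step for y' = f(x,y).
   Not the NR routines; names and structure are hypothetical. */

/* Modified midpoint: advance y from x over one big step H using n substeps. */
static double mmid(double (*f)(double, double),
                   double x, double y, double H, int n)
{
    double h = H / n;
    double zm = y;                    /* z_{m-1} */
    double z  = y + h * f(x, y);      /* z_1 */
    for (int m = 1; m < n; m++) {
        double znext = zm + 2.0 * h * f(x + m * h, z);
        zm = z;
        z  = znext;
    }
    /* final smoothing step of the modified midpoint method */
    return 0.5 * (z + zm + h * f(x + H, z));
}

/* One extrapolated step: estimates with n and 2n substeps, combined so the
   leading h^2 error term cancels (error ratio 4:1, hence the /3). */
double bs_step(double (*f)(double, double),
               double x, double y, double H, int n)
{
    double T1 = mmid(f, x, y, H, n);
    double T2 = mmid(f, x, y, H, 2 * n);
    return T2 + (T2 - T1) / 3.0;
}
```

For example, for y' = -y starting from y(0) = 1, a single step of H = 1 with n = 2 already lands close to exp(-1), and closer than either raw modified-midpoint estimate alone. That said, for use in MATLAB you would normally just call ode45/ode113 rather than translate this.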
Man, I think you just blew Mathworks coffee break -- they've been
spending big bucks conning the schoolboys that WHY and HOW things work
doesn't matter.
I am trying to build some mex files from NR C code, and unfortunately
I am running into some doubts along the way. For example, how can I get
my inputs ( plhs[5] and plhs[6] ) in my mexFunction for converting NR's
frprmn.c function?
void frprmn(float p[], int n, float ftol, int *iter, float
*fret,float (*func)(float []), void (*dfunc)(float [], float []))
In other words, how can I have a pointer to 'float (*func)(float
[])' and to 'void (*dfunc)(float [], float [])' inside my
mexFunction?
I guess this can be useful not only for NR conversion but for any
C-code conversion... which is what you want when you confront bottleneck
computations (usually for-loops) that do not run fast enough in
MATLAB.
Thanks for your help,
Marcelo
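The usual answer to Marcelo's question is a trampoline: you cannot hand frprmn a MATLAB function handle directly, so you stash the handle (the mxArray* from your input array) in file-scope variables and pass frprmn fixed C functions with the exact signatures it expects. In a real mex file the trampoline body would call mexCallMATLAB on the stored handle; the sketch below (so that it compiles without mex.h) substitutes a plain C function pointer as a hypothetical stand-in for the stored mxArray.

```c
/* Trampoline pattern for NR's frprmn, which wants plain C function pointers:
 *   void frprmn(float p[], int n, float ftol, int *iter, float *fret,
 *               float (*func)(float []), void (*dfunc)(float [], float []));
 * In a mex file, stored_func would instead be an mxArray* saved from the
 * right-hand-side inputs, and func_trampoline would invoke it with
 * mexCallMATLAB. Everything below is an illustrative stand-in. */

static float (*stored_func)(const float *, int);  /* stand-in for the handle */
static int stored_n;                              /* problem dimension */

/* Trampoline with the exact signature frprmn expects for `func`. */
static float func_trampoline(float p[])
{
    return stored_func(p, stored_n);
}

/* A caller-supplied objective, e.g. a sum of squares. */
static float sumsq(const float *p, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++) s += p[i] * p[i];
    return s;
}

/* What the mexFunction body would do before calling frprmn. */
float demo(void)
{
    float p[3] = {1.0f, 2.0f, 3.0f};
    stored_func = sumsq;             /* in a mex file: save the mxArray* */
    stored_n = 3;
    return func_trampoline(p);       /* frprmn calls it exactly like this */
}
```

Note that a mexCallMATLAB round trip on every objective evaluation is slow, so this pattern rarely beats writing the whole loop in MATLAB unless the C side does substantial work per call.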
1) To simply answer the original question, YES!
The authors of Numerical Recipes have, for edition 3, provided a full mex-interface header file.
Find it here:
http://www.nr.com/nr3_matlab.html
It makes any Numerical Recipes algorithm very easy to implement in Matlab.
2) Why would anyone use Numerical Recipes?
Because it explains easily and clearly the code, why it's implemented that way, and how it relates to the background theory. Other textbooks I've read (admittedly I'm not 'well-read') are either on the 'theory' or the 'implementation' but rarely give a good crossover.
3) What does NR have that ML doesn't
At least one thing: Daubechies wavelets adjusted for the interval, rather than using periodic BCs. There may be more; but this is what I needed!
All the best
Tom Clark
I took a look at the sprstm function, which computes C=A*B where A and B are both sparse matrices. There is a pair of outer loops, which if translated to MATLAB would look like:
for i = 1:m
for j = 1:k
...
end
end
where A is m-by-k. This means that the time complexity of their code is Omega(m*k), which can far exceed the flops needed to compute a sparse matrix product. To put this in perspective, MATLAB can do C=A*B for two tridiagonal matrices A and B (in sparse format) of dimension one million, in about one third of a second, taking O(n) time to do so. MATLAB hardly breaks a sweat. Numerical Recipes with the same pair of matrices would take O(n^2) time (perhaps more), or about a million times slower (!).
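The scatter approach that makes the work proportional to the flops (rather than to m*k) is compact enough to sketch. Below is a minimal C version in compressed-sparse-column form; the struct and names are my own, not the CSparse API, and the caller is assumed to provide output arrays big enough for the result.

```c
#include <stdlib.h>

/* Sparse C = A*B by column-wise scatter, CSC format.
 * p has n+1 column pointers; i holds row indices; x holds values.
 * Work is O(flops + m + n), not O(m*k) as in the NR-style double loop. */
typedef struct { int m, n; int *p, *i; double *x; } csc;

void spmul(const csc *A, const csc *B, csc *C)
{
    int m = A->m;
    double *w  = calloc(m, sizeof *w);    /* dense accumulator, one column */
    int *mark  = malloc(m * sizeof *mark);/* which column last touched row r */
    for (int r = 0; r < m; r++) mark[r] = -1;
    int nz = 0;
    for (int j = 0; j < B->n; j++) {
        C->p[j] = nz;
        for (int t = B->p[j]; t < B->p[j + 1]; t++) {
            int k = B->i[t];              /* B(k,j) != 0: scatter A(:,k) */
            double bkj = B->x[t];
            for (int s = A->p[k]; s < A->p[k + 1]; s++) {
                int r = A->i[s];
                if (mark[r] != j) {       /* first touch of row r in col j */
                    mark[r] = j;
                    C->i[nz++] = r;
                    w[r] = 0.0;
                }
                w[r] += A->x[s] * bkj;
            }
        }
        for (int t = C->p[j]; t < nz; t++) C->x[t] = w[C->i[t]];
    }
    C->p[B->n] = nz;
    C->m = m;
    C->n = B->n;
    free(w);
    free(mark);
}
```

The key point is that only nonzero entries of B and the corresponding columns of A are ever visited, so two tridiagonal million-by-million matrices cost O(n) work, exactly as described above.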
You can see a simple synopsis of the algorithm used in MATLAB's C=A*B here:
http://www.cise.ufl.edu/research/sparse/CSparse/CSparse/Source/cs_multiply.c
http://www.cise.ufl.edu/research/sparse/CSparse/CSparse/Source/cs_scatter.c
or you can read the actual code used internally in MATLAB 7.5 and later here:
http://www.mathworks.com/matlabcentral/fileexchange/15139
Unless NR has functions that MATLAB doesn't have, just use MATLAB. It's faster and more robust.