I'm looking for the computational complexity of the Nelder-Mead
Simplex method for non-linear programming. Does anybody have a
reference for this? It uses the direct search method and I'm using it
for n=1 dimension, i.e. one optimization variable of interest.
Thanks in advance.
Use fminbnd for 1-d problems. It should be more
efficient in general.
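For illustration, here is the same comparison sketched in Python using SciPy's rough analogues of the two solvers (an assumption on my part that the analogy holds: Nelder-Mead plays the role of fminsearch, the bounded Brent-style method that of fminbnd; the quadratic test function is just an example):

```python
# Sketch: compare a Nelder-Mead search against a bounded 1-D solver
# (SciPy analogues of fminsearch and fminbnd, respectively).
from scipy.optimize import minimize, minimize_scalar

f = lambda t: (t - 2.0) ** 2 + 1.0   # strictly convex, minimum at t = 2

# Nelder-Mead (the algorithm behind fminsearch), started at 0
nm = minimize(lambda v: f(v[0]), x0=[0.0], method='Nelder-Mead',
              options={'xatol': 1e-8})

# Bounded scalar search (same family as fminbnd)
bd = minimize_scalar(f, bounds=(0.0, 5.0), method='bounded')

print('Nelder-Mead :', nm.x[0], 'in', nm.nfev, 'evaluations')
print('bounded     :', bd.x, 'in', bd.nfev, 'evaluations')
```

Both find the minimizer; the bounded solver typically does so in fewer function evaluations, which is the point about efficiency.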
There is no estimate that I know of for what you want.
John
Are you looking for Best Case, Worst Case, or Expected Case running times?
Clearly the computational time is going to depend upon the function.
I would have to think about it more, but I _suspect_ that even in 1 dimension,
the algorithm can have infinite running time -- especially since the algorithm
itself does not define stopping criteria. Anyhow, consider that even in a
single variable, a function can fluctuate infinitely often between any two
rational coordinates (the cardinality of the real numbers exceeds that of
the rationals). Floating-point numbers are in truth only
rational numbers so you could miss an infinite number of deep minima. Your 1D
function might _look_ completely flat at all representable floating point
numbers and yet have any amount of variability in between those numbers.
Clearly then, unless you put some strict limits on the form and smoothness of
the function being minimized, the running time of the Simplex method might
bear little relation to any estimate based on the function's apparent
smoothness. Broaden the irregularity out a bit, and which basin you fall into
might depend on the rounding properties of the CPU.
Basically, I suspect that you could construct a 1D function for which the NM
Simplex method became a chaotic search over its surface. I do not, however,
assert this as a fact, as I do not know enough about the requirements for a
function to be chaotic.
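A small sketch of the basin-sensitivity point, using SciPy's Nelder-Mead as a stand-in for fminsearch (a toy example of my own, not a chaotic construction): two starts straddling the ridge of cos(x) at x = 0 land in different minima.

```python
# Sketch: Nelder-Mead basin sensitivity in 1-D
# (SciPy's implementation standing in for MATLAB's fminsearch).
import numpy as np
from scipy.optimize import minimize

f = lambda v: np.cos(v[0])   # local minima at -pi and +pi, local max at 0

# Two starting points on opposite sides of the ridge at x = 0
left = minimize(f, x0=[-0.1], method='Nelder-Mead')
right = minimize(f, x0=[0.1], method='Nelder-Mead')

print(left.x[0], right.x[0])   # one near -pi, the other near +pi
```

Which basin the simplex falls into is decided entirely by the starting point, exactly as argued above.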
I'm using fminsearch in Matlab. Do you have any bound on this? I know
that in the worst case it can take an unbounded number of iterations,
since progress depends on the "luck" of the polytope in finding a good
search direction. I've proven that my function is strictly convex, so it
has only one local (and hence global) minimum. I have also read that, in
one dimension, Nelder-Mead is guaranteed to converge if the function is
strictly convex.
I agree that in the worst case, it is infinite. Expected case would
probably be the smartest way to go, though best case would be
interesting to see as well. With strict convexity I suspect it will be
easier to compute as there will be very few fluctuations.
Any thoughts on this? Thanks.
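One way to get a feel for the expected case is to measure it. Here is a quick Python sketch (SciPy's Nelder-Mead standing in for fminsearch; the strictly convex test function is my own example): the evaluation count grows slowly as the tolerance tightens, which is what geometric convergence would suggest.

```python
# Sketch: count function evaluations as the tolerance tightens,
# on a strictly convex 1-D function (SciPy Nelder-Mead).
from scipy.optimize import minimize

f = lambda v: (v[0] - 1.0) ** 2   # strictly convex, minimum at x = 1

counts = []
for tol in (1e-2, 1e-4, 1e-6, 1e-8):
    r = minimize(f, x0=[10.0], method='Nelder-Mead',
                 options={'xatol': tol, 'fatol': tol})
    counts.append(r.nfev)

print(counts)   # evaluation counts grow as the tolerance tightens
```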
Anyone?
No. There is no limit for fminsearch, and no estimate of
the time required. (Sorry, in my previous response I typed
too quickly; I meant to say that there is NO estimate that
I know of.)
That is not because such an estimate is impossible to
build, but merely because it is not terribly worthwhile
to do so. You can easily dive into the algorithm in one
dimension and show a lower limit on how many steps
fminsearch will take before it chooses to cut an interval
into one that is half as long. This will relate fminsearch
to a bisection-like scheme.
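To make the bisection analogy concrete, here is a minimal 1-D Nelder-Mead sketch of my own (a two-point "simplex" with reflection and contraction only; this is a simplification, not the actual fminsearch code), instrumented to record the bracketing interval length:

```python
# Minimal 1-D Nelder-Mead sketch: the "simplex" is a pair of points;
# reflect the worst point through the best, contract on failure.
def nelder_mead_1d(f, x0, x1, tol=1e-8, max_iter=200):
    widths = []
    for _ in range(max_iter):
        if f(x1) < f(x0):            # keep x0 as the best point
            x0, x1 = x1, x0
        widths.append(abs(x1 - x0))  # current interval length
        if abs(x1 - x0) < tol:
            break
        xr = 2.0 * x0 - x1           # reflect worst through best
        if f(xr) < f(x0):
            x1 = xr                  # accept the reflection
        else:
            x1 = 0.5 * (x0 + x1)     # contract toward the best
    return x0, widths

f = lambda x: (x - 3.0) ** 2         # strictly convex, minimum at 3
xmin, widths = nelder_mead_1d(f, 0.0, 1.0)
print(xmin)                          # close to 3
print(widths[:6])                    # interval lengths per iteration
```

On a strictly convex function, once reflections stop helping, every step is a contraction and the interval halves each iteration, which is the bisection-like behaviour in question.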
Feel free to do so.
John
"Ginu" <oshe...@gmail.com> wrote in message
news:663d5f4f-1f5d-432f...@g6g2000yql.googlegroups.com...
You also asked this on the sci.math.num-analysis newsgroup:
http://mathforum.org/kb/thread.jspa?threadID=2105030&tstart=0
and I think Gordon Sande's question to you is a good one. How are you
planning to use this information?
--
Steve Lord
sl...@mathworks.com
comp.soft-sys.matlab (CSSM) FAQ: http://matlabwiki.mathworks.com/MATLAB_FAQ
To contact Technical Support use the Contact Us link on
http://www.mathworks.com
Computational complexity must be related to a "problem size". When you
specify a convex function of 1 variable as your only case of interest,
then you have eliminated all variability in the problem size, and
"complexity" becomes meaningless. There is still a potential question of
_efficiency_, which could potentially be framed in terms of the number
of function evaluations, but that's not the same thing as _complexity_.
I have a network of V nodes, each of which will execute this
optimization algorithm. Hence V is the variable. However, I am simply
looking for the computational complexity when I have a convex function
of 1 variable as I can translate that to my problem pretty easily.
That is why I am looking at this case. If I can find an average
convergence rate for this single case, I can do the rest myself.