
how to set step size for fminunc


Matt Hart

Mar 25, 2004, 5:31:52 PM
Hi everybody,

I don't seem to find an option that allows me to set the minimum step
size used in fminunc. I am using it as a first stage in an optimization
process and want very fast convergence to the region where the optimum
lies. The function is such that fminunc cannot find the actual minimum
but it is smooth enough globally such that a few iterations should be
enough to find the approximate region. Then I am switching to a more
suitable algorithm to find the optimum.

Right now fminunc reduces the step size too much after the first few
iterations which prevents it from converging.

So how can I let the step size be >=0.1 say?

thanks, matt.

Peter Spellucci

Mar 26, 2004, 10:37:06 AM

In article <651e0e77e7937f21...@news.teranews.com>,
you have no influence on the stepsize. limiting the stepsize to be at least 0.1
could even destroy the convergence of the method: the stepsize is determined
by the requirement of "sufficient decrease". there is the possibility of using
very crude TolX and TolFun settings, but it is hard to see how these should be
chosen without knowing a lot about your function.
hth
peter
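The crude-tolerance idea Peter describes might look like the sketch below. The specific tolerance values, the objective name `myObjective`, and the start point `x0` are illustrative assumptions, not recommendations:

```matlab
% Sketch: loosen TolX/TolFun so fminunc stops early instead of
% grinding the step size down. All values here are guesses that
% would need tuning for the actual objective.
opts = optimset('TolX',    0.1, ...   % stop once steps shrink below ~0.1
                'TolFun',  1e-2, ...  % stop on small objective decrease
                'MaxIter', 20);       % cap iterations for a coarse pass
[xCoarse, fCoarse] = fminunc(@myObjective, x0, opts);
```

Note this only makes fminunc *terminate* once the step gets small; it does not force the line search to take steps of at least 0.1.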

Matt Hart

Mar 26, 2004, 7:06:30 PM

[...]

>
> you have no influence on the stepsize. limiting the stepsize to be at least 0.1
> could even destroy the convergence of the method: the stepsize is determined
> by the requirement of "sufficient decrease". there is the possibility of using
> very crude TolX and TolFun settings, but it is hard to see how these should be
> chosen without knowing a lot about your function.
> hth
> peter

Peter,

Thanks for this, even if it is not the answer I was hoping for. Here
are some reasons why I want to do something like this. My objective
function does not have derivatives (it's a mass of indicator functions).
It can be solved quite nicely using the new pattern search in the GADS
toolbox (fminsearch takes too long). However, for pattern search to
really work I need to bound the parameter space. If you plot the
function for just 2 variables it looks really smooth, so as long as the
step size doesn't get too small fminunc seems to converge nicely. At
the micro level, however, things aren't nice, so the iteration fails,
and that's the point at which I want to switch over to pattern search.

The reason this should work is that a friend tried the objective
function in Gauss, and there it comes fairly close to the solution. So if
I could get something like that to work first and then feed those values
into the pattern search with a reasonable bound, I should be fine.
Fminunc, however, doesn't like this.
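The two-stage approach described above (a coarse fminunc pass to locate the region, then patternsearch with bounds around the result) might be sketched as follows. `myObjective`, `x0`, the tolerances, and the box half-width `delta` are all placeholder assumptions:

```matlab
% Stage 1: coarse fminunc run with loose tolerances, stopping before
% the step size collapses into the non-smooth micro-level behavior.
coarseOpts = optimset('TolX', 0.1, 'TolFun', 1e-2, 'MaxIter', 20);
x1 = fminunc(@myObjective, x0, coarseOpts);

% Stage 2: pattern search (GADS toolbox) inside a box around x1.
delta = 1;                        % guessed half-width of the search box
lb = x1 - delta;
ub = x1 + delta;
x2 = patternsearch(@myObjective, x1, [], [], [], [], lb, ub);
```

The box width would have to be chosen from how far the coarse solution is trusted to be from the true optimum.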

Do you know of any matlab code on the net which does something like
fminunc but where I could fix the step size?

thanks, matt.


Peter Spellucci

Mar 29, 2004, 8:28:06 AM

In article <51ce1c459f2d50ca...@news.teranews.com>,

in a mathematically purist way you could smooth your function and use the
smoothed one, but from my experience this helps little (because a steep
slope is as bad as a kink). but here is a source which might help you
(the announcement came today in the sci.op-research group):
%%%%%%%%%%%%%%
(from arnold neumaier):
SNOBFIT (Stable Noisy Optimization by Branch and FIT)
is a MATLAB 6 package for the robust and fast solution
of noisy optimization problems with continuous variables
varying within bounds, possibly subject to additional
soft constraints. Discrete variables are not supported.

Objective function values must be provided by a file-based
interface; care is taken that the optimization proceeds
reasonably even when the interface produces noisy or
even occasionally undefined results (hidden constraints).
The interface makes it possible to use SNOBFIT with new
data entered by hand, or by any automatic or semiautomatic
experimental system.

This makes SNOBFIT very suitable for applications to the
selection of continuous parameter settings for simulations
or experiments, performed with the goal of optimizing some
user-specified criterion. Since multiple data points can be
entered, SNOBFIT can take advantage of parallel function
evaluations.

The method combines a branching strategy to enhance the
chance of finding a global minimum with a sequential
quadratic programming method based on fitted quadratic
models to have good local properties. Various safeguards
address many possible pitfalls that may arise in practical
applications, for which most other optimization routines
are ill-prepared. Soft constraints are taken care of by
a new penalty-type method with strong theoretical properties.

Source code and a description of the methods used can be
found at
http://www.mat.univie.ac.at/~neum/software/snobfit/
or via my Global (and Local) Optimization web site.

%%%%%%%%%%%

hth
peter

Matt Hart

Mar 29, 2004, 2:22:46 PM
[...optimization advice]

(as always) very helpful. thanks, matt.

Marcelo Marazzi

Mar 31, 2004, 2:27:20 PM
There isn't an option for setting a lower bound on the stepsize;
of course, you can always try adding one line of Matlab code to
enforce this in the algorithm, though this ad hoc modification is
not guaranteed to work.

If your function is not smooth enough, it may be that the default
step in the finite-difference calculation is also too small for
your case. One thing to try is to increase it via options.DiffMinChange.
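Enlarging the finite-difference step as suggested above might look like this sketch; the step values are illustrative guesses, and `myObjective` and `x0` are placeholders:

```matlab
% Sketch: a larger minimum finite-difference step so the gradient
% estimate averages over the micro-level roughness rather than
% sampling it. Values are guesses, not recommendations.
opts = optimset('LargeScale',    'off', ...  % medium-scale: finite differences
                'DiffMinChange', 0.05, ...   % default is much smaller
                'DiffMaxChange', 0.5);
x = fminunc(@myObjective, x0, opts);
```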

-marcelo

Olivier Salvado

Jun 14, 2004, 4:10:38 PM
That's what I did: I modified fminsearch to initialize the trial
steps to a larger, user-specified size, and to take DiffMinChange
into account in the simplex. Not super clean, but it works in my
case...

I posted the new code on matlab central: fminsearchOS.

Olivier
