
Supplying a Gradient to Fmincon


DOD
Apr 13, 2011, 12:29:43 PM
I am trying to follow the advice given here

http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-7.html

but I am having trouble. I have a complicated objective to minimize
subject to complicated nonlinear constraints, all of which are
functions of my 3 choice variables, and a wide variety of parameters.
My script that starts everything looks like this:

% define a bunch of parameters
gamma= .2;
beta = .3;
x0=[.5 .5 .5];
%etc
solution = nested_minimization_program(x0,gamma,beta)


Nested_minimization_program looks like this:

function out = nested_minimization_program(x0,gamma,beta)
options = optimset('GradObj','on');
out = fmincon(@objective,x0,[],[],[],[],[0 0 0],[1 1 1],@nonlin,options);
    function [obj, obj_gradient] = objective(x)
        [obj, obj_gradient] = complicated_objective(x,gamma,beta);
    end
    function [ineq_constraint, eq_constraint] = nonlin(x)
        [ineq_constraint, eq_constraint] = complicated_constraints(x,beta,gamma);
    end
end

complicated_objective is a file that returns the value of the
objective as its first output and the analytical gradient as its
second. complicated_constraints returns a vector of nonlinear
inequality constraints as its first output and a vector of
nonlinear equality constraints as its second.

The reason to do this is so that I can use the @objective and @nonlin
syntax for fmincon; objective and nonlin are only functions of x, not
of the parameters, because they are subfunctions of a function that
has been passed the parameters already. I believe this is the form I
should use in order to pass the gradient and the nonlinear constraints
on to fmincon. My problem is that when I run this code, I get the
following error

> Warning: Trust-region-reflective algorithm does not solve this type of problem,
> using active-set algorithm. You could also try the interior-point or sqp
> algorithms: set the Algorithm option to 'interior-point' or 'sqp' and rerun.
> For more help, see Choosing the Algorithm in the documentation.

I.e., for some reason fmincon is leaving the trust-region-reflective
algorithm and going to active-set, which does not make use of my
analytical gradient. The requirements for fmincon to use analytical
gradients are, according to http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-3.html:

>Write code that returns:
> The objective function (scalar) as the first output

> The gradient (vector) as the second output

>Set the GradObj option to 'on' with optimset.


objective returns a scalar value of the objective and a gradient, as
required, and GradObj is turned on, so I don't see the problem.

Matt J
Apr 13, 2011, 4:08:05 PM
DOD <dco...@gmail.com> wrote in message <c3d4297a-b9d6-4819...@a26g2000vbo.googlegroups.com>...

> My problem is that when I run this code, I get the
> following error
=======================

You mean "warning".


> IE, for some reason fmincon is leaving the Trust-region-reflective
> algorithm

============================


because trust-region only supports
xor(bound constraints, linear equality constraints)
as mentioned in the FMINCON doc page.


> and going to active set, which does not make use of my
> analytical gradient.

====================

Active set will not ignore your analytical gradient.
It simply doesn't require you to supply it (unlike trust-region).

http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-3.html#bsj1e55

However, you might try using interior-point as the warning message suggests to see if performance improves.


Alan Weiss
Apr 13, 2011, 4:12:33 PM
The reason that you cannot use the trust-region-reflective algorithm has
nothing to do with the way you pass extra parameters. As documented
here, among other places:
http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-18.html#bsbwxm7
the trust-region-reflective algorithm only takes bound constraints or
linear equality constraints, but not both, and cannot handle nonlinear
constraints. It's a good algorithm, but is limited in the range of
constraints it can handle.

The active-set algorithm does make use of your gradient calculation
assuming that
1. You set GradObj to 'on'
2. You set GradConstr to 'on'

You might also want to try setting Algorithm to 'interior-point', as
recommended in the link above.

It is possible that your nonlinear constraints are not written
correctly, since you indicate that you have more than one, and that you
give a vector as the gradient of the constraints. As described here:
http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-11.html#brhkghv-16
the gradient of the constraints should be a matrix, with the same number
of columns as the number of constraints, and the number of rows is the
dimension of your vector x.
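
To make the shape concrete, here is a sketch of a nonlinear constraint function for a three-variable problem with two inequality constraints; gradc must be 3-by-2 (rows = variables, columns = constraints). The constraint expressions and the name nonlin_example are illustrative, not taken from the poster's model:

```matlab
function [c, ceq, gradc, gradceq] = nonlin_example(x)
% Two nonlinear inequality constraints c(x) <= 0, no equalities.
c   = [x(1)^2 + x(2)^2 - 1;    % c1(x) <= 0
       x(1)*x(3) - 0.5];       % c2(x) <= 0
ceq = [];
if nargout > 2
    % Gradient matrix: column i is the gradient of constraint i,
    % so size is (number of variables) x (number of constraints).
    gradc = [2*x(1),  x(3);    % dc1/dx1, dc2/dx1
             2*x(2),  0;       % dc1/dx2, dc2/dx2
             0,       x(1)];   % dc1/dx3, dc2/dx3
    gradceq = [];
end
end
```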

Good luck,

Alan Weiss
MATLAB mathematical toolbox documentation

Matt J
Apr 13, 2011, 4:20:08 PM
Alan Weiss <awe...@mathworks.com> wrote in message <io503h$l9m$1...@fred.mathworks.com>...

>
>
> The active-set algorithm does make use of your gradient calculation
> assuming that
> 1. You set GradObj to 'on'
> 2. You set GradConstr to 'on'
=================

But as I understand things, you do not have to do both, if you only want to supply an analytical gradient for one or the other....
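
For example, a sketch of supplying only the objective gradient (GradConstr is left at its default 'off', so fmincon estimates the constraint gradients by finite differences):

```matlab
% Analytical gradient for the objective only; constraint gradients
% are finite-differenced by fmincon.
options = optimset('Algorithm','interior-point','GradObj','on');
```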

DOD
Apr 13, 2011, 4:49:47 PM
On Apr 13, 3:08 pm, "Matt J " <mattjacREM...@THISieee.spam> wrote:
> DOD <dco...@gmail.com> wrote in message <c3d4297a-b9d6-4819-beff-f974bb7a4...@a26g2000vbo.googlegroups.com>...

I'm not sure I understand; I am not supplying any linear equality
constraints - those matrices are empty. I am using upper and lower
bounds and nonlinear constraints. Is the problem therefore the very
use of a nonlinear constraint? IE the trust-region method will not
work in my case in any event?

As for your advice to try interior point, I am again having some
trouble that I think stems from the use of parameters. Here is my
code:

---
function [out fval] = nested_ramsey_minimization(b,g,lambda,gamma,rho,kavg,beta,delta,firm_alpha,initial_guess)

options = optimset('Algorithm','interior-point', ...
    'GradObj','on','GradConstr','on', ...
    'Hessian','user-supplied','HessFcn',@hessianfn);
A=[];
littleb=[];
Aeq=[];
littlebeq=[];

%[out fval] = fmincon(@ramsey_obj,initial_guess,A,littleb,Aeq,littlebeq,[0 0 0],[1 1 1],@nonlin,options);
[out fval] = fmincon(@ramsey_obj,initial_guess,A,littleb,Aeq,littlebeq,[],[],@nonlin,options);

    function [obj_value, obj_gradient] = ramsey_obj(x)
        [obj_value, obj_gradient] = ramsey_direct_minimization(x,lambda,gamma,rho,kavg,beta);
    end

    function [cineq, ceq, gradc, gradceq] = nonlin(x)
        [cineq, ceq, gradc, gradceq] = ramsey_direct_constraints(x,b,g,lambda,gamma,rho,kavg,firm_alpha,delta,beta);
    end

    % Separate subfunction for the Hessian when using interior-point
    function out = hessianfn(x)
        out = ramsey_hessian(x,lambda,gamma,rho,kavg);
    end

end
---

What is different from before: the objective function @ramsey_obj now
only supplies the scalar objective and the vector of gradients. The
hessian is supplied by a separate function @hessianfn. This new
function is again defined as a subfunction so that it is only a
function of x, since the parameters required by ramsey_hessian have
been passed to nested_ramsey_minimization.

When I run this code I receive the following error:

--
??? Error using ==> nested_ramsey_minimization>hessianfn
Too many input arguments.
---

I don't understand the problem; the only input to hessianfn is x,
which is a 3-vector, and in the code for ramsey_hessian, only x(1)
through x(3) are used. So I don't know what this error is really
telling me.


Thanks very much for your help!


DOD
Apr 13, 2011, 4:53:13 PM
On Apr 13, 3:12 pm, Alan Weiss <awe...@mathworks.com> wrote:
> It is possible that your nonlinear constraints are not written
> correctly, since you indicate that you have more than one, and that you
> give a vector as the gradient of the constraints. As described here:
> http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-11.html#brhkghv-16
> the gradient of the constraints should be a matrix, with the same number
> of columns as the number of constraints, and the number of rows is the
> dimension of your vector x.

It is correct that I have two nonlinear constraints, and they are
returned in a vector. The 'gradient' for these constraints is then
returned as a 3x2 matrix, which I believe is correct. So I don't
think that is the problem. You are correct that it was not clear to
me that the trust-region methods do not support nonlinear constraints,
so I am attempting to rewrite this as an interior-point problem, but as
detailed in my reply above, that is not quite working yet either.
Thanks for your help!

Matt J
Apr 13, 2011, 5:10:21 PM
DOD <dco...@gmail.com> wrote in message <c8fed5ff-c706-4111...@q12g2000prb.googlegroups.com>...

>
> Is the problem therefore the very
> use of a nonlinear constraint? IE the trust-region method will not
> work in my case in any event?
===============

Yes.

> --
> ??? Error using ==> nested_ramsey_minimization>hessianfn
> Too many input arguments.
> ---
>
> I don't understand the problem; the only input to hessianfn is x,
> which is a 3-vector, and in the code for ramsey_hessian, only x(1)
> through x(3) are used. So I don't know what this error is really
> telling me.

=================


hessianfn is not supposed to be a function only of x. It is supposed to be a function of two arguments

hessian = hessianfn(x, lambda)

as described here

http://www.mathworks.com/help/toolbox/optim/ug/fmincon.html#brh002z


The error message is saying that FMINCON wants to pass more than 1 argument to hessianfn, but you have not allowed it to.
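
Concretely, the function should return the Hessian of the Lagrangian at x. Here is a sketch, assuming hypothetical helpers obj_hessian and constraint_hessians (not from the poster's code) that return the 3x3 Hessian of the objective and cell arrays of the 3x3 Hessians of each nonlinear constraint:

```matlab
function H = hessianfn(x, lambda)
% lambda is a structure fmincon passes in; lambda.ineqnonlin and
% lambda.eqnonlin hold the Lagrange multipliers for the nonlinear
% inequality and equality constraints at the current iterate.
H = obj_hessian(x);                       % Hessian of the objective
[Hineq, Heq] = constraint_hessians(x);    % cell arrays of Hessians
for i = 1:numel(Hineq)
    H = H + lambda.ineqnonlin(i) * Hineq{i};
end
for j = 1:numel(Heq)
    H = H + lambda.eqnonlin(j) * Heq{j};
end
end
```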

Matt J
Apr 13, 2011, 5:24:05 PM
"Matt J" wrote in message <io53ft$f7$1...@fred.mathworks.com>...

>
>
> hessianfn is not supposed to be a function only of x. It is supposed to be a function of two arguments
>
> hessian = hessianfn(x, lambda)
>
====================

Granted, the FMINCON documentation could be clearer about this. It just says that the HessianFcn has to return a Hessian. It doesn't tell you that it's supposed to be the Hessian of the Lagrangian, as opposed to just the objective function.

DOD
Apr 13, 2011, 6:01:29 PM
On Apr 13, 4:24 pm, "Matt J " <mattjacREM...@THISieee.spam> wrote:
> "Matt J" wrote in message <io53ft$f...@fred.mathworks.com>...

I see; this makes it much clearer what is required. Thanks very
much for your help; I think I see what I need to do.
