How to solve a LP with parametric objective function


Tanguy

May 21, 2014, 3:35:26 PM
to gur...@googlegroups.com
Hello, 

Is there a standard way to solve a parametric LP of the following form using Gurobi?

Data: A, b, c, d, u
Parameter: theta

min (c+theta*d)' x 
Ax <= b 
x >= 0 
0 <= theta <= u

I'm interested in getting the optimal solution(s) x as a function of theta. Note that I use the Matlab API.

[I've tried to follow the approach described here (using Gurobi instead of CPLEX) but I'm not sure how to examine/display the solution values after each iteration when using Gurobi.]
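[One straightforward workaround, if an exact parametric solution is not required, is to sample theta on a grid and re-solve the LP with Gurobi's MATLAB interface, inspecting `result.x` after each solve. A minimal sketch, assuming the data `A`, `b`, `c`, `d`, `u` are already defined and that 50 grid points are enough resolution for your purposes:

```matlab
% Sample theta on a uniform grid over [0, u] and re-solve for each value.
thetas = linspace(0, u, 50);
n = size(A, 2);

model.A     = sparse(A);      % Gurobi requires a sparse constraint matrix
model.rhs   = b;
model.sense = '<';            % all constraints are A*x <= b
model.lb    = zeros(n, 1);    % x >= 0 (Gurobi's default, stated explicitly)
params.outputflag = 0;        % silence per-solve log output

xopt = nan(n, numel(thetas));
for k = 1:numel(thetas)
    model.obj = c + thetas(k)*d;     % parametric objective for this theta
    result = gurobi(model, params);
    if strcmp(result.status, 'OPTIMAL')
        xopt(:, k) = result.x;       % examine/display the solution here
    end
end
```

Since only the objective changes between iterations, this only characterizes x(theta) at the sampled points; it will not recover the exact breakpoints where the optimal basis changes.]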

Thanks!

Pooja Pandey

Jun 21, 2014, 4:30:39 PM
to gur...@googlegroups.com
I am also looking for a parametric analysis (of the objective function) for the same problem. Is there source code available for this? It looks like neither CPLEX nor Gurobi solves this problem directly. Which of the two would be better for this task?

Jakob Sch.

Jun 22, 2014, 3:32:37 PM
to gur...@googlegroups.com
Hi Pooja,

I'm not aware of any available source code. Since you are asking in the Gurobi support forum, an answer might be a bit biased, but independent benchmarks show that Gurobi beats CPLEX in most cases (see http://plato.asu.edu/ftp/lpcom.html).

Best regards,
Jakob

Michal Kvasnica

Jul 3, 2014, 10:57:11 AM
to gur...@googlegroups.com
What you are looking for is the Multi-Parametric Toolbox for Matlab (http://control.ee.ethz.ch/~mpt/3/). It can solve parametric linear, quadratic and mixed-integer problems provided the parameters enter the constraints in a linear fashion.

Your problem can be solved as follows:

% some random problem data
A = [eye(2); -eye(2)];
b = [1; 1; -0.5; -0.5];
u = 2;
c = [1; 1];
d = [1; 1];

% first we formulate the problem in YALMIP

% optimization variable
nx = size(A, 2);
x = sdpvar(nx, 1);

% parameter
theta = sdpvar(1, 1);

% objective function
J = (c + theta*d)'*x;

% constraints
C = [ A*x <= b, x >= 0, 0 <= theta <= u ];

% convert the parametric linear program to the MPT format
plp = Opt(C, J, theta, x);

% obtain the parametric solution
solution = plp.solve();

% plot the primal optimizer as a function of the parameter
for i = 1:nx
    figure;
    solution.xopt.fplot('primal', 'index', i);
    xlabel('t');
    ylabel(sprintf('x_%d(t)', i));
end

% plot the objective function
figure;
solution.xopt.fplot('obj');
xlabel('t');
ylabel('J(t)');

% evaluate the parametric solution for a particular value of the parameter
t0 = 0.5;
xopt = solution.xopt.feval(t0, 'primal')
Jopt = solution.xopt.feval(t0, 'obj')


Note that parametric optimization is, in general, hard (to say the least), which means it only works for a small number of parameters (say up to 4) and optimization variables (say below 20). The reason is that the parametric solver needs to enumerate all feasible combinations of active constraints, and this number grows exponentially with the total number of constraints. Some special, well-defined problems can be solved even in larger dimensions (especially if the number of parameters is large, say 50, but the number of optimization variables is tiny, 1 or 2), but that's more the exception than the rule.

-michal