Returning infeasible solutions when solving MILP with CPLEX and Gurobi

Axel Ringh

Jul 2, 2013, 11:09:15 AM7/2/13
to yal...@googlegroups.com
Hi everyone,


First of all, a small remark regarding YALMIP and CPLEX: I am solving a MILP using CPLEX. The solution returned through YALMIP has the status "Successfully solved (CPLEX-IBM)". This is true in one sense, but CPLEX also returns a message (found in the variable output, line 149 of the YALMIP function call_cplexibm_miqp) which says "Solution with numerical issues" (the CPLEX output code is 102). This is visible nowhere unless one puts break points in the code and inspects the output. It would be very nice if this message could be propagated back to the diagnostics output of the solvesdp call :)


Secondly, I have a question about infeasible solutions returned by YALMIP. The MILP I have is in fact a binary linear program, and it might have several optimal solutions, a number of which I would like to generate. This should in theory be possible using the technique presented in the article "Canonical cuts on the unit hypercube", which can be found here: http://www.jstor.org/stable/2099623. In short: let x be the vector formed by the binary variables. By adding the constraint

sum_{i : x_i = 1 in current solution} x_i  -  sum_{j : x_j = 0 in current solution} x_j  <=  |{i : x_i = 1 in current solution}| - 1

(where |{...}| denotes the number of elements in the set {...}) the current combination of the binary variables is excluded. For example, if the current solution is x = (1, 0, 1), the cut is x_1 + x_3 - x_2 <= 1, which cuts off exactly that point. So by re-solving the problem with this constraint added I will get another solution. If the objective function value of this new solution equals that of the previous solution, we have found another optimal solution. If it is greater than the previous value, or the problem is infeasible, there are no more optimal solutions.

Thus, I set up the problem and collect all the constraints in a variable con. Then I solve the problem with

    solutions(1).diagnostic = solvesdp(con,obj,opts);

Now I use the following lines of code to remove the current optimal solution (x is the vector of my binary variables):

    x_tmp = double(x);
    x_equal_one = find(x_tmp == 1);
    x_equal_zero = find(x_tmp == 0);

    con_removeSolution = [ones(1, length(x_equal_one)) * x(x_equal_one) - ones(1, length(x_equal_zero)) * x(x_equal_zero) <= length(x_equal_one) - 1 ];
    con = con + con_removeSolution;

    solutions(2).diagnostic=solvesdp(con,obj,opts);

Solving this problem returns a solution that is infeasible with respect to some of my original constraints in the variable con. I have used both CPLEX and Gurobi to solve the problem (and Gurobi does not seem to run into the numerical issues that CPLEX does), and both of them return answers that are infeasible.

Are these operations somehow illegal in YALMIP? Or are they not performing the operations I think they are? Even if the constraint I add does not do what I intend, how can the returned solution be infeasible with respect to the original constraints, which should still hold?


I am very grateful for any help I can get.
Best,
Axel

Johan Löfberg

Jul 2, 2013, 11:23:26 AM7/2/13
to yal...@googlegroups.com
1.

102 means "Optimal solution within epgap or epagap tolerance found", so the only reasonable error code in YALMIP is 0. More granular information has to be extracted on a solver-specific basis by turning on the flag 'savesolveroutput' and then looking at sol.solveroutput.

2.
The code looks correct (although a bit unnecessarily complicated: you can simply write sum(x(x_equal_one)) - sum(x(x_equal_zero)) <= length(x_equal_one) - 1, and the construction is also implemented in the command exclude). Are you sure there are multiple optima?
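As a sketch of the exclude shortcut (the exclude(x, value) calling convention is assumed here; check your YALMIP version's documentation):

```matlab
% Sketch: cut off the current binary solution with YALMIP's exclude
% command instead of building the canonical cut by hand.
n = 5;
x = binvar(n,1);
con = [sum(x) >= 2];
solvesdp(con, sum(x));               % first optimum
con = [con, exclude(x, double(x))];  % forbid the solution just found
solvesdp(con, sum(x));               % next (equal or worse) solution
```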

From this trivial test you can see that it works:

    n = 3;
    x = binvar(n,1);
    con = [];
    solvesdp(con,sum(x))
    double(x)
    for i = 1:2^n
        x_tmp = double(x);
        x_equal_one = find(x_tmp == 1);
        x_equal_zero = find(x_tmp == 0);
        con = [con, ones(1, length(x_equal_one)) * x(x_equal_one) - ones(1, length(x_equal_zero)) * x(x_equal_zero) <= length(x_equal_one) - 1];
        solvesdp(con,sum(x))
        double(x)
    end

Axel Ringh

Jul 3, 2013, 4:35:36 AM7/3/13
to yal...@googlegroups.com
Thank you for your answer.

1) OK, then I will use that option to get more information about the solution from the solver.

2) I also implemented a small example that seems to work:

dimensions = 5;

x = binvar(dimensions,1,'full');
y = sdpvar(dimensions,1,'full');

con = [];

for index = 1:dimensions
    con = con + [0 >= y(index) >= -index*x(index)]; 
end

obj = ones(1, dimensions) * y;

opts = sdpsettings('verbose',2,'solver','gurobi','debug',1);

for index = 1:2^(dimensions) + 1
    diagnostics{index} = solvesdp(con,obj,opts);
    diagnostics{index}.info
    disp('---------------------------------------------')
    
    if ~strcmp(diagnostics{index}.info(1:12), 'Successfully') 
        
        disp('Terminated')
        disp(strcat('Infeasible after', [' ', num2str(index - 1)], ' canonical cuts'));
        disp(strcat('Number of dimensions are', [' ' ,num2str(dimensions)]))
        break
        
    end
    
    
    x_values{index} = double(x);
    y_values{index} = double(y);
    
    x_equal_one = find(x_values{index} == 1);
    x_equal_zero = find(x_values{index} == 0);

    con = con + [ones(1, length(x_equal_one)) * x(x_equal_one)...
        - ones(1,length(x_equal_zero)) * x(x_equal_zero)...
        <= length(x_equal_one) - 1];

end

It generates the solutions in the "correct" order, and the objective function value stays the same or decreases after each canonical cut applied. Hence the canonical cuts seem to work (although maybe not implemented in the simplest way). I am not sure that I have multiple optimal solutions to my first problem, but I suspect that I do. What I want to investigate is whether that is the case, and if so, generate a number of them. So I apply the following scheme:

0) iter_count = 0
1) Solve the original problem. Store the solution.
2) IF iter_count < max_iter: remove the current solution; iter_count = iter_count + 1
2.1) ELSE return the solutions found so far.
3) Solve the new problem.
4) IF objective function value = objective function value of the first solution found: another optimal point found. Store this solution. Go to step 2).
4.1) ELSE IF objective function value > objective function value of the first solution found OR problem infeasible: no more optimal solutions to the problem. Return the solutions found so far.
4.2) ELSE error message, since it should not be possible to get here (that would mean I find better solutions after removing at least one solution, which should not be possible). Return the solutions found so far for debugging, including the current one.

If I have only one optimal solution, I should go through this as 1) -> 2) -> 3) -> 4.1) and thus only return one solution to the optimization problem.

What happens is that when I apply these cuts to remove the current solution, the solver returns solutions that are infeasible with respect to some of the original constraints and have a better objective value, hence I go through it as 1) -> 2) -> 3) -> 4.2). I know there will always be numerical round-offs and the like (therefore I, e.g., do the objective value comparison as J_new <= J_first + sqrt(eps)), but my continuous variables are of order 10^3 and the constraint violations are of order 10^2, in some cases even 10^3. Unfortunately, the problem I have is very large: around 35 000 binary variables, 15 000 continuous variables, and the following constraints

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|    ID|      Constraint|                              Type|
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|    #1|   Numeric value|     Element-wise inequality 500x1|
|    #2|   Numeric value|   Element-wise inequality 26500x1|
|    #3|   Numeric value|          Equality constraint 53x1|
|    #4|   Numeric value|      Element-wise inequality 53x1|
|    #5|   Numeric value|      Element-wise inequality 53x1|
|    #6|   Numeric value|   Element-wise inequality 13197x1|
|    #7|   Numeric value|   Element-wise inequality 13197x1|
|    #8|   Numeric value|   Element-wise inequality 13250x1|
|    #9|   Numeric value|    Element-wise inequality 1250x1|
|   #10|   Numeric value|    Element-wise inequality 1250x1|
|   #11|   Numeric value|    Element-wise inequality 1250x1|
|   #12|   Numeric value|    Element-wise inequality 1250x1|
|   #13|   Numeric value|    Element-wise inequality 1750x1|
|   #14|   Numeric value|    Element-wise inequality 1750x1|
|   #15|   Numeric value|         Equality constraint 5x250|
|   #16|   Numeric value|         Equality constraint 5x250|
|   #17|   Numeric value|         Equality constraint 5x250|
|   #18|   Numeric value|         Equality constraint 5x250|
|   #19|   Numeric value|         Equality constraint 7x250|
|   #20|   Numeric value|         Equality constraint 7x250|
|   #21|   Numeric value|   Element-wise inequality 13250x1|
|   #22|   Numeric value|     Element-wise inequality 250x1|
|   #23|   Numeric value|     Element-wise inequality 250x1|
|   #24|   Numeric value|      Element-wise inequality 34x1|
|   #25|   Numeric value|      Element-wise inequality 34x1|
|   #26|   Numeric value|         Equality constraint 1x250|
|   #27|   Numeric value|         Equality constraint 1x250|
|   #28|   Numeric value|     Element-wise inequality 500x1|
|   #29|   Numeric value|     Element-wise inequality 500x1|
|   #30|   Numeric value|     Element-wise inequality 500x1|
|   #31|   Numeric value|     Element-wise inequality 250x1|
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

so around 100 000 constraints if I am not mistaken. I do not know how many of these might be violated; I have just noticed that one of these "sets" of constraints, which involves only continuous variables, gets violated for some indices. The violation is, as I said, of the same order of magnitude as the variable values (10^2 - 10^3). This is what confuses me, since the solver returns the solution as if it were feasible. Do you have any idea why this happens? Could it be that there are so many constraints that the norm of the violation is somehow still small enough to consider the solution feasible?

Best
Axel

Johan Löfberg

Jul 3, 2013, 5:47:03 AM7/3/13
to yal...@googlegroups.com
The typical cause for obtaining massively infeasible solutions, despite the solver saying everything went fine, is poor big-M modeling (or the same thing arising from other modeling constructs), i.e., constraints with poor numerics, mixing huge and small coefficients, etc. The LP relaxation is feasible, the binary variables are sufficiently binary w.r.t. the tolerances, the solver then finishes, rounds the solution and exits, but the solution is infeasible once rounded. For instance, with 2 <= 1 + 1e10*x the LP relaxation yields x = 1e-9, this is then rounded down, and the result is highly infeasible (simplified, but the idea is the same).
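A minimal numerical sketch of this rounding failure, using the numbers from the example above (the 1e-5 integrality tolerance is an assumed typical default, not from this thread):

```matlab
% Big-M rounding pitfall: the LP relaxation accepts x = 1e-9, which
% lies well within a typical integrality tolerance (around 1e-5) of 0,
% so the solver declares x integral and rounds it down.
M = 1e10;
x_relaxed = 1e-9;
lhs = 1 + M*x_relaxed;         % = 11 >= 2, relaxation is feasible
x_rounded = round(x_relaxed);  % = 0, "sufficiently binary"
lhs_rounded = 1 + M*x_rounded  % = 1 < 2, massively infeasible
```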


Axel Ringh

Jul 3, 2013, 7:03:58 AM7/3/13
to yal...@googlegroups.com
OK, thank you. Then I will have a closer look at the model to see if I can find anything that looks weird.

Axel Ringh

Jul 4, 2013, 7:16:33 AM7/4/13
to yal...@googlegroups.com
When I declare the variables and the constraints in different orders, I get different solutions... I guess this is because a different order of variable declaration reorders the columns of the constraint matrix, and reordering the constraints changes the order of the rows; since the solver is fed different matrices, it also solves the problem in different ways. Do you think this could be the case?

Moreover, I only seem to get infeasibility when I add these cuts. All of the other constraints contain fewer than 500 variables (out of the roughly 35 000 binary and 15 000 continuous), while these cuts involve all 35 000 binary variables. Do you think this could be what causes the problem?

Johan Löfberg

Jul 4, 2013, 9:28:01 AM7/4/13
to yal...@googlegroups.com
You would have to send me the model for any further analysis (either the complete YALMIP model plus data, or simply the CPLEX data, which you obtain by running the command export).

...but what you say is correct. If the problem is numerically ill-conditioned, any small change can have a huge impact.

Axel Ringh

Jul 5, 2013, 8:06:15 AM7/5/13
to yal...@googlegroups.com
I went through the model and re-scaled some of the parameters and variables, so now the continuous variables take values in the range 10^-3 - 1. This seems to have solved the problem, at least partly. Now I have violations of order 10^-4, which is at least one order of magnitude smaller than the values of the variables. I will keep working on this, but thank you very much for your input on the problem. It has been very valuable :)