Pool of Feasible Solutions


Bert Dijk

Nov 25, 2015, 8:16:44 AM
to Gurobi Optimization
Hi all,

I am currently creating an optimization model (MIP) that should return a pool of feasible solutions. I am using the SolutionNumber functionality of Gurobi to loop over my solutions and obtain the information I want. 

However, in some cases I obtain fewer than 3 solutions, while I would like more. I believe the problem is large enough for more to exist.

Are there other ways to obtain more feasible solutions, or are the ones found with SolutionNumber the only accessible ones? 

I am not that experienced with Gurobi, so any help will be appreciated! I tried to change the MIPFocus parameter but this did not generate more solutions in my case.

Kind regards,

Bert

Tobias Achterberg

Nov 25, 2015, 8:29:45 AM
to gur...@googlegroups.com
The SolutionNumber functionality simply provides access to all the solutions
that Gurobi found while searching for an optimal solution. Thus, the size of
this set depends somewhat on luck (i.e., on the actual solution path that
Gurobi took). It can happen that Gurobi finds an optimal solution immediately
(i.e., as the first solution it produces), in which case the solution count
will be 1 at the end.

In order to reliably generate more than one solution, you could use a callback
that collects the solutions in a private set and tells Gurobi to discard each
solution (so it has to continue searching). Doing this would actually make it
possible to collect all feasible solutions of an integer program, but of course
there can be many, and it could take a very long time to enumerate them.
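To illustrate the principle (with a toy brute-force "solver" standing in for Gurobi; a real implementation would use a MIPSOL callback and lazy constraints, and the function below is only a hypothetical sketch of the collect-and-reject loop):

```python
from itertools import product

def enumerate_feasible(n, feasible):
    # Collect every feasible 0/1 vector by repeatedly finding a feasible
    # point not yet collected and "rejecting" it so the search must
    # continue -- the same idea as the callback approach.
    found = []
    while True:
        nxt = next((x for x in product((0, 1), repeat=n)
                    if feasible(x) and x not in found), None)
        if nxt is None:          # no new feasible point left
            return found
        found.append(nxt)        # store in our private set and move on

# e.g. with 3 binaries and the constraint x1 + x2 + x3 <= 1
sols = enumerate_feasible(3, lambda x: sum(x) <= 1)
print(len(sols))  # 4 feasible solutions
```

As the brute force over all 2^n vectors suggests, the number of feasible solutions can grow exponentially, which is why full enumeration can take very long.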

Another approach would be to solve the model to optimality and query the
SolutionNumber attribute. If you have enough solutions, stop. If not, add a
constraint

c*x <= z' + eps

to your model, with c being the objective function vector, z' the optimal
objective value, and "eps" some non-negative constant that allows sub-optimal
solutions to be found. Then modify the objective function to some other
vector, for example one that maximizes the distance to the optimal solution
found in the first run. Then solve again, query the SolutionNumber attribute
again, and iterate.

For the latter approach we have code examples in Java, Python, and C++, which
we could share here if that would be helpful.


Tobias

Bert Dijk

Nov 25, 2015, 9:36:25 AM
to Gurobi Optimization
Hi Tobias, 

Thanks for your elaborate reply. I believe enumerating all solutions would take too long, so I am very interested in the last approach you described.

I am writing my code in Python, could you perhaps share the code example you have available?

Thanks again for your effort.

Kind regards,

Bert

On Wednesday, November 25, 2015 at 11:29:45 AM UTC-2, Tobias Achterberg wrote:

Tobias Achterberg

Nov 25, 2015, 10:56:47 AM
to Gurobi Optimization
Here is a Python example from Greg Glockner that you can use as a starting point:
#!/usr/bin/python

# Copyright 2013, Gurobi Optimization, Inc.

# A heuristic to find alternate solutions to an LP or MILP model.
# This example reads a model from a file and solves it. If a solution is
# found, it modifies the model to look for alternate solutions that are
# far from the prior solutions. Uses Model.getAttr() to retrieve attributes
# in bulk.

solfn = "alternate"
eps = 1e-4

import sys
from gurobipy import *

if len(sys.argv) < 3:
    print('Usage: alternate.py filename numsols')
    quit()

try:
    # Solve model to determine objective value
    numsols = int(sys.argv[2])
    model = read(sys.argv[1])
    vars = model.getVars()
    if model.IsQP == 1 or model.IsQCP == 1:
        print('Error: only supports LP and MILP models')
        exit(0)
    model.optimize()
    if model.SolCount < 1:
        print('No solution found, stopping')
        exit(0)
    print('\nInitial solution has objective %f\n' % model.ObjVal)
    model.write("%s%i.sol" % (solfn, 0))

    # Initialize storage for previous solutions
    PrevX = [model.getAttr('X', vars)]

    if model.IsMIP == 0:
        # For LPs, fix variables with non-zero reduced costs
        # and slacks with non-zero duals
        cons = model.getConstrs()
        RC = model.getAttr('RC', vars)
        Pi = model.getAttr('Pi', cons)
        for i in range(len(vars)):
            if abs(RC[i]) > model.Params.OptimalityTol:
                vars[i].LB = PrevX[0][i]
                vars[i].UB = PrevX[0][i]
        for i in range(len(cons)):
            if abs(Pi[i]) > model.Params.OptimalityTol:
                cons[i].Sense = '='
    else:
        # For MIPs, convert objective to a constraint
        Obj = model.getAttr('Obj', vars)
        objexp = 0
        for i in range(len(vars)):
            if Obj[i] != 0:
                objexp += vars[i]*Obj[i]
        model.addConstr(objexp == model.ObjVal, "OrigObj")

    # Initialize new objective
    model.ModelSense = GRB.MINIMIZE
    Obj = list(PrevX[0]) # make a copy
    for i in range(len(vars)):
        vars[i].Obj = Obj[i]

    # Solve iteratively with different objectives
    model.Params.SolutionLimit = 1
    while len(PrevX) < numsols:
        model.optimize()
        ThisX = model.getAttr('X', vars)
        # Test that the solution is new
        for k in range(len(PrevX)):
            notNew = True
            for i in range(len(vars)):
                if abs(ThisX[i]-PrevX[k][i]) > eps:
                    notNew = False
                    break
            if notNew:
                break
        if notNew:
            if model.IsMIP == 1 and model.Status == GRB.SOLUTION_LIMIT:
                print('\nSolution %i is same as solution %i, continuing\n' %
                      (len(PrevX),k) )
                model.Params.SolutionLimit += 1
            else:
                print('\nSolution %i is same as solution %i, stopping\n' %
                      (len(PrevX),k) )
                break
        else:
            # Store and write current solution, modify objective and resolve
            for i in range(len(vars)):
                Obj[i] = 0.8*ThisX[i]+0.2*Obj[i]
                vars[i].Obj = Obj[i]
            print('\nAlternate solution %i found' % len(PrevX))
            model.write("%s%i.sol" % (solfn, len(PrevX)))
            PrevX.append(ThisX)
            model.Params.SolutionLimit = 1

except GurobiError as e:
    print('Gurobi error: %s' % str(e))



Tobias Achterberg

Nov 25, 2015, 10:59:30 AM
to Gurobi Optimization
Note that the code line

model.addConstr(objexp == model.ObjVal, "OrigObj")

means that you are only looking for alternate solutions that have exactly (up to the feasibility tolerance) the same objective value as the optimal solution. My guess is that you want to replace this with something like

model.addConstr(objexp <= model.ObjVal + 1.05 * abs(model.ObjVal), "OrigObj")

which means (provided that you are minimizing) that you are looking for solutions that are at most 5% away from the optimal value.

Tobias Achterberg

Nov 25, 2015, 11:00:39 AM
to Gurobi Optimization
On Wednesday, November 25, 2015 at 4:59:30 PM UTC+1, Tobias Achterberg wrote:
model.addConstr(objexp <= model.ObjVal + 1.05 * abs(model.ObjVal), "OrigObj")

Sorry, of course this should be "0.05 * abs(...)" instead of 1.05.
 

Bert Dijk

Nov 26, 2015, 4:22:48 PM
to gur...@googlegroups.com
Hi Tobias,

Thank you for providing the code. It has been very useful and I tried to play around with it a little bit. 

Although I managed to implement the code, I was not very successful in obtaining new solutions: when the model re-optimized, the same solution was found and the loop terminated. I believe this has to do with greedy behavior within my model (it's a maximization model).

In this piece of code, my model status is 2 (optimal) and the solution limit is 10.
            if model.IsMIP == 1 and model.Status == GRB.SOLUTION_LIMIT:
                print('\nSolution %i is same as solution %i, continuing\n' %
                      (len(PrevX),k) )
                model.Params.SolutionLimit += 1
            else:
                print('\nSolution %i is same as solution %i, stopping\n' %
                      (len(PrevX),k) )
                break

Therefore the loop breaks and it stops generating new solutions. 

I hope this was clear, but do you have any suggestions on how to further improve the code so that, even with slightly greedy model behaviour, I still get more feasible solutions? My model size (200,113 rows, 9,855 columns, and 462,421 nonzeros) should be large enough for those to exist.

Any help is very appreciated!

Kind regards,

Bert


Tobias Achterberg

Nov 30, 2015, 8:12:48 AM
to gur...@googlegroups.com
Hi Bert,

First of all, since you have a maximization problem, your objective constraint
needs to be a ">=" constraint, and the right hand side needs to be slightly
reduced instead of increased. But I guess you are aware of this.

Moreover, I think that you need to play around a little bit with the auxiliary
objective function in the code.

The code is using the solution value as objective function coefficient in a
minimization problem. This is a heuristic to drive the solution away from the
previous solution by pushing variables towards zero that have a large value in
the previous solution.

For binary variables one can do better: you could actually maximize the
Hamming distance to the previous solution. If the previous solution has
x == 0, put a -1 as objective coefficient; if it has x == 1, put a +1.
Maybe it is already enough for your model to use an objective on the binaries
and set the objective of all other variables to 0.

In the "modify objective and resolve" part of the code, you want to use a
combination of the old objective and the new one that you get from maximizing
the Hamming distance to the most recent solution. Similar to the current code,
you could use weights 0.8 and 0.2.
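As a small sketch of this idea (the helper names are mine, and `xprime` is assumed to hold the binary values of the most recent solution):

```python
def hamming_objective(xprime):
    # Minimization coefficients that reward moving away from xprime:
    # +1 where the previous value is 1 (pushes the variable toward 0),
    # -1 where it is 0 (pushes the variable toward 1).
    return [1.0 if v > 0.5 else -1.0 for v in xprime]

def blend(old_obj, new_obj, w=0.8):
    # Combine the fresh Hamming objective with the previous objective,
    # analogous to the 0.8/0.2 weighting in the code above.
    return [w * n + (1.0 - w) * o for o, n in zip(old_obj, new_obj)]

xprime = [1, 0, 1, 1, 0]
print(hamming_objective(xprime))  # [1.0, -1.0, 1.0, 1.0, -1.0]
```

The resulting coefficients would then be assigned to the variables' Obj attributes before re-solving.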

On the other hand, you could try to just use a random objective function with
coefficients in [-1,+1] in each iteration. Of course, finding a solution that
you already know is then no longer a reason to abort. You could run the
algorithm as long as you like, for example until it was not able to find any new
solution for, say, 10 consecutive iterations or until a predefined number of
different solutions has been found.
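A random objective of this kind could be drawn like so (a sketch; seeding is optional but makes runs reproducible):

```python
import random

def random_objective(n, seed=None):
    # Draw a fresh objective vector with coefficients in [-1, +1];
    # re-solving with such vectors tends to land on different vertices
    # of the feasible region from run to run.
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

coeffs = random_objective(10, seed=42)
print(all(-1.0 <= c <= 1.0 for c in coeffs))  # True
```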

Regards,

Tobias

Bert Dijk

Dec 1, 2015, 7:47:30 AM
to gur...@googlegroups.com
Hi Tobias,

Thank you again for the reply. It really helps me to think about my problem.

I think I understand your answer, although I am not 100% sure how to implement this for my specific problem.

My decision variables are binary, so we could maximize the Hamming distance. However, for my research it is crucial to be able to compare the objective values of the separate solutions (I need to compare the objective value of the optimal solution with the other solutions found and with the solution currently used by a company).

Since your approach focuses on objective function modification, would the approach still work?

My other ideas are to either add a constraint that forces the objective value to be lower than the ones found so far, or to make sure that not all the variables can take the same values as in the solutions found so far. However, for the latter I am not sure how exactly I could implement it.

Your help is very much appreciated!

Kind regards,

Bert

Tobias Achterberg

Dec 1, 2015, 8:29:42 AM
to gur...@googlegroups.com
If all your decision variables are binary, you can also keep the objective
function as is and add a constraint after each optimization run to rule out the
new solution. Let x' be the solution that was found in the last iteration of
your algorithm. Then add a constraint

sum {x_j : x'_j = 0} + sum {1 - x_j : x'_j = 1} >= 1

This constraint ensures that at least one of your binary variables has to take a
different value than it had in the previous solution x'.

If you do it like this and run the algorithm until you hit an infeasible model
(because you ruled out all feasible solutions using such constraints) then you
should get all feasible solutions to your problem, and you should get them in
decreasing objective quality.


The main point in the previous approach, where we added a constraint that keeps
the objective value close to the optimal value and where we modified the
objective function to maximize the Hamming distance is to find a reasonably good
alternative solution that is as different as possible to the previous solution.
If you continue to use this approach, you can of course calculate the objective
function value (in terms of your original objective function) by a simple dot
product of the solution vector x and your original objective function vector c.


Tobias

Bert Dijk

Dec 1, 2015, 12:52:59 PM
to gur...@googlegroups.com
Tobias,

Thank you very much for all the help and explanations. I managed to implement this in my model and it works well.

Kind regards,

Bert

Ankit Wagle

Feb 4, 2017, 6:32:38 AM
to Gurobi Optimization
Hi,

I tried using a similar approach, but it didn't work for me: the solution repeats itself. My model is as follows:
Maximize
10 X_1
Subject To
1 - X_1 > 1
Binary
X_1
End

Without the constraint, the output for X_1 should be 1, but with the constraint the output should be 0. However, I am still getting the value 1. Is there any way to fix this? How did your model work?
Any help would be appreciated!

With Best Regards,
Ankit Wagle

Sonja Mars

Mar 27, 2017, 3:56:33 AM
to gur...@googlegroups.com
Hi,

Can you give us some more details on what exactly you are doing? If I try to solve your model after copying it to an LP file, Gurobi gives me this warning:

Warning: Numeric variable name in LP file ('1') - often the result
         of a constant term in the left-hand side of a constraint.

This indicates that Gurobi is in fact solving this model here:
Maximize
  10 B0(1)
Subject To
 c0: - B0(1) + C1(1) >= 1
Bounds
Binaries
 B0(1)
End
where C1(1) is a continuous non-negative variable.
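Moving the constant term to the right-hand side gives the model that was presumably intended (X_1 is then forced to 0, so the optimum is 0):

```
Maximize
  10 X_1
Subject To
 c0: - X_1 >= 0
Binary
 X_1
End
```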

More details on the LP-format can be found here:

Best regards, 
Sonja 


----
Dr. Sonja Mars
Gurobi Optimization - Technical Support 
