I just started using the MIP package and am loving it so far!!!
I am running into an issue on a particular problem that I feel should be solvable, though I can't figure out how to do it. A simplified version of my problem: I have an array of numbers of shape (G*K, D), where G is the number of groups, K the number of candidates within each group, and D the number of time slices. The goal is to choose a single row from each group so as to minimize the largest entry of the element-wise sum of the chosen rows, while keeping the total cost of the solution within some budget. If I wanted to minimize the overall sum this would be straightforward, but minimizing the maximum is causing me issues. Here is what I have thus far:
import numpy as np
from mip import Model, xsum, MINIMIZE, BINARY

np.random.seed(44)
G = 5   # Number of groups
K = 3   # Number of candidates within each group
D = 10  # Dimension of each solution (time slices)
N = G * K
# Generate some random data
values = np.random.rand(N, D)
costs = np.random.rand(N)
budget = 10
model = Model(sense=MINIMIZE)
x = [model.add_var(name="x({})".format(n), var_type=BINARY) for n in range(N)]
model.objective = np.max([xsum(x[i]*values[i,j] for i in range(N)) for j in range(D)])  # <-- this is the line I can't get right
# Budget constraint
model += xsum(costs[n] * x[n] for n in range(N)) <= budget
# Must choose one of K rows per group
for g in range(G):
    model += (xsum(x[K*g+k] for k in range(K)) == 1)
model.optimize()
print("Solution:")
spend = 0
for n in range(N):
    if x[n].x >= 0.99:
        print(f"x[{n}]=1")
        spend += costs[n]
    else:
        print(f"x[{n}]=0")
print("Objective = {}".format(model.objective_value))
print("Overall spend = {}".format(spend))
When I check the final objective value, the code consistently returns the value of the first entry rather than the max. I checked the documentation and couldn't find an infinity-norm or similar operator to do this. Any advice? If I replace the np.max with another xsum, things work as expected (I minimize the overall sum of the solution), but I need to minimize the largest value rather than the overall sum for my application.
Thank you so much in advance!
Drew