Hello, we've been using OPTANO Modeling for years and it's been great. We're currently looking at enhancing our model, and I was hoping for some guidance on how best to model a constraint that I'm having trouble with.
It's basically that we want to keep two percentages within a tolerance of each other. As a formula, if it helps, it's roughly |subsetA / populationA - subsetB / populationB| <= tolerance.
I know this would be a quadratic constraint, and I'm not sure whether that's something you can simply add to an existing model that otherwise has only linear constraints. I'm also open to reformulating this as several linear constraints, even if doing so wouldn't be as precise; in this case "close" is actually good enough.
Please let me know if you have any questions or if I can clarify in any way.
Thanks
Hi Jannick,
I'm a colleague of Jim's who has been working with him to look into these quadratic implementations. I have come up with a somewhat redacted description of our problem, along with various examples of the approaches we've taken.
We have a list of options for a set of products, and we use these options as binary decision variables to build the model's constraints and objectives, based on various attributes of the options and products (in these examples, Value on an item is always a positive number). All of the current constraints generally follow the same basic pattern, for example keeping a subset PopA1 below 10% of AllPopA.
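Sketched in the same OPTANO/F# style as the snippets below (simplified, but using the same popA1 / popA / vars names), that pattern looks roughly like this:

let popA1Total = Expression.Sum [ for x in popA1 -> x.Value * vars[x] ] // value-weighted sum over the subset
let popATotal = Expression.Sum [ for x in popA -> x.Value * vars[x] ]   // value-weighted sum over the whole population

// keep the subset at or below 10% of the whole population
model.AddConstraint(
    Constraint.LessThanOrEqual(popA1Total, 0.10 * popATotal),
    "popA1 below 10% of popA"
)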
As Jim described, the new constraints we're working on try to keep two populations within some percentage of each other. We were able to implement your first suggestion as written (in F#):
let popA1Total = Expression.Sum [ for x in popA1 -> x.Value * vars[x] ]
let popATotal = Expression.Sum [ for x in popA -> x.Value * vars[x] ]
let popB1Total = Expression.Sum [ for x in popB1 -> x.Value * vars[x] ]
let popBTotal = Expression.Sum [ for x in popB -> x.Value * vars[x] ]
model.AddConstraint(
    Constraint.LessThanOrEqual(
        Expression.Absolute(popA1Total * popBTotal - popB1Total * popATotal),
        0.05 * popATotal * popBTotal
    ),
    "popA1 and popB1 within 5% - a"
)
model.AddConstraint(
    Constraint.LessThanOrEqual(
        Expression.Absolute(popB1Total * popATotal - popA1Total * popBTotal),
        0.05 * popATotal * popBTotal
    ),
    "popA1 and popB1 within 5% - b"
)
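(For what it's worth, our reading of this constraint is that it's the cross-multiplied form of the percentage condition: since Value is always positive, popATotal and popBTotal are non-negative, so

|popA1Total / popATotal - popB1Total / popBTotal| <= 0.05

multiplied through by popATotal * popBTotal becomes

|popA1Total * popBTotal - popB1Total * popATotal| <= 0.05 * popATotal * popBTotal

which is why the left-hand side ends up as a product of two very large sums.)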
This worked exactly as expected when using a smaller, controlled data set (150 total variables). However, our normal data set runs to 100,000+ variables overall; these particular subsets tend to be at least 10,000 variables each for popA and popB, and popA1 and popB1 can each be upwards of 5,000, so the resulting quadratic expression had tens of millions of terms and never made it to the solver (at least not within several days of letting it run) because of the time it takes to normalize the constraints.
After some deliberation, we realized that we could do an initial solve, see where the sums land as a percentage, and use that information to pick a target percentage for both populations to stay within range of, resulting in a linear formulation of the constraint:
// target percentage taken from the initial solve; the band below is targetPercent +/- 2.5%
let targetPercent = 0.5
model.AddConstraint(
    Constraint.LessThanOrEqual(
        popA1Total,
        (targetPercent + 0.025) * popATotal
    ),
    "popA1 below upper limit"
)
model.AddConstraint(
    Constraint.GreaterThanOrEqual(
        popA1Total,
        (targetPercent - 0.025) * popATotal
    ),
    "popA1 above lower limit"
)
model.AddConstraint(
    Constraint.LessThanOrEqual(
        popB1Total,
        (targetPercent + 0.025) * popBTotal
    ),
    "popB1 below upper limit"
)
model.AddConstraint(
    Constraint.GreaterThanOrEqual(
        popB1Total,
        (targetPercent - 0.025) * popBTotal
    ),
    "popB1 above lower limit"
)
This approach is functional and works as intended, keeping all percentages in the appropriate ranges. While working on it, I had the thought of making targetPercent a variable within the model, to remove the human-error aspect of picking the target and let the model decide where it should be.
First iteration of the new quadratic:
let lowerbound = new Variable("target percent for popA1 and popB1", 0.005, 0.945, VariableType.Continuous)
model.AddConstraint(
    Constraint.LessThanOrEqual(popA1Total,
        popATotal * (lowerbound + 0.05)),
    "popA1 below upper limit"
)
model.AddConstraint(
    Constraint.GreaterThanOrEqual(popA1Total,
        popATotal * lowerbound),
    "popA1 above lower limit"
)
model.AddConstraint(
    Constraint.LessThanOrEqual(popB1Total,
        popBTotal * (lowerbound + 0.05)),
    "popB1 below upper limit"
)
model.AddConstraint(
    Constraint.GreaterThanOrEqual(popB1Total,
        popBTotal * lowerbound),
    "popB1 above lower limit"
)
This iteration still took a long time to normalize the constraints and create the CPLEX INumVars (approx. 1 hour), and ended with CPLEX Error 5002: 'GDI' is not convex.
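Our understanding of why this fails the convexity check (please correct me if I've got this wrong): popATotal and popBTotal are themselves sums of decision variables, so a term like popATotal * lowerbound expands to

(sum over popA of x.Value * vars[x]) * lowerbound = sum over popA of x.Value * vars[x] * lowerbound

i.e. each term is a product of a binary variable and the continuous lowerbound variable, so every one of these constraints becomes a large bilinear (and hence non-convex) quadratic constraint rather than a linear one.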
Second iteration:
let lower = new Variable("target percent for popA1 and popB1", 1.0, 94.0, VariableType.Integer)
model.AddConstraint(
    Constraint.LessThanOrEqual(popA1Total,
        popATotal * (lower + 5.0) * 0.01),
    "popA1 below upper limit"
)
model.AddConstraint(
    Constraint.GreaterThanOrEqual(popA1Total,
        popATotal * lower * 0.01),
    "popA1 above lower limit"
)
model.AddConstraint(
    Constraint.LessThanOrEqual(popB1Total,
        popBTotal * (lower + 5.0) * 0.01),
    "popB1 below upper limit"
)
model.AddConstraint(
    Constraint.GreaterThanOrEqual(popB1Total,
        popBTotal * lower * 0.01),
    "popB1 above lower limit"
)
This strictly integer version normalized into CPLEX somewhat faster (about 49 minutes), but also gave the same CPLEX error.
Third iteration:
Somewhat similar to the above two, but using a small set of binary variables:
let target5 = new Variable($"tpo target 0.5 for coupon {coupon}", 0., 1., VariableType.Binary)
let target25 = new Variable($"tpo target 0.25 for coupon {coupon}", 0., 1., VariableType.Binary)
let target125 = new Variable($"tpo target 0.125 for coupon {coupon}", 0., 1., VariableType.Binary)
let target0625 = new Variable($"tpo target 0.0625 for coupon {coupon}", 0., 1., VariableType.Binary)
let lower = 0.5 * target5 + 0.25 * target25 + 0.125 * target125 + 0.0625 * target0625
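Just to spell out what that encoding gives us: lower can take any value from 0.0 up to 0.9375 in steps of 0.0625 (sixteenths), and the idea was to plug it into the same kind of upper/lower-limit constraints as in the previous two iterations.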
This approach also took a very long time to reach the solver (over 2 hours), and so was discarded.
Fourth iteration:
For the fourth iteration, I went back to a single integer variable for the target (3.0 to 97.0), but reformulated the constraints to work with the difference between target * total and the subset total:
// single integer target variable (3 to 97, interpreted as a percentage via the * 0.01 below)
let target = new Variable("target percent for popA1 and popB1", 3.0, 97.0, VariableType.Integer)
optanoModel.AddConstraint(
    Constraint.LessThanOrEqual(Expression.Absolute(popATotal * target * 0.01 - popA1Total),
        popATotal * 0.025),
    "popA1 in tolerance limit"
)
optanoModel.AddConstraint(
    Constraint.LessThanOrEqual(Expression.Absolute(popBTotal * target * 0.01 - popB1Total),
        popBTotal * 0.025),
    "popB1 in tolerance limit"
)
This ended with an exception because no BigM value had been supplied for these constraints, so I went back and calculated one:
// BigM estimates: the total Value of everything in popA/popB outside the respective subsets
let bigMa = popA |> List.except popA1 |> List.map (fun x -> x.Value) |> List.sum
let bigMb = popB |> List.except popB1 |> List.map (fun x -> x.Value) |> List.sum
let aExp = Expression.Absolute(popATotal * target * 0.01 - popA1Total)
let bExp = Expression.Absolute(popBTotal * target * 0.01 - popB1Total)
aExp.BigM <- float bigMa
bExp.BigM <- float bigMb
This took 76 minutes and also ended with the same non-convexity error.
I did see in the CPLEX documentation that there is an option to enable solving of non-convex problems by setting OptimalityTarget to 3, but I haven't found a way to access that setting through OPTANO or its documentation.
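For reference, if we were talking to CPLEX directly through its .NET (Concert) API, my understanding from the CPLEX docs is that the setting would look something like the sketch below; what I'm missing is how (or whether) OPTANO lets us reach the underlying Cplex instance or pass this parameter through:

open ILOG.CPLEX

// Sketch only, based on the CPLEX documentation rather than anything OPTANO-specific:
// OptimalityTarget = 3 asks CPLEX to search for a global optimum of a non-convex
// quadratic model instead of rejecting it as non-convex.
let cplex = new Cplex()
cplex.SetParam(Cplex.Param.OptimalityTarget, 3)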