Advice for modeling a quadratic(?) constraint

Jim Nardecchia

Dec 6, 2023, 10:10:31 AM
to OPTANO Modeling

Hello, I've been using OPTANO Modeling for years and it's been great. We're recently looking at enhancing our model, and I was looking for some guidance on how best to model a constraint that I'm having trouble with.

It's basically that we want to keep two percentages within a tolerance of each other; here is the formula, if it helps.

[Screenshot 2023-12-06 100816.png: |A1/PopA - B1/PopB| <= tolerance]

I know this would be a quadratic constraint and I’m not sure if that’s something you can just add to an existing model with only linear constraints or not. Also, I’m open to reformulating this as several linear constraints even if doing so wouldn’t be as precise, in this case “close” is actually good enough.

Please let me know if you have any questions or if I can clarify in any way.

Thanks

Jannick Lange

Dec 8, 2023, 9:52:59 AM
to OPTANO Modeling
Hi Jim,

I'm happy to hear that you're content with the OPTANO.Modeling library :)

Are PopA and PopB of constant size, and are only A1 and B1 variable (or are all 4 values variable)?
If A1 and B1 are the only variable part of that equation, you might be able to reformulate it into a set of linear constraints:
[modeling_ratio.png: A1 * PopB - B1 * PopA <= tolerance * PopA * PopB, and B1 * PopA - A1 * PopB <= tolerance * PopA * PopB]

When A1 >= B1, the 1st constraint will limit the difference to be within the tolerance. The 2nd constraint will be non-binding.
When B1 > A1, the 2nd constraint will ensure that the tolerance is not exceeded (and the 1st constraint will be non-binding).
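
A minimal sketch of those two constraints, assuming PopA and PopB are plain constants and a1/b1 are the model variables (all identifiers and values here are illustrative, not your actual data):

// |A1/PopA - B1/PopB| <= tolerance, multiplied through by PopA * PopB
// to avoid dividing by variables:
let popA, popB, tolerance = 1000.0, 800.0, 0.05
model.AddConstraint(
  Constraint.LessThanOrEqual(popB * a1, popA * b1 + tolerance * popA * popB),
  "ratio tolerance, A side")
model.AddConstraint(
  Constraint.LessThanOrEqual(popA * b1, popB * a1 + tolerance * popA * popB),
  "ratio tolerance, B side")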

Even if PopA/PopB are of non-constant size, I'd probably still use this approach, even though it might turn the model quadratic.

Please let me know whether this already solves your issue, or if we should try to improve the formulation further.

Best Regards
Jannick

Jim Nardecchia

Jan 3, 2024, 9:21:02 AM
to OPTANO Modeling
Hey Jannick, thanks so much for your reply! We weren't able to get around to implementing it until last week due to the holidays. We've now added it to our model, but it causes the model to spend many, many hours normalizing the constraints; we've yet to see it complete that step, even after running overnight in a Release build.

To your earlier question: PopA/PopB ARE indeed of non-constant size, so this is most definitely quadratic, and therefore this may be expected on a dataset of a certain size. On the dataset I'm using to test, there were already 5704 constraints; adding yours added 14 more (your 2 constraints across 7 groupings). We also tried adding just one constraint and using the Expression.Absolute method, but that resulted in errors because the computed BigM was infinite for the larger groupings. This happened even after we pre-scaled our numbers way down (multiplied them all by 0.00001 before they go into the Expression.Sum).
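
The pre-scaling looked roughly like this (a sketch with illustrative names, not our exact code):

let scale = 0.00001
// every Value coefficient is shrunk before it enters the sum
let popA1Total = Expression.Sum [ for x in popA1 -> (x.Value * scale) * vars[x] ]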

Just thought I'd reach out to see if you had any more suggestions.

Thanks so much for your help!

Jannick Lange

Jan 10, 2024, 7:47:18 AM
to OPTANO Modeling
Hi Jim,

how did you implement the constraint using an Abs expression?
I don't think there is an operator in Modeling that supports the division of 2 variables.
If you wrote something along the lines of "var division = 1*x / 1*y;", you should note that this will be interpreted as "var division = (1 * x / 1) * y;", i.e. "x * y"!

In general: when you try to build an absolute operator around a non-linear expression, you can pass a manually calculated "BigM" value in the Abs-constructor. It is sometimes not trivial to derive a proper BigM value for non-linear expressions, but it is usually feasible for a user with "external domain knowledge".
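
A minimal sketch of that (someNonLinearExpression and upperBound are placeholders you must supply from domain knowledge; assigning the BigM property, as done later in this thread, is an alternative to the constructor argument):

let absExpr = Expression.Absolute(someNonLinearExpression)
// manually derived bound on the magnitude of someNonLinearExpression
absExpr.BigM <- 1000000.0
model.AddConstraint(Constraint.LessThanOrEqual(absExpr, upperBound), "abs within bound")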

Are the existing 5704 constraints linear, or also quadratic? How many variables occur per constraint?
The normalization of this number of constraints should not take that long - usually, I'd expect them to be normalized within a couple of seconds at most.

Would you be willing to share your model, so that I can investigate the root cause for that?
Unfortunately, the LP Reader/Writer of OPTANO Modeling does not support non-linear models, so we'd have to find some other way (e.g. GitHub/GitLab) to get your model/data across to me.

Best regards,
Jannick

Jim Nardecchia

Jan 26, 2024, 2:45:34 PM
to OPTANO Modeling
Hey Jannick, sorry it's been so long. Just wanted to let you know that, due to the issues we've been having, we pivoted and are doing some pre-calculations that allow us to turn these into linear constraints, in an attempt to keep moving forward for now. We plan on getting back to the quadratic approach soon; we're not ready to give up yet, as there's a very good chance we've modeled it incorrectly or need to configure things differently, or something to that effect.

Your suggestion is very appreciated, we will look at implementing it when we can and will get back to you.

Thanks!

Trey Westrich

Mar 7, 2024, 9:34:59 AM
to OPTANO Modeling

Hi Jannick,

I'm a colleague of Jim's and have been working with him to look into these quadratic implementations. I have come up with a somewhat redacted description of our problem, along with various examples of the approaches we've taken.

We have a list of options for a set of products, and we are using these options as binary decision variables to create constraints and objectives for the model, based on various attributes of the options and products (in these examples, Value on an item is always a positive number). All of the current constraints generally follow a formula such as this (to keep various subsets PopA1 below 10% of AllPopA):

[optano_1.png: sum over PopA1 of (Value_i * x_i) <= 0.10 * sum over AllPopA of (Value_i * x_i)]
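
A minimal sketch of that pattern, in the same style as the code below (allPopA is an illustrative name):

let popA1Total = Expression.Sum [ for x in popA1 -> x.Value * vars[x] ]
let allPopATotal = Expression.Sum [ for x in allPopA -> x.Value * vars[x] ]
model.AddConstraint(
  Constraint.LessThanOrEqual(popA1Total, 0.10 * allPopATotal),
  "popA1 below 10% of AllPopA")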

As Jim described, new constraints in progress are trying to keep two populations within some percentage of each other. We were able to implement your first suggestion as written (in F#):

let popA1Total = Expression.Sum [ for x in popA1 -> x.Value * vars[x] ]
let popATotal = Expression.Sum [ for x in popA -> x.Value * vars[x] ]
let popB1Total = Expression.Sum [ for x in popB1 -> x.Value * vars[x] ]
let popBTotal = Expression.Sum [ for x in popB -> x.Value * vars[x] ]

model.AddConstraint(
  Constraint.LessThanOrEqual(
    Expression.Absolute(popA1Total * popBTotal - popB1Total * popATotal),
    0.05 * popATotal * popBTotal
  ),
  "popA1 and popB1 within 5% - a"
)

model.AddConstraint(
  Constraint.LessThanOrEqual(
    Expression.Absolute(popB1Total * popATotal - popA1Total * popBTotal),
    0.05 * popATotal * popBTotal
  ),
  "popA1 and popB1 within 5% - b"
)

This worked exactly as expected when using a smaller, controlled data set (150 total variables). However, our normal data set runs in the numbers of 100,000+ variables overall; this particular subset of variables tends to be at minimum 10,000 for popA and popB, and both popA1 and popB1 can be upwards of 5,000 each. The resulting quadratic expression was tens of millions of terms (a 5,000-term sum multiplied by a 10,000-term sum expands to roughly 50 million quadratic terms), and it never makes it to the solver (at least within several days of letting it run) due to the time it takes to normalize the constraints.

Linearized version:

After some deliberation, we realized that we could do an initial solve, see where the sums land as a percentage, and use this information to determine a target percentage for both sums to be within range of, resulting in a linear formulation of the constraint:

[optano_2.png: (targetPercent - 0.025) * sum(PopA) <= sum(PopA1) <= (targetPercent + 0.025) * sum(PopA), and likewise for PopB]

let targetPercent = 0.5

model.AddConstraint(
  Constraint.LessThanOrEqual(popA1Total, (targetPercent + 0.025) * popATotal),
  "popA1 below upper limit"
)
model.AddConstraint(
  Constraint.GreaterThanOrEqual(popA1Total, (targetPercent - 0.025) * popATotal),
  "popA1 above lower limit"
)

model.AddConstraint(
  Constraint.LessThanOrEqual(popB1Total, (targetPercent + 0.025) * popBTotal),
  "popB1 below upper limit"
)
model.AddConstraint(
  Constraint.GreaterThanOrEqual(popB1Total, (targetPercent - 0.025) * popBTotal),
  "popB1 above lower limit"
)

This approach is functional, works as intended, and keeps all percentages in the appropriate ranges. While working on it, I had the thought to make targetPercent a variable within the model, to remove the human-error aspect from the solution and let the model pick where the target should be.

First iteration of new quadratic:

let lowerbound = new Variable("target percent for popA1 and popB1", 0.005, 0.945, VariableType.Continuous)

model.AddConstraint(
  Constraint.LessThanOrEqual(popA1Total, popATotal * (lowerbound + 0.05)),
  "popA1 below upper limit"
)
model.AddConstraint(
  Constraint.GreaterThanOrEqual(popA1Total, popATotal * lowerbound),
  "popA1 above lower limit"
)

model.AddConstraint(
  Constraint.LessThanOrEqual(popB1Total, popBTotal * (lowerbound + 0.05)),
  "popB1 below upper limit"
)
model.AddConstraint(
  Constraint.GreaterThanOrEqual(popB1Total, popBTotal * lowerbound),
  "popB1 above lower limit"
)

This iteration still took a long time to normalize the constraints and create the CPLEX INumVars (approx. 1 hour), and ended with CPLEX Error 5002: 'GDI' is not convex (presumably because the products of the lowerbound variable with the population sums make these constraints bilinear).

Second iteration:

let lower = new Variable("target percent for popA1 and popB1", 1.0, 94.0, VariableType.Integer)

model.AddConstraint(
  Constraint.LessThanOrEqual(popA1Total, popATotal * (lower + 5.0) * 0.01),
  "popA1 below upper limit"
)
model.AddConstraint(
  Constraint.GreaterThanOrEqual(popA1Total, popATotal * lower * 0.01),
  "popA1 above lower limit"
)

model.AddConstraint(
  Constraint.LessThanOrEqual(popB1Total, popBTotal * (lower + 5.0) * 0.01),
  "popB1 below upper limit"
)
model.AddConstraint(
  Constraint.GreaterThanOrEqual(popB1Total, popBTotal * lower * 0.01),
  "popB1 above lower limit"
)

This strictly integer version normalized into CPLEX somewhat faster (about 49 minutes), but also gave the same CPLEX error.

Third iteration:

Somewhat similar to the above two, but using a small set of binary variables:

let target5 = new Variable($"tpo target 0.5 for coupon {coupon}", 0., 1., VariableType.Binary)
let target25 = new Variable($"tpo target 0.25 for coupon {coupon}", 0., 1., VariableType.Binary)
let target125 = new Variable($"tpo target 0.125 for coupon {coupon}", 0., 1., VariableType.Binary)
let target0625 = new Variable($"tpo target 0.0625 for coupon {coupon}", 0., 1., VariableType.Binary)
let lower = 0.5 * target5 + 0.25 * target25 + 0.125 * target125 + 0.0625 * target0625

This approach also took a very long time to reach the solver (over 2 hours), and so was discarded.

Fourth iteration:

On the fourth iteration, I went back to a single integer variable for the target (from 3.0 to 97.0), but reformulated the constraints to work with the difference between target * total and subset:

[optano_3.png: |target * 0.01 * sum(PopA) - sum(PopA1)| <= 0.025 * sum(PopA), and likewise for PopB]

optanoModel.AddConstraint(
  Constraint.LessThanOrEqual(
    Expression.Absolute(popATotal * target * 0.01 - popA1Total),
    popATotal * 0.025
  ),
  "popA1 in tolerance limit"
)

optanoModel.AddConstraint(
  Constraint.LessThanOrEqual(
    Expression.Absolute(popBTotal * target * 0.01 - popB1Total),
    popBTotal * 0.025
  ),
  "popB1 in tolerance limit"
)

This ended with an exception due to a BigM value not being supplied on these constraints, so I went back and calculated one:

let bigMa = popA |> List.except popA1 |> List.map (fun x -> x.Value) |> List.sum
let bigMb = popB |> List.except popB1 |> List.map (fun x -> x.Value) |> List.sum
let aExp = Expression.Absolute(popATotal * target - popA1Total)
let bExp = Expression.Absolute(popBTotal * target - popB1Total)
aExp.BigM <- float bigMa
bExp.BigM <- float bigMb

This took 76 minutes, and also gave the not-convex error.

I did see in the CPLEX documentation that there is an option to enable solving of non-convex problems, setting OptimalityTarget to 3, but I haven't found a way to access that parameter through OPTANO.

Jannick Lange

Mar 11, 2024, 9:39:44 AM
to OPTANO Modeling
Hello Trey,

I have added the OptimalityTarget parameter to the CplexSolverConfiguration.
The updated NuGet package can be found here:
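
A usage sketch (only the OptimalityTarget property on CplexSolverConfiguration is confirmed above; the solver type name and the integer-typed value are assumptions, so check the updated package for the exact API):

let config = CplexSolverConfiguration()
config.OptimalityTarget <- 3 // per the CPLEX docs, 3 = search for a global optimum of a non-convex model
use solver = new CplexSolver(config)
let solution = solver.Solve(model)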

Best regards,
Jannick

Trey Westrich

Mar 13, 2024, 2:34:35 PM
to OPTANO Modeling
Hi Jannick,
Thank you for this package update! That exposes the parameter we believed we needed for Cplex, although Cplex is still returning an error -> CPLEX Error 5002: <constraint_id> is not convex

We will be reaching out to IBM for further Cplex support in an attempt to solve our problems, whether they are in our formulation or setup.

Thank you for all the assistance and advice with this problem!



Trey Westrich

Mar 20, 2024, 2:52:54 PM
to OPTANO Modeling
Hi Jannick, 

I wanted to follow up to let you (and anyone else interested) know that we received advice on another reformulation to accomplish this from someone in the IBM Cplex community.

Your first four inequalities are fine, and are linear except for the product of target and sum(popA) or sum(popB). You can linearize each of those products by introducing auxiliary variables (continuous over [0,1]) and a gaggle of constraints. Let x_i denote the binary variable indicating selection of object i and y_i the variable that will capture target * x_i. I'm going to assume target's domain is [0, 1], but you can try tightening it if you wish. The product of target and whichever sum we are looking at will be the sum of the y_i variables. The constraints defining y_i are:

y_i <= x_i
y_i <= target
y_i >= target - 1 + x_i

You can check that if x_i = 0 then y_i = 0, while if x_i = 1 then y_i = target.

My understanding of this, in layman's terms: instead of multiplying the target by the sum of the binary variables, use a continuous variable per item to represent how much of an item is chosen and sum those, while constraining each continuous variable to an upper bound of min(target percentage, item chosen) and a lower bound of (target - 1 + item chosen).
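
In case it helps others, a sketch of that construction in the style of the snippets above, assuming a continuous target in [0, 1] as in the quote (names are illustrative, and the same construction would be repeated for popB):

// Linearize y_i = target * x_i for each binary selection variable x_i.
let ys =
  popA
  |> List.mapi (fun i x ->
    let y = new Variable($"y_{i} for target product", 0., 1., VariableType.Continuous)
    model.AddConstraint(Constraint.LessThanOrEqual(1.0 * y, 1.0 * vars[x]), $"lin1_{i}")
    model.AddConstraint(Constraint.LessThanOrEqual(1.0 * y, 1.0 * target), $"lin2_{i}")
    model.AddConstraint(Constraint.GreaterThanOrEqual(1.0 * y, target - 1.0 + vars[x]), $"lin3_{i}")
    (x, y))
// target * popATotal is then replaced by this purely linear sum:
let targetTimesPopATotal = Expression.Sum [ for (x, y) in ys -> x.Value * y ]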

It does add a lot of extra constraints (three per item), but the functionality holds up well.
Thanks again for the assistance!

Trey