Discrete Lagrange Multiplier search

Mark Engelberg

Jul 3, 2018, 6:12:02 PM
to JAMES Users
Hi, I'm just starting to explore JAMES.  I'm really impressed with the clean architecture of the codebase, and I'm looking forward to using it on real problems.

Recently, to expand my knowledge of constraint solvers, I took this free online course:

In the section about local neighborhood search, the instructor pointed out that one of the challenges is that there is often no good way to find neighboring states that satisfy all of the problem's hard constraints.  Therefore, you often want to relax some constraints during the search by turning them into penalizing constraints, while making sure that all of these penalizing constraints are fully resolved by the time you return a solution.  He explained that assigning a fixed penalty to each penalizing constraint can require a lot of trial and error and fine-tuning to get right.

To address this, he talked about a technique called Discrete Lagrange Multiplier Search.  The idea is that whenever you reach a local maximum/minimum in which some penalizing constraints are unresolved, you automatically scale up the penalty associated with those constraints and continue or re-run the search.  Eventually, you end up in a local maximum/minimum with no unresolved penalizing constraints.  There is still a parameter to fine-tune (the scaling coefficient), but it seemed like this would be more robust than working out a fixed penalty for each constraint by hand.
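
At a local optimum, the update step boils down to something like this (illustrative Java; the names are made up and not taken from any particular library), where the search optimizes f(x) plus the weighted sum of the constraint violations:

class DlmUpdate {
    // Scale up the penalty weights (the "Lagrange multipliers") of the
    // constraints that are still violated at the current local optimum.
    // The penalized objective is f(x) + sum_i lambda[i] * violation_i(x).
    static void scaleViolatedPenalties(double[] lambda, double[] violations, double scale) {
        for (int i = 0; i < lambda.length; i++) {
            if (violations[i] > 0.0) {
                lambda[i] *= scale; // only unresolved constraints get heavier penalties
            }
        }
    }
}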

Is there anything like this in JAMES (I didn't see anything like it in my reading of the source)?  If not, is there some other convenient way of dealing with penalizing constraints that need to be resolved by the end of the search?

Thanks.

Herman De Beukelaer

Jul 15, 2018, 7:02:13 AM
to JAMES Users
Hi Mark,

Glad to hear that you find the framework useful!

There is currently no such approach implemented in JAMES. For now you have to choose between hard constraints (which cannot be violated during optimization) and soft constraints (which are not guaranteed to be resolved at the end of the search). I guess you could try to implement this yourself? You would need a search that wraps an arbitrary other search and repeatedly runs it for a certain time, number of iterations, ..., or until convergence at a local optimum. In between runs you can ask the solution which soft constraints are violated and upscale their penalties. The main search would then continue (at least) until all constraints have been resolved.
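
In very rough Java, the control flow I have in mind would look something like this (these interfaces are made up purely for illustration, they are not existing JAMES classes):

import java.util.Set;

// hypothetical building blocks, not part of JAMES
interface SoftConstraint<S> {
    double violation(S solution); // 0 means the constraint is resolved
}

interface ScalableProblem<S> {
    Set<SoftConstraint<S>> violatedConstraints(S solution);
    void scalePenalty(SoftConstraint<S> constraint, double factor);
}

interface InnerSearch<S> {
    S runUntilConverged(); // e.g. run for a fixed time, number of steps, or until a local optimum
}

class PenaltyScalingSearch<S> {

    private final InnerSearch<S> inner;
    private final ScalableProblem<S> problem;
    private final double scale; // the single scaling coefficient left to tune

    PenaltyScalingSearch(InnerSearch<S> inner, ScalableProblem<S> problem, double scale) {
        this.inner = inner;
        this.problem = problem;
        this.scale = scale;
    }

    S run() {
        while (true) {
            // run the wrapped search until it converges on the current penalized objective
            S best = inner.runUntilConverged();
            Set<SoftConstraint<S>> violated = problem.violatedConstraints(best);
            if (violated.isEmpty()) {
                return best; // all penalizing constraints resolved
            }
            // upscale the penalties of the constraints that are still violated
            for (SoftConstraint<S> c : violated) {
                problem.scalePenalty(c, scale);
            }
            // and rerun / continue the inner search with the updated penalties
        }
    }
}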

As an example, you could look at some of the existing searches in JAMES that wrap other searches, such as variable neighbourhood search or parallel tempering. This should get you started. If you have specific implementation questions, feel free to post them here.

Kind regards
Herman