Hi, I'm just starting to explore JAMES. I'm really impressed with the clean architecture of the codebase, and I'm looking forward to using it on real problems.
Recently, to expand my knowledge of constraint solvers, I took this free online course:
In the section on local neighborhood search, the instructor pointed out that there is often no good way to generate neighboring states that preserve all of the problem's hard constraints. A common workaround is to relax some of the hard constraints during the search by turning them into penalizing constraints, while making sure that all of these penalizing constraints are fully resolved by the time a solution is returned. He explained that assigning a fixed penalty to each penalizing constraint can take a lot of trial and error and fine-tuning to get right.
To address this, he described a technique called Discrete Lagrange Multiplier Search. The idea is that whenever the search reaches a local maximum/minimum in which some penalizing constraints are still violated, the penalties associated with those constraints are automatically scaled up and the search continues or is rerun, so that it eventually ends up in a local maximum/minimum with no unresolved penalizing constraints. There is still a parameter to fine-tune (the scaling coefficient), but this seemed more robust than hand-tuning a separate penalty for each constraint.
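To make the idea concrete, here is a rough, framework-agnostic sketch of the kind of thing I have in mind. None of the class or interface names below come from JAMES; they are all made up for illustration, with the inner local search left abstract:

    import java.util.Arrays;
    import java.util.List;
    import java.util.function.ToDoubleFunction;

    // Rough sketch of Discrete Lagrange Multiplier style penalty scaling.
    // Not JAMES API -- every name here is invented for illustration only.
    public class LagrangianPenaltySearch<S> {

        // A relaxed hard constraint: 0 when satisfied, a positive violation measure otherwise.
        public interface PenalizingConstraint<T> {
            double violation(T solution);
        }

        // The inner local search is left abstract: given a (penalized) evaluation
        // function and a starting solution, it returns a local optimum.
        public interface LocalSearch<T> {
            T maximize(ToDoubleFunction<T> evaluation, T start);
        }

        private final ToDoubleFunction<S> objective;              // value to maximize
        private final List<PenalizingConstraint<S>> constraints;  // relaxed hard constraints
        private final double[] weights;                           // current penalty weights
        private final double scale;                               // scaling coefficient (> 1)
        private final LocalSearch<S> localSearch;

        public LagrangianPenaltySearch(ToDoubleFunction<S> objective,
                                       List<PenalizingConstraint<S>> constraints,
                                       double initialWeight, double scale,
                                       LocalSearch<S> localSearch) {
            this.objective = objective;
            this.constraints = constraints;
            this.weights = new double[constraints.size()];
            Arrays.fill(this.weights, initialWeight);
            this.scale = scale;
            this.localSearch = localSearch;
        }

        // Penalized objective: original value minus weighted constraint violations.
        private double penalized(S solution) {
            double value = objective.applyAsDouble(solution);
            for (int i = 0; i < constraints.size(); i++) {
                value -= weights[i] * constraints.get(i).violation(solution);
            }
            return value;
        }

        // Alternate between local search and penalty scaling until a local optimum
        // without violations is found, or the restart budget runs out.
        public S run(S start, int maxRestarts) {
            S current = start;
            for (int restart = 0; restart < maxRestarts; restart++) {
                current = localSearch.maximize(this::penalized, current);
                boolean feasible = true;
                for (int i = 0; i < constraints.size(); i++) {
                    if (constraints.get(i).violation(current) > 0) {
                        weights[i] *= scale;   // bump the multiplier of each unresolved constraint
                        feasible = false;
                    }
                }
                if (feasible) {
                    break;                     // all penalizing constraints resolved
                }
            }
            return current;                    // caller should verify feasibility
        }
    }

The inner search is deliberately a black box here, since it would just be whatever neighborhood search is already in use; the only new moving parts are the per-constraint weights and the scaling step at each restart.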
Is there anything like this in JAMES? (I haven't spotted it yet in my reading of the source.) If not, is there some other convenient way to handle penalizing constraints that must be fully resolved by the end of the search?
Thanks.