I am trying to understand how SLSQP handles constraints and what I can do to help it respect them.
I wrote a unit test for my soc_constraint() function, and it fails when I feed it the result vector of my optimization:
def test_soc_constraint(self):
    # this is from the result of the search algorithm
    x = np.array(
        [1.80000000e+04, 8.46081736e-13, 2.07000000e+04, 3.41556186e-02,
         2.43000000e+04, 2.48762429e+03, 2.70059724e+04, 1.22052973e-13,
         2.79000000e+04, 1.46120260e+02, 3.87000000e+04, 2.68250461e-13,
         4.95000000e+04, 1.88947644e+03, 5.31000000e+04, 1.46378797e-14,
         5.66990000e+04, 1.79850965e+03, 5.67000000e+04, 3.21744370e-06,
         5.76000000e+04, 6.08151283e+03, 9.27000000e+04, 1.48264459e+03,
         1.10700000e+05, 2.23267620e-04, 1.35900000e+05, 5.37007732e-04,
         1.44900000e+05, 8.99000000e+02, 1.64700000e+05, 1.01187451e-04]
    )  # type: np.ndarray
    soc, log = simple_booking.soc_constraint(x, True)
    print(log, soc)
    self.assertGreaterEqual(soc, 0)
This is my constraint function:
def soc_constraint(x, logging=False):
    log = []
    soc = SOC_MAX
    durations = x[1::2]  # the odd-indexed entries of x are durations
    for i, j in durations.reshape(-1, 2):  # walk the durations in pairs
        if logging:
            log.append(np.array([soc, i, j]))
        soc -= i * CHARGING_SOC_PER_SEC
        if soc < 0:
            break
        soc += j * CHARGING_SOC_PER_SEC
        if soc > SOC_MAX:
            soc = SOC_MAX - soc  # report the overshoot as a negative value
            break
    if not logging:
        return soc
    else:
        return soc, np.array(log)
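For completeness, the constants referenced above are module-level values along these lines (the numbers here are placeholders, not my real configuration):

SOC_MAX = 100.0               # placeholder: maximum state of charge
CHARGING_SOC_PER_SEC = 0.005  # placeholder: SOC gained/lost per second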
and I call it like this:
constraints.append({"type": "ineq", "fun": soc_constraint})
result:OptimizeResult = minimize(objective_func, np.array(x_initial),
args=[metrics_v, plan],
method="SLSQP",
bounds=bounds,
constraints=constraints,
options={"eps": 1 , "maxiter": 1000},
tol=1, )
The NumPy array in my test is the x vector of a successful optimization result very much like this one, just from a different run:
fun: -3088228.898083334
jac: array([ 0. , -169.95 , 0. , 204.37 ,
0. , -230.09 , 0. , 230.02277778,
0. , -219.09 , 0. , 206.2 ,
0. , -227.28 , 0. , 234.97 ,
-2.5549358 , -257.98857901, 0. , 257.99 ,
30.89673519, -230.91357867, 0. , 143.06 ,
0. , -213.77 , 0. , 163.94 ,
0. , -200.93 , 0. , 0. ])
message: 'Optimization terminated successfully'
nfev: 906
nit: 27
njev: 27
status: 0
success: True
x: array([1.80000000e+04, 8.46081736e-13, 2.07000000e+04, 3.41556186e-02,
2.43000000e+04, 2.48762429e+03, 2.70059724e+04, 1.22052973e-13,
2.79000000e+04, 1.46120260e+02, 3.87000000e+04, 2.68250461e-13,
4.95000000e+04, 1.88947644e+03, 5.31000000e+04, 1.46378797e-14,
5.66990000e+04, 1.79850965e+03, 5.67000000e+04, 3.21744370e-06,
5.76000000e+04, 6.08151283e+03, 9.27000000e+04, 1.48264459e+03,
1.10700000e+05, 2.23267620e-04, 1.35900000e+05, 5.37007732e-04,
1.44900000e+05, 8.99000000e+02, 1.64700000e+05, 1.01187451e-04])
The thing is that this constraint function is violated by that solution, and by every other solution SLSQP finds for me.
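Concretely, when I repeat the check from the unit test on the reported solution, it looks roughly like this (a sketch of the check, not the exact output):

soc, log = soc_constraint(result.x, True)
print(result.success, soc)  # success is True, yet soc comes back negative
assert soc >= 0             # fails, just like the unit test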
Could someone explain how that is possible? At what point in the search process is the constraint function called? I expect it to be called to find valid values for the search vector before the objective function is evaluated. Is that correct?
I know the constraint function is called (a lot) during the search, and I verified that it returns negative values along the way, before the result is returned. Are there conditions under which the constraints are ignored?
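For what it's worth, this is roughly how I did that verification (the wrapper and the violations list are only for illustration):

violations = []

def logged_soc_constraint(x):
    value = soc_constraint(x)
    if value < 0:
        violations.append(value)  # record every violating evaluation
    return value

constraints.append({"type": "ineq", "fun": logged_soc_constraint})
# after minimize() returns, the violations list is not empty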
I tried other algorithms (COBYLA and trust-constr), and they seem to get stuck near the initial values without varying them enough. Perhaps the algorithms available in scipy.optimize.minimize are not a good fit for my problem. Where can I learn more about how to classify my problem and pick a suitable algorithm for finding minima?