"Terminated (singular KKT matrix)"


Doron

unread,
Jun 4, 2009, 12:19:25 PM6/4/09
to CVXOPT
Hi,

I encountered the following problems while using solvers.qp:

(This is a simple example of something I saw in a much more complex
situation.)

Consider the following matrices:

q = matrix([ -1. , -2. ])
P = matrix([ 0. , 1 , 1 , 2 ],(2,2))
G = matrix([ -1. , 0 , 1 , 0 , -1 , 2 ],(3,2))
h = matrix([ 1. , 1 , 0 ])

## When I try to solve this by:
solution = solvers.qp( P, q, G, h)

I get this message:
Terminated (singular KKT matrix).

And a solution of (0,0), which is clearly not optimal: (2,-1) is
better.

I suspect it has to do with the fact that q also appears as one of the
rows of G, which is the one thing this simple example has in common
with my original problem, but I don't see why it should have this effect.

Any idea, anyone?
Thanks!

Jeffery Kline

unread,
Jun 4, 2009, 3:33:47 PM6/4/09
to cvx...@googlegroups.com
Hi Doron, It looks like your problem is not convex -- the matrix P
has a negative eigenvalue.
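
A quick way to check this (just a small sketch, assuming numpy is available):

import numpy as np

# the same data as your P above, written out as a 2x2 array
P = np.array([[0., 1.], [1., 2.]])
print(np.linalg.eigvalsh(P))   # roughly [-0.414, 2.414]: one negative eigenvalue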

--Jeff

Doron

unread,
Jun 4, 2009, 7:21:58 PM6/4/09
to CVXOPT
Thanks, Jeff.
You're right, of course, I'm sorry.

But in the original instance where this happened, the matrix P was
indeed positive semidefinite.

After investigating this a bit, I found that the problem was this:
I wanted the minimal possible value of the quadratic part given a
(feasible!) value of the linear part, so I added a constraint saying
that the linear part is at most a constant C, and put a very large
coefficient in front of the quadratic part.

I got the "Terminated (singular KKT matrix)" message whenever I used a
coefficient larger than a certain value: it worked with small
coefficients and failed with larger ones.

Anyway, I'm trying to reproduce the same problem with a simpler instance.

Thanks,
Doron



Joachim Dahl

unread,
Jun 5, 2009, 3:22:51 AM6/5/09
to cvx...@googlegroups.com
Feasibility isn't detected by the QP solver, so if you have an infeasible
problem, you will either get the "singular KKT matrix" error, or the solver
will terminate after the maximum number of iterations.

A very large dynamic range of values in the problem parameters will
invariably make the problem harder to solve in finite precision.

If you post a simple example, we can experiment a bit with normalization
of the objective and constraints, and see if it helps.

Joachim

Doron wrote:
> You're right, of course, I'm sorry.
>
> But in the original instance where this happened, the matrix P was
> indeed positive semidefinite.
> I'm trying to reproduce the same problem with a simpler instance.
>
> I suspect, though, that what actually happens is that I try to
> minimize 1/2 x^t P x + q^t x,
> and I have a bunch of constraints, one of which is that
> q^t x < constant
>
> It could be that because of the other constraints, this is not
> feasible, but instead of getting a "domain error", I get this
> "Terminated (singular KKT matrix)" message.
>
> Is that possible?
>
> Thanks,
> Doron
>

Doron

unread,
Jun 5, 2009, 3:20:12 PM6/5/09
to CVXOPT
Hi again,

I couldn't find a small example. But here's an example with 36
variables:

from cvxopt import matrix
from cvxopt import spdiag
from cvxopt import solvers
solvers.options['show_progress'] = False

small_P = matrix([[42.490 , 24.520 , 16.296 , 22.496 , 23.514 ,
6.767 , 23.524 , 25.851 , 27.678 , 14.417 , 27.660 , 27.697 , 24.877 ,
14.962 , 24.850 , 25.810 , 27.218 , 16.099 ], [24.520 , 71.290 ,
5.578 , 34.778 , 32.315 , 13.650 , 32.289 , 52.114 , 45.923 , 13.648 ,
45.904 , 46.023 , 67.110 , 14.666 , 36.678 , 52.056 , 43.421 ,
19.309 ], [16.296 , 5.578 , 88.098 , 14.219 , 17.706 , 9.775 ,
17.758 , 9.824 , 10.855 , 24.681 , 10.829 , 10.841 , 6.619 , 26.055 ,
17.736 , 9.801 , 25.236 , 27.125 ], [22.496 , 34.778 , 14.219 ,
44.667 , 34.695 , 14.455 , 34.680 , 35.685 , 37.009 , 17.049 ,
36.992 , 37.057 , 35.023 , 17.968 , 32.141 , 35.644 , 36.785 ,
20.142 ], [23.514 , 32.315 , 17.706 , 34.695 , 59.741 , 20.547 ,
59.710 , 35.032 , 37.004 , 20.339 , 36.986 , 37.062 , 32.950 ,
21.302 , 35.427 , 34.991 , 37.632 , 23.434 ], [6.767 , 13.650 ,
9.775 , 14.455 , 20.547 , 28.116 , 20.538 , 15.232 , 12.009 , 13.727 ,
11.997 , 12.050 , 14.019 , 14.219 , 17.149 , 15.214 , 16.107 ,
15.221 ], [23.524 , 32.289 , 17.758 , 34.680 , 59.710 , 20.538 ,
59.692 , 35.013 , 36.982 , 20.358 , 36.964 , 37.040 , 32.925 ,
21.320 , 35.423 , 34.972 , 37.622 , 23.450 ], [25.851 , 52.114 ,
9.824 , 35.685 , 35.032 , 15.232 , 35.013 , 53.850 , 44.850 , 16.187 ,
44.828 , 44.927 , 52.540 , 17.122 , 38.326 , 53.789 , 45.455 ,
21.105 ], [27.678 , 45.923 , 10.855 , 37.009 , 37.004 , 12.009 ,
36.982 , 44.850 , 70.026 , 15.457 , 69.996 , 70.104 , 45.742 ,
16.352 , 38.683 , 44.792 , 47.350 , 21.463 ], [14.417 , 13.648 ,
24.681 , 17.049 , 20.339 , 13.727 , 20.358 , 16.187 , 15.457 ,
29.207 , 15.436 , 15.463 , 14.246 , 28.547 , 19.219 , 16.171 ,
22.855 , 26.109 ], [27.660 , 45.904 , 10.829 , 36.992 , 36.986 ,
11.997 , 36.964 , 44.828 , 69.996 , 15.436 , 69.990 , 70.086 ,
45.722 , 16.330 , 38.662 , 44.771 , 47.329 , 21.440 ], [27.697 ,
46.023 , 10.841 , 37.057 , 37.062 , 12.050 , 37.040 , 44.927 ,
70.104 , 15.463 , 70.086 , 70.205 , 45.836 , 16.362 , 38.744 ,
44.869 , 47.421 , 21.500 ], [24.877 , 67.110 , 6.619 , 35.023 ,
32.950 , 14.019 , 32.925 , 52.540 , 45.742 , 14.246 , 45.722 ,
45.836 , 63.991 , 15.251 , 37.109 , 52.484 , 43.936 , 19.759 ],
[14.962 , 14.666 , 26.055 , 17.968 , 21.302 , 14.219 , 21.320 ,
17.122 , 16.352 , 28.547 , 16.330 , 16.362 , 15.251 , 28.222 ,
20.169 , 17.103 , 23.694 , 26.058 ], [24.850 , 36.678 , 17.736 ,
32.141 , 35.427 , 17.149 , 35.423 , 38.326 , 38.683 , 19.219 ,
38.662 , 38.744 , 37.109 , 20.169 , 44.191 , 38.282 , 38.522 ,
23.201 ], [25.810 , 52.056 , 9.801 , 35.644 , 34.991 , 15.214 ,
34.972 , 53.789 , 44.792 , 16.171 , 44.771 , 44.869 , 52.484 ,
17.103 , 38.282 , 53.748 , 45.399 , 21.082 ], [27.218 , 43.421 ,
25.236 , 36.785 , 37.632 , 16.107 , 37.622 , 45.455 , 47.350 ,
22.855 , 47.329 , 47.421 , 43.936 , 23.694 , 38.522 , 45.399 ,
62.744 , 26.996 ], [16.099 , 19.309 , 27.125 , 20.142 , 23.434 ,
15.221 , 23.450 , 21.105 , 21.463 , 26.109 , 21.440 , 21.500 ,
19.759 , 26.058 , 23.201 , 21.082 , 26.996 , 31.480 ]])

P = matrix( [ [ small_P, -small_P ], [ -small_P, small_P ] ] )

q = matrix(0.0,(36,1))

G = matrix([ spdiag([-1.]*36), spdiag([1.]*36) , matrix([1,0]*18 +
[0,1]*18, (2,36))])

h = matrix([0.0]*36 + [1.0]*38)

A = matrix( [ [1.0]*18 + [-1.0]*18 , [0.35, -0.75, -1.04, 0.12,
0.12, -0.25, -0.45, 0.35, -1.04, -0.25, -1.04, -0.61, 0.39, -0.25,
-0.75, 0.12, 0.12, -1.04, -0.71, 0.15, 0.44, -0.72, -0.22,
0.15, .09, -0.71, 0.44, 0.15, 0.44, 0.25, -0.99, 0.15, 0.15,
-0.22, -0.72, 0.44] ] ).trans()

b = matrix( [0.0, 0.54] )

sol1 = solvers.qp( P, q, G, h, A, b )["x"]
sol2 = solvers.qp( 10000*P, q, G, h, A, b )["x"]

I get a clean solution when I don't scale P in solvers.qp, but I get
the "Terminated (singular KKT matrix)." message when I multiply P by
10,000 as in the last line above.

I suspect it has to do with the fact that although small_P is positive
definite (all its eigenvalues are positive), P is singular, and because
of rounding errors it has slightly negative eigenvalues.
Could it be that CVXOPT can handle negative eigenvalues that are very
close to 0, but once they are scaled up by some factor (say, 10,000),
they become a bigger problem for the solver?

And perhaps the solution is to add a small diagonal matrix to my P, to
make it strictly positive definite?
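
For concreteness, a minimal sketch of what I mean (eps is just a
placeholder value I would still have to tune; P, q, G, h, A, b and
spdiag are the ones from the script above):

eps = 1e-8                              # placeholder regularization strength
P_reg = P + spdiag([eps] * 36)          # nudge P towards strict positive definiteness
sol3 = solvers.qp(10000 * P_reg, q, G, h, A, b)["x"]
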
Any other suggestions for avoiding this problem?

Thanks again,
Doron

Joachim Dahl

unread,
Jun 5, 2009, 3:20:28 PM6/5/09
to cvx...@googlegroups.com
If you scale the problem differently and solve

sol2 = solvers.qp( P, 1e-2*q, 1e-2*G, h, 1e-2*A, b )

instead of

sol2 = solvers.qp( 1e4*P, q, G, h, A, b )

then the algorithm converges.
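
This is just your original problem in the rescaled variable y = 100*x:
with y = 100*x, the constraints 1e-2*G*y <= h and 1e-2*A*y = b become
G*x <= h and A*x = b, and the objective 0.5*y'*P*y + 1e-2*q'*y becomes
0.5*x'*(1e4*P)*x + q'*x. So to compare with your sol2 you have to undo
the change of variables afterwards, roughly:

y = solvers.qp( P, 1e-2*q, 1e-2*G, h, 1e-2*A, b )["x"]
x = 1e-2 * y    # solution of the problem with 1e4*P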

Joachim
> (Still, this does not explain the difference made by the coefficient).
>
> Any suggestions as to how to avoid this problem?
>
> Thanks again,
> Doron

Joachim Dahl

unread,
Apr 2, 2013, 3:56:25 AM4/2/13
to cvx...@googlegroups.com
Rescaling the constraints Ax=b so that norm(A) < 1 (using, e.g., the max-abs-row-sum norm) often helps.
More generally, limiting the dynamic range of the problem data often helps.

A few large elements in the model generally destroy accuracy in the solver. For example:
* Redundant box-type constraints x[k] <= u for large u
* Big-M type penalty terms
* Ax=b where a few b[k] are very large, or a few A[k,j] are very large.

In such cases, you might be able to rescale your problem so that the solver converges, but essentially the model cannot accurately be described using double-precision floating point. The book "Numerical Optimization" by Nocedal and Wright has some good comments about scaling and using variables that are "of the same scale".
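
For the equality constraints, the scaling can be as simple as the following
sketch (plain numpy, nothing built into CVXOPT; the same idea applies to Gx <= h):

import numpy as np

def rescale_equalities(A, b):
    # Divide A and b by the max-abs-row-sum norm of A, so the scaled A
    # has norm 1 in that norm (dense data; purely illustrative).
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    nrm = np.abs(A).sum(axis=1).max()
    return (A / nrm, b / nrm) if nrm > 0 else (A, b)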




On Mon, Apr 1, 2013 at 8:40 PM, Sylvain Corlay <sylvain...@gmail.com> wrote:
Hi Joachim, 
I often encounter this 'singular KKT matrix' problem, and it disappears if I find a proper rescaling of the problem.
Do you have any rule of thumb for how to determine the rescaling factor?
S. 


Joachim Dahl

unread,
Apr 3, 2013, 1:53:56 AM4/3/13
to cvx...@googlegroups.com
There are potentially several advantages of using an SOC formulation instead of a quadratic formulation, but you will have to try it to see whether CVXOPT works better with one or the other. For the MOSEK solver, an SOC formulation is generally the best choice.


On Wed, Apr 3, 2013 at 1:16 AM, Sylvain Corlay <sylvain...@gmail.com> wrote:
Thank you for your answer! Apparently, I fall into the case
* Ax=b where a few b[k] are very large, or a few A[k,j] are very large.
The rescaling is not always successful. I am working on a nice reformulation as an SOCP, but I don't know if I will have similar issues. Is it likely to be more robust?
Best, 
Sylvain

Sylvain Corlay

unread,
Apr 3, 2013, 4:06:00 PM4/3/13
to cvx...@googlegroups.com
Thank you for your answer. I spent some time today reformulating various problems as SOCPs.

By the way, there is an inconsistency between the conversions from numpy arrays and from lists which is a bit misleading to me: one is row-major and the other is column-major:

In [18]:
print matrix( [[0., 13., 12.], [0., -3., -12.], [-1., 0., 0.]] )
[ 0.00e+00  0.00e+00 -1.00e+00]
[ 1.30e+01 -3.00e+00  0.00e+00]
[ 1.20e+01 -1.20e+01  0.00e+00]

In [20]:
print matrix( numpy.array([[0., 13., 12.], [0., -3., -12.], [-1., 0., 0.]]) )
[ 0.00e+00  1.30e+01  1.20e+01]
[ 0.00e+00 -3.00e+00 -1.20e+01]
[-1.00e+00  0.00e+00  0.00e+00]
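
For what it's worth, transposing the nested-list version makes the two
constructions agree (a quick check, redefining the array above):

A = numpy.array([[0., 13., 12.], [0., -3., -12.], [-1., 0., 0.]])
print matrix(A)             # rows of the numpy array stay rows
print matrix(A.T.tolist())  # nested lists are read as columns, so transpose first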

Another issue we have is that the numpy.array -> cvxopt.matrix conversion randomly crashes (about one time out of 100) on one-dimensional numpy arrays (cvxopt 1.1.5, numpy 1.6.2, Python 2.7.3). Are you aware of such issues? 
S. 

Martin

unread,
Apr 5, 2013, 3:20:57 AM4/5/13
to cvx...@googlegroups.com

The next release of CVXOPT will address some known issues with Numpy compatibility, but we will continue to use a column-major input format in cvxopt.matrix. This is consistent with how the data is actually stored. Numpy arrays can be stored as either row-major or column-major, but the array routine assumes row-major input regardless of how you decide to store the data. 


Martin 

Sylvain Corlay

unread,
Apr 8, 2013, 12:05:26 PM4/8/13
to cvx...@googlegroups.com
Alright, understood regarding the column-major list conversion.
I hope you guys find the reason for the random crash on matrix conversion.
Thanks for this great piece of software.

Sylvain Corlay

unread,
Apr 15, 2013, 2:17:44 PM4/15/13
to cvx...@googlegroups.com
Hi Joachim, 

Thanks for your advice regarding renormalization and the reformulation as an SOCP.

I was successful with the following renormalization policy for QPs:

- Each row A[i] of A, together with the corresponding b[i], is renormalized by sum(abs(A[i]))
- Each row G[i] of G, together with the corresponding h[i], is renormalized by sum(abs(G[i]))
- P and q are renormalized by the spectral norm of P.

It really seems to help the stability of the resulting quadratic program.
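
In numpy terms the rescaling looks roughly like this (a sketch for dense
data; I then rebuild cvxopt matrices from the rescaled arrays):

import numpy as np

def rescale_qp(P, q, G, h, A, b):
    # min 1/2 x'Px + q'x  s.t.  Gx <= h, Ax = b  (dense arrays, 1-D vectors)
    P, G, A = (np.asarray(M, dtype=float) for M in (P, G, A))
    q, h, b = (np.asarray(v, dtype=float).ravel() for v in (q, h, b))
    sg = np.abs(G).sum(axis=1); sg[sg == 0] = 1.0   # row sums of |G|
    sa = np.abs(A).sum(axis=1); sa[sa == 0] = 1.0   # row sums of |A|
    sp = np.linalg.norm(P, 2)                       # spectral norm of P
    return P / sp, q / sp, G / sg[:, None], h / sg, A / sa[:, None], b / sa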

Sylvain

Prateek Yadav

unread,
Mar 23, 2018, 7:00:16 AM3/23/18
to CVXOPT
Hello Everyone,

I am trying to solve an SDP using cvxpy with the CVXOPT solver. I have to solve it for various random graphs, and it looks like it works for some of them and fails for others. The random graphs are generated with a stochastic block model.

import numpy as np
import networkx as nx
import cvxpy as cvx
from scipy import sparse

def Compute_Lovasz_Theta(Graph, Solver=cvx.CVXOPT, Verbose=True):
    nv = Graph.number_of_nodes()
    ne = Graph.number_of_edges()
    A = nx.convert_matrix.to_numpy_matrix(Graph)
    # PSD matrix variable Z of size nv x nv.
    Z = cvx.Variable((nv, nv), PSD=True)
    # Constraints: unit trace, and Z[i,j] == 0 on every edge (i,j).
    constraints = [cvx.trace(Z) == 1,
                   Z[np.where(A == 1)] == 0]
    # Objective: maximize the sum of all entries of Z.
    obj = cvx.Maximize(cvx.sum(Z))
    # Form and solve the problem.
    prob = cvx.Problem(obj, constraints)
    prob.solve(solver=Solver, verbose=Verbose, max_iters=100)
    return sparse.csr_matrix(Z.value)

# SBM and its parameters (num_vertices, num_communities, community_labels,
# p_matrix) come from my own code and are not shown here.
model = SBM(num_vertices, num_communities, community_labels, p_matrix)

train_perc = 10
val_perc = 30
test_perc = 60
no_of_communities = 2
adjacency_matrix = np.matrix(model.block_matrix)
G_nx = nx.from_numpy_matrix(adjacency_matrix)

Compute_Lovasz_Theta(G_nx)

It gives the singular KKT error for some of the random graphs, even though the graphs are all connected.

    pcost       dcost       gap    pres   dres   k/t
 0: -1.0000e+00 -1.0000e+00  2e+02  4e+00  1e+00  1e+00
 1: -4.5232e+00 -4.5713e+00  1e+01  4e-01  1e-01  5e-02
 2: -2.7795e+00 -2.7572e+00  2e+00  1e-01  3e-02  4e-02
 3: -2.5992e+00 -2.5529e+00  2e+00  9e-02  3e-02  7e-02
 4: -2.3703e+00 -2.3471e+00  9e-01  3e-02  9e-03  3e-02
 5: -2.1825e+00 -2.1728e+00  2e-01  7e-03  2e-03  1e-02
 6: -2.1055e+00 -2.0964e+00  2e-01  4e-03  1e-03  1e-02
 7: -2.0806e+00 -2.0761e+00  1e-01  2e-03  4e-04  5e-03
 8: -2.0391e+00 -2.0354e+00  7e-02  5e-04  1e-04  4e-03
 9: -2.0244e+00 -2.0226e+00  3e-02  2e-04  4e-05  2e-03
10: -2.0198e+00 -2.0180e+00  3e-02  1e-04  4e-05  2e-03
11: -2.0117e+00 -2.0109e+00  1e-02  3e-05  9e-06  8e-04
12: -2.0094e+00 -2.0085e+00  2e-02  3e-05  8e-06  9e-04
13: -2.0052e+00 -2.0049e+00  5e-03  7e-06  2e-06  4e-04
14: -2.0035e+00 -2.0032e+00  6e-03  4e-06  1e-06  3e-04
15: -2.0022e+00 -2.0021e+00  2e-03  1e-06  3e-07  2e-04
16: -2.0018e+00 -2.0016e+00  3e-03  1e-06  3e-07  2e-04
17: -2.0011e+00 -2.0010e+00  1e-03  3e-07  9e-08  8e-05
18: -2.0010e+00 -2.0009e+00  2e-03  3e-07  9e-08  9e-05
19: -2.0006e+00 -2.0006e+00  6e-04  9e-08  3e-08  4e-05
20: -2.0005e+00 -2.0004e+00  9e-04  9e-08  2e-08  4e-05
21: -2.0003e+00 -2.0003e+00  3e-04  2e-08  6e-09  2e-05
22: -2.0003e+00 -2.0002e+00  4e-04  2e-08  7e-07  2e-05
23: -2.0002e+00 -2.0002e+00  2e-04  7e-09  4e-07  1e-05
24: -2.0001e+00 -2.0001e+00  2e-04  6e-09  4e-06  1e-05
Terminated (singular KKT matrix).

Martin

unread,
Mar 23, 2018, 7:10:03 AM3/23/18
to CVXOPT
It looks like numerical difficulties arise before the default stopping criteria are met. You can try changing the default options (see http://www.cvxpy.org/en/latest/tutorial/advanced/#setting-solver-options).

The first thing to try would be to set refinement=2, and if that doesn't work, try changing abstol, reltol, and/or feastol.
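
For example (the values below are only illustrative, and assume these option names are forwarded to CVXOPT as described in the docs linked above):

prob.solve(solver=cvx.CVXOPT, verbose=True,
           refinement=2,     # extra iterative-refinement steps in the KKT solves
           abstol=1e-6, reltol=1e-6, feastol=1e-6,
           max_iters=200)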

SERGE BENYAMIN

unread,
Oct 23, 2019, 4:15:49 PM10/23/19
to CVXOPT
Hello Sylvain,

First, thank you very much. It has been a long time since you wrote those lines, but they answered my problem. The thing is, I have been reading the docs and I can't understand why rescaling the problem in this way, especially with the spectral norm of P, helps.

Where did you read about it, or can you explain why?

Best regards,

Serge