CVXOPT SDP solution not positive definite

Pravin Bezwada

Apr 24, 2015, 7:33:49 AM4/24/15
to cvx...@googlegroups.com
Hi, 

I am attempting to optimize the following problem, but CVXOPT does not return a positive semidefinite matrix (it does return a symmetric one), even though I believe I have set up the constraints correctly.

The code is below. Any help is much appreciated.

Best regards,
Pravin


    # maximize     Tr(MY)
    # subject to
    #           ||Y|| - kz <= 0
    #           Tr(Y) - z   = 0
    #           Tr(BY)      = 1
    #           Y          >= 0 - positive semidefinite
    #
    # where
    #           Y =    X             z =     1
    #               --------             ----------
    #                Tr(BX)                Tr(BX)
    #          
    #           M = A'B A           
    #           B = covariance matrix
    #           X = xx'    - x are the weights
    #           k = max no. of stocks in mean reverting portfolio
    #
    # build model and compute matrices for CVXOPT
    # CVXOPT requires the SDP in standard form:
    # 
    # minimize        c'Y
    #
    # subject to:
    #            G Y + s = h
    #            A Y     = b
    #
    #
    # note: we use the technique from Fanfan's solution to represent
    # the L1-norm constraint ||Y|| in CVXOPT by introducing
    # new variables.
    
    # The optimization variables are therefore Y, z, and the
    # new variables for the L1 norm (call them W)
    # total variables = 2 x n x n + 1
    M = np.dot(Ahat.T, B)
    M = np.dot(M, Ahat)
    
    (n,n) = np.shape(M)
    
    # the objective coefficient vector has 2 x n x n + 1 entries
    c = np.ravel(M.T).tolist() # constants for Y i.e. n x n
    c = [x * -1.0 for x in c]  # negate because CVXOPT minimizes
    c = c + ([0] * (n*n))      # constants for new variables W
    c.append(0)                # constant for z
    c = np.asarray(c) 
    
    # now we construct matrix A. It has 3 parts
    # a1 for Tr(Y) - z   = 0
    # a2 for Tr(BY)      = 1
    # a3 for sum(W) - kz = 0
    
    a1 = np.ravel(np.eye(n)).tolist() # for Y
    a1 = a1 + ([0] * (n*n))           # for W
    a1.append(-1.0)                   # for z
    
    a2 = np.ravel(B.T).tolist()     # for Y
    a2 = a2 + ([0] * (n*n+1))       # for W and z

    a3 = [0] * (n*n)                # for Y
    a3 = a3 + ([1] * (n*n))         # for W
    a3.append(-context.K)           # for z

    # now A = [a1, a2, a3]'
    Alist = [np.asarray(a1),np.asarray(a2),np.asarray(a3)]
    A = np.matrix(Alist)

    context.H = np.zeros((2*n*n+1,1)) 
    context.G = getG(n,context.K) 
    context.Hs = getHs(n)
    context.Gs = getGs(n)
    
    
    #CVXOPT begins
    cm = matrix(c)
    hm = matrix(context.H)
    Gm = matrix(context.G)
    Am = matrix(A)
    bm = matrix([0.0,1.0, 0.0])
    sol = sdp(c=cm,Gl=Gm,hl=hm,Gs=context.Gs,hs=context.Hs,A=Am,b=bm)

    
    if sol['status'] != 'optimal':
        log.info(sol['status'])
        return

    Y = matrix(sol['x'][0:(n*n)],(n,n))
    
    if (np.any(np.linalg.eigvals(Y) < 0)):
        log.info('Y is not positive semidefinite')
        
def getHs(n):
    return [matrix(np.zeros((n, n)))]  # zero right-hand side for the PSD cone
    
def getGs(n):
    G = np.zeros((n**2,2*n**2+1))
    
    row = 0
    for i in range(0,n):
        for j in range(0,n):
            g = np.zeros((n,n))
            
            if i == j:
                g[i,j] = -2.0
            else:
                g[i,j] = -1.0
                g[j,i] = -1.0
            
            g = np.ravel(g).tolist()
            g = g + ([0] *(n**2+1))
            G[row,:] = g
            row += 1
    
    return [matrix(G)]

def getG(n, k):
    # we have n x n variables in Y
    # we introduce n x n new variables for the L1-norm constraint
    # we have one more variable for z >= 0
    
    G = np.zeros((2*n*n+1, 2*n*n + 1)) 

    
    # rows enforcing -Y_ij - W_ij <= 0  (i.e. Y_ij >= -W_ij)
    for i in range(0,n*n):
        G[i,i] = -1.0
        G[i,n*n+i] = -1.0
        
    # rows enforcing  Y_ij - W_ij <= 0  (i.e. Y_ij <= W_ij); together |Y_ij| <= W_ij
    for i in range(0, n*n):
        G[n*n+i,i] = 1.0
        G[n*n+i,n*n+i] = -1.0

    # constraint for z: -z <= 0, i.e. z >= 0
    G[2*n*n,-1] = -1
    
    return G

Martin

Apr 24, 2015, 9:55:24 AM4/24/15
to cvx...@googlegroups.com
You can use CVXPY (http://www.cvxpy.org) to set up the problem. It will look something like this:

from cvxpy import *

# Define problem data (M, B, k, n), where M = A'*B*A as in the original post
# ...

# Variables
Y = Semidef(n)  # create positive semidefinite variable of order n
z = Variable()

# Create problem 
obj = Maximize(trace(M*Y))
constraints = [sum_entries(abs(Y)) <= k*z, trace(Y) == z, trace(B*Y) == 1.0]
prob = Problem(obj, constraints)
prob.solve()
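
After solving, the numerical matrix can be read from Y.value; checking its smallest eigenvalue is a quick sanity check. A minimal sketch, assuming the pre-1.0 CVXPY interface used above:

import numpy as np

Yval = np.asarray(Y.value)                          # numerical solution
print(prob.status, np.linalg.eigvalsh(Yval).min())  # smallest eigenvalue of Y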



johns...@gmail.com

Nov 4, 2016, 7:00:54 AM11/4/16
to CVXOPT
Hi,

I have a similar issue with CVXOPT.
As Martin recommended, I use CVXPY to define my SDP.
My SDP consists of a number of constraints as well as a positive semidefinite variable X (created with cvxpy.Semidef(N)).
When I solve the problem, the CVXOPT solver reports an optimal solution ("Optimal solution found"). However, the computed matrix X is not positive semidefinite as intended.
As far as I can tell, my constraints are implemented correctly.
Does anyone have an idea how this is possible?

Thank you.

With kind regard,
John

Joachim Dahl

Nov 4, 2016, 7:04:55 AM11/4/16
to cvx...@googlegroups.com
How small is the smallest eigenvalue?  You can use dsyevd to test that.

If it's close to zero, for example -1e-8, then it's just a matter of round-off errors and you can probably still use the solution.
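
For example (a rough sketch using cvxopt.lapack.syevd; numpy.linalg.eigvalsh would work just as well):

from cvxopt import matrix, lapack
import numpy as np

def smallest_eigenvalue(Y):
    A = matrix(np.asarray(Y, dtype=float))  # work on a copy; syevd overwrites its input
    n = A.size[0]
    W = matrix(0.0, (n, 1))
    lapack.syevd(A, W)                      # eigenvalues in ascending order
    return W[0]                             # the smallest one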


johns...@gmail.com

Nov 7, 2016, 4:16:47 AM11/7/16
to CVXOPT
First of all, thanks for your reply!

My smallest eigenvalue is around -8e-11, so quite small...
But how do I decide whether a given solution is acceptable? (Where is the cut-off?)
The matrix X is still not positive semidefinite, which is crucial for me (as a check: I could not compute a Cholesky decomposition, which should be possible if X > 0).

Joachim Dahl

Nov 7, 2016, 4:28:56 AM11/7/16
to cvx...@googlegroups.com
That sounds like a very accurate solution.  You cannot use a general Cholesky factorization to verify your results - it would require X to be strictly positive definite.  Computing eigenvalues is more appropriate.

A similar issue is present for all interior-point methods for linear programming as well.  The solver may return a slightly negative x, but as long as it is sufficiently close to 0 it is considered OK.  That is not an artifact of the optimization algorithms - it is a consequence of doing computations in finite precision.  LAPACK's eigenvalue routines may likewise return slightly negative eigenvalues for a semidefinite matrix, etc.  So in practice it is fine to treat a small number such as -1e-10 as zero (assuming your data are sensibly scaled).
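
In practice that might look like the sketch below; the helper names and the 1e-9 tolerance are just placeholders and should be scaled to your data. Clipping the tiny negative eigenvalues also gives you a matrix that can be factorized if you really need that:

import numpy as np

def is_psd(X, tol=1e-9):
    # symmetric eigenvalue check: treat eigenvalues above -tol as nonnegative
    return np.linalg.eigvalsh(X).min() >= -tol

def clip_to_psd(X):
    # zero out tiny negative eigenvalues and rebuild the matrix
    w, V = np.linalg.eigh(X)
    return np.dot(V * np.maximum(w, 0.0), V.T)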


johns...@gmail.com

Nov 21, 2016, 4:58:42 AM11/21/16
to CVXOPT
Thanks again for the advice. I am going to implement the eigenvalue approach. And thanks a lot for the extra comment; I will keep it in mind.