non-linear convex problem with sensitivity issue


sl

Mar 3, 2017, 8:17:37 PM
to CVXOPT
Hi,

I am trying to solve a non-linear optimization problem and am running into a data-sensitivity issue. Here is a simplified version that already shows the problem; of course I don't need the log for this simple setup itself, but I do need it in the more general setup:

Given [r0, r1], find [w0, w1] that
    minimizes -log(1 + r0*w0 + r1*w1)
    subject to w0 >= 0, w1 >= 0, w0 + w1 <= 1.0, and -log(1 + r0*w0 + r1*w1) <= -log(0.9)

so
    f0 = -log(1+ r0*w0 + r1*w1)
    f1 = f0+ log(0.9) 
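As a sanity check (not in the original post): since -log is strictly decreasing, the constraint f1 <= 0 is equivalent to the linear condition 1 + r0*w0 + r1*w1 >= 0.9. A quick numeric check with the (-0.2, 0.1) data and the reported optimum (0, 1):

```python
import math

r = (-0.2, 0.1)   # sample data from the post
w = (0.0, 1.0)    # the reported optimal point for this r

z = 1.0 + r[0] * w[0] + r[1] * w[1]
f0 = -math.log(z)
f1 = f0 + math.log(0.9)

# f1 <= 0 exactly when z >= 0.9, because -log is strictly decreasing
assert (f1 <= 0) == (z >= 0.9)
print(z, f0, f1)
```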

If (r0, r1) is (-0.2, 0.1), the solver finds the optimal solution (0, 1); but if I change it to (-0.4, 0.1), it gets stuck; in fact, all the points fed into F are zeros.
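Not from the original thread: to see exactly which points the solver passes to F, one can wrap it with a small tracing helper (the name `traced` is made up here):

```python
def traced(F):
    # wrap a cvxopt-style F(w=None, y=None) and log every point it is called at
    def wrapper(w=None, y=None):
        if w is not None:
            print("F called at w =", list(w))
        return F(w, y)
    return wrapper

# usage: sol = solvers.cp(traced(F), G, h)
```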

Here is the code... can someone please help? Thanks.

from cvxopt import matrix, solvers
import numpy as np
import math

r=matrix([-0.4, 0.1]).T

def F(w=None, y=None):
    # called with no arguments: return (number of nonlinear constraints, starting point)
    if w is None: return (1, matrix(0.0, (2,1)))
    # outside the box constraints: signal that w is not in the domain
    if min(w)<0 or sum(w)>1.0: return None

    f=matrix(0.0, (2,1))
    Df=matrix(0.0, (2,2))

    z= (1.0+ r*w)[0,0]
    f[0,0]= -math.log(z)
    Df[0,:]= - r/z

    f[1,0]= math.log(0.9)-f[0,0]
    Df[1,:]= Df[0,:]

    if y is None: return f,Df
    # y[0]*(Hessian of f0) + y[1]*(Hessian of f1)
    H=y[0]* r.T*r/(z*z) + y[1]*r.T*r/(z*z)
    return f,Df,H


# linear constraints: w >= 0 and w0 + w1 <= 1
G=matrix(0.0, (3,2))
h=matrix(0.0, (3,1))
G[0:2,:]=-matrix(np.identity(2))
G[2,:]= matrix(1.0, (1,2))
h[2]= 1.0

sol=solvers.cp(F, G, h)
print(sol['x'])

sl

Mar 4, 2017, 10:35:00 AM
to CVXOPT
There was a small typo in the code, but fixing it did not make much of a difference...

from cvxopt import matrix, solvers
import numpy as np
import math

r=matrix([-0.4, 0.1]).T

def F(w=None, y=None):
    if w is None: return (1, matrix(0.0, (2,1)))
    if min(w)<0 or sum(w)>1.0: return None
 
    f=matrix(0.0, (2,1))
    Df=matrix(0.0, (2,2))

    z= (1.0+ r*w)[0,0]
    f[0,0]= -math.log(z)
    Df[0,:]= - r/z

    f[1,0]= math.log(0.9)+f[0,0]
    Df[1,:]= Df[0,:]

    if y is None: return f,Df
    H=y[0]* r.T*r/(z*z) + y[1]*r.T*r/(z*z)
    return f,Df,H


G=matrix(0.0, (3,2))
h=matrix(0.0, (3,1))
G[0:2,:]=-matrix(np.identity(2))
G[2,:]= matrix(1.0, (1,2))
h[2]= 1.0

sol=solvers.cp(F, G, h)
print(sol['x'])