Running multiple Knitro instances in a single Python file


Ray W

Aug 12, 2018, 12:39:23 PM
to Artelys Knitro forum
Hello,

I'm trying to use a sequential estimator coupled with the EM algorithm, and I'm having trouble with the correct syntax in the callback functions. I'm using the Python interface of Knitro 10.3.0. In general, the code looks something like:

while (q has not converged):
    solve for x1 using knitro
    solve for x2 using knitro, using the estimated values from x1 before
    update q as a function of the two estimated vectors x1 and x2



In general, x1 and x2 are the solutions to two completely different problems, i.e. with different dimensions of the solution vector, different objectives, etc.

I have 2 questions:
1) I'm under the impression that I can just create two knitro instances, e.g. kc1 and kc2, initialize them, and then run a KTR_restart at the end of the while loop, iteratively using the prior estimates of x1 and x2 as the initial values. Is this correct? If so, what sort of special variable names do I need to use or avoid, for each method?

2) I've attached my code below, where I've basically appended 1 or 2 to every Knitro variable name. If someone can point out the mistake in it, I would appreciate it. (I get reasonable solutions when I use scipy.optimize.minimize, at least for x1, but I get nonsense solutions when I replace that code with what I believe is the equivalent Knitro solve.) I must admit I'm not familiar with how the callbacks work, so I would be equally happy if someone just provided a basic skeleton of code implementing the sequential estimator outlined above. I'm guessing it would look like:

initialize kc1
initialize kc2


while (q not converged):
    KTR_solve(kc1, x1, ...)
    KTR_solve(kc2, x2, ...)
    update q
    KTR_restart(kc1)
    KTR_restart(kc2)
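To make that concrete for myself, here is a toy version of the alternating loop that actually runs, with scipy.optimize.minimize standing in for the two Knitro solves. The two quadratic objectives and the q-update are placeholders I made up, not my real M-steps; the point is only the warm-started alternation between two problems of different dimension.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-ins for the two Knitro subproblems: two toy objectives with
# different dimensions, coupled through q. Purely illustrative.
def obj1(x1, q):                 # 4-dimensional subproblem ("kc1")
    return np.sum((x1 - q[0]) ** 2)

def obj2(x2, x1, q):             # 2-dimensional subproblem ("kc2"), uses x1
    return np.sum((x2 - q[1] * x1[:2]) ** 2)

x1 = np.zeros(4)                 # warm starts, carried across outer iterations
x2 = np.zeros(2)
q, q_old = np.array([0.3, 0.7]), np.array([0.0, 1.0])

while abs(q[0] - q_old[0]) > 1e-4:
    q_old = q
    # "KTR_solve(kc1, ...)" stand-in: warm-started from the previous x1
    x1 = minimize(obj1, x1, args=(q,)).x
    # "KTR_solve(kc2, ...)" stand-in: uses the freshly estimated x1
    x2 = minimize(obj2, x2, args=(x1, q)).x
    # "update q" stand-in: any map from (x1, x2) back to the weights
    q = 0.5 * q_old + 0.5 * np.array([x1.mean(), x2.mean()])
```

With Knitro, the two `minimize` calls would be replaced by `KTR_solve(kc1, ...)` and `KTR_solve(kc2, ...)`, with `KTR_restart` at the bottom of the loop.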



Thanks,
Ray

My code:

from knitro import *
import numpy as np
import scipy.optimize
data_length=50000

# DGP: Z is N(aX+bY, var1*a+var2)
# goal is to estimate a1, a2, b1, b2, var1, var2
# a and b vary by type (there are 2 types) but var1 and var2 do not
# true params are
# a1 = 4, a2 = 3
# b1 = 3, b2 = 1
# var1= 1, var2 = 1.5

data=np.zeros(data_length)
cap_x=np.random.normal(3,2,data_length)
cap_y=np.random.normal(4,1,data_length)
epsilon1=np.random.normal(0,1,data_length)
epsilon2=np.random.normal(0,np.sqrt(1.5),data_length)
for i in range(data_length):
    type_draw=np.random.rand()
    if type_draw<0.3:
        data[i]=4*cap_x[i]+2*cap_y[i]+2*epsilon1[i]+epsilon2[i]
    else:
        data[i]=3*cap_x[i]+1*cap_y[i]+np.sqrt(3)*epsilon1[i]+epsilon2[i]
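(As an aside, the simulation loop above can be written without the Python-level loop; a vectorized equivalent of the same DGP, using numpy's newer Generator API:)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
cap_x = rng.normal(3, 2, n)
cap_y = rng.normal(4, 1, n)
eps1 = rng.normal(0, 1, n)
eps2 = rng.normal(0, np.sqrt(1.5), n)

# Mixture membership drawn once per observation; np.where then picks
# the type-specific mean and noise scaling in one vectorized pass.
is_type1 = rng.random(n) < 0.3
data = np.where(is_type1,
                4 * cap_x + 2 * cap_y + 2 * eps1 + eps2,
                3 * cap_x + 1 * cap_y + np.sqrt(3) * eps1 + eps2)
```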

# x is given by a1, a2, b1, b2, var1, var2
x1Init= [3,2, 4, 1.5]
x2Init = [0.8,1.7]



# first step: estimate the mean values, assuming homoskedastic
# for example, set the value to 2 for SD
# x1 is [a1, a2, b1, b2]
def MStep1 (x1,c):
    # NOTE: `posteriors` is a module-level global set inside the EM loop
    # below; `c` is the (empty) constraint array passed in by the callback.
    a_vals=x1[0:2]
    b_vals=x1[2:4]
    loglikes=np.zeros([data_length,2])
    for i in range(2):
        loglikes[:,i]=(-np.log(2)-
            (data-a_vals[i]*cap_x-b_vals[i]*cap_y)**2/
            (2*2**2))
    return -np.sum(posteriors*loglikes)

# second step: estimate the variance taking the estimated values
# as given
# here: take a1, a2,mean_hat as given. posteriors are also given
# x2 = [sig1, sig2]
def MStep2 (x2,c):
    # NOTE: a1hat, a2hat, type1_meanhat, type2_meanhat and `posteriors`
    # are module-level globals set inside the EM loop below.
    var1=x2[0]
    var2=x2[1]
    a_hat=[a1hat,a2hat]
    mean_hat=[type1_meanhat,type2_meanhat]
    type_sd=[np.sqrt(a_hat[0]*var1+var2),np.sqrt(a_hat[1]*var1+var2)]

    loglikes=np.zeros([data_length,2])
    for i in range(2):
        a=(-np.log(type_sd[i])-
            (data-mean_hat[i])**2/(2*type_sd[i]**2))
        loglikes[:,i]=a
    return -np.sum(posteriors*loglikes)
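(Side note on those globals: MStep2 reads a1hat, the mean estimates, and posteriors from module scope, which is easy to get wrong across iterations. One way to avoid that is to pass the fixed E-step quantities explicitly and freeze them with functools.partial once per outer iteration; all the numbers below are illustrative placeholders:)

```python
import numpy as np
from functools import partial

def mstep2(x2, a_hat, mean_hat, data, posteriors):
    """Same objective as MStep2, but with all fixed inputs explicit."""
    var1, var2 = x2
    type_sd = [np.sqrt(a_hat[k] * var1 + var2) for k in range(2)]
    loglikes = np.stack(
        [-np.log(type_sd[k]) - (data - mean_hat[k]) ** 2 / (2 * type_sd[k] ** 2)
         for k in range(2)], axis=1)
    return -np.sum(posteriors * loglikes)

# Freeze the current E-step quantities (placeholder values here):
rng = np.random.default_rng(0)
n = 100
data = rng.normal(5, 2, n)
posteriors = np.full((n, 2), 0.5)
obj = partial(mstep2, a_hat=[4.0, 3.0],
              mean_hat=[np.full(n, 5.0), np.full(n, 4.0)],
              data=data, posteriors=posteriors)
val = obj([1.0, 1.5])   # a plain callable, suitable for a solver callback
```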


n1 = 4
objGoal1 = KTR_OBJGOAL_MINIMIZE
objType1 = KTR_OBJTYPE_GENERAL

bndsLo1 = [ -KTR_INFBOUND, -KTR_INFBOUND,-KTR_INFBOUND,-KTR_INFBOUND]
bndsUp1 = [ KTR_INFBOUND, KTR_INFBOUND,KTR_INFBOUND,KTR_INFBOUND]
m1=0
cType1 = [ ]
cBndsLo1 = [ ]
cBndsUp1 = [ ]
jacIxConstr1 = [ ]
jacIxVar1    = [ ] 
hessRow1 = [ ]
hessCol1 = [ ]

n2 = 2
objGoal2 = KTR_OBJGOAL_MINIMIZE
objType2 = KTR_OBJTYPE_GENERAL

bndsLo2 = [ 0,0]
bndsUp2 = [ KTR_INFBOUND, KTR_INFBOUND ]
m2 = 0
cType2 = [ ]
cBndsLo2 = [ ]
cBndsUp2 = [ ]
jacIxConstr2 = [ ]
jacIxVar2    = [ ] 
hessRow2 = [ ]
hessCol2 = [ ]

#---- SETUP AND RUN KNITRO TO SOLVE THE PROBLEM.

#---- CREATE A NEW KNITRO SOLVER INSTANCE.
kc1 = KTR_new()
if kc1 is None:
    raise RuntimeError ("Failed to find a Knitro license.")

#---- DEMONSTRATE HOW TO SET KNITRO PARAMETERS.
if KTR_set_int_param_by_name(kc1, "outlev", 0):
    raise RuntimeError ("Error setting parameter 'outlev'")
if KTR_set_int_param_by_name(kc1, "gradopt", 3):
    raise RuntimeError ("Error setting parameter 'gradopt'")
if KTR_set_int_param_by_name(kc1, "hessopt", 2):
    raise RuntimeError ("Error setting parameter 'hessopt'")
if KTR_set_int_param_by_name(kc1, "par_numthreads",4):
    raise RuntimeError ("Error setting parameter 'par_numthreads'")
if KTR_set_double_param_by_name(kc1, "feastol", 1.0E-10):
    raise RuntimeError ("Error setting parameter 'feastol'")
if KTR_set_double_param_by_name(kc1, "ftol",5.0E-3):
    raise RuntimeError ("Error setting ftol")
if KTR_set_double_param_by_name(kc1, "maxtime_real",2.0E4):
    raise RuntimeError ("Error setting maxtime")

kc2 = KTR_new()
if kc2 is None:
    raise RuntimeError ("Failed to find a Knitro license.")

#---- DEMONSTRATE HOW TO SET KNITRO PARAMETERS.
if KTR_set_int_param_by_name(kc2, "outlev", 0):
    raise RuntimeError ("Error setting parameter 'outlev'")
if KTR_set_int_param_by_name(kc2, "gradopt", 3):
    raise RuntimeError ("Error setting parameter 'gradopt'")
if KTR_set_int_param_by_name(kc2, "hessopt", 2):
    raise RuntimeError ("Error setting parameter 'hessopt'")
if KTR_set_int_param_by_name(kc2, "par_numthreads",4):
    raise RuntimeError ("Error setting parameter 'par_numthreads'")
if KTR_set_double_param_by_name(kc2, "feastol", 1.0E-10):
    raise RuntimeError ("Error setting parameter 'feastol'")
if KTR_set_double_param_by_name(kc2, "ftol",5.0E-3):
    raise RuntimeError ("Error setting ftol")
if KTR_set_double_param_by_name(kc2, "maxtime_real",2.0E4):
    raise RuntimeError ("Error setting maxtime")


#------------------------------------------------------------------ 
#     FUNCTION callbackEvalFC
#------------------------------------------------------------------
 ## The signature of this function matches KTR_callback in knitro.h.
 #  Only "obj" and "c" are modified.
 ##
def callbackEvalFC1 (evalRequestCode, n1, m1, nnzJ1, nnzH1, x1, lambda_1, obj1,
 c1, objGrad1, jac1, hessian1, hessVector1, userParams1):
    if evalRequestCode == KTR_RC_EVALFC:
        obj1[0] = MStep1(x1, c1)
        return 0
    else:
        return KTR_RC_CALLBACK_ERR

def callbackEvalFC2 (evalRequestCode, n2, m2, nnzJ2, nnzH2, x2, lambda_2,
    obj2, c2, objGrad2, jac2, hessian2, hessVector2, userParams2):
    if evalRequestCode == KTR_RC_EVALFC:
        obj2[0] = MStep2(x2, c2)
        return 0
    else:
        return KTR_RC_CALLBACK_ERR        

#------------------------------------------------------------------
#     FUNCTION callbackEvalGA
#------------------------------------------------------------------
 ## The signature of this function matches KTR_callback in knitro.h.
 #  Only "objGrad" and "jac" are modified.
 ##
# NOTE: evaluateGA is never defined in this file; with gradopt=3
# (central finite differences) Knitro does not invoke the gradient
# callback, so these two definitions are dead code. If you switch to
# exact gradients (gradopt=1), evaluateGA must be implemented.
def callbackEvalGA1 (evalRequestCode, n1, m1, nnzJ1, nnzH1, x1, lambda_1,
 obj1, c1, objGrad1, jac1, hessian1, hessVector1, userParams1):
    if evalRequestCode == KTR_RC_EVALGA:
        evaluateGA(x1, objGrad1, jac1)
        return 0
    else:
        return KTR_RC_CALLBACK_ERR

def callbackEvalGA2 (evalRequestCode, n2, m2, nnzJ2, nnzH2, x2, lambda_2,
 obj2, c2, objGrad2, jac2, hessian2, hessVector2, userParams2):
    if evalRequestCode == KTR_RC_EVALGA:
        evaluateGA(x2, objGrad2, jac2)
        return 0
    else:
        return KTR_RC_CALLBACK_ERR

#------------------------------------------------------------------
#     FUNCTION callbackEvalH
#------------------------------------------------------------------
 ## The signature of this function matches KTR_callback in knitro.h.
 #  Only "hessian" or "hessVector" is modified.
 ##
# NOTE: evaluateH1 and evaluateH2 are never defined in this file; with
# hessopt=2 (BFGS approximation) Knitro does not invoke the Hessian
# callback, so these two definitions are dead code.
def callbackEvalH1 (evalRequestCode, n1, m1, nnzJ1, nnzH1, x1, lambda_1,
    obj1, c1, objGrad1, jac1, hessian1, hessVector1, userParams1):
    if evalRequestCode == KTR_RC_EVALH:
        evaluateH1(x1, lambda_1, 1.0, hessian1)
        return 0
    elif evalRequestCode == KTR_RC_EVALH_NO_F:
        evaluateH1(x1, lambda_1, 0.0, hessian1)
        return 0
    else:
        return KTR_RC_CALLBACK_ERR

def callbackEvalH2 (evalRequestCode, n2, m2, nnzJ2, nnzH2, x2, lambda_2, obj2,
    c2, objGrad2, jac2, hessian2, hessVector2, userParams2):
    if evalRequestCode == KTR_RC_EVALH:
        evaluateH2(x2, lambda_2, 1.0, hessian2)
        return 0
    elif evalRequestCode == KTR_RC_EVALH_NO_F:
        evaluateH2(x2, lambda_2, 0.0, hessian2)
        return 0
    else:
        return KTR_RC_CALLBACK_ERR

#---- REGISTER THE CALLBACK FUNCTIONS THAT PERFORM PROBLEM EVALUATION.
#---- THE HESSIAN CALLBACK ONLY NEEDS TO BE REGISTERED FOR SPECIFIC
#---- HESSIAN OPTIONS (E.G., IT IS NOT REGISTERED IF THE OPTION FOR
#---- BFGS HESSIAN APPROXIMATIONS IS SELECTED).
if KTR_set_func_callback(kc1, callbackEvalFC1):
    raise RuntimeError ("Error registering function callback.")
if KTR_set_grad_callback(kc1, callbackEvalGA1):
    raise RuntimeError ("Error registering gradient callback.")
if KTR_set_hess_callback(kc1, callbackEvalH1):
    raise RuntimeError ("Error registering hessian callback.")

if KTR_set_func_callback(kc2, callbackEvalFC2):
    raise RuntimeError ("Error registering function callback.")
if KTR_set_grad_callback(kc2, callbackEvalGA2):
    raise RuntimeError ("Error registering gradient callback.")
if KTR_set_hess_callback(kc2, callbackEvalH2):
    raise RuntimeError ("Error registering hessian callback.")

#---- INITIALIZE KNITRO WITH THE PROBLEM DEFINITION.
ret1 = KTR_init_problem (kc1, n1, objGoal1, objType1, bndsLo1, bndsUp1,
                                cType1, cBndsLo1, cBndsUp1,
                                jacIxVar1, jacIxConstr1,
                                hessRow1, hessCol1,
                                x1Init, None)
ret2 = KTR_init_problem (kc2, n2, objGoal2, objType2, bndsLo2, bndsUp2,
                                cType2, cBndsLo2, cBndsUp2,
                                jacIxVar2, jacIxConstr2,
                                hessRow2, hessCol2,
                                x2Init, None)
if ret1:
    raise RuntimeError ("Error initializing the problem, "
                                + "Knitro status = "
                                + str(ret1))
if ret2:
    raise RuntimeError ("Error initializing the problem, "
                                + "Knitro status = "
                                + str(ret2))
#---- SOLVE THE PROBLEM.
#----
#---- RETURN STATUS CODES ARE DEFINED IN "knitro.h" AND DESCRIBED
#---- IN THE KNITRO MANUAL.
x1       = [0] * n1
lambda_1 = [0] * (m1 + n1)
x2       = [0] * n2
lambda_2 = [0] * (m2 + n2)  # bug fix: was (m1 + n1), but kc2 needs m2 + n2 entries
obj1     = [0]
obj2     = [0]
status= [0]



def normal_like(data,m,sd):
    out=1/(sd)*np.exp(-(data-m)**2/(2*sd**2))
    return out
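(Note that normal_like drops the 1/sqrt(2*pi) factor of the normal density. That is harmless for the E-step, because any factor common to both mixture components cancels when the posteriors are normalized; a quick sanity check of that claim, with arbitrary means and SDs:)

```python
import numpy as np

def normal_like(d, m, sd):          # as above: 1/sqrt(2*pi) dropped
    return 1 / sd * np.exp(-(d - m) ** 2 / (2 * sd ** 2))

def normal_pdf(d, m, sd):           # full normal density
    return normal_like(d, m, sd) / np.sqrt(2 * np.pi)

d = np.linspace(-2.0, 8.0, 100)
q = np.array([0.3, 0.7])

plike_a = np.column_stack([q[0] * normal_like(d, 1.0, 2.0),
                           q[1] * normal_like(d, 4.0, 1.5)])
plike_b = np.column_stack([q[0] * normal_pdf(d, 1.0, 2.0),
                           q[1] * normal_pdf(d, 4.0, 1.5)])
post_a = plike_a / plike_a.sum(axis=1, keepdims=True)
post_b = plike_b / plike_b.sum(axis=1, keepdims=True)
assert np.allclose(post_a, post_b)  # the constant cancels in the ratio
```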

counter=0
q_old=np.array([0,1])
q=np.array([0.3,0.7])
x1=x1Init
x2=x2Init
while(np.abs(q[0]-q_old[0])>0.01):
    counter=counter+1
    print(counter)
    q_old=q
    plike=np.zeros([data_length,2])

    plike[:,0]=q[0]*normal_like(data,x1[0]*cap_x+x1[2]*cap_y,
        np.sqrt(x2[0]*x1[0]+x2[1]))
    plike[:,1]=q[1]*normal_like(data,x1[1]*cap_x+x1[3]*cap_y,
        np.sqrt(x2[0]*x1[1]+x2[1]))

    sum_like=np.sum(plike,axis=1)
    posteriors=np.zeros([data_length,2])
    for i in range(2):
        posteriors[:,i]=plike[:,i]/sum_like

    nStatus = KTR_solve (kc1, x1, lambda_1, 0, obj1,
                             None, None, None, None, None, None)
    # sol1=scipy.optimize.minimize(MStep1,x1)
    # x1=sol1.x
    [a1hat,a2hat]=x1[0:2]

    [type1_meanhat,type2_meanhat]=[x1[0]*cap_x+x1[2]*cap_y,x1[1]*cap_x+x1[3]*cap_y]
    
    nStatus = KTR_solve (kc2, x2, lambda_2, 0, obj2,
                             None, None, None, None, None, None)

    # sol2=scipy.optimize.minimize(MStep2,x2, bounds=((0,None),(0,None)))
    # x2=sol2.x
    q=np.mean(posteriors,axis=0)

    print(x1)
    print(x2)
    print(q)
    KTR_restart(kc1,x1,lambda_1)
    KTR_restart(kc2,x2,lambda_2)

print(q)
print(x1)
print(x2)


#---- BE CERTAIN THE NATIVE OBJECT INSTANCE IS DESTROYED.
KTR_free (kc1)
KTR_free (kc2)

#+++++++++++++++++++ End of source file +++++++++++++++++++++++++++++


Ray W

Aug 13, 2018, 6:57:14 AM
to Artelys Knitro forum
I figured I'd follow up and close this post. I tried a different version of the problem with a simpler form of heteroskedasticity, Z ~ N(aX+bY, cX^2+d), where a and b vary over the 2 types, and using the same general syntax I got reasonable results. It seems that even in the 'simplest' of cases the algorithm can have some weird convergence properties; that is a property of the problem, not a sign that my syntax was incorrect.

Richard Waltz

Aug 13, 2018, 12:25:06 PM
to kni...@googlegroups.com
Hi,

Thanks for the update.  Just a few suggestions that could help stabilize performance:

1) For the second problem (kc2), which has lower bounds on the variables, you may want to try algorithm 3 (active-set) or 4 (SQP). These may be more stable/reliable on small problems whose only constraints are simple bounds on the variables. I suspect this could stabilize performance.

  if KTR_set_int_param_by_name(kc2, "algorithm",3):
      raise RuntimeError ("Error setting algorithm")

or

  if KTR_set_int_param_by_name(kc2, "algorithm",4):
      raise RuntimeError ("Error setting algorithm")

If you do stick with the default interior-point algorithm for the kc2 problem, then you may want to change the barrier update rule to a monotone decrease of the barrier parameter, which may be more stable:

  if KTR_set_int_param_by_name(kc2, "bar_murule", 1):
      raise RuntimeError ("Error setting parameter 'bar_murule'")

2) If you are having any issues with solution precision, you may want to try turning scaling off (for both the kc1 and kc2 problems):

  if KTR_set_int_param_by_name(kc1, "scale", 0):
      raise RuntimeError ("Error setting parameter 'scale'")

3) If you can, you should upgrade to the latest Knitro 11.0.1 to see whether this version improves performance.

4) If you can provide exact derivatives for the objective function, that would help a lot. In that case, make sure to use Knitro's "derivcheck" option to verify that the derivatives you provide are correct.
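To illustrate what that would look like for the first M-step, here is a rough sketch (plain numpy, random placeholder data and posteriors, not your actual model) of an exact gradient for an MStep1-style objective, checked against central finite differences in the spirit of derivcheck:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                  # small synthetic check, not the real data
data = rng.normal(10, 3, n)
cap_x = rng.normal(3, 2, n)
cap_y = rng.normal(4, 1, n)
posteriors = rng.random((n, 2))
posteriors /= posteriors.sum(axis=1, keepdims=True)

def mstep1(x):                           # same form as MStep1 (SD fixed at 2)
    a, b = x[0:2], x[2:4]
    ll = np.stack([-np.log(2) - (data - a[k]*cap_x - b[k]*cap_y)**2 / 8
                   for k in range(2)], axis=1)
    return -np.sum(posteriors * ll)

def mstep1_grad(x):                      # exact gradient, derived by hand
    a, b = x[0:2], x[2:4]
    g = np.zeros(4)
    for k in range(2):
        r = data - a[k]*cap_x - b[k]*cap_y
        g[k]     = -np.sum(posteriors[:, k] * r * cap_x) / 4   # d/da_k
        g[k + 2] = -np.sum(posteriors[:, k] * r * cap_y) / 4   # d/db_k
    return g

# Poor man's derivcheck: compare against central finite differences.
x0 = np.array([3.0, 2.0, 4.0, 1.5])
h = 1e-6
fd = np.array([(mstep1(x0 + h*e) - mstep1(x0 - h*e)) / (2*h)
               for e in np.eye(4)])
assert np.allclose(mstep1_grad(x0), fd, rtol=1e-5)
```

The exact gradient would go into the EvalGA callback (with gradopt=1), and Knitro's derivcheck option performs essentially this comparison for you.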

Please feel free to send us your updated model and we'd be happy to look further into the performance, or let us know if you have further issues.

Best,
-Richard


Richard WALTZ
Senior Scientist
Artelys USA