NLS routine in LSTAR function


Alexander Haider

Feb 15, 2018, 7:07:49 PM
to tsdyn
Hi,

I have been studying the lstar function in the tsDyn package, as I am trying to understand the model in more detail. However, two points of the non-linear optimization routine are not clear to me, and I was hoping someone could help me out here:

1) Concerning the gradient function gradEhat, line 196:

J = - cbind(gGamma, gTh) / sqrt(str$n.used)

It is not clear to me why J is divided by sqrt(str$n.used). The rest of the code within the gradient function is perfectly clear to me (gGamma being the derivative of the objective function with respect to gamma, multiplying by -2 in the last step, ...). But even after staring at the SS function for some time, I cannot see where the sqrt(str$n.used) factor comes from.
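
My best guess so far is that the residual vector handed to the least-squares optimizer is scaled by 1/sqrt(n), so that the implied objective is the mean squared error rather than the raw sum of squares; the Jacobian then has to carry the same factor. Below is a toy check of that scaling on a one-parameter linear model (just my reconstruction of the logic, not tsDyn's actual code):

## If the residuals are scaled by 1/sqrt(n), their crossproduct is the
## MSE and their Jacobian picks up the same 1/sqrt(n) factor.
set.seed(1)
n <- 50
x <- rnorm(n)
y <- 2 * x + rnorm(n)

res <- function(b) (y - b * x) / sqrt(n)            # scaled residuals
J   <- function(b) matrix(-x / sqrt(n), ncol = 1)   # analytic Jacobian

b0 <- 1.5
all.equal(sum(res(b0)^2), sum((y - b0 * x)^2) / n)  # objective is SS/n: TRUE

eps   <- 1e-6
J.num <- (res(b0 + eps) - res(b0 - eps)) / (2 * eps)  # numeric Jacobian
all.equal(as.numeric(J(b0)), J.num)                   # matches: TRUE

Does that match the intention of the code, or am I on the wrong track?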

2) Concerning the SS function, I was wondering where the penalty (the variable pen; I assume pen stands for penalty) comes from. As far as I understand the code, the idea is to penalize the objective function if too many observations end up in one regime (a toy version of how I read this is sketched below). Intuitively that makes sense to me, but I was wondering if there is literature on it. I have had a look at Teräsvirta (1994) and Franses and van Dijk (2000), but the implementation of the NLS procedure is not covered in detail in either source.
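
For what it's worth, here is how I currently read the penalty idea; the 10% cutoff and the exact form of pen are my own guesses, not necessarily what lstar does:

## Hypothetical penalized sum of squares: inflate the objective when
## the share of observations in the smaller regime drops below
## min.frac (cutoff and penalty form are my guesses, not tsDyn's code).
penSS <- function(e, g, min.frac = 0.10) {
  ss   <- sum(e^2)
  frac <- min(mean(g > 0.5), mean(g <= 0.5))  # share in the smaller regime
  pen  <- if (frac < min.frac) ss * (min.frac - frac) / min.frac else 0
  ss + pen
}

set.seed(2)
z <- rnorm(100)
e <- rnorm(100)                   # stand-in residuals
penSS(e, plogis(50 * (z - 2.5)))  # extreme threshold, one regime: penalized
penSS(e, plogis(z))               # balanced regimes: no penalty

With an extreme threshold nearly every observation sits in one regime and the objective is inflated; with a balanced split, nothing changes.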

I would really appreciate it if someone could help me out with the gradient problem and the penalty in the SS function (pointers to relevant literature would be especially welcome).

Thanks,
Alex
