Diebold-Li implementation in IRIS - 'No stable Solution in #1'


Hector Herrada

Jan 27, 2014, 12:14:22 PM1/27/14
to iris-t...@googlegroups.com
Hi guys, I am trying to implement the Diebold-Li yield curve model with IRIS. This model can be represented in state-space form and can be estimated with Bayesian methods (in the paper they use MLE).

The measurement block is done with the Nelson-Siegel technique, and the transition block models the latent factors (i.e. level, slope, and curvature). I am getting the 'No stable Solution' warning and cannot figure out why. I have attached the files needed (you only need to run 'Read_Diebold'). Any insights will be appreciated.
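(For reference, the measurement equation being implemented is the dynamic Nelson-Siegel form, where each yield loads on the three latent factors. A minimal illustrative sketch in Python, outside IRIS; the fixed decay parameter lambda = 0.0609 is the value Diebold and Li use with maturities measured in months.)

```python
import numpy as np

def ns_loadings(tau, lam=0.0609):
    """Nelson-Siegel factor loadings for maturities tau (in months).

    Columns correspond to level, slope, and curvature: the level
    loading is constant at 1, the slope loading decays from 1 toward 0,
    and the curvature loading is hump-shaped in maturity.
    """
    tau = np.asarray(tau, dtype=float)
    x = lam * tau
    slope = (1.0 - np.exp(-x)) / x
    curv = slope - np.exp(-x)
    return np.column_stack([np.ones_like(tau), slope, curv])

def ns_yields(beta, tau, lam=0.0609):
    """Model-implied yields y(tau) = L(tau) @ [level, slope, curvature]."""
    return ns_loadings(tau, lam) @ np.asarray(beta, dtype=float)
```

In the state-space setup, `ns_loadings` supplies the measurement matrix, and the three betas are the latent states followed by the transition VAR.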

Thanks!

H
Diebold-Li.zip

Michael Johnston

Jan 28, 2014, 3:11:23 AM1/28/14
to Hector Herrada, iris-t...@googlegroups.com
Hi Hector, 

I was actually working on this a bit last year, although I didn't completely finish. I've been meaning to return to it because I would like to create a tutorial on term structure modelling in IRIS, but I've been swamped with other things. The code does correctly replicate Diebold and Li (2006) and the yields-only model from Diebold, Rudebusch, and Aruoba (2006). Most of what you need for the yields-macro model is there as well; I just haven't had a chance to finish it. It also illustrates how to create tables like those you see in the papers using the IRIS automated PDF reporting functionality. I will give you what I have, with the caveat that it is incomplete and that the yields-macro estimation gives incorrect results because I didn't use very good initial conditions -- maybe you can finish it and send it back to me. :-)

I do not normally distribute incomplete/incorrect materials (at least not intentionally), but it seems this could be useful to you right now. At first glance I do not see why your model doesn't solve -- maybe it is just a typo -- the basic setup looks correct. Take a look at how I started to implement it and let me know what you think.

Best,

Michael




--
You received this message because you are subscribed to the Google Groups "iris-toolbox-discussion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to iris-toolbox...@googlegroups.com.
To post to this group, send email to iris-t...@googlegroups.com.
Visit this group at http://groups.google.com/group/iris-toolbox.
For more options, visit https://groups.google.com/groups/opt_out.

Term structure materials.zip

Hector Herrada

Jan 28, 2014, 11:33:13 AM1/28/14
to iris-t...@googlegroups.com, Hector Herrada
Hi Michael, thank you for sharing your work. I will take a look at it and see if I can finish it; I will gladly share whatever progress I make. I was able to replicate the paper using MLE (I'd be glad to share that with you if you're interested). I would like to do the same with Bayesian methods, to explore the possibility of modeling the yield curve within a DSGE model that I'm currently developing in IRIS.

Thanks,

Hector

Hector Herrada

Jan 28, 2014, 8:20:37 PM1/28/14
to iris-t...@googlegroups.com, Hector Herrada
Michael, you were right about the initial conditions. I have followed the steps the authors take to obtain these initial values, and I am now getting results close to those in the paper for both the yields_only and macro_yield versions. I will clean up the files and send them to you tomorrow, along with a couple of questions.

Hector


Hector Herrada

Jan 29, 2014, 6:54:07 PM1/29/14
to iris-t...@googlegroups.com, Hector Herrada
Michael, I used my own data set instead of yours, and somehow the numbers I get with it are significantly closer to those in the paper. I found this data set online, where some people were trying to replicate the paper in RATS (the set is missing one observation). I also used 'fmincon' instead of 'pso'. In my original (non-IRIS) code I got closer results when I also used the estimated Q and R matrices (instead of ones) from the two-step process to obtain the initial conditions.

I am hoping you or anybody else from the team could help me with the following questions:

  1. I only obtained good numbers using the filter option 'stochastic'. Could you briefly explain how it differs from 'fixed' and 'optimal'?
  2. I have a DSGE model that I usually estimate using Bayesian methods. Let's assume that I am able to embed this term structure model within the DSGE framework. Is IRIS able to estimate the DSGE parameters with Bayesian methods and the parameters of the term structure model with MLE? I noticed that we use ranges to constrain the parameters of the term structure model, which are actually uniform priors. I hope this question makes sense.
  3. I do not understand why the authors (and you) force all of the parameters in the transition matrix to be less than 1 in absolute value. I have estimated the model without restrictions on these parameters, getting 1 or 2 of them slightly above one, yet the eigenvalues of the whole matrix still lie inside the unit circle; i.e. we have a stable process. What am I missing?
  4. I noticed that you obtain good values using 'pso' in the yields-only model without the initial conditions suggested in the paper. I'm just curious why 'pso' is able to perform well without them. Is there a survey on 'pso' that you would recommend?
Thanks so much!

Hector
dra.zip

Michael Johnston

Jan 30, 2014, 3:16:49 AM1/30/14
to Hector Herrada, iris-t...@googlegroups.com
Great Hector! Thanks so much! 

I knew there was an initial conditions problem, so I started to try the swarm to find better initial conditions, but then found it pretty inefficient and got distracted by other things. It seems the way you have set it up works much better! 

Amusing that your dataset works better than mine -- I got mine from Frank Diebold's website. 

1. My disclaimer is that Jaromir wrote all of this code, so he may be able to correct my explanations or provide more detail. But basically these settings control how the initial state vector and its MSE matrix are created. 
  • With 'stochastic' the MSE is based on model asymptotics where possible. Taking the variance of both sides of the transition equation gives an equation (a discrete Lyapunov equation) that implies a steady-state variance-covariance matrix for the stationary variables, and this is used to initialize the MSE matrix. The mean is the unconditional mean of the model. (Obviously these quantities are not well defined for unit root processes.) This is how the initial conditions are created for most of Chapter 3 of Harvey. 
  • With 'fixed' the MSE matrix is zeros (complete certainty) and the initial condition is the asymptotic mean (where possible) and zero (for unit root processes). 
  • With 'optimal' the initial conditions are treated as parameters to be estimated. These can be concentrated out of the likelihood, however, and estimated conditional on the parameter vector. See the discussion of Rosenberg's algorithm in Chapter 3 of Harvey (I think). 
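To make the 'stochastic' case concrete: for a stationary transition x_t = T x_{t-1} + e_t with cov(e_t) = Q, the asymptotic covariance Sigma solves Sigma = T Sigma T' + Q. A small numpy sketch (illustrative only, not the IRIS internals), using the vectorization identity vec(T Sigma T') = (T kron T) vec(Sigma):

```python
import numpy as np

def asymptotic_mse(T, Q):
    """Solve Sigma = T Sigma T' + Q for a stable transition matrix T.

    Uses vec(Sigma) = (I - T kron T)^{-1} vec(Q); valid only when all
    eigenvalues of T lie strictly inside the unit circle.
    """
    T = np.asarray(T, dtype=float)
    Q = np.asarray(Q, dtype=float)
    n = T.shape[0]
    if np.max(np.abs(np.linalg.eigvals(T))) >= 1.0:
        raise ValueError("transition matrix is not stable")
    vec_sigma = np.linalg.solve(np.eye(n * n) - np.kron(T, T),
                                Q.flatten(order="F"))
    return vec_sigma.reshape(n, n, order="F")
```

With 'stochastic', the filter would then start from the unconditional mean with this Sigma as the initial MSE.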
2. Yeah, well, sort of. You can simply not specify priors (or specify flat priors) for the parameters you were estimating by MLE. But any time you are using priors in a joint estimation procedure, I guess you are technically using Bayesian methods (the prior on one parameter may affect the estimates of another). 

It's not really necessary to constrain the parameters. I think I was just doing that because I was starting to use the particle swarm to find initial conditions and when the parameter space is compact the bounds are used to initialize the first population. 

3. Really? I thought the final estimates in the paper were all less than 1 in absolute value, in which case the constraints should not affect the solution. Did you compare the likelihood value you obtained to the one corresponding to the parameter estimates in the paper? 
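You are right, though, that the stability condition is on the eigenvalues of the transition matrix, not on its entries, so an individual coefficient above 1 in absolute value is compatible with a stable VAR. A quick check in Python (made-up numbers, not the paper's estimates):

```python
import numpy as np

# A transition matrix with one entry above 1 in absolute value...
A = np.array([[1.05, -0.30],
              [0.50,  0.20]])

# ...whose spectral radius is nevertheless below 1, so the process
# x_t = A x_{t-1} + e_t is stationary despite the large coefficient.
spectral_radius = np.max(np.abs(np.linalg.eigvals(A)))
```

Here the eigenvalues are 0.8 and 0.45, both inside the unit circle, even though A[0, 0] = 1.05.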

4. Both particle swarms and genetic algorithms tend to work well on highly non-linear optimization problems with lots of local minima. The algorithms are stochastic and are designed to be willing to make enough mistakes (i.e., uphill steps) to find a more global optimum. So I use these algorithms a lot to find initial conditions for optimization problems. It's not really necessary in this case, obviously, because there are other ways of obtaining initial conditions which are probably more efficient. But I just wanted to demonstrate that it can be useful in certain cases. It could probably work in the yields-macro model as well, but would require a larger population and/or more generations and the computer I have at the moment is pretty weak. But there are no guarantees with any stochastic optimization routine. 
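For what it's worth, the basic global-best particle swarm I'm describing fits in a few lines. This is a generic textbook version sketched in Python, not the routine IRIS actually calls; the inertia and attraction weights (w, c1, c2) are common default choices, not tuned values:

```python
import numpy as np

def pso(f, lb, ub, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over the box [lb, ub] with a global-best particle swarm."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    x = rng.uniform(lb, ub, (n_particles, d))      # initial population in the box
    v = np.zeros((n_particles, d))                 # velocities
    pbest = x.copy()                               # per-particle best positions
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()                # global best position
    gval = pval.min()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        # pull each particle toward its own best and the global best,
        # with random weights -- the "mistakes" allow uphill moves
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        if pval.min() < gval:
            gval = pval.min()
            g = pbest[pval.argmin()].copy()
    return g, gval
```

Note how the bounds are used to draw the first population -- that is why I had put ranges on all the parameters when I was experimenting with the swarm.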

I wrote a little summary of my perspective on the usefulness of swarm algorithms in a tutorial for IRIS. It's pretty chatty and non-technical, but I think it gives the gist of how to use the algorithms. Unfortunately Jaromir is changing the downloads section right now, so the tutorials have disappeared; I've attached it here. Note that you may need to re-run read_model.m in order to get things to work, because the model class definition has changed since the tutorial was created. 

Best,

Michael




Michaels_Particle_Swarm_Tutorial_20130613.zip