Generalized Impulse Response Function using StatsModels


Charles Martineau

Oct 13, 2014, 12:07:18 AM10/13/14
to pystat...@googlegroups.com
Hi everyone,

Is there a way with StatsModels to compute a forecast error variance decomposition using the Generalized Impulse Response Function of Pesaran and Shin (1998)?

With the generalized impulse response, we don't attempt to orthogonalize shocks. The generalized approach allows correlated shocks but accounts for them appropriately using the historically observed distribution of the errors.

Thank you

josef...@gmail.com

Oct 13, 2014, 8:24:57 AM10/13/14
to pystatsmodels
I'm pretty much out of this; however, it doesn't look like it's
available, but all the pieces are there and it should be relatively
easy to calculate.

It's just replacing the Cholesky factor of sigma by sigma itself, divided
by the diagonal (sigma_jj), if I interpret equation 10 and the
following correctly.

From briefly looking at the code:
irf.IRAnalysis currently has 3 versions of the IRF, including one for
structural VAR. The models have the corresponding MA representation of
the coefficients (lag polynomial), and also calculate the simulations
for the error bands of the IRF.
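As a concrete sketch of the replacement described above (a made-up 2x2 covariance matrix; `P_chol` and `P_gen` are illustrative names, not anything from statsmodels):

```python
import numpy as np

# Toy residual covariance matrix, made up for illustration
sigma = np.array([[4.0, 1.0],
                  [1.0, 9.0]])

# The standard orthogonalized IRF uses the Cholesky factor of sigma
P_chol = np.linalg.cholesky(sigma)

# The generalized IRF instead uses sigma itself, with column j divided
# by sqrt(sigma_jj); broadcasting over the last axis does exactly that
P_gen = sigma / np.sqrt(np.diag(sigma))

# Note that the diagonal of P_gen equals sqrt(sigma_jj), so each column
# still corresponds to a one-standard-deviation shock in that variable
```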

Josef


Charles Martineau

Oct 13, 2014, 10:43:25 AM10/13/14
to pystat...@googlegroups.com
Hi Josef,

Thanks for your reply. I know how to compute the "new" sigma, but I have no idea if I am calculating the forecast error variance decomposition correctly. If you can spot the mistake in my code below, I would be really happy.

So, to compute my own generalized impulse response function and the forecast error variance decomposition, I do:

import numpy as np
import statsmodels.tsa.api as ts
from statsmodels.tsa.vector_ar.var_model import ma_rep

model = ts.VAR(data)  # I have 19 equations
results = model.fit(2)  # for two lags
coeff = results.coefs
phis = ma_rep(coeff, maxn=10)  # 10 for up to 10 steps ahead
mse = results.mse(nsteps)
sigma = results.resid.cov()

# where s is sigma itself divided by the diagonal (sigma_jj)
s = np.mat(sigma) * np.mat(np.linalg.inv(np.diag(np.sqrt(np.diag(sigma)))))


Then, to compute the FEVD, it gets a bit trickier for me. I try two options.

(1)

ort = results.orth_ma_rep(maxn=10, P=s)  # where s is my "new" sigma matrix -- but I don't think this is good
ort = (ort[:10] ** 2).cumsum(axis=0)
fevd = np.zeros((nsteps, neq, neq))
for i in range(nsteps):
    fevd[i] = (ort[i].T / mse[i]).T

Or, I do, (2):

forc_covs = np.zeros((10, 19, 19))  # 10 is for 10 steps ahead, the H, and 19 is the number of equations
for h in range(10):
    phi = phis[h]
    var = chain_dot(phi, s)
    forc_covs[h] = var
forc_covs = (forc_covs[:nsteps] ** 2).cumsum(axis=0)
fevd = np.zeros((nsteps, neq, neq))
for i in range(nsteps):
    fevd[i] = (forc_covs[i].T / mse[i]).T

Neither option generates the expected results. I am not sure where my mistake is. Any help is greatly appreciated.

josef...@gmail.com

Oct 13, 2014, 12:15:37 PM10/13/14
to pystatsmodels
I'm not really able to check the calculations, since I'm not familiar with it.
I'm just doing "pattern recognition"

class FEVD(object) takes P as an argument which can be replaced by \Sigma.

The only difference from the equation following equation (10), it seems to
me, is then to divide the results of the FEVD by \sigma_ii.

Maybe a macroeconometrician can provide more help.

Josef

Charles Martineau

Oct 13, 2014, 12:51:22 PM10/13/14
to pystat...@googlegroups.com
Thanks Josef,

I see that the class FEVD takes P as an argument, but if I do:

fevd = results.fevd(nsteps, var_decomp=s)

where var_decomp is an option in FEVD, nothing changes in my answer... it is as if I were computing the orthogonalized impulse response function.

Hopefully a macroeconomist can help me soon : )

josef...@gmail.com

Oct 13, 2014, 1:12:05 PM10/13/14
to pystatsmodels
On Mon, Oct 13, 2014 at 12:51 PM, Charles Martineau
<martinea...@gmail.com> wrote:
> Thanks Josef,
>
> I see that the class FEVD takes P as an argument, but if I do:
>
> fevd = results.fevd(nsteps, var_decomp=s)
>
> where the var_decomp is an option in the FEVD, nothing changes in my answer
> ... it is as if I was computing the ort.impulse response function.

If you managed to get the generalized irf, then you could try to
replace the call in FEVD.__init__,
self.irfobj = model.irf(var_decomp=P, periods=periods)
with a call to your irf.


about using results.fevd(nsteps, var_decomp=s):

BaseIRAnalysis uses P only for svar and for lr;
it's missing in
self.orth_irfs = model.orth_ma_rep(periods)
even though model.orth_ma_rep looks like it should be able to handle a general P.

Josef

josef...@gmail.com

Oct 13, 2014, 1:27:21 PM10/13/14
to pystatsmodels
About the calculation itself (not about whether it's correct):

np.mat(sigma)*np.mat(np.linalg.inv(np.diag(np.sqrt(np.diag(sigma)))))

should be the same as

sigma / np.sqrt(np.diag(sigma))

i.e. elementwise division that scales each column by 1/sqrt(sigma_jj), instead of matrix algebra
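A quick numerical check of that claim (toy matrix of my own; note that right-multiplying by the inverse diagonal scales columns, so the matching broadcast divides along the last axis):

```python
import numpy as np

# Toy symmetric covariance matrix, made up for this check
sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 9.0, 2.0],
                  [0.5, 2.0, 16.0]])

d = np.sqrt(np.diag(sigma))

# matrix-algebra form: right-multiplication scales column j by 1/d[j]
via_matmul = sigma @ np.linalg.inv(np.diag(d))

# broadcast form: division along the last axis scales columns the same way
via_broadcast = sigma / d

print(np.allclose(via_matmul, via_broadcast))
```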

Josef

Charles Martineau

Oct 13, 2014, 2:41:56 PM10/13/14
to pystat...@googlegroups.com
I also tried to get the generalized IRF using irf, as follows:

model = ts.VAR(data)
results = model.fit(2)
x = results.irf(periods=10, var_decomp=s)  # where s is np.mat(sigma)*np.mat(np.linalg.inv(np.diag(np.sqrt(np.diag(sigma)))))
fevd = x.model.fevd(10, s)

again ... fevd is just the normal fevd with orthogonalized impulse responses ...

Charles Martineau

Oct 22, 2014, 2:10:17 AM10/22/14
to pystat...@googlegroups.com
It's still driving me crazy that I haven't found a correct way to implement this in Python... With WinRats you can do:

->compute FactorMatrix=%sigma*inv(%diag(%sqrt(%xdiag(%sigma))))
->errors(model=volvar,steps=nsteps,factor=gfactor,stderrs=gstderrs,results=gfevd)

The 'gfevd' is your generalized forecast error variance decomposition, without assuming orthogonality and so on...

Does anyone know how?

Charles Martineau

Oct 22, 2014, 4:41:18 PM10/22/14
to pystat...@googlegroups.com
I should add some clarification:

The %sigma*inv(%diag(%sqrt(%xdiag(%sigma)))) gets you the new variance-covariance matrix of the residuals. In the 'errors' function you specify the model, as in Python, and nsteps is the number of H steps forward you want to forecast.
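For reference, here is a pure-NumPy sketch of what such an errors-style generalized FEVD amounts to under the Pesaran-Shin definition. This is my own construction, not code from RATS or statsmodels; `generalized_fevd`, `phis` (the MA coefficient matrices) and `sigma` (the residual covariance) are hypothetical names:

```python
import numpy as np

def generalized_fevd(phis, sigma):
    """Generalized FEVD (Pesaran-Shin); phis is (H, k, k) with phis[0] = I."""
    sd = np.sqrt(np.diag(sigma))
    # generalized impulse responses: column j of phi @ sigma scaled by 1/sd[j]
    girf = np.array([phi @ sigma / sd for phi in phis])
    # numerator: cumulative squared generalized responses
    num = (girf ** 2).cumsum(axis=0)
    # denominator: forecast error variances, diagonal of cumulative phi @ sigma @ phi'
    mse = np.array([np.diag(phi @ sigma @ phi.T) for phi in phis]).cumsum(axis=0)
    fevd = num / mse[:, :, None]
    # with correlated shocks the rows need not sum to 1, so normalize each row
    return fevd / fevd.sum(axis=2)[:, :, None]
```

With uncorrelated residuals this collapses to the usual Cholesky-based FEVD; the row normalization only matters when the off-diagonal elements of sigma are nonzero.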

Charles Martineau

Oct 23, 2014, 7:20:31 PM10/23/14
to pystat...@googlegroups.com
This is the closest thing that I can get:

import pandas as pd
import numpy as np
import statsmodels.tsa.api as ts
from collections import OrderedDict

lags = 2
nsteps = 10
model = ts.VAR(data)
results = model.fit(lags)

sigma = results.resid.cov()
s = np.mat(sigma) * np.mat(np.linalg.inv(np.diag(np.sqrt(np.diag(sigma)))))

irfobj = results.irf(periods=10, var_decomp=s)

orth_irfs = irfobj.orth_irfs

irfs = (orth_irfs[:nsteps] ** 2).cumsum(axis=0)

mse = results.mse(nsteps)

fevd = np.zeros((nsteps, neq, neq))
for i in range(nsteps):
    fevd[i] = (irfs[i].T / mse[i]).T

decomp = fevd.swapaxes(0, 1)


but the answer I get using this method is far from the true values I get using WinRats. I know that the WinRats results are the accurate ones.

josef...@gmail.com

Oct 23, 2014, 7:59:27 PM10/23/14
to pystatsmodels
On Thu, Oct 23, 2014 at 7:20 PM, Charles Martineau
<martinea...@gmail.com> wrote:
> This is the closest thing that I can get:
>
> import pandas as pd
> import numpy as np
> import statsmodels.tsa.api as ts
> from collections import OrderedDict
>
> lags = 2
>
> nsteps = 10
> model = ts.VAR(data)
> results = model.fit(lags)
> sigma = results.resid.cov()
> s = np.mat(sigma)*np.mat(np.linalg.inv(np.diag(np.sqrt(np.diag(sigma)))))
>
> irfobj = results.irf(periods=10, var_decomp=s)
>
> orth_irfs = irfobj.orth_irfs
>
> irfs = (orth_irfs[:nsteps] ** 2).cumsum(axis=0)
>
> mse = results.mse(nsteps)
>
>
> fevd = np.zeros((nsteps, neq, neq))
>
> for i in range(nsteps):
> fevd[i] = (irfs[i].T/ mse[i]).T
>
>
> decomp = fevd.swapaxes(0, 1)
>
>
> but the answer that I get using this method is far away from the true values
> that I get using WinRats. I know that the results in WinRats are the
> accurate one.


Can you write a full example or test case with the expected results,
maybe using the macro dataset that is included in statsmodels.datasets?

I might be able to help with debugging, but I won't be able to come up
with an example or with expected reference numbers.
From what I've seen before, I don't think it will work without
changing some of the statsmodels var/irf source code.

Did you check whether all the other prior things, parameter estimates
and error correlation, agree with RATS? I don't have RATS available
and have never worked with it; I've read a bit of their documentation.

Josef

Charles Martineau

Oct 23, 2014, 11:19:05 PM10/23/14
to pystat...@googlegroups.com
Hi Josef,

Alright, thanks for the help! Attached to this msg is the data. The data is the same as that used in the study of Diebold and Yilmaz 2009 ( http://www.econstor.eu/bitstream/10419/43200/1/599227087.pdf - also officially published in the Economic Journal). This paper uses only the normal Cholesky-decomposition forecast error variance decomposition (I will explain below where the generalized VAR comes into play). With this data, you can easily replicate the findings in Table 4, page 17. The main results in the table are the 10 "H-step" FEVDs for a two-lag VAR. Here's my code to exactly replicate the table:

   
lags = 2
nsteps = 10

data = pd.read_csv('DY_data.csv')
data = data.set_index(pd.to_datetime(data.date))
data = data.drop('date', 1)
model = ts.VAR(data)
names = model.names
nvar = len(names)
results = model.fit(lags)

fevd = results.fevd(nsteps)

""" Decomposition of Variance for Series at nstep """
gfedv = OrderedDict()
for x in range(len(names)):
    fevd_decomp = fevd.__dict__.items()[5][1][x]
    gfedv[names[x]] = list(fevd_decomp[nsteps-1]*100)

gfedv = pd.DataFrame(gfedv)
gfedv.index = names

""" Get the contribution from others """
diag = []
for x in names:
    diag.append(gfedv.ix[x, x])
cont_from = pd.DataFrame((100 - np.array(diag)))
cont_from.index = names
cont_from = cont_from.rename(columns={0: 'Contribution From Others'})
cont_from_sum = cont_from.sum()

""" Get the contribution to others """
tot_cont = gfedv.sum(axis=1)

cont_to = pd.DataFrame((np.array(tot_cont) - np.array(diag)))
cont_to.index = names
cont_to = cont_to.T
cont_to = cont_to.rename(index={0: 'Contribution to others'})

tot_cont = pd.DataFrame(tot_cont)
tot_cont = tot_cont.rename(columns={0: 'Contribution including own'})
tot_cont = tot_cont.T

spillover = cont_to.sum(axis=1)/nvar
spillover = spillover.values

""" Reproduce table """
gfedv = gfedv.transpose()

""" Build Table Results """
res = gfedv.append(cont_to)
res = res.append(tot_cont)
res = res.join(cont_from)

res.loc['Contribution to others', 'Contribution From Others'] = cont_from_sum.values
res = res.fillna(spillover)

I was able to construct this Python code based on the WinRats code provided here: https://estima.com/forum/viewtopic.php?f=8&t=986
That code covers both the methodology used by DY 2009 and by DY 2012. In DY 2012 ( https://ideas.repec.org/p/koc/wpaper/1001.html ), they use the Generalized Impulse Response Function developed by Koop, Pesaran and Potter (1996) and Pesaran and Shin (1998). The WinRats code implements the DY 2012 methodology using the DY 2009 data.

When using the WinRats code with the DY 2012 method (generalized IRF VAR) and the attached CSV data, you should get this table result:

                            US   UK  FRA  GER  HKG  JPN  AUS  IDN  KOR  MYS  PHL  SGP  TAI  THA  ARG  BRA  CHL  MEX  TUR  From Others
US                         27.3 16.9 17.5 14.0  3.8  1.0  3.3  0.3  1.5  0.3  0.9  1.3  2.6  0.1  1.6  1.3  0.8  3.2  2.2          73
UK                          9.1 30.5 21.7 15.8  4.5  1.7  3.3  0.4  1.1  0.4  0.5  1.7  1.7  0.3  1.9  1.5  0.6  2.0  1.4          70
FRA                         9.0 20.5 30.1 20.5  3.5  1.2  1.8  0.3  0.6  0.3  0.5  1.3  1.5  0.3  2.3  2.3  0.3  2.2  1.7          70
GER                        10.2 19.9 24.2 27.3  3.1  1.2  1.3  0.3  0.7  0.4  0.6  1.2  1.4  0.3  2.1  1.9  0.2  1.9  1.9          73
HKG                         1.2  0.9  1.4  1.2 53.9  0.8  6.2  2.5  4.5  2.8  8.3  5.5  3.6  2.1  0.4  1.9  0.2  2.2  0.4          46
JPN                         2.0  4.2  3.6  3.6  2.0 66.2  1.5  0.4  1.9  2.7  0.3  1.2  1.8  0.4  2.0  2.0  0.1  1.3  2.8          34
AUS                         5.5  4.5  3.3  2.5 29.3  0.7 31.6  0.8  1.6  0.8  3.0  2.6  1.1  0.1  1.2  2.3  2.3  6.0  1.0          68
IDN                         1.8  1.6  1.2  1.9  4.9  0.5  1.8 50.7  6.8  3.7 10.4  7.6  0.9  2.0  0.3  1.2  0.9  0.8  1.1          49
KOR                         1.6  1.3  1.2  1.4  6.9  1.4  1.4  9.0 50.0  2.1  2.8  7.1  9.2  2.5  0.2  0.3  0.0  0.7  0.9          50
MYS                         1.0  0.9  0.5  0.8  6.0  1.2  2.7  1.4  2.4 61.6  3.9  1.6  1.2  0.9  2.3  5.7  0.4  3.2  2.1          38
PHL                         1.4  0.7  0.8  1.1  6.7  0.2  1.4  8.5  2.4  5.8 57.7  5.6  0.9  1.8  0.7  2.1  0.8  1.0  0.5          42
SGP                         5.1  5.0  4.5  4.3  6.6  1.4  3.5  5.7  5.6  2.9  3.3 31.6  3.9  4.6  2.3  3.6  1.8  2.9  1.5          68
TAI                         5.8  2.5  3.1  3.0  2.9  1.2  0.6  0.4  9.2  0.5  1.5  3.5 61.1  0.7  0.9  0.2  0.3  1.2  1.5          39
THA                         0.3  0.4  0.5  0.4  6.8  0.5  0.3  4.1  3.8  1.1  3.5  9.3  1.5 65.1  0.2  0.9  0.1  0.8  0.2          35
ARG                         2.1  2.4  3.6  3.1  2.2  0.9  2.6  0.3  0.4  2.0  0.2  1.5  0.9  0.3 54.7  8.3  1.9 10.5  2.1          45
BRA                         2.3  2.8  3.1  3.0  7.8  0.9  6.0  1.2  0.5  7.9  2.2  2.5  0.8  0.6 10.2 34.7  3.4  8.7  1.3          65
CHL                         2.3  1.8  1.1  0.8  2.2  0.1  4.8  1.6  0.1  1.8  1.1  3.4  0.1  0.1  4.4  7.3 61.1  5.2  0.7          39
MEX                         3.6  2.9  3.3  3.0 15.0  0.3  7.3  0.3  0.9  1.9  1.9  1.4  1.0  0.2  6.4  7.1  2.5 38.8  2.1          61
TUR                         2.1  2.8  3.3  3.9  3.7  0.4  2.1  0.9  2.0  2.9  1.0  1.4  4.9  0.3  1.6  1.7  0.7  3.2 61.4          39
Contribution to others       67   92   98   84  118   16   52   38   46   40   46   60   39   18   41   51   17   57   25        1005
Contribution including own   94  122  128  112  172   82   83   89   96  102  104   91  100   83   96   86   78   96   87       52.9%

 And it is this freakin table that I can't replicate!!

Thank you for all the help  
DY_data.xls

josef...@gmail.com

Oct 24, 2014, 8:15:09 PM10/24/14
to pystatsmodels
Took me a few hours, and one change in the statsmodels source:

----
# -*- coding: utf-8 -*-
"""
Created on Fri Oct 24 19:48:45 2014

Author: Josef Perktold, Charles Martineau

"""

import numpy as np
import pandas as pd
import statsmodels.tsa.api as ts


lags = 2
nsteps = 10
data = pd.read_csv('DY_data_2_no_extra.csv', delimiter='\t')
data = data.set_index(pd.to_datetime(data.date))
data = data.drop('date', 1)
data = data.drop('no', 1)
model = ts.VAR(data)
names = model.names
nvar = len(names)
results = model.fit(lags)

sigma_u = np.asarray(results.sigma_u)
sd_u = np.sqrt(np.diag(sigma_u))
fevd = results.fevd(nsteps, sigma_u / sd_u)

fe = fevd.decomp[:,-1,:]
fevd_normalized = (fe / fe.sum(1)[:,None] * 100)

print np.round(fevd_normalized, 1)[:,:7]

res = '''\
US                         27.3 16.9 17.5 14.0  3.8  1.0  3.3  0.3  1.5  0.3  0.9  1.3  2.6  0.1  1.6  1.3  0.8  3.2  2.2          73
UK                          9.1 30.5 21.7 15.8  4.5  1.7  3.3  0.4  1.1  0.4  0.5  1.7  1.7  0.3  1.9  1.5  0.6  2.0  1.4          70
FRA                         9.0 20.5 30.1 20.5  3.5  1.2  1.8  0.3  0.6  0.3  0.5  1.3  1.5  0.3  2.3  2.3  0.3  2.2  1.7          70
GER                        10.2 19.9 24.2 27.3  3.1  1.2  1.3  0.3  0.7  0.4  0.6  1.2  1.4  0.3  2.1  1.9  0.2  1.9  1.9          73
HKG                         1.2  0.9  1.4  1.2 53.9  0.8  6.2  2.5  4.5  2.8  8.3  5.5  3.6  2.1  0.4  1.9  0.2  2.2  0.4          46
JPN                         2.0  4.2  3.6  3.6  2.0 66.2  1.5  0.4  1.9  2.7  0.3  1.2  1.8  0.4  2.0  2.0  0.1  1.3  2.8          34
AUS                         5.5  4.5  3.3  2.5 29.3  0.7 31.6  0.8  1.6  0.8  3.0  2.6  1.1  0.1  1.2  2.3  2.3  6.0  1.0          68
IDN                         1.8  1.6  1.2  1.9  4.9  0.5  1.8 50.7  6.8  3.7 10.4  7.6  0.9  2.0  0.3  1.2  0.9  0.8  1.1          49
KOR                         1.6  1.3  1.2  1.4  6.9  1.4  1.4  9.0 50.0  2.1  2.8  7.1  9.2  2.5  0.2  0.3  0.0  0.7  0.9          50
MYS                         1.0  0.9  0.5  0.8  6.0  1.2  2.7  1.4  2.4 61.6  3.9  1.6  1.2  0.9  2.3  5.7  0.4  3.2  2.1          38
PHL                         1.4  0.7  0.8  1.1  6.7  0.2  1.4  8.5  2.4  5.8 57.7  5.6  0.9  1.8  0.7  2.1  0.8  1.0  0.5          42
SGP                         5.1  5.0  4.5  4.3  6.6  1.4  3.5  5.7  5.6  2.9  3.3 31.6  3.9  4.6  2.3  3.6  1.8  2.9  1.5          68
TAI                         5.8  2.5  3.1  3.0  2.9  1.2  0.6  0.4  9.2  0.5  1.5  3.5 61.1  0.7  0.9  0.2  0.3  1.2  1.5          39
THA                         0.3  0.4  0.5  0.4  6.8  0.5  0.3  4.1  3.8  1.1  3.5  9.3  1.5 65.1  0.2  0.9  0.1  0.8  0.2          35
ARG                         2.1  2.4  3.6  3.1  2.2  0.9  2.6  0.3  0.4  2.0  0.2  1.5  0.9  0.3 54.7  8.3  1.9 10.5  2.1          45
BRA                         2.3  2.8  3.1  3.0  7.8  0.9  6.0  1.2  0.5  7.9  2.2  2.5  0.8  0.6 10.2 34.7  3.4  8.7  1.3          65
CHL                         2.3  1.8  1.1  0.8  2.2  0.1  4.8  1.6  0.1  1.8  1.1  3.4  0.1  0.1  4.4  7.3 61.1  5.2  0.7          39
MEX                         3.6  2.9  3.3  3.0 15.0  0.3  7.3  0.3  0.9  1.9  1.9  1.4  1.0  0.2  6.4  7.1  2.5 38.8  2.1          61
TUR                         2.1  2.8  3.3  3.9  3.7  0.4  2.1  0.9  2.0  2.9  1.0  1.4  4.9  0.3  1.6  1.7  0.7  3.2 61.4          39
Contributiontoothers       67   92   98   84  118   16   52   38   46   40   46   60   39   18   41   51   17   57   25        1005
Contributionincludingown   94  122  128  112  172   82   83   89   96  102  104   91  100   83   96   86   78   96   87       52.9'''.split('\n')
ss = ' '.join([s.split(' ', 1)[1] for s in res])
resarr = np.array(ss.split(), float).reshape(21,20)

cont_incl = fevd_normalized.sum(0)
cont_to = fevd_normalized.sum(0) - np.diag(fevd_normalized)
cont_from  = fevd_normalized.sum(1) - np.diag(fevd_normalized)

print np.max(np.abs(np.round(fevd_normalized, 1) - resarr[:-2, :-1]))


for c in [cont_incl, cont_to, cont_from]:
    print np.round(c)


'''
>>> pd.DataFrame(np.round(fevd_normalized, 1), columns=names)
    vdjia  vftse  vfra  vger  vhkg  vjpn  vaus  vidn  vkor  vmys  vphl  vsgp  \
0    27.3   16.9  17.5  14.0   3.8   1.0   3.3   0.3   1.5   0.3   0.9   1.3   
1     9.1   30.5  21.7  15.8   4.5   1.7   3.3   0.4   1.1   0.4   0.5   1.7   
2     9.0   20.5  30.1  20.5   3.5   1.2   1.8   0.3   0.6   0.3   0.5   1.3   
3    10.2   19.9  24.2  27.3   3.1   1.2   1.3   0.3   0.7   0.4   0.6   1.2   
4     1.2    0.9   1.4   1.2  53.9   0.8   6.2   2.5   4.5   2.8   8.3   5.5   
5     2.0    4.2   3.6   3.6   2.0  66.2   1.5   0.4   1.9   2.7   0.3   1.2   
6     5.5    4.5   3.3   2.5  29.3   0.7  31.6   0.8   1.6   0.8   3.0   2.6   
7     1.8    1.6   1.2   1.9   4.9   0.5   1.8  50.7   6.8   3.7  10.4   7.6   
8     1.6    1.3   1.2   1.4   6.9   1.4   1.4   9.0  50.0   2.1   2.8   7.1   
9     1.0    0.9   0.5   0.8   6.0   1.2   2.7   1.4   2.4  61.6   3.9   1.6   
10    1.4    0.7   0.8   1.1   6.7   0.2   1.4   8.5   2.4   5.8  57.7   5.6   
11    5.1    5.0   4.5   4.3   6.6   1.4   3.5   5.7   5.6   2.9   3.3  31.6   
12    5.8    2.5   3.1   3.0   2.9   1.2   0.6   0.4   9.2   0.5   1.5   3.5   
13    0.3    0.4   0.5   0.4   6.8   0.5   0.3   4.1   3.8   1.1   3.5   9.3   
14    2.1    2.4   3.6   3.1   2.2   0.9   2.6   0.3   0.4   2.0   0.2   1.5   
15    2.3    2.8   3.1   3.0   7.8   0.9   6.0   1.2   0.5   7.9   2.2   2.5   
16    2.3    1.8   1.1   0.8   2.2   0.1   4.8   1.6   0.1   1.8   1.1   3.4   
17    3.6    2.9   3.3   3.0  15.0   0.3   7.3   0.3   0.9   1.9   1.9   1.4   
18    2.1    2.8   3.3   3.9   3.7   0.4   2.1   0.9   2.0   2.9   1.0   1.4   

    vtai  vtha  varg  vbra  vchl  vmex  vtur  
0    2.6   0.1   1.6   1.3   0.8   3.2   2.2  
1    1.7   0.3   1.9   1.5   0.6   2.0   1.4  
2    1.5   0.3   2.3   2.3   0.3   2.2   1.7  
3    1.4   0.3   2.1   1.9   0.2   1.9   1.9  
4    3.6   2.1   0.4   1.9   0.2   2.2   0.4  
5    1.8   0.4   2.0   2.0   0.1   1.3   2.8  
6    1.1   0.1   1.2   2.3   2.3   6.0   1.0  
7    0.9   2.0   0.3   1.2   0.9   0.8   1.1  
8    9.2   2.5   0.2   0.3   0.0   0.7   0.9  
9    1.2   0.9   2.3   5.7   0.4   3.2   2.1  
10   0.9   1.8   0.7   2.1   0.8   1.0   0.5  
11   3.9   4.6   2.3   3.6   1.8   2.9   1.5  
12  61.1   0.7   0.9   0.2   0.3   1.2   1.5  
13   1.5  65.1   0.2   0.9   0.1   0.8   0.2  
14   0.9   0.3  54.7   8.3   1.9  10.5   2.1  
15   0.8   0.6  10.2  34.7   3.4   8.7   1.3  
16   0.1   0.1   4.4   7.3  61.1   5.2   0.7  
17   1.0   0.2   6.4   7.1   2.5  38.8   2.1  
18   4.9   0.3   1.6   1.7   0.7   3.2  61.4  
'''



----


Josef

josef...@gmail.com

Oct 24, 2014, 9:44:31 PM10/24/14
to pystatsmodels
As a partial explanation, after the fact:

- The sigma of the disturbances/innovations is `sigma_u`, which has to be used in the definition of the `P` matrix. I don't really know what the difference from your sigma = results.resid.cov() is (some degrees-of-freedom correction).

- The change that I made in the source is now in BaseIRAnalysis

```
-            self.orth_irfs = model.orth_ma_rep(periods)
+            self.orth_irfs = model.orth_ma_rep(periods, P=P)
```

- I wasn't sure whether to use row normalization or column normalization in several places; it was just trial and error to arrive at a solution.

- The VAR code has a lot of functionality, but the structure and naming are pretty awful:
`model` in various places might refer to the model, the results, or the process instance.
The split into the three classes, model, process, results, makes a lot of sense, but figuring out which part is actually calculating the different parts of the irf and fevd is difficult.

So, I'm not sure how the changes should propagate through this code to fully support the choice of P, i.e. a transformed IRF and FEVD and the associated confidence intervals.
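The row normalization in question, in isolation (toy numbers of my own, standing in for `fevd.decomp[:, -1, :]`):

```python
import numpy as np

# Toy raw variance shares; with a generalized (non-orthogonal) factor the
# rows need not sum to 1, unlike with the Cholesky-based FEVD
fe = np.array([[0.50, 0.30, 0.30],
               [0.10, 0.70, 0.25],
               [0.20, 0.20, 0.65]])

# normalize each row to sum to 100, as done for the DY-style table
fevd_normalized = fe / fe.sum(1)[:, None] * 100

print(fevd_normalized.sum(axis=1))  # each row now sums to 100
```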

Josef

Charles Martineau

Oct 25, 2014, 1:23:24 AM10/25/14
to pystat...@googlegroups.com
Wow thank you very much Josef! That's fantastic!

When you say:

"The VAR code has a lot of functionality, but the structure and naming are pretty awful.
The split into the three classes, model, process, results, makes a lot of sense, but figuring out which part is actually calculating the different parts of the irf and fevd is difficult." -- Agreed! That is what made my life so painful.

Well, I tried your code and everything works fine, and I understand all the steps. Thank you, Josef. I am planning to make both the DY 2009 and DY 2012 code available on my website eventually, and I will credit you in the authors' field of each file.

If I ever meet you at some econ. conference, I'll come to say thank you in person.
Charles