Peersman 2004 OBES
Dear Tom,
I'm trying to replicate Geert Peersman's paper in the Oxford Bulletin of Economics and Statistics (2004) on monetary transmission in the eurozone.
Basically, he sets up a big vector Y, which contains blocks of variables for the individual member states of the euro area as well as for the euro-area aggregate.
I know this is a sort of near-VAR, so we should estimate it by SUR, but I don't see how we can give RATS 40 variables (5 for each block) to run a VAR.
Could you help me?
Thank you in advance for listening to me.
Re: Peersman 2004 OBES
Dear Tom,
Maybe I've got it: could we just set up the model equation by equation, keeping the equations together with END(SYSTEM)?
Is it possible to impose a monetary policy shock and get the IRF for each country?
Thanks a lot for your help!
Re: Peersman 2004 OBES
Although this is for a somewhat different application (it's a Pesaran-style "global VAR"), it shows how you can put a model together in a fairly flexible way by "adding" equations to a MODEL variable. That's done in the bottom two loops.
Are you sure he was using SUR on a 40 equation model? That's quite a large model for a quarterly macro data set.
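(For scale: with, say, 3 lags, an unrestricted equation in a 40-variable system already has 40 × 3 + 1 = 121 coefficients, while a typical quarterly macro sample provides only on the order of 100 observations per equation.)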
Code:
open data ekintl.xls
calendar(q) 1957
data(format=xls,org=columns) 1957:1 1989:3 usagnp gbrgdp $
   deugnp fragdp itagdp jpngnp
*
dofor s = usagnp gbrgdp deugnp fragdp itagdp jpngnp
set(scratch) s = log(s{0})
end dofor s
*
* These have relative weights in the columns
*
compute n=6
dec rect weights(n,n)
input weights
0.0000 0.1889 0.1233 0.0995 0.0967 0.3528
0.0791 0.0000 0.1164 0.1211 0.1020 0.0384
0.0809 0.1825 0.0000 0.2121 0.2371 0.0601
0.0434 0.1260 0.1408 0.0000 0.1729 0.0250
0.0304 0.0765 0.1135 0.1247 0.0000 0.0163
0.2019 0.0525 0.0524 0.0329 0.0296 0.0000
*
dec vect[series] gdp(n)
set gdp(1) = usagnp
set gdp(2) = gbrgdp
set gdp(3) = deugnp
set gdp(4) = fragdp
set gdp(5) = itagdp
set gdp(6) = jpngnp
*
dec vect[series] twgdp(n)
dec vect[equation] twgdpeq(n) gdpeq(n)
dec vect[series] gdpresid(n)
*
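* Trade-weighted foreign output for each country, and the identity
* linking it to the individual GDP series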
do i=1,n
set twgdp(i) = %dot(%xcol(weights,i),%xt(gdp,t))
equation(identity,coeffs=%xcol(weights,i)) twgdpeq(i) twgdp(i)
# gdp
end do i
*
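* Each country's output equation: own lag plus current and lagged
* trade-weighted foreign output. Save residuals and define the equation.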
do i=1,n
linreg(define=gdpeq(i)) gdp(i) / gdpresid(i)
# constant gdp(i){1} twgdp(i){0 1}
end do i
*
vcv
# gdpresid
*
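* Assemble the global VAR by adding the estimated equations and the
* trade-weight identities to the model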
group(vcv=%sigma) globalvar
do i=1,n
compute globalvar=globalvar+gdpeq(i)
end do i
do i=1,n
compute globalvar=globalvar+twgdpeq(i)
end do i
*
errors(steps=24,model=globalvar)
Attachment: ekintl.xls (data file)
Re: Peersman 2004 OBES
Thanks a lot, Tom!
I'll study your code and try to adapt it to my problem.
I'm pretty sure of it; anyway, here is the paper, if you want to have a look!
http://www.feb.ugent.be/Fineco/gert/doc ... ES2004.pdf
Re: Peersman 2004 OBES
Do you have the original data set?
Re: Peersman 2004 OBES
Hi Tom,
I don't have the original dataset: I'm trying to build a huge dataset for the euro-area member states, but as you probably know it's not so easy...
If you want, you can find something similar, but with definitely MORE series (since that one is a factor model), in the Eickmeier JAE 2009 paper, which you can download from the JAE site here:
http://qed.econ.queensu.ca/jae/2009-v24.6/
Do you have any news for me?
Your code helps a lot, but I'm still far from being done...
Thanks in advance!
Re: Peersman 2004 OBES
If you want, I can send you some data on France, Germany and Italy! Just let me know!
Re: Peersman 2004 OBES
Apparently, this was *not* done with a full-system SUR. Instead, each country was done using a separate SUR estimation with the "X", "Y1", "Y2" and its own "Z" variables, but not the other countries' "Z"'s. That makes the whole specification much simpler, and avoids the problems of estimating a SUR with too many equations relative to data points.
Re: Peersman 2004 OBES
The following does one of Peersman's models with the Italy data:
Rather than doing bootstrapping, this does Gibbs sampling, which is quite a bit quicker for SUR models. This requires the procedures in this source file:
The use of these procedures is described in the workbook for the Bayesian Econometrics course: http://www.estima.com/courses_completed.shtml.
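Roughly (a sketch of the standard result, not the exact contents of those procedures): for a SUR system $y_t = X_t\beta + u_t$ with $u_t \sim N(0,\Sigma)$ and a diffuse prior, each Gibbs sweep needs just two standard draws,
$$\beta \mid \Sigma \sim N\!\Big(\big(\textstyle\sum_t X_t'\Sigma^{-1}X_t\big)^{-1}\textstyle\sum_t X_t'\Sigma^{-1}y_t,\ \big(\textstyle\sum_t X_t'\Sigma^{-1}X_t\big)^{-1}\Big),\qquad \Sigma \mid \beta \sim IW\!\Big(\textstyle\sum_t u_t u_t',\ T\Big),$$
which is why it is so much cheaper than re-estimating the SUR on every bootstrap replication.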
The following helps set up a near-VAR in a simple and easy-to-modify fashion. It uses two SYSTEM definitions, one setting up the "exogenous" block and the other the dependent block, using %rlfromtable to fill in the DETERMINISTIC components.
Code:
*
* Exogenous (world, US) regressors
*
equation xlist *
# constant trend stus{0 to nlags} yus{0 to nlags} wp{0 to nlags}
*
* "Y2" regressors
*
equation y2list *
# m3nsa{1 to nlags} stn{1 to nlags} eer{1 to nlags}
*
* European subsystem
*
system(model=euro)
variables yer hicp m3nsa stn eer
lags 1 to nlags
det %rlfromtable(%eqntable(xlist))
end(system)
*
* Country-specific subsystem
*
system(model=specific)
variables yit cpit otit yminit cpminit
lags 1 to nlags
det %rlfromtable(%eqntable(xlist)) %rlfromtable(%eqntable(y2list))
end(system)
*
* Combine model and estimate by SUR
*
compute combined=euro+specific
sur(model=combined,resids=resids,cvout=omegares)
Peersman 2004 OBES with Sign restrictions
Dear Tom,
thanks a lot for posting your code: it's clearly more efficient than mine, and I totally see your point on Gibbs sampling!
Just one question: if I want to run a Cholesky decomposition, I just add the standard code for impulses.
But may I get identification through Uhlig (2005) sign restrictions?
I guess I should just define a betasur instead of betaols, and it should work!
Am I wrong or too optimistic?
Thanks, anyway!
Re: Peersman 2004 OBES with Sign restrictions
KOBE24 wrote:Just one question: if I want to run a Cholesky decomposition, I just add the standard code for impulses.
That's actually being done in this example, except that it then zeros out the subdiagonal for column 4. If you eliminate this, it's doing a standard Cholesky.
KOBE24 wrote:But may I get identification through Uhlig (2005) sign restrictions? I guess I should just define a betasur instead of betaols, and it should work! Am I wrong or too optimistic?
You might want to ask Harald Uhlig about that, but offhand I think it would be OK. The choice of impulse vector doesn't restrict the other parameters (sigma and the coefficients), so you should be able to Gibbs sample those two, then draw impulse vectors conditional on them.
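For what it's worth, here is a minimal sketch (not Peersman's published code; it assumes the COMBINED model and the OMEGARES residual covariance saved by the SUR instruction above, and names like FSIGMA, Q and ALPHA are just for illustration) of the standard Cholesky responses, plus one candidate impulse vector of the kind a sign-restriction draw would use:
Code:
*
* Standard Cholesky responses for the combined near-VAR. In the
* ordering used above, the euro-area interest rate STN would be
* variable 4, so its column would be the monetary policy shock.
*
compute fsigma=%decomp(omegares)
impulse(model=combined,factor=fsigma,steps=20,results=impulses)
*
* One candidate impulse vector for sign restrictions: fsigma*q with q
* uniform on the unit sphere. Because responses are linear in the
* impulse vector, its responses are just the q-weighted combination of
* the columns already stored in IMPULSES, so the sign restrictions can
* be checked without recomputing anything.
*
compute neqn=%rows(omegares)
dec vect q(neqn)
ewise q(i)=%ran(1.0)
compute q=q*(1.0/sqrt(%dot(q,q)))
compute alpha=fsigma*q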