Cushman & Zha JME 1997

Use this forum for posting example programs or short bits of sample code.
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Cushman & Zha JME 1997

Unread post by TomDoan »

This is a replication file for Cushman and Zha (1997), "Identifying monetary policy in a small open economy under flexible exchange rates," Journal of Monetary Economics, vol. 39, no. 3, 433-448. This is a rather large (11-variable) structural VAR with a set of four variables treated as exogenous, and thus left out of the lag structure. The "near-VAR" structure complicates the Monte Carlo integration because the marginal likelihood of the covariance matrix depends upon the observed covariance matrix of the residuals at the current draws for the lag coefficients, rather than only upon the observed covariance matrix at the OLS estimates. A further complication is the size of the model (over 1200 coefficients), which makes the sampling method in the MONTENEARSVAR.RPF program nearly unusable because of the time required.
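
As a sketch in generic notation (not the program's own symbols): conditional on the current draw of the lag coefficients $B$, the covariance matrix has to be sampled using $S(B)=\frac{1}{T}\sum_{t=1}^{T}(y_t-Bx_t)(y_t-Bx_t)'$, recomputed at every sweep, rather than the fixed covariance matrix of the OLS residuals.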

This uses a very efficient sampling method which is specific to the two-block near-VAR, where one set of regressors is a subset of the other.

The original paper had some technical errors in its sampling procedure, so the results from this differ (mainly in the width of the confidence bands).

Note that this needs a revised version of @MCProcessIRF.
dc11cit.rpf
Program file (revised 16 October 2013)
(14.83 KiB) Downloaded 1624 times
dc11cit.asc
Data file
(52.97 KiB) Downloaded 1635 times
CRMS
Posts: 7
Joined: Fri Mar 23, 2012 9:52 am

Re: Cushman & Zha JME 1997

Unread post by CRMS »

Dear Tom,

Is there a RATS implementation of the rank condition for global identification developed by Rubio-Ramírez, Waggoner and Zha (RES 2010)? I tried to check this condition for the model that we are discussing here (Cushman and Zha JME 1997), but I always get matrices Mj with ranks varying between 9 and 11. Yet, in the working paper version of Rubio-Ramírez et al., the authors expressly cite the Cushman and Zha model as an example of a globally identified model where identification is achieved through the lag structure. I don't see how they achieve that result.
CRMS
Posts: 7
Joined: Fri Mar 23, 2012 9:52 am

Re: Cushman & Zha JME 1997

Unread post by CRMS »

Dear Tom,

Sorry to bring this topic up once again.

After another reading of the Cushman & Zha paper, I stumbled across the following passage on page 437:
"To avoid potentially unreasonable restrictions, the non-Canadian block y2 is simply kept in its reduced form with normalization in the lower-triangularized order of y*, P*, R*, and Wxp*." (my emphasis)
I am not sure how this normalization is achieved in the current version of the code, where the World block looks as follows:

Code: Select all

frml a22frml = $
 ||c(1,1),  0.0 ,  0.0 ,  0.0|$
   c(2,1),c(2,2),  0.0 ,  0.0|$
   c(3,1),c(3,2),c(3,3),  0.0|$
   c(4,1),c(4,2),c(4,3),c(4,4)||
Wouldn't normalization require the c(n,n) coefficients to be equal to one?

I tried to modify the formula by inserting ones on the diagonal, but the results were highly unreasonable. Unfortunately, it appears that the normalization assumption is necessary for the model to be identified. Or am I misreading something?

I also noted that the results for the World block coefficients presented by Tao Zha on his homepage are different from the ones estimated by this code; for example, his values for c(3,2) and c(4,2) are negative instead of positive. I don't think that's due to another variable ordering problem...
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Cushman & Zha JME 1997

Unread post by TomDoan »

Since it's a DMATRIX=IDENTITY model, you can't "normalize" with unit coefficients. That may be a poor choice of words---it sounds like they simply mean that, rather than trying to come up with a structural model for the world block, they just picked a convenient just-identified "model".
shimarats
Posts: 31
Joined: Fri Feb 07, 2014 12:51 pm

Re: Cushman & Zha JME 1997

Unread post by shimarats »

Hi Tom,
I read the Cushman and Zha (1997) article, but I cannot understand how the matrix below is organized; that is, where do the d11 and b11 coefficients go inside this matrix? Thanks.

Code: Select all

estimate
*
compute y2m2=%xsubmat(%sigma,nvari+1,nvar,nvari+1,nvar)
dec packed c(nvaro,nvaro)
nonlin(parmset=block11od) a12 a13 a14 a15 a16 a17 $
a23 a31 a32 a54 a64 a74 a65 a75 a76
nonlin(parmset=block11d) d11 d22 d33 d44 d55 d66 d77
nonlin(parmset=block12) b11 b12 b13 b14 b33 b34
nonlin(parmset=block22) c
*
* inv(%decomp(y2m2)) is actually the optimal value for the Cholesky
* block. We back a bit off of that to get a curvature estimate for Monte
* Carlo integration.
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Cushman & Zha JME 1997

Unread post by TomDoan »

block11od are the off-diagonal elements in the 1,1 block of the SVAR; block11d are the diagonals. block12 are the parameters in the cross block, and block22 are the elements in the 2,2 block.
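
To make the mapping concrete, here is a sketch of the 7 x 7 A11 block implied by those parmset lists (inferred from the NONLIN instructions above, in the same style as the a22frml earlier in the thread; the actual program defines its own FRML, so check it there):

Code: Select all

dec frml[rect] a11frml
frml a11frml = $
 ||d11 ,a12 ,a13 ,a14 ,a15 ,a16 ,a17|$
   0.0 ,d22 ,a23 ,0.0 ,0.0 ,0.0 ,0.0|$
   a31 ,a32 ,d33 ,0.0 ,0.0 ,0.0 ,0.0|$
   0.0 ,0.0 ,0.0 ,d44 ,0.0 ,0.0 ,0.0|$
   0.0 ,0.0 ,0.0 ,a54 ,d55 ,0.0 ,0.0|$
   0.0 ,0.0 ,0.0 ,a64 ,a65 ,d66 ,0.0|$
   0.0 ,0.0 ,0.0 ,a74 ,a75 ,a76 ,d77||
Every position not named in a parmset is a hard zero; the b(i,j) in block12 fill the 7 x 4 cross block the same way.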
hashieh
Posts: 2
Joined: Fri Oct 28, 2016 1:10 am

Cushman and Zha (1997) Replication Adaptation

Unread post by hashieh »

Hi Tom (or whoever else who can help!),

I'm very new to RATS, so please bear with me. I'm looking through the Cushman and Zha replication files and wanted to use that base code for my thesis project. For the most part, I understand what's going on with the code, but since the replication files use a different sampler than the base paper, I'm having some difficulty adapting the sampling portion of the code (and the resulting IRFs) to my project.

Also, in terms of the IRFs, how do I implement it so that I get an IRF for each of my variables to a shock from each variable? Essentially, I need a 10x10 set of responses, but as individual graphs rather than one giant panel. Attached is my code so far (I'm also not sure why the gluing of the matrices is not working; I have also attached the full matrix representation of my restriction matrix).



Thanks for any help!!
Attachments
CHINASVARCOMMAND1.RPF
Code so far
(10.78 KiB) Downloaded 1246 times
SystemofContempVariables.pdf
Restriction Matrix
(86.91 KiB) Downloaded 1064 times
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Cushman and Zha (1997) Replication Adaptation

Unread post by TomDoan »

The block sampler for the lag coefficients is purely mechanical, so you should just be able to use it as is. The part of this that might require some ingenuity is the M-H sampler for the SVAR coefficients. You're missing the CVMODEL on the full VAR, which is used to give the starting values for the chain and also the variance for the increments in the Random Walk Metropolis. (The line that you have marked as .25*??? is based upon those estimates.) The multiplier might need to be adjusted up or down to get a well-behaved sampler.
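
A hedged sketch of that setup (AFRML here is a stand-in for whatever FRML defines your full contemporaneous matrix; FCVX and %XX are as in the replication file):

Code: Select all

* ML estimation of the full SVAR gives the starting values for the chain
cvmodel(method=bfgs,dmatrix=identity) %sigma afrml
* %XX (the inverse Hessian estimate) scales the increments for the
* Random Walk Metropolis; adjust the .25 multiplier up or down to
* tune the acceptance rate.
compute fcvx=.25*%decomp(%xx)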

This part of their program generates responses for certain linear combinations of the endogenous variables, and only for two shocks. If you want the straightforward all-responses-to-all-shocks, you can just do the same basic bookkeeping as in a program like montevar.rpf: use IMPULSE with the FLATTEN option to save into the %%RESPONSES matrix.

Code: Select all

   *
   * Compute responses to the monetary shock (the "MS" in the names)
   *
   dim %%responsems(draw)(NMSResponses,steps)
   do h=1,steps
      compute %%responsems(draw)(1,h)=impulses(1,1)(h)
      compute %%responsems(draw)(2,h)=impulses(2,1)(h)
      compute %%responsems(draw)(3,h)=impulses(3,1)(h)
      compute %%responsems(draw)(4,h)=impulses(7,1)(h)
      compute %%responsems(draw)(5,h)=impulses(6,1)(h)
      compute %%responsems(draw)(6,h)=impulses(5,1)(h)
      compute %%responsems(draw)(7,h)=impulses(4,1)(h)
      compute %%responsems(draw)(8,h)=impulses(5,1)(h)-impulses(4,1)(h)
      compute %%responsems(draw)(9,h)=impulses(3,1)(h)-(impulses(7,1)(h+3)-impulses(7,1)(h))*400.0
      compute %%responsems(draw)(10,h)=impulses(1,1)(h)+impulses(7,1)(h)-impulses(9,1)(h)
      compute %%responsems(draw)(11,h)=impulses(3,1)(h)-impulses(10,1)(h)+(impulses(1,1)(h+3)-impulses(1,1)(h))*400.0
   end do h
   *
   * Compute responses to U.S. output
   *
   compute [vector] usoutput=%xcol(faclast,8)
   impulse(noprint,model=surmodel,shock=usoutput,$
      results=impulses,steps=steps)
   dim %%responseus(draw)(NUSResponses,steps)
   do h=1,steps
      compute %%responseus(draw)(1,h)=impulses(1,1)(h)
      compute %%responseus(draw)(2,h)=impulses(2,1)(h)
      compute %%responseus(draw)(3,h)=impulses(3,1)(h)
      compute %%responseus(draw)(4,h)=impulses(7,1)(h)
      compute %%responseus(draw)(5,h)=impulses(6,1)(h)
      compute %%responseus(draw)(6,h)=impulses(5,1)(h)
      compute %%responseus(draw)(7,h)=impulses(4,1)(h)
      compute %%responseus(draw)(8,h)=impulses(5,1)(h)-impulses(4,1)(h)
      compute %%responseus(draw)(9,h)=impulses(3,1)(h)-(impulses(7,1)(h+3)-impulses(7,1)(h))*400.0
      compute %%responseus(draw)(10,h)=impulses(1,1)(h)+impulses(7,1)(h)-impulses(9,1)(h)
      compute %%responseus(draw)(11,h)=impulses(8,1)(h)
      compute %%responseus(draw)(12,h)=impulses(9,1)(h)
      compute %%responseus(draw)(13,h)=impulses(10,1)(h)
      compute %%responseus(draw)(14,h)=impulses(11,1)(h)
      compute %%responseus(draw)(15,h)=impulses(10,1)(h)-(impulses(9,1)(h+3)-impulses(9,1)(h))*400.0
   end do h
@MCGRAPHIRF with the option PAGE=ONE will do the graphs one per page. (That will be a lot of pages!)
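
A minimal sketch of that combination (assuming montevar.rpf-style bookkeeping with the Cushman-Zha names surmodel and faclast; NDRAWS and STEPS are whatever your loop uses):

Code: Select all

* before the loop:
declare vect[rect] %%responses
dim %%responses(ndraws)
* inside the draw loop, in place of the two-shock bookkeeping:
impulse(noprint,model=surmodel,factor=faclast,$
   flatten=%%responses(draw),steps=steps)
* after the loop, one graph per page:
@mcgraphirf(model=surmodel,center=median,page=one)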

Although the VAR e-course doesn't include the Cushman-Zha example in particular, it does describe how to handle the M-H sampling for an SVAR (again, that's the one non-mechanical part of this) and how to process sampled IRF's.
hashieh
Posts: 2
Joined: Fri Oct 28, 2016 1:10 am

Re: Cushman & Zha JME 1997

Unread post by hashieh »

Hi Tom,

Thanks for your quick reply. I'm pretty new to this. When you say to do some basic bookkeeping as in montevar.rpf, am I replacing the block of code that generates the responses to two shocks with some version of montevar.rpf's? Do I also keep the GibbsSUR program?

So I would, in essence, be combining montevar.rpf with the FLATTEN option plus the @mcgraphirf procedure to get the graphs one per page?

Also, relative to the replication code, would I be replacing @MCProcessIRF with @MCGRAPHIRF?
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Cushman & Zha JME 1997

Unread post by TomDoan »

Everything in the draw loop down to the compute %modelsetcoeffs(...) instruction is specific to doing the draws for a specific type of model. Pretty much everything below that depends only upon which IRF's you want. (Note, however, that the Cushman-Zha program uses surmodel as the model name and faclast as the shock matrix, while montevar uses varmodel and fsigma.) You would just need to graft the bookkeeping end of the montevar program onto the end of the Cushman-Zha program with the needed variables renamed.
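
As a sketch of where that split falls (loop skeleton only; BETADRAW is a stand-in for however the program stores the drawn lag coefficients):

Code: Select all

do draw=1,ndraws
   * model-specific: Gibbs draws for the lags, M-H for the SVAR, then
   compute %modelsetcoeffs(surmodel,betadraw)
   * generic from here down: montevar-style IRF bookkeeping
   impulse(noprint,model=surmodel,factor=faclast,$
      flatten=%%responses(draw),steps=steps)
end do draw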
bjing
Posts: 5
Joined: Thu Aug 31, 2023 3:06 pm

Re: Cushman & Zha JME 1997

Unread post by bjing »

Hi,
I'm new to RATS and now I'm trying to replicate the Cushman and Zha (1997) paper for a course project.
I have a relatively large dataset, but for simplicity I chose the first five domestic variables and two exogenous variables from the US. I read the original code for the paper and tried to find answers in the user's guide and the online forum, but some problems are still not fixed on my end. I attached my code and dataset for reference.
My questions are:
1. The biggest one is that when I try to get the domestic money shock responses, it seems the shock does not come from a change in the interest rate, and I have no idea which variable the shock comes from.

Code: Select all

   compute [vector] money=%xcol(faclast,3)
   compute money=%if(money(2)>0.0,-1.0,1.0)*money
   impulse(noprint,model=surmodel,shock=money,$
      results=impulses,steps=steps)
To my knowledge, this part deals with the shock, and since the interest rate is the third variable in the system, I take the third column out of the matrix. Please let me know which part of the code is wrong.
2. Referring to the above code, I can't understand what

Code: Select all

(money(2)>0.0,-1.0,1.0)
means, although it's explained that this line "makes sure the shock has the desired sign". I suppose this has something to do with my first question, but I have no idea how to fix it.
3. The last part of my code, which is intended to compute the responses to the US shock, reports an error. I tried a lot but failed to find the solution.

Code: Select all

@MCProcessIRF(nvar=NUSResponses,center=median,$
   irf=irf,lower=lower,upper=upper)
do i=1,NUSResponses,4
   ewise thispage(j)=USResponses(i+j-1)
   spgraph(vfields=4,ylabels=thispage,samesize,$
       footer="Figure 3"+pageref(%block(i,4)))
   do j=i,%imin(i+3,NUSResponses)
      graph(nodates,number=0) 3
      # irf(j,1) / 1
      # lower(j,1) / 2
      # upper(j,1) / 2
   end do j
   spgraph(done)
end do i
4. With respect to some previously asked questions, I have a few more things to ask.

Code: Select all

compute c=.9*inv(%decomp(y2m2))
Can I use .9 as my adjustment directly?

Code: Select all

compute fcvx=.25*%decomp(%xx)
.25 is used to tune the acceptance rate. How can I check the acceptance rate for my data?

I truly appreciate it if you could help me with these questions. Thanks in advance.
Attachments
datamfull.xlsx
(56.18 KiB) Downloaded 424 times
usblex.RPF
(12.3 KiB) Downloaded 454 times
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Cushman & Zha JME 1997

Unread post by TomDoan »

bjing wrote:Hi,
1. The biggest one is that when I try to get the domestic money shock responses, it seems the shock does not come from a change in the interest rate, and I have no idea which variable the shock comes from.

Code: Select all

   compute [vector] money=%xcol(faclast,3)
   compute money=%if(money(2)>0.0,-1.0,1.0)*money
   impulse(noprint,model=surmodel,shock=money,$
      results=impulses,steps=steps)
To my knowledge, this part deals with the shock, and since the interest rate is the third variable in the system, I take the third column out of the matrix. Please let me know which part of the code is wrong.
compute MSResponses=||"R","Ex", "M","Int","P",$
"RInt","REx"||

doesn't match with

system(model=openmodel)
variables lnR EXR Int lnM2 Inf

In one of the two, you have M and INT switched.
bjing wrote: 2. Referring to the above code, I can't understand what

Code: Select all

(money(2)>0.0,-1.0,1.0)
means, although it's explained that this line "makes sure the shock has the desired sign". I suppose this has something to do with my first question, but I have no idea how to fix it.
Any column in the A matrix can have its sign flipped without changing the model (since the diagonals aren't normalized to 1). As written in your code (which may not be correct, depending upon how you fix #1), this makes sure the contemporaneous response of variable 2 to shock 3 is negative.
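
The general pattern, as a sketch (I and J stand for whichever checked variable and shock column you intend; since the factor satisfies F*F' = (F*D)*(F*D)' for any diagonal D of +1's and -1's, any column of F can be flipped freely):

Code: Select all

compute [vector] shock=%xcol(faclast,j)
compute shock=%if(shock(i)>0.0,-1.0,1.0)*shock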
bjing wrote: 3. The last part of my code, which is intended to compute the responses to the US shock, reports an error. I tried a lot but failed to find the solution.

Code: Select all

@MCProcessIRF(nvar=NUSResponses,center=median,$
   irf=irf,lower=lower,upper=upper)
do i=1,NUSResponses,4
   ewise thispage(j)=USResponses(i+j-1)
   spgraph(vfields=4,ylabels=thispage,samesize,$
       footer="Figure 3"+pageref(%block(i,4)))
   do j=i,%imin(i+3,NUSResponses)
      graph(nodates,number=0) 3
      # irf(j,1) / 1
      # lower(j,1) / 2
      # upper(j,1) / 2
   end do j
   spgraph(done)
end do i
It looks like you have some typos in this:

compute %%responseus(draw)(1,h)=impulses(1,1)(h)
compute %%responsems(draw)(1,h)=impulses(1,1)(h)
compute %%responsems(draw)(2,h)=impulses(2,1)(h)
compute %%responsems(draw)(3,h)=impulses(4,1)(h)
compute %%responsems(draw)(4,h)=impulses(3,1)(h)
compute %%responsems(draw)(5,h)=impulses(5,1)(h)
compute %%responsems(draw)(6,h)=impulses(3,1)(h)-(impulses(5,1)(h+3)-impulses(5,1)(h))*400.0
compute %%responsems(draw)(7,h)=impulses(2,1)(h)+impulses(5,1)(h)-impulses(6,1)(h)
compute %%responseus(draw)(8,h)=impulses(6,1)(h)
compute %%responseus(draw)(9,h)=impulses(7,1)(h)

Those should all be %%responseus(draw)....
bjing wrote: 4. With respect to some previously asked questions, I have a few more things to ask.

Code: Select all

compute c=.9*inv(%decomp(y2m2))
Can I use .9 as my adjustment directly?
Yes. It's just designed to get it off the ML values, which improves the behavior of the sampler.
bjing wrote:

Code: Select all

compute fcvx=.25*%decomp(%xx)
.25 is used to tune the acceptance rate. How can I check the acceptance rate for my data?
The code counts the number of acceptances in the variable ACCEPT. You can check that against the number of draws. However, the infobox (progress display) shows the running percentage of acceptances.
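
For example (a sketch, assuming NDRAWS holds your total number of draws):

Code: Select all

disp "M-H acceptance rate (%):" 100.0*accept/ndraws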
bjing
Posts: 5
Joined: Thu Aug 31, 2023 3:06 pm

Re: Cushman & Zha JME 1997

Unread post by bjing »

Thanks for the quick response. I changed my code according to your suggestions. Two questions came up as follows.
1. It seems the response of the interest rate to itself is still negative.
2. The results I get from the "Covariance Model-Likelihood, Estimation by BFGS" output appear to be a local optimum, not a global optimum, since the values for "log likelihood" and "log likelihood unrestricted" are different. I tried tens of different initial values but failed to make these two the same. I noticed that the code you provided for the Cushman and Zha (1997) paper also has the same issue, so I suspect this doesn't influence the final impulse responses. Should I keep trying if I want the global optimum? How would this influence the final outcome?
Thanks again for answering my questions.
Attachments
usblex.RPF
(12.21 KiB) Downloaded 394 times
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Cushman & Zha JME 1997

Unread post by TomDoan »

bjing wrote:Thanks for the quick response. I changed my code according to your suggestions. Two questions came up as follows.
1. It seems the response of the interest rate to itself is still negative.
bjing wrote: compute [vector] money=%xcol(faclast,3)
*compute money=%if(money(4)>0.0,-1.0,1.0)*money
The second line is commented out. The combination is also wrong for the order of your series. I assume you want the fourth shock to be money, so the 3 should be 4. And the interest rate is series 3, not 4, in the second line.
bjing wrote: 2. The results I get from the "Covariance Model-Likelihood, Estimation by BFGS" output appear to be a local optimum, not a global optimum, since the values for "log likelihood" and "log likelihood unrestricted" are different. I tried tens of different initial values but failed to make these two the same. I noticed that the code you provided for the Cushman and Zha (1997) paper also has the same issue, so I suspect this doesn't influence the final impulse responses. Should I keep trying if I want the global optimum? How would this influence the final outcome?
Thanks again for answering my questions.
No. Not only is your A11 block overidentified (only 13 free coefficients), the cross-term matrix has quite a few extra zeros. Overall, you have 18 free coefficients against a 7 x 7 covariance matrix, which has 7 x 8 / 2 = 28 distinct elements, so the restricted and unrestricted log likelihoods will generally differ.
bjing
Posts: 5
Joined: Thu Aug 31, 2023 3:06 pm

Re: Cushman & Zha JME 1997

Unread post by bjing »

Thanks so much for your timely feedback. I really appreciate it.
Actually, I tried to make the interest rate the shock; that's the third variable in my system. I made some changes to make the code easier to read.

Code: Select all

compute [vector] rate=%xcol(faclast,3)
In this case, if I want to make sure the fourth variable ("money") responds negatively to the interest rate shock, I need to do it this way:

Code: Select all

compute rate=%if(rate(4)>0.0,-1.0,1.0)*rate
But this gives really weird responses for all the variables, i.e., all of them become insignificant, to say nothing of the positive interest rate shock and the negative money response.
after apply the constrains from money.png (54.99 KiB)
Then I tried commenting out the previous line, but this gave a negative initial response of the interest rate to itself, which doesn't make sense.
The negative interest rate response to itself.png (82.42 KiB)
I also tried different variables using the same method. In some cases, I have the same issue that the own initial response is negative.
I also attached the screenshots for these two cases.
Thanks so much for bearing with my questions. I'm very grateful for your help.
Attachments
usblex.RPF
(10.99 KiB) Downloaded 437 times