VAR with time-varying parameters and stochastic volatility
Following up on a previous post by Tom Doan, I have attached code and data for estimation of a VAR with time-varying parameters and stochastic volatility. This roughly replicates the basic results of Giorgio Primiceri (2005), "Time Varying Structural Vector Autoregressions and Monetary Policy," Review of Economic Studies, 72, 821-852.
- Attachments
- repPrimiceri.rpf: program file, which calls the data file and procedure (26 KiB)
- REPPRIMICERI_10.RGF: plot (RATS 7.3) of posterior estimates of equation volatilities (9.25 KiB)
- data.UScurrent.xls: data file (55 KiB)
- VARTVPKSC.src: procedure file that does most of the calculations (31.34 KiB)
Todd Clark
Economic Research Dept.
Federal Reserve Bank of Cleveland
Re: VAR with time-varying parameters and stochastic volatility
Dear Tom/Todd,
I am trying to use these codes in a different context.
The DLM routine, at some point in the MCMC draws, spits out "1.#QNANe+000". I have tried different priors, different transformations of the data, and checking for explosive draws, but nothing seems to help. The problem usually starts in the draw of the alphas, sometimes in the deltas. Any advice?
Luching.
Re: VAR with time-varying parameters and stochastic volatility
I think the problem comes from a Cholesky decomposition step in the Carter-Kohn routine embedded in DLM, so I checked the %DECOMP command. I am not sure how RATS does Cholesky decompositions, but it seems to give answers where it shouldn't. For instance,
A = [1,1;1,1] returns [1,1;0,0] as its Cholesky factor in RATS, whereas MATLAB returns a "non positive definite" error.
Re: VAR with time-varying parameters and stochastic volatility
luching wrote: I think the problem comes from a Cholesky decomposition step in the Carter-Kohn routine embedded in DLM, so I checked the %DECOMP command. I am not sure how RATS does Cholesky decompositions, but it seems to give answers where it shouldn't. For instance, A = [1,1;1,1] returns [1,1;0,0] as its Cholesky factor in RATS, whereas MATLAB returns a "non positive definite" error.
That's MATLAB's problem, not ours: [1,1;0,0] is a Cholesky factor of [1,1;1,1].
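As a quick worked check of that claim (reading the reported [1,1;0,0] as a factor F with A = F'F, or equivalently its transpose [1,0;1,0] as a lower-triangular factor L with A = L*L'): F'F = [1,0;1,0]*[1,1;0,0] = [1,1;1,1] = A. The matrix is positive semi-definite but singular, so a triangular factorization exists, it is just not unique; MATLAB's chol insists on strict positive definiteness and therefore reports an error instead.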
Re: VAR with time-varying parameters and stochastic volatility
Thanks. Could you suggest a way to circumvent the earlier problem with DLM? Should I just skip that draw? How can I "trap" it?
Re: VAR with time-varying parameters and stochastic volatility
I don't know if this does you any good, but I have had less trouble with that type of error (on my machine, it shows up as values of "NaN") when estimating these types of models on a Linux server than on a desktop.
Todd Clark
Economic Research Dept.
Federal Reserve Bank of Cleveland
Re: VAR with time-varying parameters and stochastic volatility
A problem value like that should be caught with a check for %VALID. What you're seeing is a de-normal different from the one that we use for representing NA's internally, but we check for all the de-normal codes. If someone could send an example (program, data and a SEED) that can reproduce it, we'll take a look.
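For what it's worth, here is a minimal sketch of one way to set up that trap (the names astates, nstates and validdraw below are placeholders, not variables from VARTVPKSC.src): after the simulation step of each sweep, scan the drawn states with %VALID and flag the sweep if anything comes back invalid, then discard or redo that sweep rather than storing it.
* Minimal sketch; astates, nstates and validdraw are placeholder names.
* Scan every drawn state with %VALID after the simulation (DLM) step.
compute validdraw = 1
do t = stpt,endpt
   do i = 1,nstates
      if .not.%valid(astates(t)(i))
         compute validdraw = 0
   end do i
end do t
* If validdraw==0, skip (do not store) this sweep and redo the draw.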
Re: VAR with time-varying parameters and stochastic volatility
That would be great. I have attached a .zip file with a simpler example, of using Gibbs sampling to estimate a local level model. That said, the problem is somewhat idiosyncratic. If it doesn't show up in your checking, I can send code for the full TVP-stochastic volatility example, which more systematically yields the problem.
More specifically, in some programs I have written that take advantage of DLM in generating Gibbs sampling estimates of models with some form of time-varying parameters, I am finding that the same program with the same settings (including a pre-set seed value) will run fine in one attempt but then generate no results -- just values of "nan" -- in another. In some limited testing, there also seems to be some sensitivity to the platform. The same program with the same settings will generate just "nan" values on my Mac (7.3) but then run okay in Windows (7.2) or Linux (7.3), but I have also run into the "nan" no-results in Linux.
As to this example, in one run, I got no results (given in locallevel.NAN.out). In another run of exactly the same program, a few minutes later, I got results (given in locallevel.noNAN.out). I ran both on my Mac (batch mode), with v.7.3 of RatsPro.
- Attachments
- Archive.zip (26.45 KiB)
Todd Clark
Economic Research Dept.
Federal Reserve Bank of Cleveland
jonasdovern
Re: VAR with time-varying parameters and stochastic volatility
Has there been any suggestion on how to fix the "1.#QNANe+000" problem in the meantime?
Re: VAR with time-varying parameters and stochastic volatility
Dear Tom,
I would like to compare the linear and TVP-VAR models in terms of goodness of fit. Is there any available test for this with RATS?
Thanks
Re: VAR with time-varying parameters and stochastic volatility
Dear Todd,
In VARTVPKSC.src, STEP 4 draws {select_draw_1,...,select_draw_T}, the states used in the KSC mixture distribution. Could you please tell me why we have to divide by sqrt(var_mixdist(i)) in line 4 of the following loop?
do ii = 1,nvar
   do vtime = stpt,endpt
      ewise tempinside(i) = (ystar2mat(vtime)(ii) - (2.*deltadraw(vtime)(ii) + mean_mixdist(i)))/sqrt(var_mixdist(i))
      ewise tempdensity(i) = %density(tempinside(i))/sqrt(var_mixdist(i))
      comp selection = pr_mixdist.*tempdensity
      comp select_draw(vtime)(ii) = %ranbranch(selection)
   end do vtime
end do ii
I tried looking up how %density calculates the standard normal density but I could not find the answer in the manuals. If the normal density is defined as:
1/sqrt(2*pi*sigma^2)*exp(-.5*((x - mu)^2)/sigma^2)
and I feed %density with (x - mu)/sigma, i.e. tempinside(i), what does %density give me?
Thanks in advance!
Re: VAR with time-varying parameters and stochastic volatility
We need to compute the density value for a normal random variable. The textbook formula for that value is 1/sqrt(2*pi*sigma^2)*exp(-(x-mu)^2/(2*sigma^2)), where mu and sigma denote the mean and standard deviation of x. The %density function is specific to a standard normal random variable, which means it imposes sigma = 1 and mu = 0. In the calculations in the code, the "tempinside" vector contains the term corresponding to (x-mu)/sigma. The value of %density((x-mu)/sigma) then has to be divided by sigma to deliver the correct value of the density of x.
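A quick numerical check of that scaling (mu, sigma and x below are arbitrary illustrative values, not anything taken from the procedure); the two displayed numbers should agree.
* Check that %density((x-mu)/sigma)/sigma matches the textbook normal density.
* mu, sigma and x are arbitrary illustrative values.
compute mu = 1.0, sigma = 2.0, x = 1.5
compute z = (x-mu)/sigma
display %density(z)/sigma
display exp(-0.5*z*z)/sqrt(2.0*%pi*sigma*sigma)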
Todd Clark
Economic Research Dept.
Federal Reserve Bank of Cleveland
Re: VAR with time-varying parameters and stochastic volatility
I am revisiting this topic after a while. The code by Todd, and more generally the DLM routine in RATS, uses the Durbin-Koopman approach instead of Carter-Kohn to draw the states. I was wondering whether there are any specific advantages to doing so.
Re: VAR with time-varying parameters and stochastic volatility
luching wrote: The code by Todd, and more generally the DLM routine in RATS, uses the Durbin-Koopman approach instead of Carter-Kohn to draw the states. I was wondering whether there are any specific advantages to doing so.
They are distinct algorithms for doing the same type of draw, so either is OK. The advantage of Durbin-Koopman is that it is quite a bit simpler once you have written code for Kalman filtering and smoothing, while Carter-Kohn is a completely separate set of calculations.
Re: VAR with time-varying parameters and stochastic volatility
With an intercept term in the VAR system, the time-varying VAR model should be able to handle trends in the data, because the time-varying intercept takes care of the trend. For instance, US inflation and the nominal interest rate have an obvious downward trend. But from a stationarity standpoint, it looks as though the data must be de-trended.
Any advice on this?