Recursive VECM - Johansen ML technique
Hi Tom,
I am attempting to run a recursive VECM: estimation and one-step-ahead out-of-sample (OOS) forecasts using @JOHMLE.src.
I have two choices, either:
(a) I can "fix" the number of cointegrating vectors from economic theory or from in-sample (IS) estimation, and then run the do loop; or
(b) let the data choose the number of cointegrating vectors inside the loop, based on the trace test (comparing the Trace statistic with its 95% critical value, traversing down the Rank column).
(a) I can do; however, how would I do (b) using @JOHMLE.src?
Also, do I have to normalise the cointegrating vectors prior to forecasting?
Many thanks,
Amarjit
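A rough sketch of what the option (a) loop might look like: the series names, dates, and everything other than the LAGS= option of @JOHMLE are placeholders/assumptions rather than anything taken from this thread, so the exact calling convention should be checked against the header of JOHMLE.SRC.

Code:
* Sketch only: fixed cointegrating rank, expanding estimation window,
* one-step-ahead out-of-sample forecast at each origin.
* First and last forecast origins (placeholders)
compute fstart = 2000:1
compute fend   = 2019:4
do origin = fstart,fend
   compute estend = origin-1
   * Re-estimate the cointegrating vector(s) on data up to ESTEND; take the
   * exact option names and outputs from the header of JOHMLE.SRC.
   @johmle(lags=2) 1 estend
   # y1 y2 y3
   * ...rebuild the VECM with the estimated ECT(s), ESTIMATE it over the same
   * sample, then FORECAST one step ahead at ORIGIN and store the result...
end do origin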
Re: Recursive VECM - Johansen ML technique
ac_1 wrote:
Hi Tom,
I am attempting to run a recursive VECM: estimation and one-step-ahead out-of-sample (OOS) forecasts using @JOHMLE.src.
I have two choices, either:
(a) I can "fix" the number of cointegrating vectors from economic theory or from in-sample (IS) estimation, and then run the do loop; or
(b) let the data choose the number of cointegrating vectors inside the loop, based on the trace test (comparing the Trace statistic with its 95% critical value, traversing down the Rank column).
(a) I can do; however, how would I do (b) using @JOHMLE.src?

(b) seems like a really bad idea. If you are doing only relatively short-term forecasts, it shouldn't really matter much (in fact, if you are doing short-term forecasting, you don't even need a VECM in the first place). If you are doing longer-term forecasts and the cointegrating space is that volatile, the long-term forecasts will be as well.

ac_1 wrote:
Also, do I have to normalise the cointegrating vectors prior to forecasting?

Absolutely not. The loadings adjust to deal with the scale of the betas.
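To spell out why (standard algebra in the Johansen setup): the VECM only identifies the product $\Pi=\alpha\beta'$, so any rescaling of the cointegrating vectors is absorbed by the loadings, and the fitted model (and hence the forecasts) is unchanged:

$$
\Pi \;=\; \alpha\beta' \;=\; \bigl(\alpha (H')^{-1}\bigr)\bigl(\beta H\bigr)'
\qquad\text{for any nonsingular } r\times r \text{ matrix } H .
$$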
Re: Recursive VECM - Johansen ML technique
Many Thanks.
Are the SE's still applicable in FORECAST(STDERRS=...), for graphing prediction intervals (PI's) in a VECM?
Re: Recursive VECM - Johansen ML technique
ac_1 wrote:
Many Thanks.
Are the SE's still applicable in FORECAST(STDERRS=...), for graphing prediction intervals (PI's) in a VECM?

Yes; those are the standard errors computed assuming the coefficients are known.
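As a hedged illustration (the model and series names are placeholders, not from the thread), the STDERRS= output can be turned into approximate prediction bands along these lines:

Code:
* Sketch: one-step-ahead forecast from an estimated MODEL named VECMODEL
* (placeholder), with standard errors that ignore coefficient uncertainty.
forecast(model=vecmodel,from=forigin,steps=1,results=fcst,stderrs=sdev)
* Approximate 95% interval for the first variable in the model at FORIGIN
set lower forigin forigin = fcst(1)-1.96*sdev(1)
set upper forigin forigin = fcst(1)+1.96*sdev(1)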
Re: Recursive VECM - Johansen ML technique
Thanks.
Do you mean short-term forecasting as just one-step ahead?
Questions regarding interpretation and differencing in financial & macro variables:
In single-equation ARMA modelling of a stationary I(0) series, one would expect the coefficients to be (mostly) between plus/minus 1, implying a non-explosive/non-persistent series. Correct? E.g. in Enders, AETS, Table 2.4, the a1 term is greater than 1 for the AR(7), AR(2) and AR(1,2,7) specifications, and, except for the constant a0, the coefficients are less than plus/minus 1 in all the others.
As coefficients are occasionally not within those bounds, how does one interpret coefficients greater than plus/minus 1, especially in multi-equation modelling (VAR in levels, VAR in differences, VECM)?
I have seen the following: https://stats.stackexchange.com/questio ... rpretation which points to IRF's, but doesn't discuss the actual interpretation of the weights.
Obviously, the coefficients will be time-varying when estimated recursively or on a rolling window.
Is a VAR estimated in levels with combinations of I(1) and I(0) variables (including a simple bivariate VAR with an I(1) variable and an I(0) variable), that is NOT cointegrated, spurious?
For multi-equation models is it fair to say:
(a) If all variables are I(0), specify in levels.
(b) If all variables are I(1) and are cointegrated, specify in levels. Specifying in just first differences would be a misspecification, as the ECT is omitted.
(c) If all variables are I(1) and NOT cointegrated, specify in first-differences, also the standard inferences can be applied.
(d) If I(1) and I(0) variables are cointegrated, specify in levels.
(e) If I(1) and I(0) variables are NOT cointegrated, specify with I(1) variables in first-differences and I(0) in levels.
Similarly for I(2) variables. Any other combinations?
Re: Recursive VECM - Johansen ML technique
ac_1 wrote:
Thanks.
Do you mean short-term forecasting as just one-step ahead?

No. One or two years.

ac_1 wrote:
Questions regarding interpretation and differencing in financial & macro variables:
In single-equation ARMA modelling of a stationary I(0) series, one would expect the coefficients to be (mostly) between plus/minus 1, implying a non-explosive/non-persistent series. Correct? E.g. in Enders, AETS, Table 2.4, the a1 term is greater than 1 for the AR(7), AR(2) and AR(1,2,7) specifications, and, except for the constant a0, the coefficients are less than plus/minus 1 in all the others.

That's just wrong. Individual coefficients don't tell you anything about whether the model is stationary. The stationary process (1-.9L)^2 y = e converts to the AR representation (1-1.8L+.81L^2) y = e, or y = 1.8y{1}-.81y{2}+e, so the first lag coefficient is not just bigger than one but *much* bigger than 1.
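Writing that example out (nothing new here, just the algebra behind the quoted AR(2)):

$$
(1-0.9L)^2\,y_t \;=\; \bigl(1-1.8L+0.81L^2\bigr)\,y_t \;=\; e_t ,
$$

so the lag polynomial $1-1.8z+0.81z^2$ has a repeated root at $z=1/0.9\approx 1.11$, outside the unit circle; equivalently, both inverse (characteristic) roots equal $0.9<1$. The process is therefore stationary even though the first AR coefficient is 1.8.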
Re: Recursive VECM - Johansen ML technique
TomDoan wrote:
That's just wrong. Individual coefficients don't tell you anything about whether the model is stationary. The stationary process (1-.9L)^2 y = e converts to the AR representation (1-1.8L+.81L^2) y = e, or y = 1.8y{1}-.81y{2}+e, so the first lag coefficient is not just bigger than one but *much* bigger than 1.

Yes, understood the algebra. How is (1-.9L)^2 y = e a stationary process?
Do I interpret the weights on the AR lags, in the usual regression way? In an AR(1) 'If y{1} increases by 1 unit, y will be expected, everything else being equal, to increase by phi units'.
Are IRF's (for a single equation) a more appropriate interpretation?
How about the cointegration specifications?
Re: Recursive VECM - Johansen ML technique
ac_1 wrote:
Yes, understood the algebra. How is (1-.9L)^2 y = e a stationary process?

The roots are on the proper side of the unit circle. If you don't know that, you need to review how AR processes work.

ac_1 wrote:
Do I interpret the weights on the AR lags, in the usual regression way? In an AR(1) 'If y{1} increases by 1 unit, y will be expected, everything else being equal, to increase by phi units'.
Are IRF's (for a single equation) a more appropriate interpretation?

You don't try to "interpret" the individual coefficients. That's why it's considered to be bad form to list the coefficients of a VAR---they provide no useful information. IRF's show the dynamics implied by the process (a minimal sketch of computing them is given below).

ac_1 wrote:
How about the cointegration specifications?

See https://estima.com/ratshelp/spuriousregression.html regarding spurious regressions.
I(1) and I(0) variables cannot be cointegrated. Cointegration is a relationship between I(1) variables. It is sometimes helpful to add I(0) variables to an existing cointegrating relationship, for small-sample improvements to estimation, but it doesn't change anything theoretically.
Estimating in levels is never "wrong"; it just might be somewhat less efficient than imposing true restrictions. Estimating in differences is wrong in the presence of cointegration.
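A minimal sketch of that IRF computation in RATS, assuming a VAR/VECM already estimated into a MODEL named VARMODEL (a placeholder name); the options shown should be checked against the IMPULSE documentation:

Code:
* Sketch: impulse responses from an estimated model; IRF(i,j) holds the
* response of variable i to a shock in equation j.
impulse(model=varmodel,steps=20,results=irf)
graph(header='Response of variable 1 to a shock in equation 1') 1
# irf(1,1)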
Re: Recursive VECM - Johansen ML technique
TomDoan wrote:
The roots are on the proper side of the unit circle. If you don't know that, you need to review how AR processes work.

The discriminant = 0. I get a repeated root of 0.9. The characteristic roots lie inside the unit circle, so it's I(0), i.e. stationary. The series is convergent and stable, as shown by plotting the time path with the arbitrary constants set to 1 and t running from 1 to 100.
Does the graph tend to zero as that's the mean of this AR(2) without a constant? Is the plot defined as the IRF for an AR process?

Code:
* AR(2) with coefficients 1.8 and -0.81, i.e. (1-.9L)^2 y = e
equation(noconst,ar=2,coeffs=||+1.8,-0.81||) arma y
*
* Roots of the AR lag polynomial 1 - 1.8L + 0.81L^2
compute croots=%polycxroots(%eqnlagpoly(arma,y))
disp croots
disp croots(1)
disp croots(2)
*
* Inverse (characteristic) root: 1/1.11111 = 0.9
compute invcroots = 1.0/1.11111
disp 'invcroots:' invcroots
*
* Arbitrary constants of the time path, both set to 1
compute A1 = 1.0
compute A2 = 1.0
*
cal(irregular)
allocate 100
* SET already runs over t = 1,...,100, so no DO loop is needed
set res1 1 100 = A1*(invcroots)^t + A2*(invcroots)^t
*
print / res1
*
graph 1
# res1

If I calculate the inverse roots and plot the graph, I can still make comparisons with information criteria regarding the fit of various models.

TomDoan wrote:
You don't try to "interpret" the individual coefficients. That's why it's considered to be bad form to list the coefficients of a VAR---they provide no useful information. IRF's show the dynamics implied by the process.

Is it bad form to "interpret" the t-stats for an I(0) AR process?
Also, for e.g. a VAR(4), the following has 8 results; the brackets ( , ) include the complex roots. Are these the roots or the inverse roots?

Code:
* Companion matrix of the estimated VAR and its eigenvalues
compute companion=%modelcompanion(var4model)
eigen(cvalues=cv) companion
disp cv
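As a small follow-on check (a sketch only, assuming %CABS is available for complex absolute values), the moduli of those eigenvalues can be displayed; for a stationary VAR all of them should be strictly less than 1, and the count of 8 matches the "8 results" mentioned above:

Code:
* Sketch: moduli of the 8 companion-matrix eigenvalues
do i = 1,8
   disp %cabs(cv(i))
end do i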
Re: Recursive VECM - Johansen ML technique
ac_1 wrote:
The discriminant = 0. I get a repeated root of 0.9. The characteristic roots lie inside the unit circle, so it's I(0), i.e. stationary. The series is convergent and stable, as shown by plotting the time path with the arbitrary constants set to 1 and t running from 1 to 100.
Does the graph tend to zero as that's the mean of this AR(2) without a constant? Is the plot defined as the IRF for an AR process?

First of all, the algebraic calculation for an IRF for a process with repeated roots is more complicated than that (it's covered in Hamilton; the expansion for this case is written out below). And no, the IRF converges to zero because the process is stationary---the mean of the process is irrelevant to the IRF calculation, since it looks only at the AR dynamics.

ac_1 wrote:
Is it bad form to "interpret" the t-stats for an I(0) AR process?

Yes.

ac_1 wrote:
Also, for e.g. a VAR(4), the following has 8 results; the brackets ( , ) include the complex roots. Are these the roots or the inverse roots?

The roots of the process.
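Spelling out that repeated-root case (the standard expansion, added here for completeness):

$$
\frac{1}{(1-0.9L)^2} \;=\; \sum_{j=0}^{\infty}(j+1)(0.9)^j L^j
\qquad\Longrightarrow\qquad
\psi_j \;=\; (j+1)(0.9)^j ,
$$

so the impulse responses rise at first (the $(j+1)$ term) before the geometric factor takes over and the IRF decays to zero.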
Re: Recursive VECM - Johansen ML technique
Thanks!
For the AR(2), that should say:
ac_1 wrote:
If I calculate the roots...

Questions on lag selection:
For variables in levels, let's say @VARLAGSELECT selects the number of lags = 2.
Hence, using SYSTEM, for 'like-with-like' comparisons, estimating:
(i) a VAR in levels: set LAGS = 1 2
(ii) a VAR in differences: set LAGS = 1 2
(iii) a VECM: set LAGS = 1 2 3, i.e. 3 lagged levels are equivalent to 2 lagged changes, so the VECM will have 2 lags plus the ECT(s).
And for @JohMLE used with the above VECM, LAGS = (2+1) = 3.
Correct?
Re: Recursive VECM - Johansen ML technique
ac_1 wrote:
For variables in levels, let's say @VARLAGSELECT selects the number of lags = 2.
Hence, using SYSTEM, for 'like-with-like' comparisons, estimating:
(i) a VAR in levels: set LAGS = 1 2
(ii) a VAR in differences: set LAGS = 1 2
(iii) a VECM: set LAGS = 1 2 3, i.e. 3 lagged levels are equivalent to 2 lagged changes, so the VECM will have 2 lags plus the ECT(s).
And for @JohMLE used with the above VECM, LAGS = (2+1) = 3.
Correct?

No. If you get @VARLAGSELECT of 2 in levels, then the VAR in differences would be LAGS 1, and @JOHMLE would be LAGS=2.
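A sketch of that bookkeeping in RATS (the series and model names are placeholders; only the lag counts follow from the reply above):

Code:
* Levels VAR with the 2 lags chosen by @VARLAGSELECT
system(model=levelvar)
variables y1 y2 y3
lags 1 2
det constant
end(system)
*
* The same dynamics in differences use one lag
system(model=diffvar)
variables dy1 dy2 dy3
lags 1
det constant
end(system)
*
* ...and the Johansen procedure is called with the levels lag length, e.g.
* @johmle(lags=2,...) # y1 y2 y3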
Re: Recursive VECM - Johansen ML technique
TomDoan wrote:
No. If you get @VARLAGSELECT of 2 in levels, then the VAR in differences would be LAGS 1, and @JOHMLE would be LAGS=2.

If I get @VARLAGSELECT of 1 in levels, then the VAR in differences would have no lags, and @JOHMLE would be LAGS=1.
Correct?
To confirm:
(a) Unit-root tests, e.g. the ADF test: they test whether a time-series variable is non-stationary and possesses a unit root.
(b) Calculating/plotting the (inverse) characteristic roots of an ARIMA model: this checks whether the process implied by the model/specification is stable, stationary and invertible.
Correct?
Re: Recursive VECM - Johansen ML technique
ac_1 wrote:
If I get @VARLAGSELECT of 1 in levels, then the VAR in differences would have no lags, and @JOHMLE would be LAGS=1.
Correct?

That's correct.

ac_1 wrote:
To confirm:
(a) Unit-root tests, e.g. the ADF test: they test whether a time-series variable is non-stationary and possesses a unit root.
(b) Calculating/plotting the (inverse) characteristic roots of an ARIMA model: this checks whether the process implied by the model/specification is stable, stationary and invertible.
Correct?

(b) tells you nothing about "stability". Invertibility usually is applied to the MA part, not the AR part.
Re: Recursive VECM - Johansen ML technique
For (b), does this mean that for any series used with the model, the sequence generated (i.e. the process implied) is stationary and invertible (the MA part)?
For ARIMA models I can calculate the roots and inverse roots as in viewtopic.php?f=5&t=724&hilit=arroots :

Code:
* AR and MA roots of the estimated equation BJEQ for the series CAEMP
compute ARroots=%polycxroots(%eqnlagpoly(bjeq,caemp))
compute MAroots=%polycxroots(%eqnlagpoly(bjeq,%mvgavge))
* Inverse of the first AR root
compute iARroot1 = 1.0/(ARroots(1))

I can then use %real and %imag to return the respective parts of the complex number:

Code:
compute iARroot1_real = %real(iARroot1)
compute iARroot1_imag = %imag(iARroot1)

I'd like to plot the inverse AR roots within a unit circle, and the inverse MA roots within a unit circle, as in https://otexts.com/fpp2/arima-r.html
There's SCATTER, GRAPH, GRTEXT, etc., but I'm not certain how to plot?
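One possible way to draw that kind of picture (a sketch only: it assumes the iARroot1 computed above, a workspace of at least 361 entries, and SCATTER options/styles that should be verified against the manual):

Code:
* Sketch: the unit circle as 361 closely spaced points, plus the inverse AR root
set circx 1 361 = cos(2.0*3.14159265*(t-1)/360.0)
set circy 1 361 = sin(2.0*3.14159265*(t-1)/360.0)
set rootx 1 1   = %real(iARroot1)
set rooty 1 1   = %imag(iARroot1)
* Two x-y pairs on one scatter plot: the circle and the root
scatter(style=dots,header='Inverse AR root and the unit circle') 2
# circx circy 1 361
# rootx rooty 1 1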