
Recursive VECM - Johansen ML technique

Posted: Sun Feb 06, 2022 3:45 am
by ac_1
Hi Tom,

I am attempting to run a recursive VECM: estimation and one-step-ahead out-of-sample (OOS) forecasts using @JOHMLE.src.

I have two choices, either:
(a) I can "fix" the number of cointegrating vectors from economic theory or from in-sample estimation, and then run the do loop; or
(b) let the data choose the number of cointegrating vectors within the loop, traversing down the Rank column as long as Trace > the 95% critical value.

(a) I can do; however, how would I do (b) using @JOHMLE.src?

Also, do I have to normalise the cointegrating vectors prior to forecasting?

many thanks,
Amarjit

Re: Recursive VECM - Johansen ML technique

Posted: Sun Feb 06, 2022 9:28 am
by TomDoan
ac_1 wrote:Hi Tom,

I am attempting to run a recursive VECM, estimation and one-step ahead OOS forecasts using @JOHMLE.src.

I have two choices, either:
(a) I can "fix" the number of cointegrating vectors from economic theory or from IS estimation, and then run the do loop; or
(b) let the data choose the number of cointegrating vectors based on: if Trace>Trace-95%, traversing down the Rank column, within the loop.

(a) I can do, however how would I do (b) using @JOHMLE.src?
(b) seems like a really bad idea. If you are doing only relatively short-term forecasts, it shouldn't really matter much (in fact, for short-term forecasting you don't even need a VECM in the first place). For longer-term forecasts, if the cointegrating space is that volatile, the long-term forecasts will be as well.
ac_1 wrote: Also, do I have to normalise the cointegrating vectors prior to forecasting?
Absolutely not. The loadings adjust to deal with the scale of the betas.

Re: Recursive VECM - Johansen ML technique

Posted: Sun Feb 06, 2022 11:11 am
by ac_1
Many Thanks.

Are the SEs from FORECAST(STDERRS=...) still applicable for graphing prediction intervals (PIs) in a VECM?

Re: Recursive VECM - Johansen ML technique

Posted: Sun Feb 06, 2022 2:25 pm
by TomDoan
ac_1 wrote:Many Thanks.

Are the SE's still applicable in FORECAST(STDERRS=...), for graphing PI's in a VECM?
Yes, as the estimates assuming coefficients are known.
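In other words, these are the classical forecast standard errors built from the model's psi-weights with the coefficients treated as known, ignoring parameter-estimation uncertainty. As a minimal sketch of that calculation (in Python rather than RATS, with illustrative phi and sigma values), here it is for an AR(1):

```python
import math

# Classical h-step forecast standard errors for an AR(1) with *known* phi:
#   var_h = sigma^2 * (1 + phi^2 + ... + phi^(2(h-1)))
# Parameter-estimation uncertainty is ignored, which is what
# "coefficients known" means here.
phi, sigma = 0.8, 1.0
stderrs = [sigma * math.sqrt(sum(phi ** (2 * j) for j in range(h)))
           for h in range(1, 9)]
print(stderrs[0])   # 1.0 -- the one-step stderr is just sigma
print(stderrs[-1])  # rises toward sigma / sqrt(1 - phi^2) ~ 1.667 as h grows
```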

Re: Recursive VECM - Johansen ML technique

Posted: Mon Feb 14, 2022 10:34 am
by ac_1
Thanks.

Do you mean short-term forecasting as just one-step ahead?

Questions regarding interpretation and differencing in financial & macro variables:

In single-equation ARMA modelling of a stationary I(0) series, one would expect the coefficients to lie (mostly) between -1 and +1, implying a non-explosive/non-persistent series. Correct? E.g. in Enders AETS Table 2.4, the a1 term is greater than 1 for the AR(7), AR(2) and AR(1,2,7) models, and, except for the constant a0, the coefficients are within plus/minus 1 in all others.

As coefficients are on occasion not within those bounds, how does one interpret coefficients greater than 1 in absolute value, especially in multi-equation modelling (VAR-in-levels, VAR-in-differences, VECM)?

I have seen the following: https://stats.stackexchange.com/questio ... rpretation which points to IRF's, but doesn't mention actual interpretation of the weights.

Obviously, coefficients will be time-varying estimated recursively or rolling.

Is a VAR estimated in levels with combinations of I(1) and I(0) variables (including a simple bivariate VAR with an I(1) variable and an I(0) variable), that is NOT cointegrated, spurious?

For multi-equation models is it fair to say:
(a) If all variables are I(0), specify in levels.
(b) If all variables are I(1) and cointegrated, specify in levels. Specifying in first differences alone is a misspecification, as the error-correction term (ECT) is omitted.
(c) If all variables are I(1) and NOT cointegrated, specify in first-differences, also the standard inferences can be applied.
(d) If I(1) and I(0) variables are cointegrated, specify in levels.
(e) If I(1) and I(0) variables are NOT cointegrated, specify with I(1) variables in first-differences and I(0) in levels.

Similarly for I(2) variables. Any other combinations?

Re: Recursive VECM - Johansen ML technique

Posted: Mon Feb 14, 2022 3:26 pm
by TomDoan
ac_1 wrote:Thanks.

Do you mean short-term forecasting as just one-step ahead?
No. One or two years.
ac_1 wrote: Questions regarding interpretation and differencing in financial & macro variables:

In single-equation ARMA modelling with stationary I(0) one would expect coefficient betas to be (mostly) between plus/minus 1 implying a non-explosive/non-persistent series. Correct? E.g. Enders AETS Table 2.4 the a1 term is greater than 1 for AR(7), AR(2) and AR(1,2,7), and except for a0 the constant, less than plus/minus 1 in all others.
That's just wrong. Individual coefficients don't tell you anything about whether the model is stationary. The stationary process (1-.9L)^2 y = e converts to the AR representation (1-1.8L+.81L^2) y = e, or y = 1.8y{1}-.81y{2}+e, so the first-lag coefficient is not just bigger than 1 but *much* bigger than 1.
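That claim can be verified numerically. A quick check (sketched in Python rather than RATS, purely for illustration) that the characteristic roots of 1 - 1.8L + .81L^2 are a repeated 0.9, inside the unit circle even though the first-lag coefficient is 1.8:

```python
import cmath

# Characteristic equation of y = 1.8*y{1} - 0.81*y{2} + e:
#   z^2 - 1.8*z + 0.81 = 0
# Stationarity requires both roots strictly inside the unit circle.
a, b, c = 1.0, -1.8, 0.81
disc = cmath.sqrt(b * b - 4 * a * c)
z1 = (-b + disc) / (2 * a)
z2 = (-b - disc) / (2 * a)
print(z1, z2)                              # both (numerically) at 0.9
print(all(abs(z) < 1 for z in (z1, z2)))   # True: stationary despite the 1.8
```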

Re: Recursive VECM - Johansen ML technique

Posted: Tue Feb 15, 2022 4:46 am
by ac_1
TomDoan wrote: That's just wrong. Individual coefficients don't tell you anything about whether the model is stationary. The stationary process (1-.9L)^2 y = e, converts to the AR representation (1-1.8L+.81L^2) y = e, or y = 1.8y{1}-.81y{2}+e so the first lag is not just bigger than one but *much* bigger than 1.
Yes, understood the algebra. How is (1-.9L)^2 y = e a stationary process?


Do I interpret the weights on the AR lags in the usual regression way? E.g. in an AR(1): 'if y{1} increases by 1 unit, then, everything else being equal, y is expected to increase by phi units'.
Are IRF's (for single equations) a more appropriate interpretation?


How about the cointegration specifications?

Re: Recursive VECM - Johansen ML technique

Posted: Tue Feb 15, 2022 7:55 am
by TomDoan
ac_1 wrote:
TomDoan wrote: That's just wrong. Individual coefficients don't tell you anything about whether the model is stationary. The stationary process (1-.9L)^2 y = e, converts to the AR representation (1-1.8L+.81L^2) y = e, or y = 1.8y{1}-.81y{2}+e so the first lag is not just bigger than one but *much* bigger than 1.
Yes, understood the algebra. How is (1-.9L)^2 y = e a stationary process?
The roots are on the proper side of the unit circle. If you don't know that, you need to review how AR processes work.
ac_1 wrote: Do I interpret the weights on the AR lags, in the usual regression way? In an AR(1) 'If y{1} increases by 1 unit, y will be expected, everything else being equal, to increase by phi units'.
Are IRF's (for single equation's) a more appropriate interpretation?
You don't try to "interpret" the individual coefficients. That's why it's considered to be bad form to list the coefficients of a VAR---they provide no useful information. IRF's show the dynamics implied by the process.
ac_1 wrote: How about the cointegration specifications?
See https://estima.com/ratshelp/spuriousregression.html regarding spurious regressions.

I(1) and I(0) variables cannot be cointegrated. Cointegration is a relationship between I(1) variables. It is sometimes helpful to add I(0) variables to an existing cointegrating relationship, for small sample improvements to estimation, but it doesn't change anything theoretically.

Estimating in levels is never "wrong"; it just might be somewhat less efficient than imposing true restrictions. Estimating in differences is wrong in the presence of cointegration.

Re: Recursive VECM - Johansen ML technique

Posted: Fri Feb 18, 2022 4:21 am
by ac_1
TomDoan wrote: The roots are on the proper side of the unit circle. If you don't know that, you need to review how AR processes work.
The discriminant = 0. I get repeated roots = 0.9. The characteristic roots lie inside the unit circle. It's I(0) i.e. stationary. The series is convergent and stable as shown via plotting the time path where the arbitrary constants are set to 1 with t running from 1 to 100.

Code:

equation(noconst,ar=2,coeffs=||+1.8,-0.81||) arma y
*
compute croots=%polycxroots(%eqnlagpoly(arma,y))
disp croots
disp croots(1)
disp croots(2)
*
compute invcroots = 1.0/1.11111
disp 'invcroots:' invcroots
*
compute A1 = 1.0
compute A2 = 1.0
*
cal(irregular)
allocate 100
do t = 1, 100
   set res1 1 100 = A1*(invcroots)^t + A2*(invcroots)^t
end t
*
prin / res1
*
graph 1
# res1
Does the graph tend to zero because that's the mean of this AR(2) without a constant? Is the plot defined as the IRF for an AR process?
If I calculate the inverse roots and plot the graph, I can still compare the fit of various models using information criteria.

TomDoan wrote: You don't try to "interpret" the individual coefficients. That's why it's considered to be bad form to list the coefficients of a VAR---they provide no useful information. IRF's show the dynamics implied by the process.
Is it bad form to "interpret" the t-stats for an I(0) AR process?

Also, for e.g. a VAR(4):

Code:

compute companion=%modelcompanion(var4model)
eigen(cvalues=cv) companion
disp cv
This has 8 results, with brackets ( , ) enclosing the complex roots; are these the roots or the inverse roots?

Re: Recursive VECM - Johansen ML technique

Posted: Fri Feb 18, 2022 12:48 pm
by TomDoan
ac_1 wrote:
TomDoan wrote: The roots are on the proper side of the unit circle. If you don't know that, you need to review how AR processes work.
The discriminant = 0. I get repeated roots = 0.9. The characteristic roots lie inside the unit circle. It's I(0) i.e. stationary. The series is convergent and stable as shown via plotting the time path where the arbitrary constants are set to 1 with t running from 1 to 100.

Code:

equation(noconst,ar=2,coeffs=||+1.8,-0.81||) arma y
*
compute croots=%polycxroots(%eqnlagpoly(arma,y))
disp croots
disp croots(1)
disp croots(2)
*
compute invcroots = 1.0/1.11111
disp 'invcroots:' invcroots
*
compute A1 = 1.0
compute A2 = 1.0
*
cal(irregular)
allocate 100
do t = 1, 100
   set res1 1 100 = A1*(invcroots)^t + A2*(invcroots)^t
end t
*
prin / res1
*
graph 1
# res1
Does the graph tend to zero as that's the mean of this AR(2) without a constant? Is the plot defined as the IRF for an AR process?
If I calculate the inverse roots and plot the graph, I can still make comparisons with information criteria regarding fit of various models.
First of all, the algebraic calculation for an IRF for a process with repeated roots is more complicated than that. (It's covered in Hamilton). And no, the IRF converges to zero because the process is stationary---the mean of the process is irrelevant to the IRF calculation since it looks only at the AR dynamics.
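For the equal-roots case, Hamilton's closed form is psi_j = (j+1) r^j. A small numerical check (Python, illustrative only) that the AR recursion from y = 1.8y{1} - .81y{2} reproduces it:

```python
# IRF psi-weights from the AR recursion with a unit shock at time 0:
#   psi_0 = 1, psi_1 = 1.8, psi_j = 1.8*psi_{j-1} - 0.81*psi_{j-2}
# For the repeated root r = 0.9, the closed form is psi_j = (j + 1) * r**j.
r = 0.9
irf = [1.0, 1.8]
for j in range(2, 40):
    irf.append(1.8 * irf[-1] - 0.81 * irf[-2])
closed = [(j + 1) * r ** j for j in range(40)]
print(max(abs(u - v) for u, v in zip(irf, closed)))  # ~0: recursion matches
```

Note the IRF is hump-shaped (it rises before decaying), which is why eyeballing individual coefficients tells you little about the dynamics.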
TomDoan wrote: You don't try to "interpret" the individual coefficients. That's why it's considered to be bad form to list the coefficients of a VAR---they provide no useful information. IRF's show the dynamics implied by the process.
ac_1 wrote:Is it bad form to "interpret" the t-stats for an I(0) AR process?
Yes.
ac_1 wrote: Also, for e.g. VAR(4)

Code:

compute companion=%modelcompanion(var4model)
eigen(cvalues=cv) companion
disp cv
Has 8 results: the brackets ( , ) to include complex roots; are these the roots or inverse roots?
The roots of the process.
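Equivalently, the companion-matrix eigenvalues are the characteristic roots of the process (the inverses of the lag-polynomial roots), and stationarity requires them all inside the unit circle. A hand-rolled illustration (Python, not a RATS call) using the scalar AR(2) from earlier, whose 2x2 companion is [[1.8, -0.81], [1, 0]]:

```python
import cmath

# Eigenvalues of the companion matrix [[phi1, phi2], [1, 0]] solve
#   lambda^2 - phi1*lambda - phi2 = 0,
# which for phi1 = 1.8, phi2 = -0.81 is lambda^2 - 1.8*lambda + 0.81 = 0.
phi1, phi2 = 1.8, -0.81
disc = cmath.sqrt(phi1 ** 2 + 4 * phi2)
eigs = [(phi1 + disc) / 2, (phi1 - disc) / 2]
print(eigs)                               # both (numerically) 0.9
print(max(abs(e) for e in eigs) < 1)      # True: the process is stationary
```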

Re: Recursive VECM - Johansen ML technique

Posted: Fri Feb 18, 2022 1:25 pm
by ac_1
Thanks!

For the AR(2) that should say:
ac_1 wrote: If I calculate the roots...
Questions on lag selection:

For variables in levels, let's say from @varlagsselect the number of lags = 2.

Hence, using SYSTEM, for 'like-with-like' comparisons, estimating
(i) a VAR-in-levels: set LAGS = 1 2
(ii) a VAR-in-differences: set LAGS = 1 2
(iii) a VECM: set LAGS = 1 2 3, i.e. 3 lagged levels are equivalent to 2 lagged changes, so the VECM will have 2 lags plus the ECT(s).

And for @JohMLE for use with the above VECM, LAGS = (2+1)=3.

Correct?

Re: Recursive VECM - Johansen ML technique

Posted: Sun Feb 20, 2022 4:25 pm
by TomDoan
ac_1 wrote:Thanks!

For the AR(2) that should say:
ac_1 wrote: If I calculate the roots...
Questions on lag selection:

For variables in levels, let's say from @varlagsselect the number of lags = 2.

Hence, using SYSTEM, for 'like-with-like' comparisons, estimating
(i) a VAR-in-levels: set LAGS = 1 2
(ii) a VAR-in-differences: set LAGS = 1 2
(iii) a VECM: set LAGS = 1 2 3 i.e. 3 lagged levels are equivalent to 2 lagged changes. Therefore the VECM will have 2 lags and ECT('s).

And for @JohMLE for use with the above VECM, LAGS = (2+1)=3.

Correct?
No. If you get @VARLAGSELECT of 2 in levels, then the VAR in differences would be LAGS 1, and @JOHMLE would be LAGS=2.

Re: Recursive VECM - Johansen ML technique

Posted: Mon Feb 21, 2022 3:22 am
by ac_1
TomDoan wrote: No. If you get @VARLAGSELECT of 2 in levels, then the VAR in differences would be LAGS 1, and @JOHMLE would be LAGS=2.
If I get @VARLAGSELECT of 1 in levels, then the VAR in differences would be no lags, and @JOHMLE would be LAGS=1.

Correct?


To confirm:
(a) Unit root tests e.g. ADF: tests if a time series variable is non-stationary and possesses a unit root.
(b) Calculating/plotting the (inverse) characteristic roots of ARIMA models: checks whether the process implied by the model/specification is stable, stationary and invertible.

Correct?

Re: Recursive VECM - Johansen ML technique

Posted: Mon Feb 21, 2022 1:23 pm
by TomDoan
ac_1 wrote:
TomDoan wrote: No. If you get @VARLAGSELECT of 2 in levels, then the VAR in differences would be LAGS 1, and @JOHMLE would be LAGS=2.
If I get @VARLAGSELECT of 1 in levels, then the VAR in differences would be no lags, and @JOHMLE would be LAGS=1.

Correct?
That's correct.
ac_1 wrote: To confirm:
(a) Unit root tests e.g. ADF: tests if a time series variable is non-stationary and possesses a unit root.
(b) Calculating/Plotting (inverse) characteristic roots of ARIMA models: checks if the sequence generated from the model/specification/process is stable, stationary and invertible.

Correct?
(b) tells you nothing about "stability". Invertibility usually is applied to the MA part, not the AR part.

Re: Recursive VECM - Johansen ML technique

Posted: Tue Feb 22, 2022 3:10 am
by ac_1
For (b), does this mean that for any series used with the model, the implied process is stationary (AR part) and invertible (MA part)?

For ARIMA I can calculate the roots and inverse roots as from viewtopic.php?f=5&t=724&hilit=arroots

Code:

compute ARroots=%polycxroots(%eqnlagpoly(bjeq,caemp))
compute MAroots=%polycxroots(%eqnlagpoly(bjeq,%mvgavge))

compute iARroot1 = 1.0/(ARroots(1))
I can then use %real and %imag to return the respective parts of the complex number

Code:

compute iARroot1_real = %real(iARroot1)
compute iARroot1_imag = %imag(iARroot1)
I'd like to plot the inverse AR roots within a unit circle and inverse MA roots within a unit circle as in: https://otexts.com/fpp2/arima-r.html

There's SCATTER, GRAPH, GRTEXT, etc., but I'm not certain how to plot them? :)
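One way to see what's needed: that kind of plot is just a scatter of the inverse roots' real/imaginary parts overlaid on a parametrized unit circle. A sketch of the computation in Python (illustrative only; the AR(2) polynomial 1 - 1.1L + 0.3L^2 is an assumed example, and in RATS one would SET equivalent series and overlay them with SCATTER):

```python
import cmath
import math

# Inverse AR roots of the example polynomial 1 - 1.1L + 0.3L^2: they solve
#   z^2 - 1.1*z + 0.3 = 0
# and lie inside the unit circle when the AR part is stationary.
b, c = -1.1, 0.3
disc = cmath.sqrt(b * b - 4 * c)
inv_roots = [(-b + disc) / 2, (-b - disc) / 2]
pts = [(z.real, z.imag) for z in inv_roots]   # what %real / %imag would give

# Points tracing the unit circle, to overlay on the same scatter plot:
circle = [(math.cos(t), math.sin(t))
          for t in (2 * math.pi * k / 200 for k in range(201))]
print(pts)   # two real inverse roots, 0.6 and 0.5 -- inside the circle
```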