Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Sun Jul 05, 2015 9:49 pm
by TomDoan
f_ta wrote: Dear Tom,
Thank you for your prompt reply. Changing the xsteep variables to contemporaneous still did not lead to convergence. I also tried leaving out the exogenous variable and did not get any valid results.
You need XSTEEP to be lagged, not contemporaneous. The problem is when you put the current value in.
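In the equation definitions, that means entering the steepness series with an explicit one-period lag; a minimal sketch in the notation of the code posted later in this thread:
do i=1,n
   equation ar1eq(i) y(i)
   # constant y(i){1} s(i){1}   ;* s(i){1}, not s(i), in the mean equation
end do i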
f_ta wrote:
As I have several datasets, some of which include negative values, I do not think that log-returns will work. I am trying to do a study similar to In (2007), "Volatility spillovers across international swap markets: The US, Japan, and the UK", and Toyoshima & Hamori (2012), "Volatility transmission of swap spreads among the US, Japan and the UK: a cross-correlation function approach", both of which state that they are using first differences.
Swap spreads are different from exchange rates. Exchange rates should be modeled in log differences. Period.
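If the series were exchange rates, the log-difference version of the transformation in the code below would be a sketch like this (the scaling by 100 is just a convention):
set y(1) = 100.0*log(eurf/eurf{1})   ;* log return in percent
set y(2) = 100.0*log(usdf/usdf{1})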
f_ta wrote:
Is it possible to change the distribution in the maximize function to a t-distribution?
Sure. Change this to use %logtdensity and add an extra parameter for the degrees of freedom.
frml logl = hh(t)=EGARCHSpillover(t),%logdensity(hh(t),%xt(u,t))
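A minimal sketch of that change (the parmset name TPARMS and the guess value for NU are my choices here; %LOGTDENSITY takes the degrees of freedom as its third argument):
nonlin(parmset=tparms) nu
compute nu=10.0
frml logl = hh(t)=EGARCHSpillover(t),%logtdensity(hh(t),%xt(u,t),nu)
Remember to add TPARMS to the PARMSET option on the MAXIMIZE instruction.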
Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Fri May 06, 2016 3:22 pm
by pls
Here is the code with your corrections incorporated.
I would like to know how to change the code to incorporate two additional exogenous variables, ex1 and ex2, in the conditional volatility equations. I also need to establish the coefficients of these variables and their statistical significance.
Would the change be as follows:
*Read in the ex1 and ex2 series from the data
*Declare the coefficient vector cex(n)
dec vect cex(n) ;* Coefficients of exogenous variables in conditional volatility in EGARCH
*Change to the conditional volatility
log h(i) = c(i) + sum_j a(i)(j) z(j){1} + g(i) log h(i){-1} + cex(1)*ex1{1} + cex(2)*ex2{1}
*Change to the nonlin parameter set
nonlin(parmset=garchparms) c g a d cex
*set initial values for cex in the do loop
do i=1,n
linreg(equation=meaneq,noprint) y(i) gstart gend
set u(i) = %if(%valid(%resids),%resids,0.0)
compute beta(i) = %beta
compute c(i) = log(%seesq)*(1-.80)
compute g(i) =.80
compute a(i) = %unitv(n,i)*.25
compute d(i) =0.0
compute cex(i)=0.0
end do i
I am not sure how to incorporate a test for significance of the cex coefficients.
Thanks.
Code:
all 1292
open data h:\Datafnr.xls
data(format=xls,org=obs) / eurf usdf eursteep usdsteep
*
compute n=2
*
compute gstart=3,gend=1292
*
dec vect[series] y(n)
set y(1) = (eurf-eurf{1})
set y(2) = (usdf-usdf{1})
*
dec vect[series] s(n)
set s(1) = (eursteep-eursteep{1})
set s(2) = (usdsteep-usdsteep{1})
*
dec vect[equation] ar1eq(n)
do i=1,n
equation ar1eq(i) y(i)
# constant y(i){1} s(i){1}
end do i
*
group ar1model ar1eq(1) ar1eq(2)
*
garch(model=ar1model,mv=cc,variance=exp,asymmetric,distrib=t) gstart gend
*
equation meaneq *
# constant y(1){1} y(2){1} s(1){1} s(2){1} ;* both lagged steepness series
*
dec vect[series] u(n) ;* Residuals
dec vect[frml] mean(n) ;* Model for mean
dec vect[frml] z(n) ;* EGARCH indexes
dec series[vect] hhd ;* Variances
dec series[symm] hh ;* Full covariance matrices
*
dec vect[vect] beta(n)
nonlin(parmset=meanparms) beta
*
* The variance for equation i takes the form:
*
* log h(i) = c(i) + sum_j a(i)(j) z(j){1} + g(i) log h(i){-1}
*
* z(i) = abs(u(i)/sqrt(h(i))) - sqrt(2/pi) + d(i)*u(i)/sqrt(h(i))
*
dec vect c(n) ;* Variance intercepts in EGARCH
dec vect g(n) ;* Lagged variance coefficients in EGARCH
dec vect d(n) ;* Asymmetry coefficients in EGARCH
dec vect[vect] a(n) ;* Lagged z terms in EGARCH
nonlin(parmset=garchparms) c g a d
*
do i=1,n
frml(equation=meaneq,vector=beta(i)) mean(i)
frml z(i) = abs(u(&i){0})/sqrt(hhd{0}(&i))-sqrt(2/%pi)+$
d(&i)*u(&i){0}/sqrt(hhd{0}(&i))
end do i
*
dec packed rr(n-1,n-1)
nonlin(parmset=ccparms) rr
*
function EGARCHSpillover t
type symmetric EGARCHSpillover
type integer t
*
local integer i j
local real hlog
dim EGARCHSpillover(n,n)
*
do i=1,n
compute hlog=c(i)+g(i)*log(hhd(t-1)(i))
do j=1,n
compute hlog=hlog+a(i)(j)*z(j)(t-1)
end do j
compute hhd(t)(i) = exp(hlog)
end do i
ewise EGARCHSpillover(i,j)=$
%if(i==j,hhd(t)(i),rr(i-1,j)*sqrt(hhd(t)(i)*hhd(t)(j)))
compute hh(t)=EGARCHSpillover
*
do i=1,n
compute u(i)(t) = y(i)(t)-mean(i)(t)
end do i
*
end EGARCHSpillover
*
* Log likelihood
*
frml logl = hh(t)=EGARCHSpillover(t),%logdensity(hh(t),%xt(u,t))
*
*
do i=1,n
linreg(equation=meaneq,noprint) y(i) gstart gend
set u(i) = %if(%valid(%resids),%resids,0.0)
compute beta(i) = %beta
compute c(i) = log(%seesq)*(1-.80)
compute g(i) =.80
compute a(i) = %unitv(n,i)*.25
compute d(i) =0.0
end do i
*
vcv gstart gend
# u
gset hhd = %xdiag(%sigma)
gset hh = %sigma
ewise rr(i,j)=%cvtocorr(%sigma)(i+1,j)
*
*
maximize(parmset=meanparms+garchparms+ccparms,$
pmethod=simplex,piters=20,method=bfgs,iters=500) logl gstart gend
*
* Diagnostics
*
dec vect[series] ustd(n)
*
dec vect[labels] vl(2)
compute vl=||"EUR","USD"||
report(action=define)
report(atrow=2,atcol=1,fillby=cols) "$E(z_{i,t})$" "$E(z^2_{i,t})$" "$LB(12);z_{i,t}$" "$LB(12);z^2_{i,t}$"
do i=1,n
report(atrow=1,atcol=i+1,align=center) vl(i)
set ustd(i) %regstart() %regend() = u(i)/sqrt(hh(t)(i,i))
set ustdsq = ustd(i)^2
sstats(mean) %regstart() %regend() ustd(i)>>eustd ustdsq>>eustdsq
report(atrow=2,atcol=i+1,fillby=cols) eustd eustdsq
corr(number=12,qstats,noprint) ustd(i)
report(atrow=4,atcol=i+1,special=1+fix(%qsignif<.05)) %qstat
corr(number=12,qstats,noprint) ustdsq
report(atrow=5,atcol=i+1,special=1+fix(%qsignif<.05)) %qstat
@regcorrs(number=10,nocrits,nograph,qstat) ustd(i)
compute q1=%cdstat,q1signif=%signif
@regcorrs(number=10,nocrits,nograph,qstat,dfc=2) ustdsq
compute q2=%cdstat,q2signif=%signif
report(atrow=i+6,atcol=1) vl(i) q1 q1signif q2 q2signif
end do i
report(action=format,atrow=2,atcol=2,picture="*.####",align=decimal)
report(action=show)
*
@mvqstat(lags=10)
# ustd
*
@mvarchtest(lags=2)
# ustd
*
Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Fri May 06, 2016 5:53 pm
by TomDoan
I assume you mean joint significance. You can do an LR test by zeroing them out and leaving them out of the parameter set to get a restricted estimate, then comparing the likelihood from that with the one from the unrestricted model. The easiest way to set up a Wald test is to use Statistics->Regression Tests, pick the Exclusion Tests, and select the CEX variables in the variables list.
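A sketch of the LR version, using the names from your posted code:
* Unrestricted estimate (CEX in the parameter set)
maximize(parmset=meanparms+garchparms+ccparms,$
   pmethod=simplex,piters=20,method=bfgs,iters=500) logl gstart gend
compute logu=%logl
* Restricted estimate: zero out CEX and drop it from the parameter set
compute cex=%const(0.0)
nonlin(parmset=garchparms) c g a d
maximize(parmset=meanparms+garchparms+ccparms,$
   pmethod=simplex,piters=20,method=bfgs,iters=500) logl gstart gend
compute logr=%logl
cdf(title="LR Test of CEX=0") chisqr 2.0*(logu-logr) n
The menu operation generates a TEST instruction; done by hand it would look like the sketch below, where the coefficient positions (19 and 20 in the output you show later) depend on the ordering of your parameter sets and need checking:
test(zeros,title="Wald Test of CEX=0")
# 19 20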
Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Mon May 09, 2016 1:44 pm
by pls
Thanks. I was able to add in the two exogenous variables to the conditional volatility equations and estimate the results. I am also using dummy variables to account for structural breaks as in Mensi et al (2015). The results seem ok as follows:
Code:
MAXIMIZE - Estimation by BFGS
NO CONVERGENCE IN 36 ITERATIONS
LAST CRITERION WAS 0.0000000
SUBITERATIONS LIMIT EXCEEDED.
ESTIMATION POSSIBLY HAS STALLED OR MACHINE ROUNDOFF IS MAKING FURTHER PROGRESS DIFFICULT
TRY HIGHER SUBITERATIONS LIMIT, TIGHTER CVCRIT, DIFFERENT SETTING FOR EXACTLINE OR ALPHA ON NLPAR
RESTARTING ESTIMATION FROM LAST ESTIMATES OR DIFFERENT INITIAL GUESSES MIGHT ALSO WORK
Usable Observations 1215
Function Value -5439.0775
Variable Coeff Std Error T-Stat Signif
************************************************************************************
1. BETA(1)(1) -0.012421519 0.116828744 -0.10632 0.91532652
2. BETA(1)(2) -0.081520322 0.025993039 -3.13624 0.00171131
3. BETA(1)(3) -0.106175736 0.100483299 -1.05665 0.29067109
4. BETA(2)(1) 0.007042679 0.013655180 0.51575 0.60602803
5. BETA(2)(2) -0.000261832 0.002611416 -0.10026 0.92013431
6. BETA(2)(3) 0.276676905 0.025298677 10.93642 0.00000000
7. C(1) 0.674286155 0.007635233 88.31245 0.00000000
8. C(2) 0.164640561 0.004317175 38.13618 0.00000000
9. G(1) 0.806757232 0.000318709 2531.33191 0.00000000
10. G(2) 0.912143814 0.003301601 276.27321 0.00000000
11. A(1)(1) 0.139639023 0.029463231 4.73943 0.00000214
12. A(1)(2) -0.199349689 0.003626055 -54.97701 0.00000000
13. A(2)(1) -0.240250653 0.017863615 -13.44916 0.00000000
14. A(2)(2) -0.244720698 0.009389733 -26.06258 0.00000000
15. D1(1) -0.139433901 0.088082348 -1.58299 0.11342264
16. D1(2) -0.266840701 0.141283600 -1.88869 0.05893359
17. CAD(1) 0.004078937 0.000271458 15.02604 0.00000000
18. CAD(2) -0.001131485 0.000068957 -16.40853 0.00000000
19. CEX(1) -0.013176157 0.001443906 -9.12536 0.00000000
20. CEX(2) 0.005590435 0.000449244 12.44409 0.00000000
21. DDB1(1) 0.014534384 0.021152852 0.68711 0.49201198
22. DDB1(2) -0.136677621 0.005920139 -23.08690 0.00000000
23. DDB1(3) -0.093316316 0.007501241 -12.44012 0.00000000
24. DDB1(4) -0.086028252 0.006536302 -13.16161 0.00000000
25. DDB1(5) 0.154807684 0.021652811 7.14954 0.00000000
26. DDB1(6) -0.031535619 0.008950171 -3.52347 0.00042594
27. DDB1(7) -0.135393042 0.014395931 -9.40495 0.00000000
28. DDB2(1) -0.010278334 0.014686882 -0.69983 0.48403291
29. DDB2(2) -0.047115304 0.014256369 -3.30486 0.00095024
30. DDB2(3) -0.073451856 0.011480004 -6.39824 0.00000000
31. DDB2(4) -0.063378009 0.016213695 -3.90892 0.00009271
32. DDB2(5) -0.053751281 0.015430966 -3.48334 0.00049520
33. DDB2(6) -0.085540765 0.007180433 -11.91304 0.00000000
34. DDB2(7) -0.026475691 0.015054387 -1.75867 0.07863366
35. DDB2(8) -0.087029647 0.010424226 -8.34879 0.00000000
36. DDB2(9) -0.152941591 0.005787576 -26.42585 0.00000000
37. DDB2(10) -0.113451178 0.014136078 -8.02565 0.00000000
38. DDB2(11) -0.236656525 0.021890945 -10.81070 0.00000000
39. DDB2(12) -0.174324917 0.023490015 -7.42123 0.00000000
40. DDB2(13) -0.274323959 0.024140118 -11.36382 0.00000000
41. DDB2(14) -0.287127238 0.035063577 -8.18876 0.00000000
42. DDB2(15) -0.043494105 0.023241971 -1.87136 0.06129515
43. DDB2(16) -0.256199944 0.009210494 -27.81609 0.00000000
44. DDB2(17) -0.144553573 0.013201948 -10.94941 0.00000000
45. RR(1,1) 0.001697149 0.025710374 0.06601 0.94736964
However, I modified the program slightly and kept only one exogenous variable. I get the following strange result in which the standard errors are all zero.
Code:
MAXIMIZE - Estimation by BFGS
Convergence in 29 Iterations. Final criterion was 0.0000000 <= 0.0000100
LOW ITERATION COUNT ON BFGS MAY LEAD TO POOR ESTIMATES FOR STANDARD ERRORS
Usable Observations 1215
Function Value NA
Variable Coeff Std Error T-Stat Signif
************************************************************************************
1. BETA(1)(1) -1.3297e+021 0.0000 0.00000 0.00000000
2. BETA(1)(2) 3.3440e+020 0.0000 0.00000 0.00000000
3. BETA(1)(3) 4.3443e+021 0.0000 0.00000 0.00000000
4. BETA(2)(1) 7.6302e+020 0.0000 0.00000 0.00000000
5. BETA(2)(2) 1.7736e+020 0.0000 0.00000 0.00000000
6. BETA(2)(3) -2.4107e+021 0.0000 0.00000 0.00000000
7. C(1) 1.8513e+021 0.0000 0.00000 0.00000000
8. C(2) 1.4697e+020 0.0000 0.00000 0.00000000
9. G(1) -2.2015e+021 0.0000 0.00000 0.00000000
10. G(2) 6.0074e+020 0.0000 0.00000 0.00000000
11. A(1)(1) 5.0000e+020 0.0000 0.00000 0.00000000
12. A(1)(2) -2.1601e+020 0.0000 0.00000 0.00000000
13. A(2)(1) 2.7149e+021 0.0000 0.00000 0.00000000
14. A(2)(2) 2.4396e+021 0.0000 0.00000 0.00000000
15. D1(1) -2.0394e+021 0.0000 0.00000 0.00000000
16. D1(2) -3.4620e+021 0.0000 0.00000 0.00000000
17. CAD(1) 6.3825e+019 0.0000 0.00000 0.00000000
18. CAD(2) 1.2477e+019 0.0000 0.00000 0.00000000
19. DDB1(1) -3.0884e+021 0.0000 0.00000 0.00000000
20. DDB1(2) 8.0145e+020 0.0000 0.00000 0.00000000
21. DDB1(3) 7.8881e+020 0.0000 0.00000 0.00000000
22. DDB1(4) 1.4204e+020 0.0000 0.00000 0.00000000
23. DDB1(5) -2.8594e+020 0.0000 0.00000 0.00000000
24. DDB1(6) 1.2154e+020 0.0000 0.00000 0.00000000
25. DDB1(7) 1.3172e+021 0.0000 0.00000 0.00000000
26. DDB2(1) 5.9051e+020 0.0000 0.00000 0.00000000
27. DDB2(2) 5.4324e+020 0.0000 0.00000 0.00000000
28. DDB2(3) 3.9772e+020 0.0000 0.00000 0.00000000
29. DDB2(4) 8.4644e+020 0.0000 0.00000 0.00000000
30. DDB2(5) 3.6461e+020 0.0000 0.00000 0.00000000
31. DDB2(6) 4.9621e+020 0.0000 0.00000 0.00000000
32. DDB2(7) 9.5378e+020 0.0000 0.00000 0.00000000
33. DDB2(8) 4.0793e+020 0.0000 0.00000 0.00000000
34. DDB2(9) 6.0451e+020 0.0000 0.00000 0.00000000
35. DDB2(10) 1.7827e+021 0.0000 0.00000 0.00000000
36. DDB2(11) 1.4267e+021 0.0000 0.00000 0.00000000
37. DDB2(12) 1.2580e+021 0.0000 0.00000 0.00000000
38. DDB2(13) 1.6461e+021 0.0000 0.00000 0.00000000
39. DDB2(14) 1.3732e+021 0.0000 0.00000 0.00000000
40. DDB2(15) 5.7196e+020 0.0000 0.00000 0.00000000
41. DDB2(16) 1.1575e+021 0.0000 0.00000 0.00000000
42. DDB2(17) -8.2127e+019 0.0000 0.00000 0.00000000
43. RR(1,1) 2.7883e+020 0.0000 0.00000 0.00000000
This is the result of the maximize instruction. I am not sure what is causing this problem or how I can deal with it. The maximize instruction is as follows:
nlpar(derives=second)
*
maximize(parmset=meanparms+garchparms+ccparms,$
pmethod=simplex,piters=10,method=bfgs,iters=500) logl gstart gend
Thanks.
Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Mon May 09, 2016 2:37 pm
by TomDoan
First off, the first set is not OK. It's screaming at you that the estimates didn't converge. Also, is the (effectively) zero contemporaneous correlation between the residuals reasonable?
Without knowing how you modified the program, it's hard to tell. If you added it to the end of the double dummy program, then the second MAXIMIZE will pick up the parameters from the results of the first. Given how strongly significant the dummies seem to be, dropping one might give you very, very bad guess values for the restricted model.
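If that's what happened, one sketch of a fix is to reset the guess values between the two MAXIMIZE instructions, re-running the guess-value loop from the program you post below so the restricted model does not start from the unrestricted estimates:
do i=1,n
   linreg(equation=meaneq,noprint) y1(i) gstart gend
   compute beta(i)=%beta
   compute c(i)=log(%seesq)*(1-.80), g(i)=.80
   compute a(i)=%unitv(n,i)*.25, d1(i)=0.0, cad(i)=0.0
end do i
compute ddb1=%const(0.0), ddb2=%const(0.0)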
Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Mon May 09, 2016 3:58 pm
by pls
Hi Tom:
I basically modified VEGARCH.RPF and thought, based on the following comment from VEGARCH.RPF and previous posts in this thread, that the message on nonconvergence is just a warning.
"* The model actually converges properly, though the diagnostics don't
* indicate that. If you have RATS 8.1 or later, you can activate the
* next instruction to get cleaner output. (It forces more accurate, but
* slower numerical derivatives)."
I have to think about the zero contemporaneous correlation.
I ran the programs separately, once with two exogenous regressors, the second time with just one, and cleared the memory before each run. Here is the code with the two exogenous regressors ind1 and ind2. For the code with just 1 exogenous regressor, I dropped ind2 and its coefficient and retained ind1 and its coefficient in all relevant places.
Code:
set dlogp = log(price/price{1})*100.0
set dlogp2 = log(ADMD/ADMD{1})*100.0
compute n=2
*
* Define the return series
*
dec vect[series] y1(n)
set y1(1) = dlogp
set y1(2) = dlogp2
dec vect[series] s(n)
set s(1) = ind1
set s(2) = ind2
dec vect[series] db1(7)
dec vect[series] db2(17)
*set dummy variables for breakpoints
do j=1,7
set db1(j) = 0.0
end do j
do j=1,17
set db2(j) = 0.0
end do j
set db1(1) = %if(t>100.and.t<126, 1, 0)
set db1(2) = %if(t>125.and.t<444, 1, 0)
set db1(3) = %if(t>443.and.t<714, 1, 0)
set db1(4) = %if(t>713.and.t<990, 1, 0)
set db1(5) = %if(t>989.and.t<1021, 1, 0)
set db1(6) = %if(t>1020.and.t<1153, 1, 0)
set db1(7) = %if(t>1152.and.t<1218, 1, 0)
set db2(1) = %if(t>69.and.t<113, 1, 0)
set db2(2) = %if(t>112.and.t<172, 1, 0)
set db2(3) = %if(t>171.and.t<276, 1, 0)
set db2(4) = %if(t>275.and.t<312, 1, 0)
set db2(5) = %if(t>311.and.t<367, 1, 0)
set db2(6) = %if(t>366.and.t<510, 1, 0)
set db2(7) = %if(t>509.and.t<543, 1, 0)
set db2(8) = %if(t>542.and.t<637, 1, 0)
set db2(9) = %if(t>636.and.t<814, 1, 0)
set db2(10) = %if(t>813.and.t<855, 1, 0)
set db2(11) = %if(t>854.and.t<884, 1, 0)
set db2(12) = %if(t>883.and.t<910, 1, 0)
set db2(13) = %if(t>909.and.t<936, 1, 0)
set db2(14) = %if(t>935.and.t<959, 1, 0)
set db2(15) = %if(t>958.and.t<992, 1, 0)
set db2(16) = %if(t>991.and.t<1110, 1, 0)
set db2(17) = %if(t>1109.and.t<1218, 1, 0)
inquire(reglist) gstart gend
# dlogp dlogp2 ind1 ind2
display gstart gend
*
* This is the period of estimation, allowing for the lost data point for
* converting to returns, and the loss for the lags in the VAR.
*
compute gstart=3
*
* Table 1
*
dec vect[labels] vl(2)
compute vl=||"dlogp","dlogp2"||
report(action=define,title="Preliminary Statistics")
report(atrow=1,atcol=1,fillby=cols) "Statistics" "$\mu$" "$\sigma$" "S" "K" "D" $
"$LB(12) for R_t$" "$LB(12) for R_t^2$"
do i=1,n
stats(noprint) y1(i) gstart gend
report(atrow=1,atcol=i+1,fillby=cols) vl(i) %mean sqrt(%variance) %skewness %kurtosis
@adtest(noprint) y1(i) gstart gend
report(atrow=6,atcol=i+1) %cdstat
corr(noprint,number=12,qstats) y1(i) gstart gend
report(atrow=7,atcol=i+1) %qstat
@mcleodli(noprint,number=12) y1(i) gstart gend
report(atrow=8,atcol=i+1) %cdstat
end do i
report(action=format,picture="*.####")
report(action=show)
*
* Template for mean equation. This is a VAR(1)
*
equation meaneq *
# constant y1(1){1} y1(2){1}
*
dec vect[series] u(n) ;* Residuals
dec vect[frml] mean(n) ;* Model for mean
dec vect[frml] z(n) ;* EGARCH indexes
dec series[vect] hhd ;* Variances
dec series[symm] hh ;* Full covariance matrices
*
* Coefficient vectors for the VAR (b)
*
dec vect[vect] beta(n)
nonlin(parmset=meanparms) beta
*
* The variance for equation i takes the form:
*
* log h(i) = c(i) + sum_j a(i)(j) z(j){1} + g(i) log h(i){-1} + break dummy terms + cad(i) ind1{1} + cex(i) ind2{1}
*
* z(i) = abs(u(i)/sqrt(h(i))) - sqrt(2/pi) + d1(i)*u(i)/sqrt(h(i))
*
dec vect c(n) ;* Variance intercepts in EGARCH
dec vect g(n) ;* Lagged variance coefficients in EGARCH
dec vect d1(n) ;* Asymmetry coefficients in EGARCH
dec vect[vect] a(n) ;* Lagged z terms in EGARCH
dec vect cad(n) ;* Coefficients of ind1 in vol equation
dec vect cex(n) ;* Coefficients of ind2 in vol equation
dec vect ddb1(7) ;* Coefficients of db1 break dummies
dec vect ddb2(17) ;* Coefficients of db2 break dummies
nonlin(parmset=garchparms) c g a d1 cad cex ddb1 ddb2
*
* Set up the formulas for the mean and for calculating the "z". The z(i)
* FRML uses &i for all references to i, so they are resolved now as the
* formulas are being defined.
*
do i=1,n
frml(equation=meaneq,vector=beta(i)) mean(i)
frml z(i) = abs(u(&i){0})/sqrt(hhd{0}(&i))-sqrt(2/%pi)+$
d1(&i)*u(&i){0}/sqrt(hhd{0}(&i))
end do i
*
* Subdiagonal for correlation matrix. This is how CC models are handled.
*
dec packed rr(n-1,n-1)
nonlin(parmset=ccparms) rr
*
* Do calculations at time <<t>> for the residuals (u), variances (v) and
* full covariance matrix (return value for function).
*
function EGARCHSpillover t
type symmetric EGARCHSpillover
type integer t
*
local integer i j
local real hlog
dim EGARCHSpillover(n,n)
*
do i=1,n
compute hlog=c(i)+g(i)*log(hhd(t-1)(i))+cad(i)*ind1(t-1)+cex(i)*ind2(t-1)
do j=1,n
compute hlog=hlog+a(i)(j)*z(j)(t-1)
if j==1
do k1=1,7
compute hlog=hlog+ddb1(k1)*db1(k1)(t-1)
end do k1
else if j==2
do k2=1,17
compute hlog=hlog+ddb2(k2)*db2(k2)(t-1)
end do k2
end do j
compute hhd(t)(i) = exp(hlog)
end do i
ewise EGARCHSpillover(i,j)=$
%if(i==j,hhd(t)(i),rr(i-1,j)*sqrt(hhd(t)(i)*hhd(t)(j)))
compute hh(t)=EGARCHSpillover
do i=1,n
compute u(i)(t) = y1(i)(t)-mean(i)(t)
end do i
end EGARCHSpillover
*
* Log likelihood
*
frml logl = hh(t)=EGARCHSpillover(t),%logdensity(hh(t),%xt(u,t))
*
* Initial guess values from running regressions. The b's are the OLS
* estimates, a's, g's and d's are (except for the constant) standard
* guess values.
*
do i=1,n
linreg(equation=meaneq,noprint) y1(i) gstart gend
set u(i) = %if(%valid(%resids),%resids,0.0)
compute beta(i) = %beta
compute c(i) = log(%seesq)*(1-.80)
compute g(i) =.80
compute a(i) = %unitv(n,i)*.25
compute d1(i) =0.0
compute cad(i) =0.0
compute cex(i) =0.0
end do i
do i2=1,7
compute ddb1(i2)=0.0
end do i2
do i3=1,17
compute ddb2(i3)=0.0
end do i3
*
vcv gstart gend
# u
gset hhd = %xdiag(%sigma)
gset hh = %sigma
ewise rr(i,j)=%cvtocorr(%sigma)(i+1,j)
*
* The model actually converges properly, though the diagnostics don't
* indicate that. If you have RATS 8.1 or later, you can activate the
* next instruction to get cleaner output. (It forces more accurate, but
* slower numerical derivatives).
*
*nlpar(derives=second)
*
maximize(parmset=meanparms+garchparms+ccparms,$
pmethod=simplex,piters=10,method=bfgs,iters=500) logl gstart gend
*
* Diagnostics
*
dec vect[series] ustd(n)
*
report(action=define)
report(atrow=2,atcol=1,fillby=cols) "$E(z_{i,t})$" "$E(z^2_{i,t})$" "$LB(12);z_{i,t}$" "$LB(12);z^2_{i,t}$"
do i=1,n
report(atrow=1,atcol=i+1,align=center) vl(i)
set ustd(i) %regstart() %regend() = u(i)/sqrt(hh(t)(i,i))
set ustdsq = ustd(i)^2
sstats(mean) %regstart() %regend() ustd(i)>>eustd ustdsq>>eustdsq
report(atrow=2,atcol=i+1,fillby=cols) eustd eustdsq
corr(number=12,qstats,noprint) ustd(i)
report(atrow=4,atcol=i+1,special=1+fix(%qsignif<.05)) %qstat
corr(number=12,qstats,noprint) ustdsq
report(atrow=5,atcol=i+1,special=1+fix(%qsignif<.05)) %qstat
end do i
report(action=format,atrow=2,atcol=2,picture="*.####",align=decimal)
report(action=show)
*
Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Mon May 09, 2016 4:14 pm
by TomDoan
I'm not sure what you intend to do with the dummies, but what you are doing doesn't seem to make any sense---each of the variances is going to get the same pair of shifts.
Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Mon May 09, 2016 4:39 pm
by pls
The ind1 and ind2 are variables with different values for different t.
The first set of dummies are db1, 1 to 7, to account for structural breaks in the conditional volatility of dlogp.
The second set of dummies are db2, 1 through 17, to account for structural breaks in the conditional volatility of dlogp2.
I was trying to follow Mensi et al (2015) and basing the dummies db1 and db2 on Lamoureux and Lastrapes (1990).
Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Mon May 09, 2016 5:02 pm
by TomDoan
You have it in the wrong place. The i loop is the loop over the variance being computed. The j loop is over all the variables in the model (to allow for the spillover effects). You want to move that above the J loop and test the value of I rather than J. As you have that written, both shifts will be added to both (log) variances.
Also, you have that written so the lagged dummy is used. That seems rather odd.
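A sketch of that rearrangement inside EGARCHSpillover, with the break terms keyed to I and dated at t rather than t-1:
do i=1,n
   compute hlog=c(i)+g(i)*log(hhd(t-1)(i))+cad(i)*ind1(t-1)+cex(i)*ind2(t-1)
   if i==1 {
      do k1=1,7
         compute hlog=hlog+ddb1(k1)*db1(k1)(t)
      end do k1
   }
   else {
      do k2=1,17
         compute hlog=hlog+ddb2(k2)*db2(k2)(t)
      end do k2
   }
   do j=1,n
      compute hlog=hlog+a(i)(j)*z(j)(t-1)
   end do j
   compute hhd(t)(i)=exp(hlog)
end do i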
Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Sun Nov 26, 2017 8:08 pm
by bekkdcc
Dear Tom,
I want to estimate an asymmetric DCC (ADCC) as a full estimate (not using the 2-step procedure), and I know that I can use the code garch(model=var1,mv=adcc,.....).
Then I realized that all the papers which study asymmetric DCC use a 2-step procedure (first they choose the univariate model, EGARCH or another, and then apply DCC, as in the Cappiello, Engle & Sheppard (2006) paper and codes).
When I look at the subject of this post and the replies here, I cannot be sure whether this is a procedure for multivariate EGARCH or for CCC with asymmetry, because after you estimate the (i=1 to n) EGARCH models using
dec vect[equation] ar1eq(n)
do i=1,n
equation ar1eq(i) y(i)
# constant y(i){1}
end do i
group ar1model ar1eq(1) ar1eq(2) ar1eq(3) ar1eq(4)
garch(model=ar1model,mv=cc,variance=exp,asymmetric) gstart gend
you estimate a CC model as a system:
system(model=var1)
variables y
lags 1
det constant
end(system)
*
garch(model=var1,mv=cc,iters=100,variance=koutmos,$
pmethod=simplex,piters=10,trace) gstart gend
So my first question is whether this model is multivariate EGARCH or CC with asymmetry. Can you explain it?
My second question: if it is CC with asymmetry, then can we also use it for a DCC model, as
.......
.......
garch(model=var1,mv=dcc,iters=100,variance=koutmos,$
pmethod=simplex,piters=10,trace) gstart gend
and the third question is: what is the difference (in theory) between system DCC with variance=koutmos
garch(model=var1,mv=dcc,iters=100,variance=koutmos,$
pmethod=simplex,piters=10,trace) gstart gend
and
garch(model=var1,mv=adcc,rvectors=rd,pmethod=simplex,piters=20,method=bfgs,iters=100,hmatrices=hh,.............) ;* one-step estimation
and the last question: if my understanding is right, can I write the GDCC model of Cappiello, Engle & Sheppard (2006) as
.......
.......
* GDDCC (Generalized Diagonal DCC)
*
nonlin aq bq
compute aq=%const(sqrt(.1))
compute bq=%const(sqrt(.85))
compute gq=%const(0.0)
maximize(start=%(StartQC()),pmethod=simplex,piters=5,method=bfgs,title="GDDCC",variance=koutmos) logl gstart gend
disp "GDDCC BIC" -2.0*%logl+(%nreg+uniparms)*log(%nobs)
*
Thanks in advance
Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Sun Nov 26, 2017 8:38 pm
by TomDoan
There's no reason to prefer a two-step DCC (or CC) to an estimate of the complete model unless the number of variables is so large as to make estimating the full model infeasible---with modern computers you probably don't run into a major problem with that until you get to maybe N=10. However, the Koutmos model can't be done as a two-step anyway, because all the variances depend upon all the other variances (through the standardization for the z's). Two-step estimators can only be used if each variance model depends only on its own lagged variances and own lagged residuals.
Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Sun Nov 26, 2017 9:54 pm
by bekkdcc
Dear Tom,
1. So is this model just a multivariate EGARCH model, or a multivariate constant correlation (CC) model with asymmetry?
2. I want to estimate this code... I know this is an asymmetric DCC:
........
compute abbrev=||"a","b","cr","d","e","f"||
compute full = ||"aaa","bbb","ccc","ddd","eee","fff"||
dofor [string] suffix = abbrev
compute ss=%s("s"+suffix)
set %s("x"+suffix) = ss{0}
end dofor
*table
dec vect[string] longlabel(6)
compute longlabel=||"aaa","bbb","ccc","ddd","eee","fff"||
set htfe = 1
system(model=mvmean)
variables xtfe xgdp xexcr xfaiz xkh xvg
lags 1
det constant htfe
end(system)
*
garch(model=mvmean,mv=adcc,rvectors=rd,pmethod=simplex,piters=1,method=bfgs,iters=1000, hmatrices=hh,hadjust=%(htfe=sqrt(hh(t)(1,1))))
But if I want to estimate the multivariate EGARCH, I am not sure whether this code format is right...
...
....
* Table 2 - AR(1) EGARCH without spillover. This can be done with the
* GARCH instruction.
*
dec vect[equation] ar1eq(n)
do i=1,n
equation ar1eq(i) y(i)
# constant y(i){1}
end do i
group ar1model ar1eq(1) ar1eq(2) ar1eq(3) ar1eq(4) ar1eq(5) ar1eq(6)
garch(model=ar1model,mv=cc,variance=exp,asymmetric) gstart gend
*
system(model=var1)
variables y
lags 1
det constant
end(system)
*
garch(model=var1,mv=cc,iters=500,variance=koutmos,$
   pmethod=simplex,piters=10,rvectors=rd,method=bfgs,hmatrices=hh,$
   hadjust=%(htfe=sqrt(hh(t)(1,1))),trace) gstart gend
And why do we use mv=cc? Can we also use the others, such as mv=bekk or mv=dcc, ....?
Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Sun Nov 26, 2017 10:12 pm
by TomDoan
The Koutmos model is a special type of EGARCH with specific added cross equation ("spillover") effects. You can't describe it as anything other than that. He used CC, but you could also use DCC or ADCC.
ADCC vs DCC has nothing to do with the models for the individual variances---ADCC allows for asymmetry in the evolution of the correlation matrices, DCC does not.
You cannot use BEKK in any form, or VECH in any form, with this type of variance model. See the User's Guide.
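In terms of the GARCH instruction, that is just a change to the MV option; a sketch, with the other options carried over from the instruction used earlier in this thread:
garch(model=var1,mv=dcc,iters=100,variance=koutmos,$
   pmethod=simplex,piters=10,trace) gstart gend
garch(model=var1,mv=adcc,iters=100,variance=koutmos,$
   pmethod=simplex,piters=10,trace) gstart gend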
Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Fri Dec 01, 2017 8:27 am
by bekkdcc
Dear Tom,
I ran the program without changing anything; the variable names are the same and just the numerical values are different, but the program gives an error that I am not able to correct. Can you tell me how I can fix it? I could not see anything wrong with the data. Can you also try to run the program with my data?
Also, for the same code and same data, there is a difference in the outputs in terms of the calculated variables (FX, FXC, USTD, .....). In one output they are available but all NA (why are they NA, and what is the missing part?); in the other output there is no (FX, FXC, USTD, .....) at all,
and
REG20. GARCH Cannot Be Used with Gaps/Missing Values
So please, can you run the code with the data, at least 2 times? I will be so pleased if you find what I am missing.
Thanks in advance
The only change I made was to read in my own data, as follows:
cal 1987 3 4
all 2015:3
open data dene.xls
*
* The data file contains series for 9 countries. Here we retrieve the
* series for France, Germany, Italy and the UK.
* Germany.
data(org=obs,format=xls) / fra ger ita uki
*
dec vect[series] y(n)
set y(1) = fra
set y(2) = ger
set y(3) = ita
set y(4) = uki
because these are already return data in my study.
Re: Koutmos JBFA 1996 Multivariate EGARCH
Posted: Fri Dec 01, 2017 10:12 am
by TomDoan
1. That's not much data for that complicated a model. Do you even have noticeable GARCH effects in quarterly data?
2. You're setting GSTART too soon (you aren't allowing for the lags in the mean model)
3. What you attached is not your data file. It's just four columns of time trends.
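On point 2, a sketch of one way to set the range: let INQUIRE find the usable data range, then allow for the one lag in the VAR(1) mean model (series names from your data step; RSTART and REND are my names here):
inquire(reglist) rstart rend
# fra ger ita uki
compute gstart=rstart+1,gend=rend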