DMARIANO - Diebold-Mariano test (revised)

Use this forum to post complete RATS "procedures". Please be sure to include instructions on using the procedure and detailed references where applicable.

DMARIANO - Diebold-Mariano test (revised)

Postby TomDoan » Fri Jun 25, 2010 3:56 pm

Attached is a revision of the DMARIANO procedure for computing Diebold-Mariano forecast comparison tests. The main change was to format the output using REPORT and to add a TITLE option. A companion procedure for doing the Granger-Newbold forecast comparison test is at http://www.estima.com/forum/viewtopic.php?f=7&t=743.

dmariano.src
(3.32 KiB) Downloaded 406 times


Note: The Diebold-Mariano test should not be applied to situations where the competing models are nested (examples of nested models: AR(1) vs AR(2), no change vs any ARIMA(p,1,q)). An alternative testing procedure for those situations is provided at

http://www.estima.com/procs_perl/600/clarkforetest.src

This implements the Clark-McCracken test from Clark, Todd E., and Michael W. McCracken (2001), "Tests of Equal Forecast Accuracy and Encompassing for Nested Models," Journal of Econometrics 105 (Nov.), 85-110.
TomDoan
 
Posts: 2720
Joined: Wed Nov 01, 2006 5:36 pm

Re: DMARIANO - revision of Diebold-Mariano procedure

Postby ac_1 » Sat Jun 26, 2010 3:21 am

Note: The Diebold-Mariano test should not be applied to situations where the competing models are nested (examples of nested models: AR(1) vs AR(2), no change vs any ARIMA(p,1,q)).


What are the reasons for this?
ac_1
 
Posts: 56
Joined: Thu Apr 15, 2010 6:30 am
Location: Surrey, England, UK

Re: DMARIANO - revision of Diebold-Mariano procedure

Postby TomDoan » Sat Jun 26, 2010 12:12 pm

The only interesting test in that case is the adequacy of the more restricted model. Under the null, the forecasts of the two models are asymptotically perfectly correlated, which causes the asymptotics of the test to collapse.

Re: DMARIANO - revision of Diebold-Mariano procedure

Postby tclark » Thu Jul 01, 2010 11:21 am

As Tom kindly and succinctly indicates, testing equal accuracy of forecasts from nested models involves some complexities relative to tests applied to forecasts from non-nested models or other sources. As Tom notes, the root of the problem is that, at the population level, if the null hypothesis is that the smaller model is the true one, the forecast errors from the competing models are exactly the same and perfectly correlated, which means that the numerator and denominator of the Diebold-Mariano statistic each converge to zero as the estimation and prediction samples grow.

That said, what has become clearer with recent research is that the choice of test statistic and source of critical values depends on what one wants to know. Are you interested in testing equal forecast accuracy at the population level, which is in turn a test of whether the small model is the true one? Or are you instead interested in testing equal accuracy in a finite sample? The former hypothesis is the world of the Clark-McCracken work Tom mentioned, under which the procedure he mentioned can be used to generate tests and critical values. The latter hypothesis is treated in more recent work by Clark and McCracken and by Giacomini and White (Econometrica, 2006). The latter form of hypothesis can be tested with bootstrap methods developed in the more recent work by C-M. Alternatively, as long as the forecasts are generated under the so-called rolling scheme, the latter hypothesis can be tested (with asymptotic justification) with a Diebold-Mariano statistic compared against standard normal critical values, as shown by Giacomini and White. If the forecasts are generated under a recursive scheme, the D-M test cannot be justified under the Giacomini-White asymptotics, but in simulated data, the D-M test performs even a bit better than it does under the rolling scheme.

With the intention of being helpful (as opposed to self-serving), I have attached a recent survey Mike McCracken and I wrote that describes this more recent work (and provides the detailed references) and offers some Monte Carlo evidence on the alternative inference approaches. Our conclusion is that, for testing equal accuracy in a finite sample, the proposed bootstrap is most accurate. Of course, it requires the coding of a bootstrap. A conventional Diebold-Mariano test has the advantage that it is simpler to obtain critical values, but the conventional test seems to be modestly less reliable. But at least the conventional D-M test is conservative when applied to short-horizon forecasts. If you conduct a DM test and find it rejects the small model in favor of the large, it is a good sign that the large model is truly more accurate in the finite sample.
Attachments
14 Clark and McCracken Chapter.pdf
(465.76 KiB) Downloaded 406 times
Todd Clark
Economic Research Dept.
Federal Reserve Bank of Cleveland
tclark
 
Posts: 35
Joined: Wed Nov 08, 2006 4:20 pm

Re: DMARIANO - revision of Diebold-Mariano procedure

Postby ac_1 » Thu Jul 01, 2010 12:21 pm

Thanks for posting, Todd! I'll have a read.

Re: DMARIANO - revision of Diebold-Mariano procedure

Postby wendyyuan » Sun Sep 12, 2010 11:43 am

Thanks, Tom,
Do you have any idea about the modified Diebold-Mariano (MDM) method? The reference is Harvey, Leybourne and Newbold (1997), "Testing the Equality of Prediction Mean Squared Errors."
In that paper, the test statistic is

S1 = sqrt( (n + 1 - 2h + h(h-1)/n) / n ) * S

where S1 is the MDM statistic, S is the statistic of the original Diebold-Mariano method, and h is the forecast horizon. I am wondering how to get the value of n, and how to determine the degrees of freedom (n-1). If n is big enough, S1 is approximately equal to S.

Appreciate any help.
wendyyuan
 
Posts: 20
Joined: Wed Jun 17, 2009 6:07 am

Re: DMARIANO - revision of Diebold-Mariano procedure

Postby tclark » Sun Sep 12, 2010 4:56 pm

The adjustment suggested by Harvey, et al is a small-sample adjustment to the autocorrelation-consistent estimate of the variance entering the DM test. In the notation to which you referred, n refers to the number of forecast observations, and h refers to the forecast horizon (1 for 1-step ahead, 2 for 2-step ahead, etc.).

Note that the Harvey, et al adjustment is only appropriate if the variance is estimated with the so-called "truncated" estimator of the variance (in the procedure Tom Doan put together, you would use the option lwindow=truncated).
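As a sketch of the arithmetic (a Python illustration of my own, not part of the RATS procedure), the MDM statistic multiplies the DM statistic by the correction factor Todd describes:

```python
import math

def hln_factor(n, h):
    """Harvey-Leybourne-Newbold small-sample correction factor:
    sqrt((n + 1 - 2h + h*(h-1)/n) / n), where n is the number of
    forecast observations and h the forecast horizon."""
    return math.sqrt((n + 1 - 2 * h + h * (h - 1) / n) / n)

# For 1-step-ahead forecasts (h=1) the factor reduces to sqrt((n-1)/n),
# so the adjustment vanishes as n grows.
n = 80
print(hln_factor(n, 1), math.sqrt((n - 1) / n))
```

The adjusted statistic hln_factor(n, h) * S is then compared against Student-t critical values with n-1 degrees of freedom.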

Re: DMARIANO - revision of Diebold-Mariano procedure

Postby wendyyuan » Tue Sep 21, 2010 11:51 am

tclark wrote:The adjustment suggested by Harvey, et al is a small-sample adjustment to the autocorrelation-consistent estimate of the variance entering the DM test. In the notation to which you referred, n refers to the number of forecast observations, and h refers to the forecast horizon (1 for 1-step ahead, 2 for 2-step ahead, etc.).

Note that the Harvey, et al adjustment is only appropriate if the variance is estimated with the so-called "truncated" estimator of the variance (in the procedure Tom Doan put together, you would use the option lwindow=truncated).


Thanks.
I tried to program the MDM based on the original code, but some strange results show up. Here are the changes I made:

*the linear regression*
linreg(robusterrors,lwindow=truncated,lags=lags,noprint) d startl endl
# constant
compute %cdstat=%tstats(1)*sqrt((%nobs-1)/%nobs)

*report part (t distribution)*
report(atrow=5,atcol=1) %l(f1) c1 %cdstat %tdensity(-%cdstat,%nobs)
report(atrow=5,atcol=1) %l(f1) c2 %cdstat %tdensity(+%cdstat,%nobs)

*the result*
Forecast        MSE            Test Stat    P(DM>X)
F1              0.01583005     -0.000000    0.37961
F2              0.03385692      0.000000    0.37961

Why are the test statistics all zero? And why do the two tests show the same P value, when the two probabilities should sum to 1?

Please give me some suggestions.

Appreciate any help!

Re: DMARIANO - revision of Diebold-Mariano procedure

Postby TomDoan » Tue Sep 21, 2010 2:02 pm

This is going to do an integer divide inside the sqrt(...) since nothing there is a "real". Change it to -1.0 and it will work.

Code:
compute %cdstat=%tstats(1)*sqrt((%nobs-1)/%nobs)
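In Python terms (my own illustration, with a made-up n), the bug is equivalent to floor division:

```python
import math

n = 120  # stand-in for %nobs, the number of forecast observations

# Integer operands: (n-1)//n truncates to 0, so the factor collapses
# to sqrt(0) = 0, which is why every test statistic printed as zero.
buggy = math.sqrt((n - 1) // n)

# Making one operand real forces real division, as in Tom's fix (-1.0):
fixed = math.sqrt((n - 1.0) / n)

print(buggy, fixed)
```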

Re: DMARIANO - revision of Diebold-Mariano procedure

Postby wendyyuan » Thu Sep 23, 2010 5:24 am

TomDoan wrote:This is going to do an integer divide inside the sqrt(...) since nothing there is a "real". Change it to -1.0 and it will work.

Code:
compute %cdstat=%tstats(1)*sqrt((%nobs-1)/%nobs)

Thanks a lot. The value of the test statistic makes sense now, but both tests show the same P(DM>X) value:

test stat   P(DM>X)
-10.0220    0.00004
 10.0220    0.00004

Is it because %tdensity is not the cumulative density? If so, what is the cumulative function?

Thanks a lot!
wendyyuan
 
Posts: 20
Joined: Wed Jun 17, 2009 6:07 am

Re: DMARIANO - revision of Diebold-Mariano procedure

Postby TomDoan » Thu Sep 23, 2010 2:06 pm

%TDENSITY is indeed the density function. The functions for the CDF are %TCDF (added with version 7.3) for the [0,1] cumulative function, or %TTEST, which gives the (two-tailed) tail probability.
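For anyone on an older RATS without %TCDF, the relationship among the three functions can be sketched in Python (my own illustration; the RATS names appear in the comments). The CDF is recovered by numerically integrating the density, which is essentially the fallback available when only the density function exists:

```python
import math

def t_density(x, df):
    """Student-t density (what %TDENSITY returns)."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_cdf(x, df, steps=20000):
    """Cumulative t distribution (what %TCDF returns): integrate the
    density from 0 to |x| with the trapezoid rule, then use symmetry."""
    b = abs(x)
    h = b / steps
    area = 0.5 * (t_density(0.0, df) + t_density(b, df))
    for i in range(1, steps):
        area += t_density(i * h, df)
    area *= h
    return 0.5 + area if x >= 0 else 0.5 - area

def t_two_tailed(x, df):
    """Two-tailed tail probability (what %TTEST returns)."""
    return 2.0 * (1.0 - t_cdf(abs(x), df))
```

With 10 degrees of freedom, for example, t_two_tailed(2.228, 10) comes out very close to 0.05, matching the familiar 5% two-tailed critical value.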

Re: DMARIANO - revision of Diebold-Mariano procedure

Postby wendyyuan » Fri Sep 24, 2010 4:41 am

TomDoan wrote:%TDENSITY is indeed the density function. The functions for the CDF are %TCDF (added with 7.3) for the [0,1] cumulative function or
%TTEST, which gives the (two-tailed) tail probabilities.

Thanks, Tom.
%TCDF makes sense, but I am using RATS 6.35. How can I get the cumulative t distribution in this older version of RATS? Neither %tdensity nor %ttest produces the right result on its own.
Thanks

Re: DMARIANO - revision of Diebold-Mariano procedure

Postby TomDoan » Fri Sep 24, 2010 6:18 am

The one-tailed tail probabilities for the statistic and its negation are:

%if(%cdstat<0,1-.5*%ttest(%cdstat,degrees),.5*%ttest(%cdstat,degrees))
%if(%cdstat<0,.5*%ttest(%cdstat,degrees),1-.5*%ttest(%cdstat,degrees))

You halve the two-tailed probability that comes out of %ttest, then take the complement if you want the left tail.
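Tom's %if expressions translate directly; here is a minimal Python version (function names are mine):

```python
def right_tail_p(t_stat, two_tailed_p):
    """P(T > t_stat) from a two-tailed probability such as %TTEST returns:
    halve it, taking the complement when the statistic is negative."""
    return 1.0 - 0.5 * two_tailed_p if t_stat < 0 else 0.5 * two_tailed_p

def left_tail_p(t_stat, two_tailed_p):
    """P(T < t_stat), the mirror image; the two tails sum to 1."""
    return 0.5 * two_tailed_p if t_stat < 0 else 1.0 - 0.5 * two_tailed_p

print(right_tail_p(2.0, 0.05))   # 0.025
print(right_tail_p(-2.0, 0.05))  # 0.975
```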

Re: DMARIANO - revision of Diebold-Mariano procedure

Postby wendyyuan » Fri Sep 24, 2010 11:06 am

TomDoan wrote:The one-tailed tail probabilities for the statistic and its negation are:

%if(%cdstat<0,1-.5*%ttest(%cdstat,degrees),.5*%ttest(%cdstat,degrees))
%if(%cdstat<0,.5*%ttest(%cdstat,degrees),1-.5*%ttest(%cdstat,degrees))

You halve the two-tailed probability that comes out of %ttest, then take the complement if you want the left tail.


That is a really good suggestion, thanks very much!
Here is what I tried:

if %cdstat>0 {
report ....
report................1-%ttest(%cdstat,%nobs)/2
report................%ttest(%cdstat,%nobs)/2
}
else {
report ....
report................%ttest(%cdstat,%nobs)/2
report................1-%ttest(%cdstat,%nobs)/2
}


It works well now.
Appreciate it!

Re: DMARIANO - revision of Diebold-Mariano procedure

Postby wendyyuan » Fri Oct 01, 2010 4:46 am


With the above code I get some results, but I have trouble interpreting them.

For example, the first case is easy to explain:

test stat   P(DM>X)
-1.0220     0.00004
 1.0220     0.99996

Here I accept H0, because 1.0220 is smaller than the 10% critical value of 1.645.

test stat   P(DM>X)
-10.0220    0.99996
 10.0220    0.00004

or

test stat   P(DM>X)
-10.0220    0.00004
 10.0220    0.99996

How should I explain these? In my opinion, for forecast 1: if the statistic is larger than the critical value and p > 0.950, I reject H0 and accept H1 (m1 < m2). For forecast 2: if the statistic is larger than the critical value and p < 0.050, I reject H0 and accept H1 (m1 > m2).

That way I accept two contradictory alternatives, m1 < m2 and m1 > m2. Am I interpreting them the wrong way? If so, how do I make it right? Is it because of the hypothesis?
Appreciated!
