maximum likelihood estimators

Questions and discussions on Time Series Analysis
Nabtheberliner
Posts: 33
Joined: Thu Apr 04, 2013 11:17 am

maximum likelihood estimators

Unread post by Nabtheberliner »

Hello everyone,
I'm new to RATS; I used to work with SAS. So my questions are simple:
How do we program RATS to get the ML estimators when we study a VAR(p) process?
With the OLS estimators, there is no problem.
My second question is: how do we write or obtain the mean-adjusted form of a VAR process?
I didn't find any answers in the RATS User's Guide, nor in the examples and programs provided by Estima.

Thanks a lot for your help
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: maximum likelihood estimators

Unread post by TomDoan »

Nabtheberliner wrote: Hello everyone,
I'm new to RATS; I used to work with SAS. So my questions are simple:
How do we program RATS to get the ML estimators when we study a VAR(p) process?
With the OLS estimators, there is no problem.
If you're talking about full-information maximum likelihood for a VAR, that can be done using a state-space representation with DLM. That's rarely done in econometrics since it precludes unit roots.
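Roughly (a sketch of why, not the full state-space setup): exact ML for, say, a VAR(1) treats the pre-sample values as drawn from the unconditional distribution of the process, whose covariance has to satisfy

$$\operatorname{vec}(\Sigma_y)=(I-A_1\otimes A_1)^{-1}\operatorname{vec}(\Sigma_u)$$

and that only exists when all eigenvalues of $A_1$ are strictly inside the unit circle, hence no unit roots.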
Nabtheberliner wrote: My second question is: how do we write or obtain the mean-adjusted form of a VAR process?
That's also rarely done except as a step in a more involved model, for exactly the same reason that ML isn't used much.

What is it that you're trying to do?
Nabtheberliner
Posts: 33
Joined: Thu Apr 04, 2013 11:17 am

Re: maximum likelihood estimators

Unread post by Nabtheberliner »

Actually I'm studying Helmut Lütkepohl's book "New Introduction to Multiple Time Series Analysis"; I'm finishing the fifth chapter and trying to do the exercises.
I'm dealing with a bivariate VAR(1).
He asks us to compute the OLS/Yule-Walker and ML estimators.
Concerning the first two estimators I have no problem.

I also have a problem computing the forecast MSE matrix.
So if you have any idea, please let me know.
Thanks a lot
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: maximum likelihood estimators

Unread post by TomDoan »

Nabtheberliner wrote: Actually I'm studying Helmut Lütkepohl's book "New Introduction to Multiple Time Series Analysis"; I'm finishing the fifth chapter and trying to do the exercises.
I'm dealing with a bivariate VAR(1).
He asks us to compute the OLS/Yule-Walker and ML estimators.
Concerning the first two estimators I have no problem.
The mean-adjusted VAR (as defined in the book) simply means the VAR on the data with the sample means subtracted.

Which exercise are you looking at that is asking about the ML estimator for a VAR(1)?
Nabtheberliner wrote: I also have a problem computing the forecast MSE matrix.
So if you have any idea, please let me know.
Thanks a lot
Have you looked at the worked examples for the book?
Nabtheberliner
Posts: 33
Joined: Thu Apr 04, 2013 11:17 am

Re: maximum likelihood estimators

Unread post by Nabtheberliner »

Dear Tom,
I'm sending you the program for the exercises; there is an attached file called E2 from Lütkepohl.
You are right, I wasn't precise enough. The question is how, in Problem 3.13, I get the estimate of the covariance matrix of the asymptotic distribution of the ML estimators.
Also, what do you mean by "worked examples for the book"? Do you mean the textbook programs provided by Estima?
By the way, do you have the book "New Intro to Multiple..."?
Thanks a lot for your help

PROGRAM

OPEN DATA "C:\Users\nabihamaraoui\Desktop\NEW INTRODUCTION TO MULTIPLE TIME SERIES ANALYSIS\E2.txt"
CALENDAR(Q) 1949:1
DATA(FORMAT=PRN,ORG=COLUMNS) 1949:01 1974:04 y1 y2

*PROBLEM 3.11: Plot the two time series y1t and y2t and comment on the stationarity and stability of the series

spgraph(header=' series y1t and y2t',vfields=2,hfields=1,footer='figure 1') 2
graph(header='series y1t',vlabel='Y1',hlabel='dates',footer='panel:1',key=upleft) 1
# y1
graph(header='series y2t',vlabel='y2',hlabel='dates',footer='panel:2',key=upleft) 1
# y2
spgraph(done)

*PROBLEM 3.12: Estimate the parameters of a VAR(1) model for (y1t,y2t)' using multivariate
*LS, that is, compute B^ (coefficient matrix) and SIGMA^u (residual covariance matrix).
*Comment on the stability of the estimated process.

system(model=US)
variables y1 y2
lags 1
det constant
end(system)
estimate(sigma,outsigma=V) * 1968:4

* This will give the roots of the companion matrix, which will be the
* reciprocals of the roots of the polynomial in the text. Thus, the
* stability condition is for the largest (first in the order produced by
* EIGEN) to be less than one.
*
compute companion=%modelcompanion(US)
eigen(cvalues=cv) companion
disp cv(1) "with absolute value" %cabs(cv(1))
*
* We can see that the coefficients are problematic: in the first regression,
* with y1 as the dependent variable, the coefficient on y1{1} is 1.013,
* and the largest root of the companion matrix is greater than 1 in modulus,
* so the estimated VAR(1) process is not stable.
*
*PROBLEM 3.13: Use the mean-adjusted form of the VAR(1) model and estimate
*the coefficients. Assume that the data generation process is Gaussian and
*estimate the covariance matrix of the asymptotic distribution of the ML estimators.

* First I get the sample means

table

* I subtract the sample means from the data

set y1t = y1 - 110.8990
set y2t = y2 - 6.0980
Attachment: E2.txt
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: maximum likelihood estimators

Unread post by TomDoan »

Nabtheberliner wrote: Dear Tom,
I'm sending you the program for the exercises; there is an attached file called E2 from Lütkepohl.
You are right, I wasn't precise enough. The question is how, in Problem 3.13, I get the estimate of the covariance matrix of the asymptotic distribution of the ML estimators.
Also, what do you mean by "worked examples for the book"? Do you mean the textbook programs provided by Estima?
By the way, do you have the book "New Intro to Multiple..."?
Thanks a lot for your help
See

http://www.estima.com/cgi-bin/bookbrows ... =lutkepohl

They are also on the software distribution.
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: maximum likelihood estimators

Unread post by TomDoan »

Nabtheberliner wrote:PROGRAM

OPEN DATA "C:\Users\nabihamaraoui\Desktop\NEW INTRODUCTION TO MULTIPLE TIME SERIES ANALYSIS\E2.txt"
CALENDAR(Q) 1949:1
DATA(FORMAT=PRN,ORG=COLUMNS) 1949:01 1974:04 y1 y2

*PROBLEM 3.11: Plot the two time series y1t and y2t and comment on the stationarity and stability of the series

spgraph(header=' series y1t and y2t',vfields=2,hfields=1,footer='figure 1') 2
graph(header='series y1t',vlabel='Y1',hlabel='dates',footer='panel:1',key=upleft) 1
# y1
graph(header='series y2t',vlabel='y2',hlabel='dates',footer='panel:2',key=upleft) 1
# y2
spgraph(done)

*PROBLEM 3.12: Estimate the parameters of a VAR(1) model for (y1t,y2t)' using multivariate
*LS, that is, compute B^ (coefficient matrix) and SIGMA^u (residual covariance matrix).
*Comment on the stability of the estimated process.

system(model=US)
variables y1 y2
lags 1
det constant
end(system)
estimate(sigma,outsigma=V) * 1968:4

* This will give the roots of the companion matrix, which will be the
* reciprocals of the roots of the polynomial in the text. Thus, the
* stability condition is for the largest (first in the order produced by
* EIGEN) to be less than one.
*
compute companion=%modelcompanion(US)
eigen(cvalues=cv) companion
disp cv(1) "with absolute value" %cabs(cv(1))
*
* We can see that the coefficients are problematic: in the first regression,
* with y1 as the dependent variable, the coefficient on y1{1} is 1.013,
* and the largest root of the companion matrix is greater than 1 in modulus,
* so the estimated VAR(1) process is not stable.
*
*PROBLEM 3.13: Use the mean-adjusted form of the VAR(1) model and estimate
*the coefficients. Assume that the data generation process is Gaussian and
*estimate the covariance matrix of the asymptotic distribution of the ML estimators.

* First I get the sample means

table

* I subtract the sample means from the data

set y1t = y1 - 110.8990
set y2t = y2 - 6.0980
The first suggestion is never to do a de-meaning calculation like that. Do

diff(center) y1 / y1t
diff(center) y2 / y2t

You don't lose precision that way: DIFF with the CENTER option subtracts the exact sample mean, rather than a value rounded to a few digits.

The calculation required is

system(model=USDEMEAN)
variables y1t y2t
lags 1
end(system)
estimate(sigma,outsigma=VDEMEAN) * 1968:4

You run the VAR by least squares on the mean-adjusted data without a CONSTANT in the regression. The full covariance matrix would be

disp %kroneker(%sigma,%xx)
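For reference (my reading of the book's notation, so double-check the stacking convention): with %XX equal to (X'X)^{-1} for the mean-adjusted regressors and %SIGMA the residual covariance estimate, that Kronecker product is the usual LS/ML coefficient covariance estimate

$$\widehat{\operatorname{Var}}(\hat\beta)=\hat\Sigma_u\otimes(X'X)^{-1},$$

which is the same matrix as Lütkepohl's $(ZZ')^{-1}\otimes\hat\Sigma_u$ except that RATS stacks the coefficients equation by equation rather than regressor by regressor.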
Nabtheberliner
Posts: 33
Joined: Thu Apr 04, 2013 11:17 am

Re: maximum likelihood estimators

Unread post by Nabtheberliner »

Thanks a lot Tom, I'll check this out
Nabtheberliner
Posts: 33
Joined: Thu Apr 04, 2013 11:17 am

Re: maximum likelihood estimators

Unread post by Nabtheberliner »

Hello Tom,
Thanks, it works perfectly!
A quick question, because jumping from SAS to RATS is not easy:
the OUTSIGMA option is supposed to provide the covariance matrix of the residuals, so why is it labeled "Covariance\Correlation Matrix of Coefficients" in the output? It's a bit confusing, unless I misunderstood; if so, sorry for that.
Best regards
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: maximum likelihood estimators

Unread post by TomDoan »

Nabtheberliner wrote:Hello Tom,
Thanks, it works perfectly!
A quick question, because jumping from SAS to RATS is not easy:
the OUTSIGMA option is supposed to provide the covariance matrix of the residuals, so why is it labeled "Covariance\Correlation Matrix of Coefficients" in the output? It's a bit confusing, unless I misunderstood; if so, sorry for that.
Best regards
The structure of the output is described on page Int–76 of the version 8 Introduction to RATS. The OUTSIGMA option (or the %SIGMA variable) gives the covariance matrix itself, which is symmetric. For the output, rather than duplicate the numbers above the diagonal, RATS converts those to correlations, which you may also find interesting. In fact, while the covariance matrix is what generally gets used in further calculations, it's the correlations (above the diagonal) that provide the more useful information for a quick check of what is happening.
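As a quick check (just a sketch, assuming the bivariate system from your program with OUTSIGMA=V), the number shown above the diagonal is nothing more than the covariance entries rescaled to a correlation:

* Sketch only: reproduce the above-diagonal correlation from the
* covariance matrix saved by OUTSIGMA=V.
disp "residual correlation =" V(1,2)/sqrt(V(1,1)*V(2,2))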
Nabtheberliner
Posts: 33
Joined: Thu Apr 04, 2013 11:17 am

Re: maximum likelihood estimators

Unread post by Nabtheberliner »

Thanks Tom,
Now I'm clear about what is inside the output.
Yet a little confusion remains.
In the RATS Handbook, W. Enders describes OUTSIGMA as the covariance matrix of the residuals, so my confusion is between coefficients and residuals. More precisely,
do I have to consider that the covariance matrix of the residuals and the covariance matrix of the coefficients are the same? That looks strange to me.
Sorry if this should be obvious.
Thanks Tom
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: maximum likelihood estimators

Unread post by TomDoan »

Nabtheberliner wrote:Thanks Tom,
Now I'm clear about what is inside the output.
Yet a little confusion remains.
In the RATS Handbook, W. Enders describes OUTSIGMA as the covariance matrix of the residuals, so my confusion is between coefficients and residuals. More precisely,
do I have to consider that the covariance matrix of the residuals and the covariance matrix of the coefficients are the same? That looks strange to me.
Sorry if this should be obvious.
Thanks Tom
No, they're different. They just use the same method of display (the covariance\correlation matrix). RATS doesn't have an option to produce a covariance matrix of coefficients for a VAR as that would ordinarily be too large a matrix to be viewable. You can do

display %kroneker(%sigma,%xx)

or

compute covmat=%kroneker(%sigma,%xx)
medit covmat
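And if you only need one equation's block (a sketch, assuming I have the block structure right): because the grand matrix is %kroneker(%sigma,%xx), the covariance matrix for equation i's coefficients on their own is just %sigma(i,i) times %xx, for example

* Sketch: coefficient covariance matrix for equation 1 alone.
disp %sigma(1,1)*%xx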
Nabtheberliner
Posts: 33
Joined: Thu Apr 04, 2013 11:17 am

Re: maximum likelihood estimators

Unread post by Nabtheberliner »

Dear Tom,
Thanks a lot for your help.
I looked at it and, to be clear, applied it to the Chapter 2 example that Lütkepohl gives with the file E1: the West German economy with the 3 variables invest/income/consumption.
So, to check that I understood before going any further with my questions, I wrote the program

Code:

OPEN DATA "C:\Users\naceur\Desktop\econometrics time series\NEW INTRO TO MULTIPLE TIME SERIES ANALYSIS\etude1 West german economy Lütkepôhl\westgermaneco.txt"
CALENDAR(Q) 1960:1
DATA(FORMAT=PRN,ORG=COLUMNS) 1960:01 1982:04 invest income cons

set ldinvest = log(invest / invest{1})
set ldincome = log(income / income{1})
set ldcons = log(cons / cons{1})

system(model=L)
variables ldinvest ldincome ldcons
lags 1 to 2
det constant
end(system)
estimate(sigma,outsigma=V) * 1978:4

display V
OUTPUT, which matches the book "New Intro to Multiple...", p. 80: the estimate of the residual covariance matrix

Covariance\Correlation Matrix of Coefficients


LDINVEST LDINCOME LDCONS
LDINVEST 0.0019254179 0.13242415 0.28275482
LDINCOME 0.0000647493 0.0001241684 0.55526108
LDCONS 0.0001114228 0.0000555654 0.0000806498


I rescale it to get exactly the same numbers as in the book:

Code:

display "Rescaled Covariance Matrix" 1.0e+4*%sigma*%nobs/(%nobs-%nreg)
Estimate of the residual covariance matrix

21.29629
0.71617 1.37338
1.23240 0.61459 0.89204


Code:

display %kroneker(%sigma,%xx)

OUTPUT: the covariance matrix of the coefficients (a 21 x 21 matrix, too large to reproduce here in full; the first row begins 0.01423, 0.00347, 0.00238, 0.00208, -0.02033, -0.01675, ...)


Cross-checking the information from Lütkepohl, Enders, and you:
is that correctly understood?
Thanks, Tom, for your patience; I hope this is clear.
Attachment: westgermaneco.txt
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: maximum likelihood estimators

Unread post by TomDoan »

Yes, that would be how you get the grand covariance matrix of the coefficients. As you can see, it isn't very practical to display it---even for a rather small VAR like this, it's a 21 x 21 matrix, and he doesn't even try to show it in the book. Instead, he displays the t-statistics, which use only the diagonals. And if you check them out, you'll see that they're exactly what's already shown in the output, just in a slightly different order; RATS puts the constant last, while he puts them first.
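If you want to verify one of those t-statistics by hand, a sketch (using only variables that ESTIMATE already defines) is to pull the matching diagonal element of the grand matrix:

* Sketch: standard error of the first coefficient in equation 1, taken
* from the grand coefficient covariance matrix. RATS lists the CONSTANT
* last within each equation, so element (1,1) belongs to the first lag
* coefficient in the first equation.
compute covmat = %kroneker(%sigma,%xx)
disp "s.e. of first coefficient, equation 1 =" sqrt(covmat(1,1))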
Nabtheberliner
Posts: 33
Joined: Thu Apr 04, 2013 11:17 am

Re: maximum likelihood estimators

Unread post by Nabtheberliner »

Hi Tom,
Thanks a lot, it's clear.
May I come back to Problem 3.13 from Lütkepohl:
Use the mean-adjusted form of the VAR(1) model and estimate
the coefficients. Assume that the data generation process is Gaussian and
estimate the covariance matrix of the asymptotic distribution of the ML estimators.


You said:

You run the VAR by least squares on the mean-adjusted data without a CONSTANT in the regression. The full covariance matrix would be

Code:

disp %kroneker(%sigma,%xx)
I'm fine with the de-meaning part, that is clear, but

Code:

disp %kroneker(%sigma,%xx)
is this code the estimate of the covariance matrix of the asymptotic distribution of the ML estimators?
What I understand is that this covariance matrix relates to the mean-adjusted form of the VAR but still uses the OLS estimates, and as you said, "You run the VAR by least squares on the mean-adjusted data without a CONSTANT in the regression". So the OLS and ML estimators are asymptotically identical, based on subsection 3.4.3 of Lütkepohl's book, p. 90, "Properties of the ML Estimators".
Is that correct?
Thanks for the clarification Tom