14 posts
• Page **1** of **1**

Dear forum members.

Could you give me any advice?

I tried the textbook program below:

http://www.estima.com/textbooks/mwhp431.prg

But, unfortunately, it does not work.

I think the reason is that %%dlmsx0 is not defined in the source program.

How should I fix this program?

Best Regards,

T_Field


- T_FIELD
**Posts:** 22 **Joined:** Sat May 15, 2010 8:03 pm

Try this:

```
*
* Example 8/5/2 from pp 431-432
*
open data capital.dat
calendar(q) 1953
data(format=free,org=columns) 1953:1 1974:4 capital approp
*
set gcap = log(capital/capital{1})
set gapp = log(approp/approp{1})
diff(center) gcap / cgcap
diff(center) gapp / cgapp
*
source varmadlm.src
*
compute VARMADLMSetup(2,1,2)
vcv(matrix=sigma)
# gcap gapp
nonlin(parmset=varmaparms) phi(1) phi(2) theta(1) sigma
dlm(startup=varmadlminit(2,1,2),a=%%dlma,f=%%dlmf,c=%%dlmc,sw=sigma,$
   y=||gcap,gapp||,presample=ergodic,$
   parmset=varmaparms,$
   pmethod=simplex,piters=10,method=bfgs,iters=200) 2 1974:12 states
```

- TomDoan
**Posts:** 3617 **Joined:** Wed Nov 01, 2006 5:36 pm

Thank you very much for your prompt reply.


It works well.

Then, may I ask two additional questions?

(1) I obtained slightly different results between the two codes below. Why?

```
compute VARMADLMSetup(1,1,1)
nonlin(parmset=varmaparms) phi(1) theta(1) sigma MU
vcv(matrix=sigma)
# X1
SSTATS(MEAN) 200 400 X1>>MU
dlm(startup=varmadlminit(1,1,1),a=%%dlma,f=%%dlmf,c=%%dlmc,sw=sigma,$
   y=X1-MU,presample=ergodic,$
   parmset=varmaparms,$
   pmethod=simplex,piters=10,method=bfgs,iters=200) 200 400 states

boxjenk(MAXL,METHOD=BFGS,constant,ar=1,ma=1) X1 200 400
```

(2) How should I adjust the codes below to obtain the same results?

Could you also show me how to estimate "usamodel" below with ML (not OLS)?

```
***DLM
compute VARMADLMSetup(2,0,2)
nonlin(parmset=varmaparms) phi(1) phi(2) sigma MU1 MU2
vcv(matrix=sigma)
# X1 X2
SSTATS(MEAN) 200 400 X1>>MU1
SSTATS(MEAN) 200 400 X2>>MU2
dlm(startup=varmadlminit(2,0,2),a=%%dlma,f=%%dlmf,c=%%dlmc,sw=sigma,$
   y=||X1-MU1,X2-MU2||,presample=ergodic,$
   parmset=varmaparms,$
   pmethod=simplex,piters=10,method=bfgs,iters=200) 200 400 states

***ESTIMATE
system(model=usamodel)
variables X1 X2
lags 1 to 2
det constant
end(system)
estimate 200 400
```

Thank you in advance for your trouble.

- T_FIELD
**Posts:** 22 **Joined:** Sat May 15, 2010 8:03 pm

T_FIELD wrote: Thank you very much for your prompt reply.

It works well.

Then, may I ask two additional questions?

(1) I obtained slightly different results between the two codes below. Why?

If it's more than trivially different, please send the full example to support@estima.com. The coefficients should be quite close; the standard errors a bit less so. BOXJENK uses a different set of guess values, and BFGS standard errors will be somewhat different depending upon the path taken to get to the optimum.

T_FIELD wrote: Could you also show me how to estimate "usamodel" below with ML (not OLS)?

```
***DLM
compute VARMADLMSetup(2,0,2)
nonlin(parmset=varmaparms) phi(1) phi(2) sigma MU1 MU2
vcv(matrix=sigma)
# X1 X2
SSTATS(MEAN) 200 400 X1>>MU1
SSTATS(MEAN) 200 400 X2>>MU2
dlm(startup=varmadlminit(2,0,2),a=%%dlma,f=%%dlmf,c=%%dlmc,sw=sigma,$
   y=||X1-MU1,X2-MU2||,presample=ergodic,$
   parmset=varmaparms,$
   pmethod=simplex,piters=10,method=bfgs,iters=200) 200 400 states

***ESTIMATE
system(model=usamodel)
variables X1 X2
lags 1 to 2
det constant
end(system)
estimate 200 400
```

Thank you in advance for your trouble.

The VARMA code with the MA=0 should give you the VAR estimated by ML. The parameterizations of the constants are different between the VAR form and the state space form, but otherwise the models are the same.

- TomDoan
**Posts:** 3617 **Joined:** Wed Nov 01, 2006 5:36 pm

Thank you so much for your kind reply.

I understand your answer to question 1 and what you mean about question 2.

But I got big differences in the standard errors of the coefficients between the two programs in question 2.

(The difference was not so big when I used the data from your mwhp431.prg example for Prof. Makridakis' textbook, but once I swapped in my own data, I got big differences in the standard errors.)

Comparing the ratios of the standard errors in my first program (***DLM) to those in my second (***ESTIMATE) across coefficients, the ratios are not equal to each other. So my question is how RATS derives the standard errors of the coefficients when we use the DLM instruction. (Is the definition s**2*(X'X)**T? If so, I think the ratios mentioned above should be equal to each other.)

Again, thank you in advance for your trouble.


- T_FIELD
**Posts:** 22 **Joined:** Sat May 15, 2010 8:03 pm

The BFGS estimate of the (inverse) Hessian is constructed by looking at the changes in the gradient from iteration to iteration. If you have N parameters, you get N pieces of information about the Hessian at each iteration, and you need a total of N^2 to fully determine it. The underlying result is that if the function itself is quadratic (and exact line search is used), you get exactly the Hessian at the end of N iterations; that would be true regardless of the guess values. If the function is only locally quadratic, you only get an approximation to the Hessian, and the approximation will be different for each set of starting values. (For instance, if you start too close to the optimum, you might not even complete the N iterations required, at a minimum, to estimate the curvature.) You would not expect the two sets of standard errors to be scalings of each other.
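The path dependence can be illustrated outside RATS. This is a Python/scipy sketch, not RATS code: scipy's BFGS stands in for METHOD=BFGS, and the Rosenbrock function stands in for a log-likelihood that is only locally quadratic. Two starts reach the same optimum, but the accumulated inverse-Hessian estimates generally differ.

```python
import numpy as np
from scipy.optimize import minimize

# A smooth but non-quadratic objective (the Rosenbrock function)
def f(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# Same function, two different starting points
res_a = minimize(f, x0=[-1.2, 1.0], method="BFGS")
res_b = minimize(f, x0=[0.9, 0.9], method="BFGS")

# Both runs find the same optimum near (1, 1)...
print(res_a.x, res_b.x)

# ...but the inverse-Hessian estimates were accumulated along different
# paths, so they generally differ, and standard errors computed from
# them would differ too.
print(res_a.hess_inv)
print(res_b.hess_inv)
```

Only when the objective is exactly quadratic (with exact line search) would both runs recover the same, exact Hessian.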

- TomDoan
**Posts:** 3617 **Joined:** Wed Nov 01, 2006 5:36 pm

Thank you very much for your kind reply.

I can obtain almost the same results as my 1st program (***ESTIMATE) using my 2nd program (***DLM).

Then I changed the 2nd program to VARMA(2,1,2), but I got the following message:

## DLM1. Rank of Prediction Error Variance < Number of Observables

What is the meaning of this message? Or, which part of the User's Guide should I read?

Though I am using 100 observations for this VARMA(2,1,2), do I need to add more observations?

I am looking forward to your advice.


- T_FIELD
**Posts:** 22 **Joined:** Sat May 15, 2010 8:03 pm

T_FIELD wrote: Thank you very much for your kind reply.

I can obtain almost the same results as my 1st program (***ESTIMATE) using my 2nd program (***DLM).

Then I changed the 2nd program to VARMA(2,1,2), but I got the following message:

## DLM1. Rank of Prediction Error Variance < Number of Observables

What is the meaning of this message? Or, which part of the User's Guide should I read?

Though I am using 100 observations for this VARMA(2,1,2), do I need to add more observations?

I am looking forward to your advice.

The message is actually the opposite of not having enough data. At some step in the Kalman filter (usually the first), the one-step ahead error covariance matrix has a lower rank than the number of observables. If you have two observables, it means the model is predicting a singularity (rank one error process). Missing data won't be a problem for that since it's only necessary that the rank of the error process be >= the actual size of the y vector. If the y vector is empty, that has to be true.

The example from the Makridakis book seems to work fine at the VARMA(2,1,2) setting. You'll have to post the whole example of what you're actually doing.
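The linear algebra behind the message can be sketched in numpy (an illustration, not RATS's actual internals; it assumes the usual measurement equation y = C'x + v with prediction error variance F = C'PC + SV):

```python
import numpy as np

# Predicted state covariance P, loadings C, and measurement error SV,
# chosen so the model predicts a singularity: both observables are the
# same linear combination of the states, with no measurement error.
P = np.eye(2)
C = np.array([[1.0, 1.0],
              [1.0, 1.0]])
SV = np.zeros((2, 2))

# One-step prediction error covariance for y = C'x + v
F = C.T @ P @ C + SV

rank = np.linalg.matrix_rank(F)
n_obs = F.shape[0]
print(rank, n_obs)  # rank 1 < 2 observables: the filter cannot invert F
```

When this happens, the Kalman filter cannot invert F to compute the update, which is exactly the condition the DLM1 message reports.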

- TomDoan
**Posts:** 3617 **Joined:** Wed Nov 01, 2006 5:36 pm

I need to estimate a VARMA for 7 variables with one lag in both the VAR and MA components. How do I adjust the varmadlm code?

- hashem
**Posts:** 16 **Joined:** Sun Dec 12, 2010 11:11 am

hashem wrote: I need to estimate a VARMA for 7 variables with one lag in both the VAR and MA components. How do I adjust the varmadlm code?

The two procedures have the same basic structure:

```
* VARMADLMSetup(p,q,n) is called first. p and q are the standard AR and
* MA counts. n is the number of endogenous variables.
*
* VARMADLMInit(p,q,n) is used as a "STARTUP" formula for DLM. Add an
* option like STARTUP=VARMADLMInit(p,q,n) to the DLM instruction.
```

So for a vector ARMA(1,1) on seven variables you need to use

```
compute VARMADLMSetup(1,1,7)
nonlin(parmset=varmaparms) phi(1) theta(1) sigma
```

The estimation instruction is

```
dlm(startup=varmadlminit(1,1,7),a=%%dlma,f=%%dlmf,c=%%dlmc,sw=sigma,$
   y=||<<list of your dependent variables separated by commas>>||,presample=ergodic,$
   parmset=varmaparms,$
   pmethod=simplex,piters=5,method=bfgs,iters=500) <<start>> <<end>> states
```

where anything in <<...>> needs to be adapted to your data set.
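One thing to keep in mind at n=7 is the size of the parameter set. A back-of-the-envelope count (a Python sketch; it assumes, as in VARMADLM.SRC, that each phi(i) and theta(i) is a full n x n matrix and sigma is a symmetric n x n covariance):

```python
# Free parameters in a full VARMA(p,q) on n variables:
#   p*n^2 AR coefficients + q*n^2 MA coefficients + n*(n+1)/2 in sigma
def varma_param_count(p, q, n):
    return p * n**2 + q * n**2 + n * (n + 1) // 2

print(varma_param_count(1, 1, 7))   # 49 + 49 + 28 = 126 parameters
print(varma_param_count(2, 1, 2))   # the textbook example: 15 parameters
```

With over a hundred free parameters, expect the estimation to be slow and make sure your sample is long enough to identify them; the preliminary simplex iterations become especially important at that size.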

- TomDoan
**Posts:** 3617 **Joined:** Wed Nov 01, 2006 5:36 pm

Thank you, Tom, for your prompt response.

When I tried your code, though, I got this message:

## OP3. This Instruction Does Not Have An Option F

>>>>(1,1,7),a=%%dlma,f=<<<<

regards

hashem


- hashem
**Posts:** 16 **Joined:** Sun Dec 12, 2010 11:11 am

You need a newer version of the software. The F option was added with RATS version 7.

- TomDoan
**Posts:** 3617 **Joined:** Wed Nov 01, 2006 5:36 pm

How can I add a DCC GARCH model to this VARMA-in-mean model?

- hashem
**Posts:** 16 **Joined:** Sun Dec 12, 2010 11:11 am

You don't. The VARMA mean with GARCH errors is fundamentally a GARCH model, not a VARMA model. The maximum likelihood state-space estimation of a VARMA model done by VARMADLM.SRC and the DLM instruction won't work with GARCH errors. You need to stick with what you are doing in your other thread, using a GARCH model with lagged residuals in the mean model.

- TomDoan
**Posts:** 3617 **Joined:** Wed Nov 01, 2006 5:36 pm
