Problem with Time Varying VAR
Posted: Mon Mar 28, 2016 3:54 am
Dear all,
I have been writing time-varying VAR code in RATS using the Carter-Kohn algorithm, but I am getting very bad results from it. In the first step of the algorithm, after drawing BT from a normal distribution, I use %modelsetcoeffs to load BT into a VAR so that I can check stability, and I find that the largest root of the model is between 30 and 60, when it should not be higher than one. Looking for an explanation, I found that the Cholesky decomposition of the conditional variance needed to draw BT is causing the problem.
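Not RATS code, but to make the two pieces of the problem concrete, here is a NumPy sketch of (a) the companion-matrix stability check that %modelsetcoeffs / %modellargestroot perform for you, and (b) a defensive Cholesky that symmetrizes the covariance and adds a small ridge when rounding has pushed it away from positive definite. The function names and the jitter scheme are my own illustration, not anything from RATS:

```python
# Illustration only (assumptions: NumPy; a VAR(p) in n variables with
# lag matrices A1..Ap stacked side by side in B, constants excluded).
import numpy as np

def companion_max_root(B, n, p):
    """Largest eigenvalue modulus of the VAR companion matrix.

    B : (n, n*p) array holding [A1 A2 ... Ap].
    Stability of the VAR requires the returned value to be < 1.
    """
    comp = np.zeros((n * p, n * p))
    comp[:n, :] = B[:, :n * p]          # lag matrices in the top block row
    comp[n:, :-n] = np.eye(n * (p - 1))  # identity blocks below the diagonal
    return np.abs(np.linalg.eigvals(comp)).max()

def safe_cholesky(V, jitter=1e-10):
    """Cholesky of a covariance that may have lost symmetry or positive
    definiteness to rounding: symmetrize, then add a growing ridge to the
    diagonal until the factorization succeeds."""
    V = 0.5 * (V + V.T)
    for _ in range(10):
        try:
            return np.linalg.cholesky(V)
        except np.linalg.LinAlgError:
            V = V + jitter * np.eye(V.shape[0])
            jitter *= 10.0
    raise np.linalg.LinAlgError("covariance is too far from positive definite")
```

In a Carter-Kohn step one would then draw bt = mean + safe_cholesky(V) @ z with z standard normal, and (in Cogley-Sargent style TVP-VARs) reject and redraw whenever companion_max_root(...) comes back at or above one, so explosive coefficient draws never enter the chain.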
I think the easiest route to solving my problem is to use the Durbin-Koopman algorithm instead, but I have not been able to find example code for it in RATS (I know it is used inside DLM, but I would like to see how it works). If any of you have such code, could you please provide it? Or do you have any other idea for solving the problem?
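For reference while waiting for a RATS version, here is a minimal NumPy sketch of the Durbin-Koopman (2002) simulation smoother for a linear Gaussian state space y_t = Z a_t + e_t, a_{t+1} = T a_t + u_t. This is my own illustration of the published algorithm, not the internals of DLM: simulate an unconditional path (a+, y+) from the model, smooth both the observed and the simulated data, and combine:

```python
# Illustration only, assuming NumPy; e_t ~ N(0, H), u_t ~ N(0, Q),
# a_1 ~ N(a1, P1). All system matrices are time-invariant here.
import numpy as np

def kalman_smooth(y, Z, H, T, Q, a1, P1):
    """Smoothed state means E[a_t | y_1..y_n] via the Kalman filter
    and the backward r-recursion."""
    n, m = y.shape[0], len(a1)
    at = np.zeros((n, m)); Pt = np.zeros((n, m, m))
    v = np.zeros_like(y); Finv = np.zeros((n, y.shape[1], y.shape[1]))
    L = np.zeros((n, m, m))
    a, P = a1.copy(), P1.copy()
    for t in range(n):                       # forward filtering pass
        at[t], Pt[t] = a, P
        v[t] = y[t] - Z @ a
        Finv[t] = np.linalg.inv(Z @ P @ Z.T + H)
        K = T @ P @ Z.T @ Finv[t]
        L[t] = T - K @ Z
        a = T @ a + K @ v[t]
        P = T @ P @ L[t].T + Q
    r = np.zeros(m)                          # backward smoothing pass
    ahat = np.zeros((n, m))
    for t in range(n - 1, -1, -1):
        r = Z.T @ Finv[t] @ v[t] + L[t].T @ r
        ahat[t] = at[t] + Pt[t] @ r
    return ahat

def dk_draw(y, Z, H, T, Q, a1, P1, rng):
    """One draw of the state path from p(a_1..a_n | y),
    Durbin-Koopman style: ahat(y) - ahat(y+) + a+."""
    n, m = y.shape[0], len(a1)
    ap = np.zeros((n, m)); yp = np.zeros_like(y)
    a = rng.multivariate_normal(a1, P1)
    cq, ch = np.linalg.cholesky(Q), np.linalg.cholesky(H)
    for t in range(n):                       # simulate a+ and y+ forward
        ap[t] = a
        yp[t] = Z @ a + ch @ rng.standard_normal(y.shape[1])
        a = T @ a + cq @ rng.standard_normal(m)
    return (kalman_smooth(y, Z, H, T, Q, a1, P1)
            - kalman_smooth(yp, Z, H, T, Q, a1, P1) + ap)
```

The attraction over Carter-Kohn is that no conditional covariance ever has to be factorized backward period by period; the only Cholesky factors needed are those of H and Q, which avoids exactly the kind of breakdown described above.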
Kind regards,
fructuoso