Gibbs Sampling with Markov Switching Models
Posted: Thu Jul 08, 2010 10:15 am
This is an example of Gibbs sampling with a Markov switching model. It implements a three-branch estimator for a Gaussian mixture model. In this case, the "regressors" are just the constant (that is, just means), though the analysis is written to generalize fairly easily to any regression equation where only the current state enters into the branches at time t. (A Hamilton-type model, where lagged states are also required, is more complicated.)
This section of code is going to be very similar in almost any Gibbs sampling on this type of model---even ones where the regression equations are more complicated. It draws the states given all the other settings using the forward filtering-backward sampling described in Chib (1996), "Calculating Posterior Distributions and Modal Estimates in Markov Mixture Models", Journal of Econometrics, vol 75, 79-97. Those sampled states are then used to draw a new transition matrix. The only part of this that depends upon the model is the calculation of the F vector.
The remaining calculations are fairly standard Bayesian inference procedures, covered briefly in chapter 13 of the User's Guide, and in much greater detail in the Bayesian Econometrics course book (http://www.estima.com/courses_completed.shtml).
Program file:
Data file:
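For readers who don't use RATS, the forward filter and backward sampling pass in the code below can be sketched in Python with numpy. This is a minimal sketch, not the original program: `ffbs`, `mu`, and `sigma` are illustrative names, and it uses the same convention as the RATS code, where P(i,j) is the probability of moving to state i from state j (columns sum to one).

```python
import numpy as np

def ffbs(y, mu, sigma, P, rng):
    """Forward filtering-backward sampling (Chib 1996) for a Gaussian
    mixture with Markov-switching means.  P[i, j] = Pr(move to i | from j),
    so columns of P sum to one.  Returns one draw of the state sequence."""
    T, m = len(y), len(mu)
    # Start with the ergodic (stationary) probabilities: the eigenvector
    # of P for eigenvalue 1, normalized to sum to one.
    vals, vecs = np.linalg.eig(P)
    pstar = np.real(vecs[:, np.argmax(np.real(vals))])
    pstar = pstar / pstar.sum()
    ptt = np.empty((T, m))
    for t in range(T):
        # Likelihood (not logged) of each state at time t
        f = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        # P @ pstar is the predicted probability of the states given
        # data through t-1; multiply by likelihoods and renormalize.
        pf = (P @ pstar) * f
        pstar = ptt[t] = pf / pf.sum()
    # Backward sampling: draw the last state from p(T|T), then work back.
    state = np.empty(T, dtype=int)
    state[T - 1] = rng.choice(m, p=ptt[T - 1])
    for t in range(T - 2, -1, -1):
        pr = P[state[t + 1], :] * ptt[t]
        state[t] = rng.choice(m, p=pr / pr.sum())
    return state
```

With well-separated means the sampled states track the data closely, which is a quick sanity check on the filter.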
Code: Select all
*
* Compute the filtered probabilities. This is the same calculation as
* is done in maximum likelihood estimation. Start with the ergodic
* probability.
*
compute pstar=%mcergodic(p)
do time=rstart,rend
*
* Compute the likelihoods (not logged) of the states at <<time>>
* given the current settings
*
ewise f(i)=exp(%logdensity(1.0/(hscale*h(i)),sp500(time)-%dot(%eqnxvector(eqn,time),b(i))))
*
* Compute p(t|t) for the states. p*pstar is the predicted
* probability of the states given data through t-1.
*
compute pf=(p*pstar).*f
compute pstar=ptt(time)=pf/%sum(pf)
end do time
*
* Smooth and sample
*
compute state(rend)=%ranbranch(ptt(rend))
do time=rend-1,rstart,-1
compute state(time)=%ranbranch(%xrow(p,state(time+1)).*ptt(time))
end do time
*
* Update transition probabilities
* Compute the sample transition matrix
*
compute psamp=%zeros(mstate,mstate)
do time=rstart+1,rend
compute psamp(state(time),state(time-1))+=1
end do time
*
* Draw from Dirichlet distribution for each column of P and stuff it
* back into P.
*
do i=1,mstate
ewise pcol(j)=%rangamma(psamp(j,i)+%if(j==i,rstay+1,rmove+1))
compute %psubmat(p,1,i,pcol/%sum(pcol))
end do i
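The Dirichlet step at the end works by normalizing independent gamma draws, which is the same device as the %rangamma loop. A minimal Python sketch (numpy; `draw_transition` is an illustrative name, and `psamp[i, j]` counts transitions from state j to state i, matching the convention used when the counts were accumulated above):

```python
import numpy as np

def draw_transition(psamp, rstay, rmove, rng):
    """Draw each column of the transition matrix P from its Dirichlet
    posterior by normalizing independent gamma draws.  psamp[i, j] counts
    transitions from state j to state i; rstay and rmove play the role of
    the prior 'stay' and 'move' counts in the RATS code."""
    m = psamp.shape[0]
    P = np.empty((m, m))
    for i in range(m):
        # Posterior shapes: counts of transitions out of state i plus prior
        shape = psamp[:, i] + np.where(np.arange(m) == i, rstay + 1.0, rmove + 1.0)
        g = rng.gamma(shape)
        P[:, i] = g / g.sum()   # normalized gammas are a Dirichlet draw
    return P
```

Each column of the result is a valid probability distribution, and with large transition counts the draw concentrates near the empirical transition frequencies.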