var(aX+bY) = a^2 var(X) + 2ab cov(X,Y) + b^2 var(Y) regardless of the distribution of X and Y. The issue is that you don't have cov(X,Y). The calculations in the Enders example are all based (in some form) on the empirical distribution of the forecast errors over the training sample (2000:3 to 2012:4). If you do a weighted forecast, you can compute an estimate of the standard errors of forecast by computing... the standard errors of the weighted forecast over the training period. While you *can* estimate the variance of the forecasts by computing the full covariance matrix of the forecast errors and taking the variance of the linear combination from that, the simpler approach of computing statistics directly on the forecast errors works for any method of producing a forecast.

ac_1 wrote:Thanks. I can do (1), and (3) Var(aX+bY) for a Normal; I haven't attempted a Log-Normal or a Non-Central Chi-Squared.

TomDoan wrote:Re (2). No. It's not a mixture distribution. Mixture has a specific meaning, which I've explained. Equal averaging is (a).
(3). Compute using the training sample. It's a single methodology which can be applied to any method of computing and combining forecasts. And no, in general there is no other way: standard errors of a linear combination require the covariances, which in general aren't available from estimating the models separately.
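A minimal Python sketch (illustrative only, not the RATS code from the Enders example) of the two calculations above. The arrays err_a and err_b are hypothetical one-step forecast errors from the two models over the training sample; the weights are assumed for illustration.

```python
import numpy as np

# Hypothetical one-step forecast errors from models A and B over the
# training sample; placeholder random numbers here -- in practice these are
# the errors from forecasting each model over 2000:3-2012:4.
rng = np.random.default_rng(0)
err_a = rng.normal(0.0, 1.0, size=50)
err_b = 0.6 * err_a + rng.normal(0.0, 0.8, size=50)  # correlated with err_a

w_a, w_b = 0.5, 0.5                                  # combination weights

# Route 1: form the errors of the weighted forecast directly and take their
# standard deviation. Works for any method of producing/combining forecasts.
combined_err = w_a * err_a + w_b * err_b
se_direct = combined_err.std(ddof=1)

# Route 2: estimate the full covariance matrix of the forecast errors and
# apply var(aX+bY) = a^2 var(X) + 2ab cov(X,Y) + b^2 var(Y), i.e. w'Sw.
sigma = np.cov(err_a, err_b)                         # 2x2 sample covariance
w = np.array([w_a, w_b])
se_from_cov = np.sqrt(w @ sigma @ w)

print(se_direct, se_from_cov)                        # agree up to rounding
```

Both routes give the same number; the direct route on the combined errors just never needs the covariance explicitly.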
ac_1 wrote:The question is how to generate a mixture, x% from model A and (1-x)% from model B, and forecast (i) analytically and (ii) by bootstrapping?
Why would you want a probabilistic mixture of A and B? You aren't estimating models that way. You want averages, not mixtures. Stop confusing the two.
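To see why the two are not the same thing, here is a small simulation sketch (the Normal components are hypothetical, chosen purely for illustration): a weighted average combines the two draws observation by observation, while a probabilistic mixture picks one model per draw. The means agree, but the variances do not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = 0.5                                   # weight on model A

# Hypothetical forecast distributions, purely for illustration.
draws_a = rng.normal(2.0, 1.0, size=n)    # "model A"
draws_b = rng.normal(5.0, 2.0, size=n)    # "model B"

# Weighted AVERAGE: combine the two draws observation by observation.
average = x * draws_a + (1 - x) * draws_b

# Probabilistic MIXTURE: each draw comes from A with probability x,
# otherwise from B.
pick_a = rng.random(n) < x
mixture = np.where(pick_a, draws_a, draws_b)

print("means:    ", average.mean(), mixture.mean())   # roughly equal
print("variances:", average.var(), mixture.var())     # clearly different
```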