About forcedfactor.src

Questions and discussions on Vector Autoregressions
dennis0125hk
Posts: 15
Joined: Thu Apr 09, 2009 8:17 am

About forcedfactor.src

Post by dennis0125hk »

Hi all,

I recently used forcedfactor.src in WINRATS to get a factor of a covariance matrix. I would like to understand how the code follows from the theory. I have looked at the code myself, but I do not see why it is written the way it is or why an SVD decomposition is involved.

Does anyone have any insight into forcedfactor.src?
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: About forcedfactor.src

Post by TomDoan »

dennis0125hk wrote: Hi all,

I recently used forcedfactor.src in WINRATS to get a factor of a covariance matrix. I would like to understand how the code follows from the theory. I have looked at the code myself, but I do not see why it is written the way it is or why an SVD decomposition is involved.

Does anyone have any insight into forcedfactor.src?
This is based upon the observation (hard to call it a theorem) that if P is any factor of the symmetric matrix S, then F is a factor if and only if F = PQ where Q is unitary (Q' = inv(Q)). By working with an arbitrary factor (say the Cholesky factor), you can replace the task of searching across factors with the simpler case of searching across the space of unitary matrices.

If you look at the calculation for forcing columns, you need A PI to be the first r columns of F, which means that inv(P)A PI are the first r columns of the Q matrix. If you take [inv(P)A | 0] and take its SVD (UWV'), the U matrix gives us a proper "Q" if we divide its columns based upon the non-zero vs. zero singular values. (%SVDECOMP has actually sorted the singular values in decreasing order since version 6, so the zero singular values will now always be the last ones; we just haven't updated the procedure to rely on that.)

The PI matrix is computed so that PI has the proper form---it must solve PI' A' inv(P)' inv(P) A PI = I with PI upper triangular. You don't actually use the first r columns of U, since we know what their span has to be anyway and we want the particular structure for the PI matrix. What the SVD gives us are the remaining columns needed to complete the factorization. You glue the inv(P)A PI together with the zero singular value columns from U to get the Q matrix and solve back for F.
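
To make the recipe concrete, here is a minimal numpy sketch of the construction described above. It is not the actual RATS code in forcedfactor.src; the function name forced_factor and the variable names are just for illustration, and it assumes sigma is positive definite and A has full column rank.

Code:
import numpy as np

def forced_factor(sigma, a):
    """Sketch: return F with F F' = sigma whose first r columns are A PI."""
    n = sigma.shape[0]
    r = a.shape[1]
    p = np.linalg.cholesky(sigma)              # arbitrary factor: P P' = sigma
    pinv_a = np.linalg.solve(p, a)             # inv(P) A
    # PI: upper triangular, solving PI' A' inv(P)' inv(P) A PI = I
    m = pinv_a.T @ pinv_a
    pi = np.linalg.inv(np.linalg.cholesky(m).T)
    q1 = pinv_a @ pi                           # first r columns of Q (orthonormal)
    # SVD of [inv(P)A | 0]: the columns of U paired with the zero singular
    # values span the orthogonal complement of col(inv(P) A)
    u, w, _ = np.linalg.svd(np.hstack([pinv_a, np.zeros((n, n - r))]))
    q = np.hstack([q1, u[:, r:]])              # glue the two blocks into Q
    return p @ q                               # solve back for F = P Q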
dennis0125hk
Posts: 15
Joined: Thu Apr 09, 2009 8:17 am

Re: About forcedfactor.src

Post by dennis0125hk »

Thanks, TomDoan, for your reply.

Now my question is: why does PI need to be found? As I understand it, forcedfactor.src is used to find a factor F of sigma whose first r columns are A (n x r), as the code requires. However, the code actually finds an F whose first r columns are A PI (n x r). Why is that the case?

Secondly, why does taking [inv(P)A | 0] and computing its SVD give a U that yields a proper "Q"?

Sorry for posting more questions; my matrix algebra skills are not very sophisticated.
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: About forcedfactor.src

Post by TomDoan »

dennis0125hk wrote: Thanks, TomDoan, for your reply.

Now my question is: why does PI need to be found? As I understand it, forcedfactor.src is used to find a factor F of sigma whose first r columns are A (n x r), as the code requires. However, the code actually finds an F whose first r columns are A PI (n x r). Why is that the case?
No. You can't expect A itself to be the first r columns of the factor; A is only supposed to span the first r columns of a factor, so you need an r x r matrix PI to transform A into part of the factor. PI can be any full-rank r x r matrix with the correct properties; the upper triangular version is chosen because it makes the first column of the factor a scale of the first column of A.
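
To see the point about the upper triangular choice with some made-up numbers (a numpy sketch, not the RATS code; sigma, a and the rest are arbitrary test data):

Code:
import numpy as np

rng = np.random.default_rng(0)
n, r = 4, 2
a = rng.standard_normal((n, r))                # the forced block A
x = rng.standard_normal((n, n))
sigma = x @ x.T                                # some positive definite sigma
m = a.T @ np.linalg.solve(sigma, a)            # A' inv(sigma) A = A' inv(P)' inv(P) A
pi = np.linalg.inv(np.linalg.cholesky(m).T)    # upper triangular PI with PI' M PI = I
print(np.allclose(pi.T @ m @ pi, np.eye(r)))               # True
print(np.allclose((a @ pi)[:, 0], pi[0, 0] * a[:, 0]))     # True: a scale of A's first column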
dennis0125hk wrote: Secondly, why does taking [inv(P)A | 0] and computing its SVD give a U that yields a proper "Q"?
That's what an SVD does. U is unitary by construction (as is V), which already means that PU is a factor. inv(P)A has (one hopes) full column rank, while the block of 0's obviously has zero column rank. So the columns of U corresponding to the non-zero diagonal elements in W will span the same space as inv(P)A, and the columns of U corresponding to the zero elements in W span the orthogonal complement of inv(P)A. That gives us everything we need, apart from the specific (upper triangular) form that the PI matrix supplies for the first r columns.
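
A quick numerical check of that claim (made-up data standing in for inv(P)A; the variable names are just illustrative):

Code:
import numpy as np

rng = np.random.default_rng(1)
n, r = 5, 2
pinv_a = rng.standard_normal((n, r))           # stand-in for inv(P) A, full column rank
u, w, _ = np.linalg.svd(np.hstack([pinv_a, np.zeros((n, n - r))]))
print(w)                                       # the last n-r singular values are (numerically) zero
q2 = u[:, r:]                                  # columns paired with the zero singular values
print(np.allclose(q2.T @ q2, np.eye(n - r)))   # True: orthonormal
print(np.allclose(q2.T @ pinv_a, 0))           # True: orthogonal to col(inv(P) A)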