$$= \sum_{k=1}^{p}\left(\alpha_k^{p} - \alpha_{p+1}^{p+1}\,\alpha_{p+1-k}^{p}\right)\rho(k-j) + \alpha_{p+1}^{p+1}\,\rho(p+1-j) - \rho(j)$$

the application of (11.12) yields the first $j = 1, \ldots, p$ lines of the Yule–Walker equations $L_{p+1}[\alpha^{p+1}] = \gamma_{p+1}$:

$$\sum_{k=1}^{p} \alpha_k^{p+1}\,\rho(k-j) + \alpha_{p+1}^{p+1}\,\rho(p+1-j) = \rho(j).$$


Appendix M: Some Proofs


To get the last line of the Yule–Walker equations, with row index $p+1$, (11.13) is rewritten such that

$$\alpha_{p+1}^{p+1}\left(1 - \sum_{k=1}^{p}\alpha_{p+1-k}^{p}\,\rho(p+1-k)\right) + \sum_{k=1}^{p}\alpha_k^{p}\,\rho(p+1-k) = \rho(p+1).$$

Then

$$\sum_{k=1}^{p}\left(\alpha_k^{p} - \alpha_{p+1}^{p+1}\,\alpha_{p+1-k}^{p}\right)\rho(p+1-k) + \alpha_{p+1}^{p+1}\,\rho(0) = \rho(p+1)$$

so that, by means of (11.12),

$$\sum_{k=1}^{p}\alpha_k^{p+1}\,\rho(k-(p+1)) + \alpha_{p+1}^{p+1}\,\rho((p+1)-(p+1)) = \rho(p+1),$$

which is just the $(p+1)$th line of the Yule–Walker equations for the AR($p+1$) process.
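The recursion can also be checked numerically. The following sketch (our own illustration, not part of the text; the function names and the example autocorrelation sequence are our choices) builds the coefficients $\alpha^{p}$ step by step via (11.12) and (11.13) and compares them with a direct solution of the Yule–Walker system:

```python
# Numerical check (illustrative sketch, not from the text): the recursion
# (11.12)/(11.13) reproduces the direct solution of the Yule-Walker
# equations L_{p+1}[alpha^{p+1}] = gamma_{p+1}.
import numpy as np

def yule_walker_direct(rho, p):
    """Solve sum_{k=1}^p alpha_k rho(k-j) = rho(j), j = 1..p, directly."""
    L = np.array([[rho[abs(k - j)] for k in range(1, p + 1)]
                  for j in range(1, p + 1)])
    return np.linalg.solve(L, rho[1:p + 1])

def durbin_levinson(rho, p):
    """Build alpha^p recursively, starting from alpha_1^1 = rho(1)."""
    alpha = np.array([rho[1]])
    for m in range(1, p):
        # (11.13): solve for the new coefficient alpha_{m+1}^{m+1}
        a_new = (rho[m + 1] - alpha @ rho[m:0:-1]) / (1 - alpha @ rho[1:m + 1])
        # (11.12): alpha_k^{m+1} = alpha_k^m - alpha_{m+1}^{m+1} alpha_{m+1-k}^m
        alpha = np.concatenate([alpha - a_new * alpha[::-1], [a_new]])
    return alpha

# an example autocorrelation sequence (mixture of two AR(1)-type decays)
rho = 0.5 * 0.9 ** np.arange(6) + 0.5 * (-0.6) ** np.arange(6)
p = 4
print(np.allclose(durbin_levinson(rho, p), yule_walker_direct(rho, p)))  # True
```

The two routines agree to machine precision, which is exactly what the algebra above asserts.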

Proof of [13.2.5]

Let us assume $X^{T*} X e = \lambda e$. Then

$$\lambda X e = X (X^{T*} X e) = (X X^{T*})(X e),$$

that is, $Xe$ is an eigenvector of $XX^{T*}$ provided $Xe \neq 0$. But $Xe = 0$ would imply $X^{T*} X e = 0$ and thus $\lambda e = 0$, contradicting the assumption that $\lambda \neq 0$.

Let us now assume that $e^k$ and $e^j$ are two linearly independent eigenvectors for the same eigenvalue. It then has to be shown that $Xe^k$ and $Xe^j$, $j \neq k$, are linearly independent as well. If $\alpha_j, \alpha_k$ are two numbers with $\alpha_k Xe^k + \alpha_j Xe^j = 0$, then $0 = \alpha_k X^{T*}Xe^k + \alpha_j X^{T*}Xe^j = \lambda(\alpha_k e^k + \alpha_j e^j)$, since both eigenvectors belong to the same eigenvalue. Since $\lambda \neq 0$, it follows that $\alpha_k e^k + \alpha_j e^j = 0$. Since the two eigenvectors $e^k$ and $e^j$ are linearly independent, it follows that $\alpha_k = \alpha_j = 0$, so that $Xe^k$ and $Xe^j$ are linearly independent.
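Both claims of this proof are easy to check numerically for a real matrix $X$ (so that $X^{T*}$ reduces to $X^T$). The sketch below, with our own variable names, confirms that $X^T X$ and $XX^T$ share their nonzero eigenvalues and that $X$ maps an eigenvector of $X^T X$ with $\lambda \neq 0$ to an eigenvector of $XX^T$:

```python
# Illustrative sketch (not from the text) of Theorem [13.2.5] for real X.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))        # real case: X^{T*} reduces to X^T

lam, E = np.linalg.eigh(X.T @ X)       # eigenpairs of X^T X (3 x 3)
mu = np.linalg.eigvalsh(X @ X.T)       # eigenvalues of X X^T (5 x 5)

# the nonzero spectra coincide (X X^T has two additional zero eigenvalues)
print(np.allclose(np.sort(mu)[-3:], np.sort(lam)))          # True

# X e is an eigenvector of X X^T with the same eigenvalue lambda
e, lam_max = E[:, -1], lam[-1]
print(np.allclose((X @ X.T) @ (X @ e), lam_max * (X @ e)))  # True
```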

Proof of Theorem [14.4.5]

We first restate the theorem: For any random vectors $\mathbf{Y}$ of dimension $m_Y$ and $\mathbf{X}$ of dimension $m_X$, there exists an orthonormal transformation $A$ and a non-singular transformation $B$ such that

$$\Sigma_{BX,BX} = I \qquad (M.4)$$
$$\Sigma_{AY,BX} = D \qquad (M.5)$$

where $D$ is an $m_Y \times m_X$ matrix for which all entries are zero except for non-negative diagonal elements $d_{jj} = \sqrt{\lambda_j}$, $j \leq \min(m_X, m_Y)$.

The theorem is proved in two steps. First, we derive two eigen-equations for the matrices A and B

and a linear link between these two matrices as necessary conditions. In the second step, we show that

the solutions of the eigen-equations satisfy equations (M.4) and (M.5).

Let us assume that we have determined two matrices $A$ and $B$ that satisfy the theorem. Then equations (M.5) and (M.4) may be rewritten as

$$A^T \Sigma_{YX} B = D \qquad (M.6)$$
$$\Sigma_{XX}^{-1} = BB^T. \qquad (M.7)$$

Multiplying (M.6) with itself leads to

$$A^T \Sigma_{YX} BB^T \Sigma_{XY} A = A^T \Sigma_{YX} \Sigma_{XX}^{-1} \Sigma_{XY} A = DD^T \qquad (M.8)$$


where $DD^T$ is a diagonal $m_Y \times m_Y$ matrix with positive entries $d_{jj}^2 = \lambda_j$, $j \leq \min(m_X, m_Y)$. Since $A$ is orthonormal, we can multiply (M.8) on the left by $A$ to obtain the first eigen-equation

$$\Sigma_{YX} \Sigma_{XX}^{-1} \Sigma_{XY} A = ADD^T. \qquad (M.9)$$

That is, the columns of $A$ satisfy (14.43):

$$\Sigma_{YX} \Sigma_{XX}^{-1} \Sigma_{XY}\, a^j = \lambda_j a^j. \qquad (M.10)$$

Equation (M.10) has $\min(m_X, m_Y)$ positive eigenvalues $\lambda_j = d_{jj}^2$. Beginning again with the transpose of (M.6), we find

$$B^T \Sigma_{XY} AA^T \Sigma_{YX} B = B^T \Sigma_{XY} \Sigma_{YX} B = D^T D. \qquad (M.11)$$

Re-expressing (M.7) as $B^T = B^{-1} \Sigma_{XX}^{-1}$ and substituting for $B^T$ into (M.11), we obtain the second eigen-equation

$$\Sigma_{XX}^{-1} \Sigma_{XY} \Sigma_{YX} B = BD^T D.$$

That is, the columns of $B$ satisfy (14.44):

$$\Sigma_{XX}^{-1} \Sigma_{XY} \Sigma_{YX}\, b^j = \lambda_j b^j. \qquad (M.12)$$

This completes the ¬rst part of the proof.

We now define matrices $A$ and $B$ as the matrices of eigenvectors of $\Sigma_{YX} \Sigma_{XX}^{-1} \Sigma_{XY}$ and $\Sigma_{XX}^{-1} \Sigma_{XY} \Sigma_{YX}$, respectively, and show that $A$ and $B$ satisfy the requirements of the theorem.

Since $\Sigma_{XX}$ is positive-definite symmetric, it may be written as $\Sigma_{XX} = (\Sigma_{XX}^{1/2})^T \Sigma_{XX}^{1/2}$ (see Appendix B). Thus, a vector $b$ solves (M.12) with eigenvalue $\lambda$ if and only if

$$c = \Sigma_{XX}^{1/2}\, b \qquad (M.13)$$

solves the eigen-equation

$$[C^T C]\, c = \lambda c \qquad (M.14)$$

where

$$C = \Sigma_{YX} \Sigma_{XX}^{-1/2}. \qquad (M.15)$$

Since $C^T C$ is Hermitian, all of its eigenvalues are non-negative reals, and it has $m_X$ orthonormal eigenvectors. Thus, eigenproblem (M.12) has $m_X$ linearly independent solutions $b^j = \Sigma_{XX}^{-1/2} c^j$ that satisfy (M.4):

$$(b^i)^T \Sigma_{XX}\, b^j = (c^i)^T (\Sigma_{XX}^{-1/2})^T \Sigma_{XX} \Sigma_{XX}^{-1/2} c^j = \delta_{ij}.$$

The eigenproblem (M.10) may be written as

$$[CC^T]\, a = \lambda a, \qquad (M.16)$$

which has the same eigenvalues as $C^T C$ (see Theorem [13.2.5]). If $c$ is an eigenvector of $C^T C$ with eigenvalue $\lambda \neq 0$, then

$$a = \frac{1}{\sqrt{\lambda}}\, Cc \qquad (M.17)$$

is an eigenvector of $CC^T$ with the same eigenvalue.
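At this point the construction is complete, and it can be illustrated numerically. The sketch below (our own code, not from the text; it uses sample covariances of simulated vectors, and all variable names are ours) builds $C$, the eigenvectors $c^j$, then $B$ from the $\Sigma_{XX}^{-1/2} c^j$ and $A$ from (M.17), and checks (M.4) and (M.5):

```python
# Illustrative sketch (not from the text): construct A and B as in the
# proof and verify (M.4) and (M.5) for sample covariance matrices.
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal((4000, 5)) @ rng.standard_normal((5, 5))
Y, X = Z[:, :2], Z[:, 2:]                  # m_Y = 2, m_X = 3
Zc = Z - Z.mean(axis=0)
S = Zc.T @ Zc / len(Z)                     # joint sample covariance
Syx, Sxx = S[:2, 2:], S[2:, 2:]

w, V = np.linalg.eigh(Sxx)                 # Sxx symmetric positive definite
Sx_mhalf = V @ np.diag(w ** -0.5) @ V.T    # Sxx^{-1/2}

C = Syx @ Sx_mhalf                         # (M.15)
lam, Cvecs = np.linalg.eigh(C.T @ C)       # (M.14), ascending eigenvalues
lam, Cvecs = lam[::-1], Cvecs[:, ::-1]     # reorder to descending

B = Sx_mhalf @ Cvecs                       # columns b^j = Sxx^{-1/2} c^j
r = 2                                      # number of nonzero eigenvalues here
A = C @ Cvecs[:, :r] / np.sqrt(lam[:r])    # (M.17), columns a^j

print(np.allclose(B.T @ Sxx @ B, np.eye(3)))             # (M.4): True
D = A.T @ Syx @ B                          # (M.5): diagonal entries sqrt(lam_j)
print(np.allclose(D[:, :r], np.diag(np.sqrt(lam[:r]))))  # True
```

Note that the same eigenvectors $c^j$ enter both $A$ and $B$, so the diagonal of $D$ comes out non-negative regardless of the arbitrary signs returned by the eigensolver.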

It remains to be shown that these vectors fulfil (M.5). Let $r$ be the number of eigenvectors of $C^T C$ and $CC^T$ that correspond to nonzero eigenvalues. For all indices $j$ and $i \leq r$, we find that

$$(a^i)^T \Sigma_{YX}\, b^j = \frac{1}{\sqrt{\lambda_i}}\, (Cc^i)^T \Sigma_{YX} \Sigma_{XX}^{-1/2} c^j$$
$$= \frac{1}{\sqrt{\lambda_i}}\, (c^i)^T C^T \Sigma_{YX} \Sigma_{XX}^{-1/2} c^j$$
$$= \frac{1}{\sqrt{\lambda_i}}\, (c^i)^T C^T C\, c^j = \sqrt{\lambda_j}\, \delta_{ij}.$$


When $i > r$,

$$(a^i)^T \Sigma_{YX}\, b^j = (a^i)^T Cc^j = 0$$

because $(a^i)^T C = 0$. We can show this by contradiction. Suppose $(a^i)^T C \neq 0$. Then we would have