The rest of this section is devoted to showing the convergence of this algorithm. The idea of alternating projections is quite old, and convergence results can be established in a general form. To this end we have to introduce the angle between two subspaces of a Hilbert space.

Definition 15.11 Let $U_1$ and $U_2$ be two closed subspaces of a Hilbert space $H$ and denote their intersection by $U$. The angle $\alpha$ between $U_1$ and $U_2$ is defined by
$$\cos\alpha = \sup\{(u, v) : u \in U_1 \cap U^\perp,\ v \in U_2 \cap U^\perp,\ \|u\|, \|v\| \le 1\}.$$
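As an informal numerical illustration (not part of the text), the cosine in Definition 15.11 can be computed for concrete subspaces. The helper names `orth` and `cos_angle` are invented here; the two planes in $\mathbb{R}^3$ below meet exactly in the $x$-axis, and their angle comes out as $45^\circ$.

```python
import numpy as np

def orth(A, tol=1e-10):
    """Orthonormal basis of the column space of A (via SVD)."""
    u, s, _ = np.linalg.svd(A, full_matrices=False)
    return u[:, s > tol]

def cos_angle(B1, B2, BU):
    """cos(alpha) of Definition 15.11 for subspaces with orthonormal-basis
    matrices B1, B2 and intersection basis BU."""
    P = np.eye(len(BU)) - BU @ BU.T          # projector onto U-perp
    C1, C2 = orth(P @ B1), orth(P @ B2)      # bases of U_i intersected with U-perp
    if C1.size == 0 or C2.size == 0:
        return 0.0                           # empty supremum: treated as 0 here
    # largest singular value of C1^T C2 = sup of (u, v) over unit vectors
    return np.linalg.svd(C1.T @ C2, compute_uv=False)[0]

# Toy example: two planes in R^3 meeting in the x-axis.
B1 = np.array([[1., 0.], [0., 1.], [0., 0.]])                            # span{e1, e2}
B2 = np.column_stack(([1., 0., 0.], np.array([0., 1., 1.]) / np.sqrt(2)))  # span{e1, (e2+e3)/sqrt(2)}
BU = np.array([[1.], [0.], [0.]])                                        # intersection: span{e1}
print(cos_angle(B1, B2, BU))   # -> approximately 0.7071 = 1/sqrt(2), i.e. alpha = 45 degrees
```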

With this definition we can prove the main result on alternating projections. This is the major part of proving the convergence of Algorithm 15.

272 Numerical methods

Theorem 15.12 Suppose that $Q_1, \dots, Q_k$ are orthogonal projections onto closed subspaces $U_1, \dots, U_k$ of a Hilbert space $H$. Let $U = \bigcap_{j=1}^k U_j$ and let $Q : H \to U$ be the orthogonal projection onto $U$. Finally, let $\alpha_j$ be the angle between $U_j$ and $A_j := \bigcap_{i=j+1}^k U_i$. Then, for any $f \in H$,
$$\|(Q_k \cdots Q_1) f - Qf\|^2 \le c^2 \|f - Qf\|^2,$$
where
$$c^2 \le 1 - \prod_{j=1}^{k-1} \sin^2 \alpha_j.$$

Proof Let us set $R := Q_k \cdots Q_1$. Since $Qf \in U$, every $Q_j$ fixes $Qf$, so $RQf = Qf$. Hence we have
$$(Q_k \cdots Q_1) f - Qf = Rf - Qf = R(f - Qf).$$
Furthermore, since each $Q_j$ is self-adjoint and fixes every $u \in U \subseteq U_j$,
$$(Rv, u) = (Q_k Q_{k-1} \cdots Q_1 v, u) = (Q_{k-1} \cdots Q_1 v, u) = \cdots = (v, u)$$
for all $v \in H$ and $u \in U$, which implies that $Rv \in U^\perp$ whenever $v \in U^\perp$. Since $f - Qf \in U^\perp$ it suffices to show that
$$\|Rv\|^2 \le \Bigl(1 - \prod_{j=1}^{k-1} \sin^2 \alpha_j\Bigr) \|v\|^2 \qquad \text{for all } v \in U^\perp.$$

This will be done by induction on $k$. If $k = 1$ then we have only one subspace and $R$ is the projection onto $U = U_1$. Hence $v \in U^\perp$ implies that $Rv = 0$, which is the claimed bound, since the empty product equals one. Now, for the induction step, we set $\widetilde{R} := Q_k \cdots Q_2$. We decompose an arbitrary $v \in U^\perp$ into $v = w + v_1$ with $w \in U_1$ and $v_1 \in U_1^\perp$. Since $Q_1 v_1 = 0$, this gives in particular $Rv = Rw + Rv_1 = Rw$. Next we set $w = w_1 + w_2$ with $w_1 \in A_1 = U_2 \cap \cdots \cap U_k$ and $w_2 \in A_1^\perp$, and conclude, using $Q_1 w = w$ and $\widetilde{R} w_1 = w_1$, that $Rw = \widetilde{R} w_1 + \widetilde{R} w_2 = w_1 + \widetilde{R} w_2$. Since the last two elements are orthogonal we obtain
$$\|Rw\|^2 = \|w_1\|^2 + \|\widetilde{R} w_2\|^2.$$

Moreover, induction applied to $\widetilde{R}$, acting on the subspaces $U_2, \dots, U_k$ with intersection $A_1$, gives
$$\|\widetilde{R} w_2\|^2 \le \Bigl(1 - \prod_{j=2}^{k-1} \sin^2 \alpha_j\Bigr) \|w_2\|^2.$$
From the last two equations and from $\|w\|^2 = \|w_1\|^2 + \|w_2\|^2$ we can conclude that
$$\|Rw\|^2 \le \Bigl(1 - \prod_{j=2}^{k-1} \sin^2 \alpha_j\Bigr) \|w\|^2 + \|w_1\|^2 \prod_{j=2}^{k-1} \sin^2 \alpha_j.$$
Now, $w$ lies in $U_1$ and is orthogonal to $U = U_1 \cap A_1$ (both $v$ and $v_1$ are orthogonal to $U$), and $w_1$ lies in $A_1$ and is also orthogonal to $U$. Since the angle between $U_1$ and $A_1$ is $\alpha_1$ and $(w_1, w_2) = 0$, we have
$$\|w_1\|^2 = (w_1, w) \le \cos \alpha_1 \|w_1\| \, \|w\|,$$

15.3 Alternating projections 273

giving $\|w_1\| \le \cos \alpha_1 \|w\|$. Finally, $\|w\| \le \|v\|$ allows us to derive
$$\|Rv\|^2 = \|Rw\|^2 \le \Bigl(1 - \prod_{j=2}^{k-1} \sin^2 \alpha_j\Bigr) \|v\|^2 + \|v\|^2 \cos^2 \alpha_1 \prod_{j=2}^{k-1} \sin^2 \alpha_j = \Bigl(1 - \prod_{j=1}^{k-1} \sin^2 \alpha_j\Bigr) \|v\|^2.$$
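The bound of Theorem 15.12 can be sanity-checked numerically. A minimal sketch (all matrices below are invented for the illustration): two planes in $\mathbb{R}^3$ meeting in the $x$-axis at angle $\alpha_1 = 45^\circ$, so the theorem predicts $\|Q_2 Q_1 f - Qf\|^2 \le \tfrac12 \|f - Qf\|^2$.

```python
import numpy as np

# Two planes in R^3 intersecting in the x-axis; the angle alpha_1 between
# them is 45 degrees, so Theorem 15.12 gives c^2 <= 1 - sin^2(alpha_1) = 1/2.
B1 = np.array([[1., 0.], [0., 1.], [0., 0.]])                              # U_1 = span{e1, e2}
B2 = np.column_stack(([1., 0., 0.], np.array([0., 1., 1.]) / np.sqrt(2)))  # U_2 = span{e1, (e2+e3)/sqrt(2)}
P1, P2 = B1 @ B1.T, B2 @ B2.T            # orthogonal projections Q_1, Q_2
PU = np.diag([1., 0., 0.])               # projection Q onto U = span{e1}

rng = np.random.default_rng(0)
for _ in range(100):
    f = rng.standard_normal(3)
    lhs = np.linalg.norm(P2 @ P1 @ f - PU @ f) ** 2   # ||(Q_2 Q_1) f - Q f||^2
    rhs = 0.5 * np.linalg.norm(f - PU @ f) ** 2       # c^2 ||f - Q f||^2
    assert lhs <= rhs + 1e-12
print("bound verified on 100 random f")
```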

It is time to see how this general result applies to our specific situation. We need to choose $Q_j := I - P_j$, the projection from N( ) onto $V_{X_j}^\perp$. Then we can express complete cycles of Algorithm 15 for the residuals as
$$f^{(n)} := f_{nk} = (Q_k \cdots Q_1)^n f,$$
which has the desired form. Moreover, since the residuals and approximants are connected by $f - s_j = f_j$, $j = nk + r$, convergence of the approximants follows from convergence of the residuals. It is not our goal to recover the function $f \in$ N( ), however; this would be doomed to failure. Instead we hope that the sequence $\{s_j\}$ converges to the unique interpolant $s_{f,X}$.
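A toy finite-dimensional sketch (the three planes below are invented for illustration and are not the subspaces $V_{X_j}^\perp$ of the text) shows the residual cycles $(Q_k \cdots Q_1)^n f$ converging to the projection onto the common intersection:

```python
import numpy as np

def proj(B):
    """Orthogonal projection onto the span of the orthonormal columns of B."""
    return B @ B.T

# Three planes through the x-axis (orthonormal bases, made up for the demo).
Q1 = proj(np.array([[1., 0.], [0., 1.], [0., 0.]]))                             # span{e1, e2}
Q2 = proj(np.column_stack(([1., 0., 0.], np.array([0., 1., 1.]) / np.sqrt(2))))
Q3 = proj(np.column_stack(([1., 0., 0.], np.array([0., 1., 2.]) / np.sqrt(5))))
QU = np.diag([1., 0., 0.])    # projection onto the common intersection span{e1}

f = np.array([1., 1., 1.])
g = f.copy()
for n in range(20):           # 20 complete cycles of the three projections
    g = Q3 @ Q2 @ Q1 @ g
print(np.linalg.norm(g - QU @ f))   # residual after 20 cycles: below 1e-8
```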

Finally, we have to make sure that the constant $c$ in Theorem 15.12 is strictly less than one, which means that all the angles $\alpha_j$ have to be positive. In general this is obviously not the case, so we have to make an additional assumption on the choice of the subsets $X_k$ of $X$. Fortunately, this additional assumption turns out to be very modest.

Definition 15.13 Let $X_1, \dots, X_k$ be nonempty subsets of $\mathbb{R}^d$ and let $Y_j = \bigcup_{i=j}^k X_i$, $1 \le j \le k$. The sets $X_1, \dots, X_k$ will be called weakly distinct if $X_j \ne Y_j$ and $Y_{j+1} \ne Y_j$ for each $1 \le j \le k-1$.

If each set $X_j$ contains a point that is not contained in any of the other sets, the collection of sets is obviously weakly distinct.
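Weak distinctness is easy to test directly. The helper below is hypothetical (not from the text) and encodes the condition of Definition 15.13, $X_j \ne Y_j$ and $Y_{j+1} \ne Y_j$ for $1 \le j \le k-1$:

```python
def weakly_distinct(sets):
    """Check Definition 15.13: X_j != Y_j and Y_{j+1} != Y_j for 1 <= j <= k-1,
    where Y_j is the union of X_j, ..., X_k."""
    k = len(sets)
    Y = [set().union(*sets[j:]) for j in range(k)]
    return all(sets[j] != Y[j] and Y[j + 1] != Y[j] for j in range(k - 1))

# Each set contributes a point of its own -> weakly distinct.
print(weakly_distinct([{1, 4}, {2, 4}, {3, 4}]))   # -> True
# X_1 is contained in X_2 union X_3, i.e. X_1 = Y_1 -> not weakly distinct.
print(weakly_distinct([{2, 3}, {2}, {3}]))         # -> False
```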

Lemma 15.14 Let $X_1, \dots, X_k$ be finite subsets of $\mathbb{R}^d$ and let $Y_j = \bigcup_{i=j}^k X_i$. Then we have the following.
(1) $V_{Y_j}^\perp = \bigcap_{i=j}^k V_{X_i}^\perp$ for $1 \le j \le k$.
(2) If the sets $X_1, \dots, X_k$ are weakly distinct then the subspaces $V_{X_j}^\perp \cap V_{Y_j}$ and $V_{Y_{j+1}}^\perp \cap V_{Y_j}$ contain nonzero functions for $1 \le j \le k-1$.

Proof First, $V_{X_j}^\perp$ contains exactly those functions that vanish on $X_j$. Hence, $\bigcap_{i=j}^k V_{X_i}^\perp$ consists exactly of those functions that vanish on $Y_j = \bigcup_{i=j}^k X_i$. But this subspace is $V_{Y_j}^\perp$. Now