we will change the method slightly, in a way that allows us to prove convergence almost independently of the particular choice of the sets of centers.

For $k = 1, \ldots, N - q$ let us choose index sets $L_k \subseteq \{k, k+1, \ldots, N\}$. Let us further assume that $k \in L_k$ and that $\{x_j : j \in L_k\}$ is $\mathcal{P}$-unisolvent. Finally, we assume that the remaining points, i.e. the points corresponding to the indices in $L^* = \{N-q+1, \ldots, N\}$, are also $\mathcal{P}$-unisolvent. This can be achieved by rearranging the points.

The difference from the iteration (15.4) is that now we do not add all the local cardinal functions in one step to obtain the next iteration of our approximation. Instead, each iteration is divided into three steps. Let us suppose that $s^{(j)} =: s_0^{(j)}$ is the current approximation to $s_{f,X}$.

The first step consists of $N - q$ stages. For $k = 1, \ldots, N - q$, we want to add $u_k$ times a certain factor to the current approximant, i.e. we want to form $s_k^{(j)} = s_{k-1}^{(j)} + \theta_k^{(j)} u_k$. The value of $\theta_k^{(j)}$ will be chosen such that the error decreases in a certain way.

In the second step we interpolate in the remaining points. Hence we compute an interpolant $\sigma^{(j)}$ such that $\sigma^{(j)}(x_i) = f_i - s_{N-q}^{(j)}(x_i)$ for $i \in L^*$. Then the new iteration is given by $s^{(j+1)} = s_{N-q}^{(j)} + \sigma^{(j)}$.

In the final step we update the residuals, i.e. we compute $r^{(j+1)} = \bigl[ f_i - s^{(j+1)}(x_i) \bigr]_{1 \le i \le N}$ and stop when $\|r^{(j+1)}\|_\infty < \epsilon$ for a prescribed accuracy $\epsilon > 0$.

Our choice of $\sigma^{(j)}$ in the second step forces each iteration $s^{(j)}$ to interpolate the data in the remaining points, i.e. $s^{(j)}(x_i) = f_i$ for $N - q + 1 \le i \le N$ and all $j$.

As mentioned before, the coefficient $\theta_k^{(j)}$ should cause a decrease in the residual. It is easy to compute the optimal value for this coefficient if the residual is measured in the native space (semi-)norm.

Lemma 15.8 Suppose that $s$ and $u$ are both functions from the native space $\mathcal{N}_\Phi(\Omega)$. Suppose further that $u$ is not in the null space of the semi-norm. Then $\theta \mapsto |s - \theta u|^2_{\mathcal{N}_\Phi(\Omega)}$ attains its minimum for

$$\theta = \frac{(s, u)_{\mathcal{N}_\Phi(\Omega)}}{|u|^2_{\mathcal{N}_\Phi(\Omega)}}.$$

Proof We have to minimize the parabola

$$f(\theta) := |s - \theta u|^2_{\mathcal{N}_\Phi(\Omega)} = |s|^2_{\mathcal{N}_\Phi(\Omega)} - 2\theta (s, u)_{\mathcal{N}_\Phi(\Omega)} + \theta^2 |u|^2_{\mathcal{N}_\Phi(\Omega)}.$$

The minimum is found by solving $f'(\theta) = -2 (s, u)_{\mathcal{N}_\Phi(\Omega)} + 2 \theta |u|^2_{\mathcal{N}_\Phi(\Omega)} = 0$.
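The statement of Lemma 15.8 can be checked in a finite-dimensional surrogate setting. In the sketch below (an illustration, not part of the text), a symmetric positive definite matrix `G` stands in for the native space inner product; the claimed minimizer then beats every nearby value of the parameter:

```python
import numpy as np

# Finite-dimensional analogue of Lemma 15.8 (illustrative): with a symmetric
# positive definite matrix G playing the role of the (semi-)inner product,
# theta = (s,u)_G / |u|_G^2 minimizes E(t) = |s - t u|_G^2.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
G = B @ B.T + 5.0 * np.eye(5)          # symmetric positive definite

s = rng.standard_normal(5)
u = rng.standard_normal(5)

ip = lambda a, b: a @ G @ b            # surrogate inner product
E = lambda t: ip(s - t * u, s - t * u)

theta = ip(s, u) / ip(u, u)            # the minimizer from Lemma 15.8

# E is a parabola in t, so theta beats any other parameter value:
for t in np.linspace(theta - 1.0, theta + 1.0, 11):
    assert E(theta) <= E(t) + 1e-12
```

Note that the same computation shows the residual $s - \theta u$ is orthogonal to $u$ in the surrogate inner product, which is the geometric content of the lemma.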

We will apply this lemma to our situation by setting $s = s_{f,X} - s_{k-1}^{(j)}$ and $u = u_k$. The numbers involved can be computed explicitly. Let us assume that $u_k$ has the representation

$$u_k = \sum_{j \in L_k} \lambda_{kj} \Phi(\cdot, x_j) + p_k.$$

Then, elementary properties of the semi-inner product yield

$$(u_k, u_k)_{\mathcal{N}_\Phi(\Omega)} = \sum_{i \in L_k} \lambda_{ki} u_k(x_i) = \lambda_{kk}$$

and this is positive because $\{x_i : i \in L_k\}$ was assumed to be $\mathcal{P}$-unisolvent. Moreover, we have by the same arguments

$$\bigl( s_{f,X} - s_{k-1}^{(j)}, u_k \bigr)_{\mathcal{N}_\Phi(\Omega)} = \sum_{i \in L_k} \bigl( f_i - s_{k-1}^{(j)}(x_i) \bigr) \lambda_{ki}.$$

Hence, if we choose

$$\theta_k^{(j)} = \frac{1}{\lambda_{kk}} \sum_{i \in L_k} \bigl( f_i - s_{k-1}^{(j)}(x_i) \bigr) \lambda_{ki}$$

then Lemma 15.8 tells us that

$$\bigl| s_{f,X} - s_k^{(j)} \bigr|_{\mathcal{N}_\Phi(\Omega)} \le \bigl| s_{f,X} - s_{k-1}^{(j)} \bigr|_{\mathcal{N}_\Phi(\Omega)}$$

for $1 \le k \le N - q$.

After discussing the choice of $\theta_k^{(j)}$ we formulate the technique completely in Algorithm 14. We assume that the sets $L_k$ are already chosen and that the coefficients $\lambda_{kj}$ are known.

It is astonishing that even in this weak setting the method converges. But obviously the convergence rate depends on the choice of the sets $L_k$. In general, one should not expect too much.

Theorem 15.9 The sequence $s^{(j)}$ generated by Algorithm 14 converges in the native space (semi-)norm to the radial basis function interpolant $s_{f,X}$.

Proof We already know that $\bigl| s_{f,X} - s_k^{(j)} \bigr|_{\mathcal{N}_\Phi(\Omega)} \le \bigl| s_{f,X} - s_{k-1}^{(j)} \bigr|_{\mathcal{N}_\Phi(\Omega)}$ for all $1 \le k \le N - q$. Moreover, $\sigma^{(j)}$ interpolates $s_{f,X} - s_{N-q}^{(j)}$ in the remaining points. Hence, by Theorem 13.1, we find also that

$$\bigl| s_{f,X} - s^{(j+1)} \bigr|_{\mathcal{N}_\Phi(\Omega)} = \bigl| s_{f,X} - s_{N-q}^{(j)} - \sigma^{(j)} \bigr|_{\mathcal{N}_\Phi(\Omega)} \le \bigl| s_{f,X} - s_{N-q}^{(j)} \bigr|_{\mathcal{N}_\Phi(\Omega)},$$


Algorithm 14 Iterative method based on approximate Lagrange functions

Set $j = 0$ and $s^{(j)} \equiv s_0^{(j)} = 0$.
Compute the residuals $r_i^{(j)} = f_i - s^{(j)}(x_i)$, $1 \le i \le N$, and the maximum error $e = \max_i |r_i^{(j)}|$.
while $e > \epsilon$ do
  for $1 \le k \le N - q$ do
    Replace $s_{k-1}^{(j)}$ by $s_k^{(j)}$ according to
    $$s_k^{(j)} = s_{k-1}^{(j)} + \frac{1}{\lambda_{kk}} \sum_{i \in L_k} \lambda_{ki} \bigl( f_i - s_{k-1}^{(j)}(x_i) \bigr) u_k.$$
  Generate $s^{(j+1)} =: s_0^{(j+1)}$ by adding to $s_{N-q}^{(j)}$ the solution $\sigma^{(j)}$ of the interpolation problem
  $$\sigma^{(j)}(x_i) = f_i - s_{N-q}^{(j)}(x_i), \qquad N - q + 1 \le i \le N.$$
  Compute the new residuals $r^{(j+1)}$ and the new error $e$.
  Set $j = j + 1$.
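As a concrete illustration of Algorithm 14, the following sketch implements the special case of a positive definite kernel, i.e. $\mathcal{P} = \{0\}$ and $q = 0$, in which the interpolation step on the remaining points disappears and one iteration is a single sweep of the $N$ stages. The Gaussian kernel, the point set, the data, and the local-set size `n` are illustrative assumptions; the coefficients $\lambda_{kj}$ are obtained by solving the small local cardinal systems $u_k(x_i) = \delta_{ik}$, $i \in L_k$:

```python
import numpy as np

# Sketch of Algorithm 14 in the positive definite case P = {0}, q = 0
# (assumed here for simplicity): the second step vanishes and one iteration
# is one sweep over the N stages. Kernel, points, and data are illustrative.

def kernel(x, y):
    return np.exp(-(x[:, None] - y[None, :]) ** 2)    # Gaussian kernel

def iterate(x, f, n=4, tol=1e-9, max_sweeps=500):
    N = len(x)
    A = kernel(x, x)                                  # Phi(x_i, x_j)
    # Index sets L_k subset of {k, ..., N-1} with k in L_k (0-based here).
    L = [np.arange(k, min(k + n, N)) for k in range(N)]
    lam = []
    for k in range(N):
        e = np.zeros(len(L[k])); e[0] = 1.0           # cardinality at x_k
        lam.append(np.linalg.solve(A[np.ix_(L[k], L[k])], e))
    c = np.zeros(N)                                   # coefficients of s
    for _ in range(max_sweeps):
        for k in range(N):                            # the N stages
            r = f[L[k]] - A[L[k]] @ c                 # residuals on L_k
            theta = lam[k] @ r / lam[k][0]            # theta_k; lam[k][0] = lambda_kk
            c[L[k]] += theta * lam[k]                 # s <- s + theta_k u_k
        if np.max(np.abs(f - A @ c)) < tol:           # final step: residual check
            break
    return c

x = np.linspace(0.0, 7.0, 8)
f = np.sin(x)
c = iterate(x, f)
```

With these well-separated points the kernel matrix is well conditioned and the sweeps drive the residual to the tolerance, so `c` agrees with the coefficients of the direct interpolant $s_{f,X}$.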

showing that the sequence $\bigl\{ \bigl| s_{f,X} - s_k^{(j)} \bigr|_{\mathcal{N}_\Phi(\Omega)} \bigr\}_{j,k}$ is decreasing. Since obviously this sequence is also bounded from below, it has to converge, and we want to show that $s^{(j)} = s_0^{(j)}$ converges to $s_{f,X}$.

All the functions $s_k^{(j)}$ and also $s_{f,X}$ come from the finite-dimensional subspace

$$V_X = \left\{ \sum_{k=1}^N \alpha_k \Phi(\cdot, x_k) : \alpha_k \in \mathbb{R} \text{ with } \sum_{k=1}^N \alpha_k p(x_k) = 0 \text{ for all } p \in \mathcal{P} \right\} + \mathcal{P}.$$

Hence, all norms on $V_X$ are equivalent to the native space norm restricted to $V_X$ and it suffices to show convergence to zero in any of these norms. For this purpose we choose

$$\|s\|_{\ell_\infty(X)} := \max\{ |s(x_i)| : 1 \le i \le N \}, \qquad s \in V_X.$$

This means that we have to show that $s^{(j)}(x_i)$ tends to $s_{f,X}(x_i)$ for $1 \le i \le N$.

The choice of $\theta_k^{(j)}$ gives us