Theorem 11.21 Let $\Omega = W(x_0, R)$ be a cube in $\mathbb{R}^d$. There exist constants $c_0, c_2 > 0$ depending only on $\ell$ such that for every $N \in \mathbb{N}$ and every $X = \{x_1, \ldots, x_N\} \subseteq \Omega$ with $h_{X,\Omega} \le c_0/\ell$ we can find functions $u_j : \Omega \to \mathbb{R}$ satisfying

(1) $\sum_{j=1}^{N} u_j(x)\, p(x_j) = p(x)$ for all $x \in \Omega$ and all $p \in \pi_\ell(\mathbb{R}^d)$,

(2) $\sum_{j=1}^{N} |u_j(x)| \le e^{2d\gamma_d(\ell+1)}$ for all $x \in \Omega$,

(3) $u_j(x) = 0$ if $\|x - x_j\|_2 > c_2\, h_{X,\Omega}$.

The constant $\gamma_d$ is defined in Proposition 11.20.

Proof Let $q := \gamma_d(\ell+1)$ and $h := h_{X,\Omega}$. Define

$$h_0 := \frac{2R}{3\gamma_d(\ell+1)} = \frac{2R}{3q} =: \frac{c_0}{\ell}.$$

If $h \le h_0$ then we can find for every $x \in \Omega$ a cube $W_x$ of side length $3hq$ that is completely contained in $\Omega$ and has $x$ as one of its corners. This cube can be divided into $q^d$ subcubes of side length $3h$. Hence, the interior of each of these subcubes contains a ball of radius $h$, namely the ball centered at the center of the subcube. But this means that each of the subcubes contains at least one point from $X$. Proposition 11.20 gives, with $Y = X \cap W_x = \{y_1, \ldots, y_M\}$,

$$\|p\|_{L_\infty(W_x)} \le e^{2d\gamma_d(\ell+1)}\, \|p\|_{L_\infty(Y)}, \qquad p \in \pi_\ell(\mathbb{R}^d).$$

This allows us to invoke Theorem 3.4 to find functions $u_j^x : W_x \to \mathbb{R}$, $1 \le j \le M$, such that

$$\sum_{j=1}^{M} u_j^x(y)\, p(y_j) = p(y), \qquad y \in W_x, \quad p \in \pi_\ell(\mathbb{R}^d),$$

and

$$\sum_{j=1}^{M} |u_j^x(y)| \le e^{2dq}, \qquad y \in W_x.$$

Hence, if we define the functions $u_k : \Omega \to \mathbb{R}$, $1 \le k \le N$, by

$$u_k(x) := \begin{cases} u_j^x(x) & \text{if } x_k = y_j, \\ 0 & \text{otherwise}, \end{cases}$$


we see that the first two stated properties are satisfied. For the third property we only have to remark that $u_k(x) \neq 0$ means that $x_k = y_j$ for a certain index $j$. But since $x$ and $y_j$ both lie in the cube $W_x$, their separation is at most $\sqrt{d}\, 3qh =: c_2 h$.
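The existence of such reproducing functions can be illustrated numerically. The sketch below is not the constructive proof above; it simply builds, in $d = 1$ and for arbitrarily chosen centers, the minimal-norm weights solving the reproduction conditions of property (1), and checks that a degree-$\ell$ polynomial is reproduced exactly. The function and variable names are ours.

```python
import numpy as np

# Sketch: polynomial reproduction weights in d = 1 on the interval [0, 1].
# Given centers x_j and a degree ell, we build weights u_j(x) with
#   sum_j u_j(x) p(x_j) = p(x)   for all polynomials p of degree <= ell,
# via the minimal-norm solution of the underdetermined moment system.
# This illustrates property (1) only, not the localized construction (3).

rng = np.random.default_rng(0)
ell = 2                                          # degree to reproduce
centers = np.sort(rng.uniform(0.0, 1.0, 12))     # X = {x_1, ..., x_N}

def weights(x, centers, ell):
    """Minimal-norm u(x) solving V u = (1, x, ..., x^ell)^T,
    where V[k, j] = x_j^k is the moment (Vandermonde) matrix."""
    V = np.vander(centers, ell + 1, increasing=True).T   # (ell+1) x N
    rhs = np.array([x ** k for k in range(ell + 1)])
    u, *_ = np.linalg.lstsq(V, rhs, rcond=None)          # min-norm solution
    return u

# Check: exact reproduction of a degree-ell polynomial at a sample point.
p = lambda t: 3.0 - 2.0 * t + 0.5 * t ** 2
x = 0.37
u = weights(x, centers, ell)
reproduced = float(u @ p(centers))
print(abs(reproduced - p(x)))   # ~ machine precision
```

Unlike the $u_j$ of Theorem 11.21, these minimal-norm weights are neither uniformly bounded independently of $N$ nor compactly supported; the cube-subdivision argument above is exactly what buys those two extra properties.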

With this result at hand it is easy to establish spectral convergence for Gaussians and

multiquadric-like functions.

Theorem 11.22 Let $\Omega$ be a cube in $\mathbb{R}^d$. Suppose that $\Phi = \phi(\|\cdot\|_2)$ is a conditionally positive definite function such that $f := \phi(\sqrt{\cdot})$ satisfies $|f^{(\ell)}(r)| \le \ell!\, M^\ell$ for all integers $\ell \ge 0$ and all $r \in [0, \infty)$, where $M > 0$ is a fixed constant. Then there exists a constant $c > 0$ such that the error between a function $f \in \mathcal{N}_\Phi(\Omega)$ and its interpolant $s_{f,X}$ can be bounded by

$$\|f - s_{f,X}\|_{L_\infty(\Omega)} \le e^{-c/h_{X,\Omega}}\, |f|_{\mathcal{N}_\Phi(\Omega)} \qquad (11.10)$$

for all data sites $X$ with sufficiently small $h_{X,\Omega}$.

Moreover, if $f$ even satisfies $|f^{(\ell)}(r)| \le M^\ell$, the error bound can be improved to

$$\|f - s_{f,X}\|_{L_\infty(\Omega)} \le e^{c \log(h_{X,\Omega})/h_{X,\Omega}}\, |f|_{\mathcal{N}_\Phi(\Omega)}, \qquad (11.11)$$

whenever $h_{X,\Omega}$ is sufficiently small.

Proof From the estimate (11.6) we have the bound

$$P_{\Phi,X}^2(x) \le [1 + c_1(2n)]^2\, \|f - p\|_{L_\infty[0,\, 4[c_2(2n)]^2 h^2]}$$

for $x \in \Omega$, if $h = h_{X,\Omega}$ and $p \in \pi_n(\mathbb{R})$. We have already indicated that the constants $c_1, c_2$ from the local polynomial reproduction depend on $\ell = 2n$. If $h \le c_0/(2n)$ then we can use the constants $c_1(2n) = e^{2d\gamma_d(2n+1)}$ and $c_2(2n) = 2c_2 n$ from Theorem 11.21. Moreover, if we take $p$ to be the Taylor polynomial of $f$ around zero of degree $n$, then the first assumption made on $f$ gives

$$|f(t) - p(t)| \le \frac{|f^{(n+1)}(\xi)|}{(n+1)!}\, t^{n+1} \le (Mt)^{n+1}.$$
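This Taylor-remainder bound is easy to check numerically for a concrete instance. The sketch below uses $f(r) = e^{-\alpha r}$ (the Gaussian case treated later, where $M = \alpha$); the values of $\alpha$, $t$, and $n$ are arbitrary choices for illustration.

```python
import math

# Check |f(t) - p_n(t)| <= (M t)^(n+1) for f(r) = exp(-alpha * r),
# where p_n is the degree-n Taylor polynomial of f around 0 and M = alpha.
alpha, t, n = 2.0, 0.3, 4
f_val = math.exp(-alpha * t)
p_n = sum((-alpha * t) ** k / math.factorial(k) for k in range(n + 1))
remainder = abs(f_val - p_n)
bound = (alpha * t) ** (n + 1)
print(remainder <= bound)   # True
```

The remainder is in fact smaller than the bound by the factor $(n+1)!$, which is exactly the slack the second half of the proof exploits via Stirling's formula.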

Hence, we can further estimate

$$\begin{aligned} P_{\Phi,X}^2(x) &\le \bigl[1 + e^{2d\gamma_d(2n+1)}\bigr]^2\, \|f - p\|_{L_\infty[0,\, 16 c_2^2 n^2 h^2]} \\ &\le 4 e^{4d\gamma_d(2n+1)} (C_1 n^2 h^2)^{n+1} \\ &\le e^{\log 4 + 4d\gamma_d(2n+1)} (C_1 n^2 h^2)^{n+1} \\ &\le e^{C_2(n+1)} (C_1 n^2 h^2)^{n+1} \\ &= (C_3 n^2 h^2)^{n+1}, \end{aligned}$$

where the $C_i$ denote appropriate constants. The latter estimate holds, of course, uniformly for $x \in \Omega$. Next, define $c_4 := \min\{c_0/2,\, 1/\sqrt{eC_3}\}$. Then we choose $n$ so that $c_4/(n+1) \le h \le c_4/n$, which gives

$$P_{\Phi,X}^2(x) \le e^{-(n+1)} \le e^{-c_4/h}.$$


This establishes (11.10) with $c = c_4/2$. To see (11.11) we only have to remark that the second assumption on $f$ now leads to

$$P_{\Phi,X}^2(x) \le \frac{(C_3 n^2 h^2)^{n+1}}{(n+1)!}.$$

Stirling's formula from Proposition 5.2 yields

$$\frac{1}{(n+1)!} \le \left(\frac{e}{n+1}\right)^{n+1},$$

so that

$$P_{\Phi,X}^2(x) \le (eC_3 n h^2)^{n+1}.$$

Hence, if we now define $c_4 := \min\{c_0/2,\, 1/(eC_3)\}$ and choose $n$ in the same manner as before, we see that

$$P_{\Phi,X}^2(x) \le h^{n+1} \le h^{c_4/h},$$

which finishes the proof in this case.
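The Stirling-type inequality used above, $1/(n+1)! \le (e/(n+1))^{n+1}$, follows from $(n+1)! \ge ((n+1)/e)^{n+1}$ and can be verified directly over a range of $n$:

```python
import math

# Verify 1/(n+1)! <= (e/(n+1))^(n+1) for n = 0, 1, ..., 49.
# Equivalent to the elementary bound (n+1)! >= ((n+1)/e)^(n+1).
ok = all(
    1.0 / math.factorial(n + 1) <= (math.e / (n + 1)) ** (n + 1)
    for n in range(50)
)
print(ok)   # True
```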

It is easy to apply this theorem to Gaussians and multiquadrics. For example, if $\phi(r) = e^{-\alpha r^2}$, $\alpha > 0$, we have $f(r) = e^{-\alpha r}$ and $f^{(\ell)}(r) = (-1)^\ell \alpha^\ell e^{-\alpha r}$, $\ell \in \mathbb{N}_0$, so that the second assumption on $f$ holds with $M = \alpha$ in this case. Thus, for Gaussians we have error bounds of the form (11.11).
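The rapid decay predicted by (11.11) is visible even in a crude one-dimensional experiment. The sketch below interpolates a smooth function with Gaussians on increasingly fine equispaced sets and reports the maximum error on a fine grid; the shape parameter, target function, and point counts are arbitrary illustrative choices, and a least-squares solve is used because the Gaussian kernel matrix becomes ill conditioned as the fill distance shrinks.

```python
import numpy as np

# Illustration of spectral convergence for phi(r) = exp(-alpha * r^2):
# interpolate a smooth target on [0, 1] at N equispaced sites and
# measure the maximum error on a fine grid as the fill distance shrinks.
alpha = 25.0
target = lambda x: np.sin(2 * np.pi * x)
x_fine = np.linspace(0.0, 1.0, 2001)

errors = []
for N in (6, 12, 24):
    x_data = np.linspace(0.0, 1.0, N)            # data sites X
    A = np.exp(-alpha * (x_data[:, None] - x_data[None, :]) ** 2)
    c, *_ = np.linalg.lstsq(A, target(x_data), rcond=None)
    s = np.exp(-alpha * (x_fine[:, None] - x_data[None, :]) ** 2) @ c
    errors.append(float(np.max(np.abs(s - target(x_fine)))))

print(errors)   # errors shrink rapidly as h decreases
```

Halving the fill distance should roughly square the error, in line with the $e^{c \log(h)/h}$ behavior of (11.11), until floating-point conditioning takes over.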

In the case $\phi(r) = (1 + r^2)^\beta$ with $\beta < 0$, or $\beta > 0$ and $\beta \notin \mathbb{N}$, we have $f(r) = (1 + r)^\beta$, so that

$$f^{(\ell)}(r) = \beta(\beta - 1) \cdots (\beta - \ell + 1)(1 + r)^{\beta - \ell}.$$

From

$$\frac{|\beta - j|}{j + 1} = \frac{|j + 1 - (\beta + 1)|}{j + 1} \le 1 + \frac{|\beta + 1|}{j + 1} \le 1 + |\beta + 1| =: M$$

we can conclude that