$$|f^{(\ell)}(r)| = |\beta(\beta-1)\cdots(\beta-\ell+1)|\,(c^2+r)^{\beta-\ell} \le \ell!\, M^\ell$$

provided ℓ ≥ β. Hence, (inverse) multiquadrics satisfy the first assumption on f, which

leads to error bounds of the form (11.10).

11.5 Improved error estimates

After establishing the basic theory on error estimates for interpolation by radial basis functions, we now turn to the question of how the previously derived approximation orders can be improved. There are at least two ways. The first way restricts the space of functions to be interpolated by assuming more smoothness than the native space provides. This is rather

192 Error estimates for radial basis function interpolation

natural, and the reader might think of the theory of univariate splines. The second way weakens the norm in which the error is estimated. Replacing the L_∞-norm by a weaker L_p-norm should result in a better approximation order. The methods we have in mind lead to an algebraic improvement of this order; hence they are interesting only in the case of basis kernels that have finite smoothness. For basis functions such as multiquadrics and Gaussians, where the convergence order is spectral, they are almost pointless. The latter type of improvement will be discussed in the next section.

The basic idea of the first improvement technique is the following. If f ∈ N_Φ(Ω) is given and s_{f,X} denotes its interpolant, then the function g := f − s_{f,X} is also a member of the native space. Moreover, since it vanishes on X, it has the zero function as its unique interpolant. This means that the standard error estimate from Theorem 11.4 becomes

$$|f(x) - s_{f,X}(x)| \le P_{\Phi,X}(x)\, |f - s_{f,X}|_{\mathcal{N}_\Phi(\Omega)} \tag{11.12}$$

and we will squeeze additional powers of h_{X,Ω} out of |f − s_{f,X}|_{N_Φ(Ω)} by means of certain assumptions on f.
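To see where (11.12) comes from, one can apply the generic estimate of Theorem 11.4 to g instead of f; a sketch of this one-line argument:

```latex
% g := f - s_{f,X} vanishes on X, so its unique interpolant is s_{g,X} = 0.
% Applying the standard estimate of Theorem 11.4 to g therefore gives
\begin{align*}
|f(x) - s_{f,X}(x)| &= |g(x) - s_{g,X}(x)|\\
&\le P_{\Phi,X}(x)\, |g|_{\mathcal{N}_\Phi(\Omega)}
 = P_{\Phi,X}(x)\, |f - s_{f,X}|_{\mathcal{N}_\Phi(\Omega)}.
\end{align*}
```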

The easiest way to achieve this is for positive definite kernels Φ : Ω × Ω → R defined on a compact set Ω ⊆ R^d. In (10.7) we introduced the integral operator

$$T v(x) = \int_\Omega \Phi(x, y)\, v(y)\, dy,$$

which maps L_2(Ω)-functions to functions from the native space. In Corollary 10.30 we described its range in terms of its eigenfunctions, when considered as an operator from L_2(Ω) to L_2(Ω).
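For orientation, if (λ_n, φ_n) denote the eigenvalues and L_2(Ω)-orthonormal eigenfunctions of T, a Mercer-type expansion describes the situation schematically (this is a standard picture; the precise statement is Corollary 10.30):

```latex
% Mercer expansion of the kernel and the resulting diagonal form of T:
\Phi(x,y) = \sum_{n} \lambda_n\, \varphi_n(x)\,\varphi_n(y), \qquad
T v = \sum_{n} \lambda_n\, (v, \varphi_n)_{L_2(\Omega)}\, \varphi_n.
```

In this picture, f ∈ T(L_2(Ω)) means that the coefficients (f, φ_n)_{L_2(Ω)} decay fast enough that T^{−1}f = Σ_n λ_n^{−1}(f, φ_n)_{L_2(Ω)} φ_n is again an L_2(Ω)-function.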

Theorem 11.23 Suppose that Φ is a symmetric positive definite kernel on a compact set Ω ⊆ R^d. Then for every f ∈ T(L_2(Ω)) we have

$$|f(x) - s_{f,X}(x)| \le P_{\Phi,X}(x)\, \|P_{\Phi,X}\|_{L_2(\Omega)}\, \|T^{-1} f\|_{L_2(\Omega)}, \qquad x \in \Omega.$$

Proof Let f = Tv with v ∈ L_2(Ω). Taking the L_2(Ω)-norm of (11.12) yields

$$\|f - s_{f,X}\|_{L_2(\Omega)} \le \|P_{\Phi,X}\|_{L_2(\Omega)}\, |f - s_{f,X}|_{\mathcal{N}_\Phi(\Omega)}.$$

Using the orthogonality relation from Lemma 10.24 together with (10.8) leads to

\begin{align*}
|f - s_{f,X}|^2_{\mathcal{N}_\Phi(\Omega)} &= (f - s_{f,X}, f)_{\mathcal{N}_\Phi(\Omega)} \\
&= (f - s_{f,X}, Tv)_{\mathcal{N}_\Phi(\Omega)} \\
&= (f - s_{f,X}, v)_{L_2(\Omega)} \\
&\le \|f - s_{f,X}\|_{L_2(\Omega)}\, \|v\|_{L_2(\Omega)} \\
&\le \|P_{\Phi,X}\|_{L_2(\Omega)}\, |f - s_{f,X}|_{\mathcal{N}_\Phi(\Omega)}\, \|v\|_{L_2(\Omega)}.
\end{align*}

Canceling one factor |f − s_{f,X}|_{N_Φ(Ω)} and inserting the result back into (11.12) proves the result.


This result means, for example, in the case of the compactly supported functions φ_{d,k}, that the L_∞-order can be improved from k + 1/2 to 2k + 1, provided that the functions come from the restricted space. But techniques that we will learn soon allow an additional improvement to 2k + 1 + d/2.
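The doubling of the order can be read off directly from Theorem 11.23: if the power function satisfies a bound of the form P_{Φ,X}(x) ≤ C h_{X,Ω}^{k+1/2}, as it does for φ_{d,k}, then, schematically,

```latex
% Each of the two power-function factors contributes one h^{k+1/2}:
P_{\Phi,X}(x)\,\|P_{\Phi,X}\|_{L_2(\Omega)}
  \le C h_{X,\Omega}^{k+1/2} \cdot C h_{X,\Omega}^{k+1/2}\,\mathrm{vol}(\Omega)^{1/2}
  = C^2\,\mathrm{vol}(\Omega)^{1/2}\, h_{X,\Omega}^{2k+1}.
```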

Instead of treating the general case of conditionally positive definite kernels, we will discuss only the most important example, namely the thin-plate splines φ_{d,ℓ} defined in (10.11). This time, the approximated functions come from the Sobolev space H^{2ℓ}(R^d), which is the intersection of all Beppo Levi spaces BL_k(R^d) with k ≤ 2ℓ.

Theorem 11.24 Suppose that Φ = φ_{d,ℓ} denotes the thin-plate spline with ℓ > d/2, considered as being conditionally positive definite of order ℓ. Let Ω ⊆ R^d be bounded and satisfy an interior cone condition. Then for every f ∈ H^{2ℓ}(R^d) with support in Ω we have

\begin{align*}
|f(x) - s_{f,X}(x)| &\le P_{\Phi,X}(x)\, \|P_{\Phi,X}\|_{L_2(\Omega)}\, \|\Delta^\ell f\|_{L_2(\Omega)}, \qquad x \in \Omega, \\
&\le C h_{X,\Omega}^{2\ell - d}\, \|\Delta^\ell f\|_{L_2(\Omega)},
\end{align*}

where the last inequality holds for all sufficiently dense sets X.
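The exponent 2ℓ − d arises in the same way as before: for the thin-plate spline of order ℓ the power function obeys, for sufficiently dense X, a bound of the form P_{Φ,X}(x) ≤ C h_{X,Ω}^{ℓ−d/2}, so that, schematically,

```latex
% Two power-function factors, each of order h^{\ell - d/2}:
P_{\Phi,X}(x)\,\|P_{\Phi,X}\|_{L_2(\Omega)}
  \le C h_{X,\Omega}^{\ell-d/2} \cdot C h_{X,\Omega}^{\ell-d/2}\,\mathrm{vol}(\Omega)^{1/2}
  = \widetilde{C}\, h_{X,\Omega}^{2\ell-d}.
```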

Proof The proof is based on the same ideas as the previous one. We only have to replace the estimate on the native space norm. Remember that if we use the thin-plate splines as conditionally positive definite functions of order ℓ, their native space is the Beppo Levi space BL_ℓ(R^d). This time we use

\begin{align*}
|f - s_{f,X}|^2_{BL_\ell(\mathbb{R}^d)} &= (f - s_{f,X}, f)_{BL_\ell(\mathbb{R}^d)} \\
&= \sum_{|\alpha| = \ell} \frac{\ell!}{\alpha!} \int_{\mathbb{R}^d} D^\alpha (f - s_{f,X})(x)\, D^\alpha f(x)\, dx \\
&= (-1)^\ell \sum_{|\alpha| = \ell} \frac{\ell!}{\alpha!} \int_{\mathbb{R}^d} (f - s_{f,X})(x)\, D^{2\alpha} f(x)\, dx \\
&= (-1)^\ell \int_{\mathbb{R}^d} (f - s_{f,X})(x)\, \Delta^\ell f(x)\, dx \\
&= (-1)^\ell \int_\Omega (f - s_{f,X})(x)\, \Delta^\ell f(x)\, dx,
\end{align*}

so that

$$|f - s_{f,X}|^2_{BL_\ell(\mathbb{R}^d)} \le \|f - s_{f,X}\|_{L_2(\Omega)}\, \|\Delta^\ell f\|_{L_2(\Omega)}.$$

BL

The partial integration we have just carried out can be justified using density arguments similar to those employed in the proof of Theorem 10.40. Remember that f is compactly supported with support in Ω.
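As a small numerical illustration of the qualitative behavior (a hypothetical experiment, not from the text: it uses SciPy's thin-plate-spline interpolant, random centres in the unit square, and a smooth test function; it only illustrates that the error shrinks as the fill distance decreases, not the precise rate h^{2ℓ−d}):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Smooth test function on the unit square (hypothetical choice).
def f(p):
    return np.sin(np.pi * p[:, 0]) * np.sin(np.pi * p[:, 1])

def max_error(n):
    # n scattered centres X; denser X means a smaller fill distance h_{X,Omega}.
    X = rng.uniform(0.0, 1.0, size=(n, 2))
    s = RBFInterpolator(X, f(X), kernel="thin_plate_spline")
    # Measure the worst-case error on a fixed evaluation grid inside Omega.
    g = np.linspace(0.05, 0.95, 40)
    P = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    return float(np.max(np.abs(s(P) - f(P))))

errors = [max_error(n) for n in (50, 200, 800)]
print(errors)  # the error decreases as the centres fill the square
```

Note that SciPy's thin-plate-spline interpolant automatically appends the linear polynomial part, matching the conditionally-positive-definite setting of order 2 in R².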

This result can further be improved if the L_2(Ω)-error is estimated more dexterously. To this end a localization is necessary, which we will describe in the next section in a more general setting.


11.6 Sobolev bounds for functions with scattered zeros

Suppose that the native space N_Φ(Ω) of a radial basis function Φ is a Sobolev space H^k(Ω), or that it is continuously embedded into such a Sobolev space. In this situation, we can derive more accurate error estimates than before. These estimates use the whole range of L_p-norms and do not need the power function approach at all. They are based only upon the fact that the error u := f − s_{f,X} is a function from the Sobolev space which has zeros at X and which is, by Corollary 10.25, bounded in the native space norm by the native space norm of f.

Since these results hold in a more general setting, we first introduce for 1 ≤ p ≤ ∞ the Sobolev space W_p^k(Ω) as the set of all functions f ∈ L_p(Ω) having weak derivatives D^α f