(1 + ‖·‖_2^2)^{−k}, k ∈ N, k > d/2. Suppose that Ω ⊆ R^d has a Lipschitz boundary. Then N_Φ(Ω) = H^k(Ω) with equivalent norms.

Proof  Every f ∈ N_Φ(Ω) has an extension Ef ∈ N_Φ(R^d) = H^k(R^d). Thus we have f = Ef|_Ω ∈ H^k(Ω) and

\[
\|f\|_{H^k(\Omega)} \le \|Ef\|_{H^k(\mathbb{R}^d)} \le c\,\|Ef\|_{\mathcal{N}_\Phi(\mathbb{R}^d)} = c\,\|f\|_{\mathcal{N}_\Phi(\Omega)}.
\]

However, for a region with Lipschitz boundary it is well known (see Brenner and Scott [31]) that every function f ∈ H^k(Ω) has an extension Ẽf ∈ H^k(R^d) = N_Φ(R^d) satisfying ‖Ẽf‖_{H^k(R^d)} ≤ C‖f‖_{H^k(Ω)}. Thus f = Ẽf|_Ω ∈ N_Φ(Ω) and

\[
\|f\|_{\mathcal{N}_\Phi(\Omega)} \le \|\tilde{E}f\|_{\mathcal{N}_\Phi(\mathbb{R}^d)} \le c\,\|\tilde{E}f\|_{H^k(\mathbb{R}^d)} \le c\,\|f\|_{H^k(\Omega)}.
\]

Note that we also have extensions for functions from N_Φ(Ω) for more general regions Ω, including regions with corners and even finite regions. This stands in sharp contrast with the Sobolev case, where regions exist that do not allow an extension to R^d. In this sense we have found another reason why native spaces are generalizations of classical Sobolev spaces.

10.8 Notes and comments

The concept of reproducing-kernel Hilbert spaces is well established in numerical analysis.

Apparently the first deep investigation goes back to Aronszajn [2] in 1950. Another good source is the book [129] by Meschkowski from 1962. More recently, in particular in the context of radial basis functions, the overview articles [167, 168] by Schaback and the

inventive work [112, 113] by Madych and Nelson have given much help in clarifying

the theory. Another valuable resource on native spaces is the diploma thesis [98] by Klein.

The results on the native spaces of compactly supported functions were initially given

by the present author [191, 192].

There is a huge number of publications on Beppo Levi spaces, which turn out to be the

native spaces for thin-plate splines. The interested reader might have a look at the work of


Duchon [47–49], of Meinguet [122–126], of Deny and Lions [45], and of Mizuta [135, 136].

Despite these numerous publications, the approach given in the present text seems to be

new.

The fact that the smoothness of a given kernel is inherited by its native space, as pointed out in Section 10.6, will be of some importance later on, when we try to solve partial

differential equations using radial basis functions. We will also see that the radial basis

function interpolant approximates not only the function but also its derivatives.

The extension theorems presented here are rather simple but sufficient in many situations.

But when the best approximation order has to be found, it seems that deeper results are

necessary. First steps in this direction can be found in Light and Vail [107].

11

Error estimates for radial basis function interpolation

The goal of this chapter is to derive error estimates for the interpolation process based on (conditionally) positive definite kernels. As in the case of classical univariate spline interpolation, it is possible to show that convergence takes place not only for the function itself but also for its derivatives. The error estimates are again expressed in terms of the fill distance

\[
h_{X,\Omega} = \sup_{x\in\Omega} \min_{x_j\in X} \|x - x_j\|_2,
\]

so that convergence is studied for h_{X,Ω} → 0. We will concentrate on error estimates for functions coming from the associated native space of the basis function of interest.
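The fill distance can be estimated numerically. The following is a minimal sketch (not from the text; the Monte Carlo centre set, the dense evaluation grid, and all names are illustrative assumptions) that approximates the sup over Ω = [0, 1]^2 by a sup over a fine grid:

```python
import numpy as np

# Approximate h_{X,Omega} for Omega = [0,1]^2: the sup over Omega is
# replaced by a max over a dense evaluation grid (an approximation; an
# exact computation would use the Voronoi diagram of X).
rng = np.random.default_rng(0)
X = rng.random((30, 2))                    # centres x_1, ..., x_N in [0,1]^2

g = np.linspace(0.0, 1.0, 200)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)

# min over x_j in X of ||x - x_j||_2, then (approximate) sup over x in Omega:
dists = np.linalg.norm(grid[:, None, :] - X[None, :, :], axis=-1)
h = dists.min(axis=1).max()
assert 0.0 < h < np.sqrt(2)                # sanity bound for the unit square
```

The grid-based estimate is a lower bound on the true fill distance and converges to it as the grid is refined.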

11.1 Power function and first estimates

In this section we will be concerned with estimating the difference or error between an (unknown) function f coming from the native Hilbert space N_Φ(Ω) of a (conditionally) positive definite kernel Φ and its interpolant s_{f,X}. Once again, we will assume the kernel to be real-valued and symmetric throughout the entire chapter. The starting point for error estimates is to rewrite the interpolant in its Lagrangian form. To this end we use the following notation. Let A = (Φ(x_i, x_j)) ∈ R^{N×N} and P = (p_j(x_i)) ∈ R^{N×Q}, where p_1, ..., p_Q form a basis of P. Furthermore, let R(x) = (Φ(x, x_1), ..., Φ(x, x_N))^T ∈ R^N and S(x) = (p_1(x), ..., p_Q(x))^T ∈ R^Q. Finally, let e^{(j)} ∈ R^N denote the jth unit vector.

If X = {x_1, ..., x_N} is P-unisolvent then the linear system

\[
\begin{pmatrix} A & P \\ P^T & 0 \end{pmatrix}
\begin{pmatrix} \alpha^{(j)} \\ \beta^{(j)} \end{pmatrix}
=
\begin{pmatrix} e^{(j)} \\ 0 \end{pmatrix}
\]

is uniquely solvable. The associated functions

\[
u_j^* = \sum_{i=1}^{N} \alpha_i^{(j)} \Phi(\cdot, x_i) + \sum_{k=1}^{Q} \beta_k^{(j)} p_k
\]

obviously satisfy u_j^*(x_i) = δ_{ij} and belong to the space

\[
V_X := P + \Bigl\{ \sum_{j=1}^{N} \alpha_j \Phi(\cdot, x_j) : \sum_{j=1}^{N} \alpha_j p(x_j) = 0 \text{ for all } p \in P \Bigr\}.
\]
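The construction above can be carried out numerically. As a concrete sketch (an illustration, not the book's code), take the one-dimensional cubic kernel Φ(x, y) = |x − y|^3, which is conditionally positive definite with respect to P = span{1, x}, so Q = 2; solving the block system for all right-hand sides at once yields the cardinal functions u_j^*:

```python
import numpy as np

# Illustrative 1-D example: cardinal functions for the cubic kernel
# phi(x, y) = |x - y|^3 with P = span{1, x} (Q = 2).
def cardinal_functions(X, x):
    """Solve the block system of (11.1) for u*(x), v*(x) at points x."""
    N, Q = len(X), 2
    A = np.abs(X[:, None] - X[None, :]) ** 3          # A_ij = phi(x_i, x_j)
    P = np.stack([np.ones(N), X], axis=1)             # P_ij = p_j(x_i)
    M = np.block([[A, P], [P.T, np.zeros((Q, Q))]])   # saddle-point matrix
    R = np.abs(x[:, None] - X[None, :]) ** 3          # rows R(x)^T
    S = np.stack([np.ones(len(x)), x], axis=1)        # rows S(x)^T
    sol = np.linalg.solve(M, np.concatenate([R, S], axis=1).T)
    return sol[:N].T, sol[N:].T                       # u*(x), v*(x)

X = np.array([0.0, 0.3, 0.5, 0.8, 1.0])
u, v = cardinal_functions(X, X)        # evaluate at the centres themselves
# Cardinality u_j*(x_k) = delta_jk, and the property v_k*(x_j) = 0:
assert np.allclose(u, np.eye(len(X)), atol=1e-8)
assert np.allclose(v, 0.0, atol=1e-8)
```

At x = x_k the right-hand side of the block system is exactly the kth column of the saddle-point matrix, which is why the solution is (e^{(k)}; 0) and the assertions hold.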



Since every function f from V_X is uniquely determined by f|_X, we must have f = Σ_{j=1}^N f(x_j) u_j^* for such a function. This gives the first part of the following theorem.

Theorem 11.1  Suppose that Φ is a conditionally positive definite kernel with respect to P on Ω ⊆ R^d. Suppose that X = {x_1, ..., x_N} ⊆ Ω is P-unisolvent. Then there exist functions u_j^* ∈ V_X such that u_j^*(x_k) = δ_{jk}. Moreover, there exist functions v_j^*, 1 ≤ j ≤ Q, such that

\[
\begin{pmatrix} A & P \\ P^T & 0 \end{pmatrix}
\begin{pmatrix} u^*(x) \\ v^*(x) \end{pmatrix}
=
\begin{pmatrix} R(x) \\ S(x) \end{pmatrix}. \tag{11.1}
\]

Proof  It remains to prove the existence of v^*(x), so that u^*(x) and v^*(x) together satisfy (11.1). Since P ⊆ V_X, we must have p = Σ_j p(x_j) u_j^* for all p ∈ P, or equivalently P^T u^*(x) = S(x). Hence we are left with showing that Au^*(x) − R(x) lies in the range P(R^Q) of the matrix P, because this guarantees the existence of v^*(x). As (11.1) has a unique solution, this finishes the proof. Since the orthogonal complement of P(R^Q) is given by the null space of P^T, it suffices to show that γ^T(Au^*(x) − R(x)) = 0 for all γ ∈ R^N with P^T γ = 0. But P^T γ = 0 means that γ is admissible, i.e. γ^T R(·) ∈ V_X. This means in particular that

\[
\gamma^T R(x) = \sum_{j=1}^{N} u_j^*(x)\, \gamma^T R(x_j) = \sum_{j=1}^{N} u_j^*(x) \sum_{i=1}^{N} \gamma_i \Phi(x_i, x_j) = \gamma^T A u^*(x)
\]

or, equivalently, that γ^T(Au^*(x) − R(x)) = 0.

Note that the functions v_k^* have the remarkable property v_k^*(x_j) = 0. As a consequence of Theorem 11.1, we are now able to rewrite an interpolant as

\[
s_{f,X}(x) = \sum_{j=1}^{N} f(x_j)\, u_j^*(x), \tag{11.2}
\]

which will be very useful later on. Furthermore, we see that the function s_{f,X} is as smooth as the functions u_j^*, and these functions inherit via (11.1) the smoothness of Φ with respect to the first argument and that of P. Thus if Φ is in C^k with respect to the first argument and P ⊆ C^k(Ω) then so is s_{f,X}. Of course, this also follows immediately from the standard representation of s_{f,X}.
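The equivalence of the Lagrange form (11.2) and the standard representation can be checked numerically. A hedged sketch (the kernel, data, and all names are illustrative choices, not the book's), again for the 1-D cubic kernel Φ(x, y) = |x − y|^3 with P = span{1, x}:

```python
import numpy as np

# Check (11.2): sum_j f(x_j) u_j*(x) agrees with the standard
# representation of s_{f,X} built from the coefficients (alpha, beta).
phi = lambda x, y: np.abs(x[:, None] - y[None, :]) ** 3

X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
f = np.sin(2 * np.pi * X)                         # data values f(x_j)
N, Q = len(X), 2
P = np.stack([np.ones(N), X], axis=1)
M = np.block([[phi(X, X), P], [P.T, np.zeros((Q, Q))]])

x = np.linspace(0.0, 1.0, 101)                    # evaluation points

# Standard representation: solve once for (alpha, beta) from the data.
ab = np.linalg.solve(M, np.concatenate([f, np.zeros(Q)]))
s_standard = phi(x, X) @ ab[:N] + np.stack([np.ones_like(x), x], axis=1) @ ab[N:]

# Lagrange form: solve the block system (11.1) for u*(x).
rhs = np.concatenate([phi(x, X), np.stack([np.ones_like(x), x], axis=1)], axis=1).T
u = np.linalg.solve(M, rhs)[:N].T                 # u[m, j] = u_j*(x_m)
s_lagrange = u @ f

assert np.allclose(s_standard, s_lagrange, atol=1e-6)
```

The two forms agree exactly in theory: since the saddle-point matrix is symmetric, f^T u^*(x) = (f; 0)^T M^{-1} (R(x); S(x)) = (α; β)^T (R(x); S(x)), which is the standard representation.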