
Finally, since $D_2^\alpha G(\cdot, x) \in \mathcal{N}_\Phi(\Omega) \subseteq C^k(\Omega)$, Theorem 10.45 allows the representation

$$D_1^\alpha D_2^\alpha G(y, x) = D^\alpha\bigl(\Pi_P D_2^\alpha G(\cdot, x)\bigr)(y) + \bigl(D_2^\alpha G(\cdot, x), D_2^\alpha G(\cdot, y)\bigr)_{\mathcal{N}_\Phi(\Omega)}.$$

Hence, after some manipulation we get

$$\begin{aligned}
\bigl(D_2^\alpha G(\cdot, x), D_2^\alpha G(\cdot, x)\bigr)_{\mathcal{N}_\Phi(\Omega)}
&= D_1^\alpha D_2^\alpha \Phi(x, x) - \sum_{n=1}^{Q} D^\alpha p_n(x)\, D_1^\alpha \Phi(x, \xi_n) - \sum_{\ell=1}^{Q} D^\alpha p_\ell(x)\, D_2^\alpha \Phi(\xi_\ell, x) \\
&\quad + \sum_{n,\ell=1}^{Q} D^\alpha p_\ell(x)\, D^\alpha p_n(x)\, \Phi(\xi_\ell, \xi_n).
\end{aligned}$$

Summing up all the terms and using $D_1^\alpha \Phi(x, y) = D_2^\alpha \Phi(y, x)$, we derive the stated result. $\square$

The reader should have noticed that the technical intricacy in the proof of the preceding lemma was caused by two things. If, however, one uses a positive definite kernel and is only interested in the error of the pure function and not its derivatives, one immediately has the representation

$$Q(u) = \left\| \Phi(\cdot, x) - \sum_{j=1}^{N} u_j \Phi(\cdot, x_j) \right\|_{\mathcal{N}_\Phi(\Omega)}^2 = \Phi(x, x) - 2 \sum_{j=1}^{N} u_j \Phi(x, x_j) + \sum_{i,j=1}^{N} u_i u_j \Phi(x_i, x_j).$$
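This quadratic form is easy to check numerically. The sketch below is illustrative only: the Gaussian kernel $\Phi(x, y) = e^{-|x-y|^2}$ and the centers are assumptions, not mandated by the text. Since the Gaussian is positive definite (no side conditions, $P = \{0\}$), the unconstrained minimizer of $Q$ is $u^*(x) = A^{-1} R(x)$, and $Q(u^*(x))$ is the squared power function.

```python
import numpy as np

# Illustrative sketch, not from the text: the Gaussian kernel
# Phi(x, y) = exp(-|x - y|^2) and the centers below are assumptions.
def phi(x, y):
    return np.exp(-np.subtract.outer(x, y) ** 2)

X = np.array([0.0, 0.4, 1.0, 1.7])       # centers x_1, ..., x_N (assumed)
x = 0.75                                 # evaluation point

A = phi(X, X)                            # A_ij = Phi(x_i, x_j)
R = phi(X, np.array([x]))[:, 0]          # R_j  = Phi(x, x_j)

def Q(u):
    # Q(u) = Phi(x,x) - 2 sum_j u_j Phi(x, x_j) + sum_ij u_i u_j Phi(x_i, x_j)
    return 1.0 - 2.0 * (u @ R) + u @ A @ u   # Phi(x, x) = 1 for the Gaussian

u_star = np.linalg.solve(A, R)           # minimizer of Q (P is trivial here)
p2 = Q(u_star)                           # squared power function at x
```

Perturbing `u_star` in any direction increases `Q`, consistent with the minimization property established below.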

After this preparatory step it is easy to show how the power function is involved in finding error estimates for our approximation scheme.

Theorem 11.4 Let $\Omega \subseteq \mathbb{R}^d$ be open. Suppose that $\Phi \in C^{2k}(\Omega \times \Omega)$ is a conditionally positive definite kernel on $\Omega$ with respect to $P \subseteq C^k(\Omega)$. Suppose further that $X = \{x_1, \ldots, x_N\} \subseteq \Omega$ is $P$-unisolvent. Denote the interpolant of $f \in \mathcal{N}_\Phi(\Omega)$ by $s_{f,X}$. Then for every $x \in \Omega$ and every $\alpha \in \mathbb{N}_0^d$ with $|\alpha| \le k$ the error between $f$ and its interpolant can be bounded by

$$|D^\alpha f(x) - D^\alpha s_{f,X}(x)| \le P^{(\alpha)}_{\Phi,X}(x)\, |f|_{\mathcal{N}_\Phi(\Omega)}.$$

Proof Using representation (11.2), the Taylor formula from Theorem 10.17, and the reproduction property of the coefficients, which follows from (11.3), we see that

$$\begin{aligned}
D^\alpha s_{f,X}(x) &= \sum_{j=1}^{N} f(x_j)\, D^\alpha u_j^*(x) \\
&= \sum_{j=1}^{N} \bigl[ \Pi_P(f)(x_j) + (f, G(\cdot, x_j))_{\mathcal{N}_\Phi(\Omega)} \bigr]\, D^\alpha u_j^*(x) \\
&= D^\alpha \Pi_P(f)(x) + \Bigl( f, \sum_{j=1}^{N} D^\alpha u_j^*(x)\, G(\cdot, x_j) \Bigr)_{\mathcal{N}_\Phi(\Omega)}.
\end{aligned}$$

However, Theorem 10.45 allows us to write

$$D^\alpha f(x) = D^\alpha \Pi_P(f)(x) + \bigl(f, D_2^\alpha G(\cdot, x)\bigr)_{\mathcal{N}_\Phi(\Omega)},$$

which, together with the previous equation, yields

$$|D^\alpha (f - s_{f,X})(x)| = \left| \Bigl( f, D_2^\alpha G(\cdot, x) - \sum_{j=1}^{N} D^\alpha u_j^*(x)\, G(\cdot, x_j) \Bigr)_{\mathcal{N}_\Phi(\Omega)} \right| \le |f|_{\mathcal{N}_\Phi(\Omega)}\, P^{(\alpha)}_{\Phi,X}(x)$$

by Lemma 11.3. $\square$
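The bound of Theorem 11.4 can be observed numerically in a simple setting. The sketch below rests on assumptions not fixed by the text: the Gaussian kernel, $\alpha = 0$, ad hoc centers, and the test function $f = \Phi(\cdot, y_0)$, a kernel translate whose native space norm is $\sqrt{\Phi(y_0, y_0)} = 1$, so that the bound reads $|f(x) - s_{f,X}(x)| \le P_{\Phi,X}(x)$.

```python
import numpy as np

# Assumptions: Phi(x, y) = exp(-|x - y|^2) (positive definite, P trivial),
# alpha = 0, and f = Phi(., y0), for which |f|_N = sqrt(Phi(y0, y0)) = 1.
def phi(x, y):
    return np.exp(-np.subtract.outer(x, y) ** 2)

X = np.array([0.0, 0.5, 1.0, 1.5, 2.0])      # centers (assumed)
y0 = 0.8

A = phi(X, X)                                # interpolation matrix
coef = np.linalg.solve(A, phi(X, np.array([y0]))[:, 0])

xs = np.linspace(-0.5, 2.5, 101)             # evaluation grid
B = phi(xs, X)                               # B_kj = Phi(x_k, x_j)
s = B @ coef                                 # interpolant s_{f,X} on the grid
f = phi(xs, np.array([y0]))[:, 0]            # exact values of f

u_star = np.linalg.solve(A, B.T)             # cardinal coefficients per point
# Q(u*) = Phi(x, x) - R(x)^T u*(x), since A u*(x) = R(x)
P2 = 1.0 - np.einsum('jk,jk->k', phi(X, xs), u_star)
P = np.sqrt(np.maximum(P2, 0.0))             # power function on the grid

err = np.abs(f - s)                          # should satisfy err <= P
```

On the grid, `err` stays below `P` everywhere, and both vanish at the centers.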


This theorem allows us to split the error between the unknown function $f$ from the native space and its interpolant into two terms, one term (the power function) being independent of $f$ and the other independent of the centers $X$. Our further investigation of the error will be done by bounding the power function in an appropriate way. Therefore, we regard the power function again as a function of the coefficients $u_j^*(x)$.

Theorem 11.5 Let $\Omega \subseteq \mathbb{R}^d$ be open. Suppose that $\Phi \in C^{2k}(\Omega \times \Omega)$ is a conditionally positive definite kernel on $\Omega$ with respect to $P \subseteq C^k(\Omega)$. Suppose further that $X = \{x_1, \ldots, x_N\} \subseteq \Omega$ is $P$-unisolvent. Define for $x \in \Omega$ and $\alpha \in \mathbb{N}_0^d$ with $|\alpha| \le k$ the function $Q : \mathbb{R}^N \to \mathbb{R}$,

$$Q(u) := D_1^\alpha D_2^\alpha \Phi(x, x) - 2 \sum_{j=1}^{N} u_j\, D_1^\alpha \Phi(x, x_j) + \sum_{i,j=1}^{N} u_i u_j\, \Phi(x_i, x_j).$$

The minimum of this function on the set

$$M = \Bigl\{ u \in \mathbb{R}^N : \sum_{j=1}^{N} u_j p(x_j) = D^\alpha p(x) \text{ for all } p \in P \Bigr\}$$

is given by the vector $D^\alpha u^*(x)$, where $u^*(x)$ is found in Theorem 11.1:

$$Q(D^\alpha u^*(x)) \le Q(u) \quad \text{for all } u \in M.$$

Proof If we adopt the notation of the paragraph before Theorem 11.1 we see that the function $Q$ takes the form $Q(u) = D_1^\alpha D_2^\alpha \Phi(x, x) - 2 u^T D^\alpha R(x) + u^T A u$ and has to be minimized over $M = \{u : P^T u = D^\alpha S(x)\}$. Since $M$ is nonempty and $A$ is positive definite on $M_0 := \{u : P^T u = 0\}$, Lemma 4.2 yields that this quadratic minimization problem has a unique solution that can be computed using Lagrange multipliers. Doing so, we derive the equations

$$A u + P v = D^\alpha R(x), \qquad P^T u = D^\alpha S(x),$$

where $v$ denotes the Lagrange multiplier. This system is uniquely solved by the functions $u = D^\alpha u^*(x)$ and $v = D^\alpha v^*(x)$. $\square$
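The Lagrange system from this proof can be set up and solved directly. The following sketch uses assumptions not fixed by the text: the kernel $\Phi(x, y) = -|x - y|$, which is conditionally positive definite of order 1 on $\mathbb{R}$, the space $P = \operatorname{span}\{1\}$, $\alpha = 0$, and ad hoc centers.

```python
import numpy as np

# Sketch under assumptions: Phi(x, y) = -|x - y| (conditionally positive
# definite of order 1 on R), P = span{1}, alpha = 0, centers chosen ad hoc.
X = np.array([0.0, 0.7, 1.3, 2.0])           # centers (assumed)
N = len(X)
x = 0.9                                      # evaluation point

A = -np.abs(np.subtract.outer(X, X))         # A_ij = Phi(x_i, x_j)
P = np.ones((N, 1))                          # constant p_1 = 1 at the centers
R = -np.abs(X - x)                           # R_j = Phi(x, x_j)
S = np.array([1.0])                          # D^0 p_1(x) = 1

# Lagrange system:  A u + P v = R,  P^T u = S
K = np.block([[A, P], [P.T, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([R, S]))
u_star, v = sol[:N], sol[N]

def Q(u):
    # Q(u) = Phi(x, x) - 2 u^T R + u^T A u,  with Phi(x, x) = 0 here
    return -2.0 * (u @ R) + u @ A @ u
```

Any feasible $u$ differs from $u^*(x)$ by an element of $M_0$, and $Q(u) - Q(u^*(x)) = (u - u^*)^T A (u - u^*) \ge 0$ since $A$ is positive definite on $M_0$, which the assertions below probe with a zero-sum perturbation.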

Having this minimality property in mind, the idea is to bound the power function by plugging into $Q$ an appropriate vector $u^{(\alpha)}(x)$ instead of the optimal vector $D^\alpha u^*(x)$.

11.2 Error estimates in terms of the fill distance

As indicated in the last section we now want to bound the power function by replacing the optimal vector $D^\alpha u^*(x)$ appropriately.

We will do this first for conditionally positive definite functions and then for arbitrary symmetric conditionally positive definite kernels $\Phi \in C^{2k}(\Omega \times \Omega)$. In both situations we assume the general finite-dimensional subspace $P$ to be $\pi_{m-1}(\mathbb{R}^d)$.


In the following, the region $\Omega \subseteq \mathbb{R}^d$ is always assumed to be open. But this is only necessary for estimates on the derivatives. In the non-derivative case $\Omega$ has only to satisfy an interior cone condition. Moreover, if $\Omega$ is not open, the estimates on the derivatives hold in every interior point.

For our estimates we will employ local polynomial reproductions as we have studied them in Chapter 3, in particular in the form of Theorem 3.14. But here we need a more general version covering also derivatives. Again, norming sets are the key ingredient. To use them, we first have to derive a Bernstein inequality for multivariate polynomials.
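As a warm-up for that derivation, one can numerically probe the classical univariate prototype, the Markov brothers' inequality $\|p'\|_{\infty,[-1,1]} \le n^2 \|p\|_{\infty,[-1,1]}$ for polynomials of degree $n$ (the multivariate Bernstein-type inequality of the text is a different statement; this sketch only illustrates the flavor, with sup norms approximated on a fine grid).

```python
import numpy as np

# Sanity check (illustrative, not the inequality derived in the text):
# Markov brothers' inequality max|p'| <= n^2 max|p| on [-1, 1].
rng = np.random.default_rng(1)
xs = np.linspace(-1.0, 1.0, 2001)            # grid approximating the sup norm

worst_ratio = 0.0
for n in range(1, 6):                        # degrees 1..5
    for _ in range(20):                      # random polynomials of degree <= n
        p = np.polynomial.Polynomial(rng.standard_normal(n + 1))
        ratio = np.max(np.abs(p.deriv()(xs))) / (n**2 * np.max(np.abs(p(xs))))
        worst_ratio = max(worst_ratio, ratio)
```

The ratio never exceeds 1; equality is attained only by Chebyshev polynomials, which random coefficients essentially never produce.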