In what follows, we will write $D^\alpha R(x)$ for $(D_1^\alpha\Phi(x, x_1), \ldots, D_1^\alpha\Phi(x, x_N))^T$, where $D_1^\alpha$ again denotes the derivative with respect to the first argument. We use $D^\alpha S(x)$ in the same way, component-wise.

A formal differentiation of (11.1) then gives
$$
\begin{pmatrix} A & P \\ P^T & 0 \end{pmatrix}
\begin{pmatrix} D^\alpha u^*(x) \\ D^\alpha v^*(x) \end{pmatrix}
=
\begin{pmatrix} D^\alpha R(x) \\ D^\alpha S(x) \end{pmatrix}. \tag{11.3}
$$
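As a concrete illustration (not part of the original argument), consider the simplest setting: a positive definite Gaussian kernel, for which the polynomial part $\mathcal{P}$ is empty and the block system collapses to $A\, D^\alpha u^*(x) = D^\alpha R(x)$. The kernel, centres, and data below are illustrative choices; the sketch checks that differentiating the cardinal functions this way reproduces the derivative of the interpolant.

```python
import numpy as np

# Illustrative sketch of (11.3) for a Gaussian kernel (P empty), so the
# system reduces to A D^alpha u*(x) = D^alpha R(x). Names are hypothetical.

phi = lambda r2: np.exp(-10.0 * r2)                       # Phi(x,y) = exp(-10 (x-y)^2)
d1_phi = lambda x, y: -20.0 * (x - y) * np.exp(-10.0 * (x - y) ** 2)  # D_1 Phi

X = np.linspace(0.0, 1.0, 5)                              # centres x_1, ..., x_N
A = phi((X[:, None] - X[None, :]) ** 2)                   # A_ij = Phi(x_i, x_j)

f = np.sin(2.0 * np.pi * X)                               # data f(x_j)
c = np.linalg.solve(A, f)                                 # s(x) = sum_j c_j Phi(x, x_j)

x = 0.37
R = phi((x - X) ** 2)                                     # R(x)
dR = d1_phi(x, X)                                         # D^alpha R(x) for alpha = (1)

u_star = np.linalg.solve(A, R)                            # cardinal vector u*(x)
du_star = np.linalg.solve(A, dR)                          # D^alpha u*(x) via (11.3)

# s'(x) two ways: through the differentiated cardinal functions, and
# directly from the expansion coefficients c. Both must agree.
ds_cardinal = f @ du_star
ds_direct = c @ d1_phi(x, X)
print(abs(ds_cardinal - ds_direct))
```

Since $s_{f,X}(x) = \sum_j f(x_j) u_j^*(x)$, the two evaluations of $s'(x)$ agree up to rounding.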

Under the assumptions that $\Phi \in C^{2k}(\Omega \times \Omega)$, that $\mathcal{P} \subseteq C^k(\Omega)$, and that $\Omega \subseteq \mathbb{R}^d$ is open, we know that both $f \in \mathcal{N}_\Phi(\Omega)$ and $s_{f,X}$ are in $C^k(\Omega)$. Thus it seems natural to ask for error bounds not only on $f - s_{f,X}$ but also on the derivatives $D^\alpha(f - s_{f,X})$ for $|\alpha| \le k$.


Definition 11.2 Suppose that $\Omega \subseteq \mathbb{R}^d$ is open and that $\Phi \in C^{2k}(\Omega \times \Omega)$ is a conditionally positive definite kernel on $\Omega$ with respect to $\mathcal{P} \subseteq C^k(\Omega)$. If $X = \{x_1, \ldots, x_N\} \subseteq \Omega$ is $\mathcal{P}$-unisolvent then for every $x \in \Omega$ and $\alpha \in \mathbb{N}_0^d$ with $|\alpha| \le k$ the power function is defined by
$$
\left[P_{\Phi,X}^{(\alpha)}(x)\right]^2 := D_1^\alpha D_2^\alpha \Phi(x,x) - 2\sum_{j=1}^{N} D^\alpha u_j^*(x)\, D_1^\alpha \Phi(x, x_j) + \sum_{i,j=1}^{N} D^\alpha u_i^*(x)\, D^\alpha u_j^*(x)\, \Phi(x_i, x_j).
$$
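To make the definition tangible, here is a numerical sketch with $\alpha = 0$ for a positive definite Gaussian kernel (so no polynomial part is needed); the kernel and centres are illustrative choices, not from the text. At a data site $x_k$ the cardinal vector is $u^*(x_k) = e_k$, so the three terms cancel and the power function vanishes there.

```python
import numpy as np

# [P_{Phi,X}(x)]^2 built term by term from Definition 11.2, alpha = 0,
# for an illustrative Gaussian kernel on illustrative centres.

phi = lambda r2: np.exp(-10.0 * r2)

X = np.linspace(0.0, 1.0, 5)
A = phi((X[:, None] - X[None, :]) ** 2)

def power_sq(x):
    """Squared power function: Phi(x,x) - 2 u.R + u.A.u with A u = R(x)."""
    R = phi((x - X) ** 2)
    u = np.linalg.solve(A, R)          # cardinal coefficients u*(x)
    return phi(0.0) - 2.0 * (u @ R) + u @ A @ u

print(power_sq(0.125))                  # positive between the centres
print(power_sq(X[2]))                   # ~0 at a data site, since u*(x_k) = e_k
```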

This function plays an important role in our estimates, as we shall see very soon, but first we will have another look at the power function. If we keep $x$, $X$, $\Phi$, and $\alpha$ fixed then we can replace the constant vector $D^\alpha u^*(x) \in \mathbb{R}^N$ by an arbitrary vector $u \in \mathbb{R}^N$. Thus let us define the quadratic form $Q : \mathbb{R}^N \to \mathbb{R}$ by

$$
Q(u) = D_1^\alpha D_2^\alpha \Phi(x,x) - 2\sum_{j=1}^{N} u_j\, D_1^\alpha \Phi(x, x_j) + \sum_{i,j=1}^{N} u_i u_j\, \Phi(x_i, x_j), \qquad u \in \mathbb{R}^N.
$$

If necessary, we will also write $Q_\Phi(u) = Q(u)$. With this definition the power function becomes
$$
\left[P_{\Phi,X}^{(\alpha)}(x)\right]^2 = Q(D^\alpha u^*(x)),
$$
and we will exploit this fact later on. But to do this we need a different representation of the quadratic form $Q$.
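As an aside one can check numerically, in the positive definite case with $\alpha = 0$ where no side condition on $u$ arises, that $Q$ is a convex quadratic whose unique minimiser is the cardinal vector $u^*(x)$: the gradient $2Au - 2R(x)$ vanishes exactly when $Au = R(x)$, so the squared power function is the minimal value of $Q$. A small sketch under these illustrative choices:

```python
import numpy as np

# Q(u) for a Gaussian kernel (alpha = 0); u*(x) should minimise it.
rng = np.random.default_rng(0)
phi = lambda r2: np.exp(-10.0 * r2)

X = np.linspace(0.0, 1.0, 5)
A = phi((X[:, None] - X[None, :]) ** 2)

x = 0.3
R = phi((x - X) ** 2)

def Q(u):
    """Q(u) = Phi(x,x) - 2 sum_j u_j Phi(x,x_j) + sum_ij u_i u_j Phi(x_i,x_j)."""
    return phi(0.0) - 2.0 * (u @ R) + u @ A @ u

u_star = np.linalg.solve(A, R)          # cardinal vector u*(x)
trials = [Q(u_star + 0.1 * rng.standard_normal(5)) for _ in range(200)]
print(Q(u_star) <= min(trials))         # True: random perturbations only increase Q
```

Indeed $Q(u^* + d) - Q(u^*) = d^T A d > 0$ for $d \ne 0$, since $A$ is positive definite.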

Lemma 11.3 Suppose that $\Phi \in C^{2k}(\Omega \times \Omega)$ is a conditionally positive definite kernel with respect to $\mathcal{P} \subseteq C^k(\Omega)$. Fix $x \in \Omega$. Now suppose that $u^{(\alpha)} \in \mathbb{R}^N$ is a vector that satisfies $\sum_j u_j^{(\alpha)} p(x_j) = D^\alpha p(x)$ for all $p \in \mathcal{P}$. Then the quadratic form $Q$ has the representation
$$
Q(u^{(\alpha)}) = \left\| D_2^\alpha G(\cdot, x) - \sum_{j=1}^{N} u_j^{(\alpha)} G(\cdot, x_j) \right\|_{\mathcal{N}_\Phi(\Omega)}^2, \tag{11.4}
$$
where $G$ is the modified kernel from (10.4).
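Before the proof, (11.4) can be sanity-checked numerically in the simplest setting: $\Phi$ positive definite, so that $G = \Phi$ and the native space inner product of translates is $(\Phi(\cdot,y), \Phi(\cdot,z))_{\mathcal{N}_\Phi(\Omega)} = \Phi(y,z)$, with $\alpha = 0$, where the side condition on $u^{(\alpha)}$ is vacuous because $\mathcal{P}$ is empty. Kernel and points below are illustrative choices.

```python
import numpy as np

# Check of (11.4) in the positive definite case with alpha = 0:
# Q(u) should equal || Phi(.,x) - sum_j u_j Phi(.,x_j) ||^2 in the
# native space, for any coefficient vector u.

phi = lambda r2: np.exp(-10.0 * r2)
X = np.linspace(0.0, 1.0, 5)
A = phi((X[:, None] - X[None, :]) ** 2)

x = 0.3
R = phi((x - X) ** 2)
u = np.array([0.2, -0.1, 0.4, 0.0, 0.3])       # arbitrary coefficients

q_form = phi(0.0) - 2.0 * (u @ R) + u @ A @ u  # Q(u) as defined in the text

# Right-hand side of (11.4): the function Phi(.,x) - sum_j u_j Phi(.,x_j)
# has centres (x, x_1, ..., x_N) and weights (1, -u); its squared native
# space norm is the kernel quadratic form in those weights.
Y = np.concatenate(([x], X))
c = np.concatenate(([1.0], -u))
K = phi((Y[:, None] - Y[None, :]) ** 2)
norm_sq = c @ K @ c

print(abs(q_form - norm_sq))                   # agreement up to rounding
```

Both sides expand by bilinearity into the same three kernel sums, which is exactly the computation the proof carries out in the general conditionally positive definite case.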

Proof The proof involves some simple, straightforward, but unfortunately also lengthy and tedious computations. The right-hand side of (11.4) can be expressed as
$$
\left\| D_2^\alpha G(\cdot,x) \right\|_{\mathcal{N}_\Phi(\Omega)}^2 - 2\sum_{j=1}^{N} u_j^{(\alpha)} \left( D_2^\alpha G(\cdot,x),\, G(\cdot,x_j) \right)_{\mathcal{N}_\Phi(\Omega)} + \sum_{i,j=1}^{N} u_i^{(\alpha)} u_j^{(\alpha)} \left( G(\cdot,x_i),\, G(\cdot,x_j) \right)_{\mathcal{N}_\Phi(\Omega)}.
$$

Thus we have to compute these three types of inner products. Since $G(\cdot,x) = \Phi(\cdot,x) - \sum_{n=1}^{Q} p_n(x)\, \Phi(\cdot, \xi_n)$ we have immediately
$$
\left( G(\cdot,x_i),\, G(\cdot,x_j) \right)_{\mathcal{N}_\Phi(\Omega)} = \Phi(x_i, x_j) + \sum_{n,\ell=1}^{Q} p_n(x_i)\, p_\ell(x_j)\, \Phi(\xi_\ell, \xi_n) - \sum_{n=1}^{Q} p_n(x_i)\, \Phi(x_j, \xi_n) - \sum_{\ell=1}^{Q} p_\ell(x_j)\, \Phi(\xi_\ell, x_i).
$$

Moreover, using $\sum_j u_j^{(\alpha)} p(x_j) = D^\alpha p(x)$ gives
$$
\sum_{i,j=1}^{N} u_i^{(\alpha)} u_j^{(\alpha)} \left( G(\cdot,x_i),\, G(\cdot,x_j) \right)_{\mathcal{N}_\Phi(\Omega)} = \sum_{i,j=1}^{N} u_i^{(\alpha)} u_j^{(\alpha)}\, \Phi(x_i, x_j) - 2\sum_{j=1}^{N} \sum_{n=1}^{Q} D^\alpha p_n(x)\, u_j^{(\alpha)}\, \Phi(x_j, \xi_n) + \sum_{n,\ell=1}^{Q} D^\alpha p_n(x)\, D^\alpha p_\ell(x)\, \Phi(\xi_\ell, \xi_n).
$$

Next, from Lemma 10.44 we know that $D_2^\alpha G(\cdot, x)$ is in the native space and has therefore a representation
$$
D_2^\alpha G(y, x) = \sum_{\ell=1}^{Q} D_2^\alpha G(\xi_\ell, x)\, p_\ell(y) + \left( D_2^\alpha G(\cdot,x),\, G(\cdot,y) \right)_{\mathcal{N}_\Phi(\Omega)}
$$
by Theorem 10.17. This allows us to compute the second term in our initial sum. The definition of $G(\cdot,\cdot)$ and the reproduction property of the coefficients $u_j^{(\alpha)}$ yield

$$
\sum_{j=1}^{N} u_j^{(\alpha)} \left( D_2^\alpha G(\cdot,x),\, G(\cdot,x_j) \right)_{\mathcal{N}_\Phi(\Omega)} = \sum_{j=1}^{N} u_j^{(\alpha)}\, D_2^\alpha \Phi(x_j, x) - \sum_{j=1}^{N} \sum_{n=1}^{Q} u_j^{(\alpha)}\, D^\alpha p_n(x)\, \Phi(x_j, \xi_n) - \sum_{\ell=1}^{Q} D^\alpha p_\ell(x)\, D_2^\alpha \Phi(\xi_\ell, x) + \sum_{n,\ell=1}^{Q} D^\alpha p_\ell(x)\, D^\alpha p_n(x)\, \Phi(\xi_\ell, \xi_n).
$$