This is a well-known idea in approximation theory. For example, the success of using splines in the univariate setting goes hand in hand with finding the B-spline basis. Obviously, if we choose the cardinal basis $\{u_j^*\}$ as a basis for $V_X$ then the interpolation matrix becomes the identity matrix. Unfortunately, finding the cardinal basis is at least as difficult as solving the linear problem itself.


In this section we want to discuss other bases for this finite-dimensional space. Particularly in the case of thin-plate splines we will see that it is possible to find a basis which leads to an interpolation matrix that is independent of the separation distance $q_X$ but dependent on the number of centers.

We will restrict ourselves here to the case of conditionally positive definite kernels. But the reader should keep in mind that a positive definite kernel can also act as a conditionally positive definite kernel.

In Section 10.3 we introduced two further kernels associated with the initial kernel $\Phi$. Remember that we had previously chosen a set $\Xi = \{\xi_1, \ldots, \xi_Q\} \subseteq \Omega$ that is $P$-unisolvent and a cardinal basis $p_1, \ldots, p_Q$ for $P$ satisfying $p_\ell(\xi_k) = \delta_{\ell,k}$. Then we defined the kernels
$$\kappa(x, y) := \Phi(x, y) - \sum_{k=1}^{Q} p_k(x)\,\Phi(\xi_k, y) - \sum_{\ell=1}^{Q} p_\ell(y)\,\Phi(x, \xi_\ell) + \sum_{k=1}^{Q}\sum_{\ell=1}^{Q} p_k(x)\,p_\ell(y)\,\Phi(\xi_k, \xi_\ell) \qquad (12.4)$$
and
$$K(x, y) = \kappa(x, y) + \sum_{\ell=1}^{Q} p_\ell(x)\,p_\ell(y).$$

Another way of describing the relation between $K$ and $\kappa$ is by the projection operator $\Pi f = \sum_{\ell=1}^{Q} f(\xi_\ell)\, p_\ell$:
$$\Pi K(\cdot, y) = \sum_{\ell=1}^{Q} \kappa(\xi_\ell, y)\, p_\ell + \sum_{\ell=1}^{Q}\sum_{k=1}^{Q} p_k(\xi_\ell)\, p_k(y)\, p_\ell = \sum_{\ell=1}^{Q} p_\ell(y)\, p_\ell,$$
which implies that
$$\kappa(\cdot, y) = K(\cdot, y) - \Pi K(\cdot, y). \qquad (12.5)$$

Note that if $\Xi \subseteq X$ then both $K(\cdot, x_j)$ and $\kappa(\cdot, x_j)$ lie in $V_X$. Thus the question to be answered here is whether we can use $\{K(\cdot, x_1), \ldots, K(\cdot, x_N)\}$ or $\{\kappa(\cdot, x_1), \ldots, \kappa(\cdot, x_N)\}$ as a basis for $V_X$. Obviously, the second family is doomed to fail, since $\kappa(\cdot, \xi_k) = 0$ for $1 \le k \le Q$. Thus in this situation we at least have to add $P$ again.
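As a concrete illustration (not part of the text), the kernels (12.4) and $K$ can be assembled numerically. The following Python sketch assumes, purely as an example, the thin-plate spline $\Phi(x, y) = \|x - y\|^2 \log \|x - y\|$ on $\mathbb{R}^2$, which is conditionally positive definite of order 2, with the $P$-unisolvent set $\Xi = \{(0,0), (1,0), (0,1)\}$ for the linear polynomials and its cardinal basis:

```python
import numpy as np

def phi(x, y):
    # Thin-plate spline Phi(x, y) = r^2 log r, r = ||x - y||
    # (conditionally positive definite of order 2 on R^2);
    # 0.5 * r2 * log(r2) equals r^2 * log(r).
    r2 = float(np.sum((x - y) ** 2))
    return 0.0 if r2 == 0.0 else 0.5 * r2 * np.log(r2)

# Assumed P-unisolvent set Xi (Q = 3) for the linear polynomials,
# with cardinal basis p_l satisfying p_l(xi_k) = delta_{l,k}.
xi = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
p = [lambda z: 1.0 - z[0] - z[1],  # p_1
     lambda z: z[0],               # p_2
     lambda z: z[1]]               # p_3
Q = 3

def kappa(x, y):
    # Kernel (12.4).
    return (phi(x, y)
            - sum(p[k](x) * phi(xi[k], y) for k in range(Q))
            - sum(p[l](y) * phi(x, xi[l]) for l in range(Q))
            + sum(p[k](x) * p[l](y) * phi(xi[k], xi[l])
                  for k in range(Q) for l in range(Q)))

def K(x, y):
    # K(x, y) = kappa(x, y) + sum_l p_l(x) p_l(y).
    return kappa(x, y) + sum(p[l](x) * p[l](y) for l in range(Q))

y = np.array([0.3, 0.4])
print(max(abs(kappa(xi[k], y)) for k in range(Q)))  # ~ 0 (up to roundoff)
```

The final line checks the identity $\kappa(\xi_k, \cdot) = 0$ that is used repeatedly in this section.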

Theorem 12.9 The kernel $K : \Omega \times \Omega \to \mathbb{R}$ is positive definite on $\Omega$. Moreover, if $\widetilde{\Omega} = \Omega \setminus \Xi$ then $\kappa : \widetilde{\Omega} \times \widetilde{\Omega} \to \mathbb{R}$ is positive definite on $\widetilde{\Omega}$. Both kernels are conditionally positive definite with respect to $P$ on $\Omega$.

Proof First of all, if a set of distinct points $X = \{x_1, \ldots, x_N\} \subseteq \Omega$ is given and if we have an $\alpha \in \mathbb{R}^N$ that satisfies
$$\sum_{j=1}^{N} \alpha_j\, p(x_j) = 0 \quad \text{for } p \in P \qquad (12.6)$$


then obviously
$$\sum_{i,j=1}^{N} \alpha_i \alpha_j K(x_i, x_j) = \sum_{i,j=1}^{N} \alpha_i \alpha_j \kappa(x_i, x_j) = \sum_{i,j=1}^{N} \alpha_i \alpha_j \Phi(x_i, x_j),$$

showing that both $K$ and $\kappa$ are conditionally positive definite on $\Omega$ with respect to $P$. Let us have a closer look at $K$ for arbitrary $\alpha \in \mathbb{R}^N$. From Theorem 10.20 we know that $K$ is the reproducing kernel for the native space $\mathcal{N}_\Phi(\Omega)$ with respect to the inner product
$$(f, g) = (f, g)_{\mathcal{N}_\Phi(\Omega)} + \sum_{\ell=1}^{Q} f(\xi_\ell)\, g(\xi_\ell).$$

Together with Theorem 10.3 this means that
$$\sum_{i,j=1}^{N} \alpha_i \alpha_j K(x_i, x_j) = \Bigl\| \sum_{j=1}^{N} \alpha_j K(\cdot, x_j) \Bigr\|^2 \ge 0,$$
showing $K$ to be at least positive semi-definite. But since we now have a norm, the quadratic form is zero if and only if $\sum_{j=1}^{N} \alpha_j K(x, x_j) = 0$ for all $x \in \Omega$. Setting $x = \xi_\ell$ and using $\kappa(\xi_\ell, \cdot) = 0$ shows that $\alpha$ actually satisfies (12.6). Thus the first part of our proof gives $\alpha = 0$.

Now let us look at $\kappa$. We start with centers $X = \{x_1, \ldots, x_N\} \subseteq \widetilde{\Omega}$, which means that $X \cap \Xi = \emptyset$. Thus $Y = X \cup \Xi$ consists of $N + Q$ distinct points. Let $y_j = x_j$ for $1 \le j \le N$ and $y_{N+j} = \xi_j$ for $1 \le j \le Q$. Next suppose that $\alpha \in \mathbb{R}^N \setminus \{0\}$ is given. If we define $\beta \in \mathbb{R}^{N+Q}$ by $\beta_j = \alpha_j$ for $1 \le j \le N$ and $\beta_{N+j} = -\sum_{i=1}^{N} \alpha_i\, p_j(x_i)$ for $1 \le j \le Q$ then we have
$$\sum_{j=1}^{N+Q} \beta_j\, p_k(y_j) = \sum_{j=1}^{N} \alpha_j\, p_k(x_j) - \sum_{\ell=1}^{Q} \sum_{i=1}^{N} \alpha_i\, p_\ell(x_i)\, p_k(\xi_\ell) = 0$$
for $1 \le k \le Q$. Thus $\beta$ satisfies (12.6) for $Y$ instead of $X$. Moreover, it is now easy to see that
$$\sum_{i,j=1}^{N} \alpha_i \alpha_j\, \kappa(x_i, x_j) = \sum_{i,j=1}^{N+Q} \beta_i \beta_j\, \Phi(y_i, y_j) > 0,$$
which proves the result for $\kappa$.
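Theorem 12.9 can be spot-checked numerically. The sketch below (an illustration with assumed example data, not part of the text) again uses the thin-plate spline $\Phi(x, y) = \|x - y\|^2 \log \|x - y\|$ on $\mathbb{R}^2$ with $\Xi = \{(0,0), (1,0), (0,1)\}$, and verifies that the Gram matrix of $\kappa$ on points away from $\Xi$ and the Gram matrix of $K$ on points including $\Xi$ both have positive smallest eigenvalue:

```python
import numpy as np

def phi(x, y):
    r2 = float(np.sum((x - y) ** 2))  # thin-plate spline r^2 log r
    return 0.0 if r2 == 0.0 else 0.5 * r2 * np.log(r2)

# Assumed example data: P-unisolvent set Xi and cardinal linear polynomials.
xi = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
p = [lambda z: 1.0 - z[0] - z[1], lambda z: z[0], lambda z: z[1]]
Q = 3

def kappa(x, y):
    # Kernel (12.4).
    return (phi(x, y)
            - sum(p[k](x) * phi(xi[k], y) for k in range(Q))
            - sum(p[l](y) * phi(x, xi[l]) for l in range(Q))
            + sum(p[k](x) * p[l](y) * phi(xi[k], xi[l])
                  for k in range(Q) for l in range(Q)))

def K(x, y):
    return kappa(x, y) + sum(p[l](x) * p[l](y) for l in range(Q))

def gram(kernel, pts):
    return np.array([[kernel(a, b) for b in pts] for a in pts])

# Points disjoint from Xi: kappa should be positive definite there.
inner = [np.array(t) for t in
         [(0.2, 0.2), (0.7, 0.2), (0.2, 0.7), (0.6, 0.6), (0.4, 0.3)]]
# K should be positive definite on all of Omega, so Xi may be included.
allpts = xi + inner

lam_kappa = np.linalg.eigvalsh(gram(kappa, inner)).min()
lam_K = np.linalg.eigvalsh(gram(K, allpts)).min()
print(lam_kappa > 0, lam_K > 0)
```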

Thus we can restate our initial interpolation problem in two new ways. Let us start with the simpler one.

Corollary 12.10 If $\Xi \subseteq X$ then the interpolant $s_{f,X}$ can be written as
$$s_{f,X} = \sum_{j=1}^{N} \alpha_j K(\cdot, x_j),$$
where the coefficients are determined by $s_{f,X}(x_j) = f(x_j)$, $1 \le j \le N$.
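A numerical illustration of Corollary 12.10 (assumed example data, not from the text): with the thin-plate spline $\Phi(x, y) = \|x - y\|^2 \log \|x - y\|$, $\Xi = \{(0,0), (1,0), (0,1)\}$, and $\Xi \subseteq X$, the system $A\alpha = f|_X$ with $A = (K(x_i, x_j))$ is uniquely solvable. Since $\sum_j \alpha_j K(\cdot, x_j)$ lies in $V_X$, it coincides with the usual interpolant and in particular reproduces any $f \in P$ exactly; the sketch checks this at an off-center point:

```python
import numpy as np

def phi(x, y):
    r2 = float(np.sum((x - y) ** 2))  # thin-plate spline r^2 log r
    return 0.0 if r2 == 0.0 else 0.5 * r2 * np.log(r2)

# Assumed example data: P-unisolvent set Xi and cardinal linear polynomials.
xi = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
p = [lambda z: 1.0 - z[0] - z[1], lambda z: z[0], lambda z: z[1]]
Q = 3

def kappa(x, y):
    # Kernel (12.4).
    return (phi(x, y)
            - sum(p[k](x) * phi(xi[k], y) for k in range(Q))
            - sum(p[l](y) * phi(x, xi[l]) for l in range(Q))
            + sum(p[k](x) * p[l](y) * phi(xi[k], xi[l])
                  for k in range(Q) for l in range(Q)))

def K(x, y):
    return kappa(x, y) + sum(p[l](x) * p[l](y) for l in range(Q))

# Centers X with Xi a subset of X, and data from a linear test function.
X = xi + [np.array(t) for t in [(0.5, 0.5), (0.8, 0.3), (0.3, 0.8)]]
f = lambda z: 1.0 + 2.0 * z[0] - z[1]

A = np.array([[K(a, b) for b in X] for a in X])  # interpolation matrix
alpha = np.linalg.solve(A, np.array([f(x) for x in X]))

s = lambda z: sum(a_j * K(z, x_j) for a_j, x_j in zip(alpha, X))
z = np.array([0.37, 0.41])
print(abs(s(z) - f(z)))  # small: the interpolant reproduces linear f
```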


When using $\kappa$ we have to be more careful, since $\Xi \subseteq X$ does not lead to linearly independent functions $\kappa(\cdot, x_j)$. But we need $\Xi \subseteq X$ to ensure that we get the same interpolant. So assume that $x_j = \xi_j$ for $1 \le j \le Q$. Then we know at least that the matrix
$$C = \bigl(\kappa(x_i, x_j)\bigr)_{Q+1 \le i,j \le N}$$
is positive definite. Or, in other words, the family $\{\kappa(\cdot, x_j) : Q+1 \le j \le N\}$ is linearly independent. Since $\kappa(x_j, \cdot) = 0$ for $1 \le j \le Q$, we immediately have that $\{\kappa(\cdot, x_j) : Q+1 \le j \le N\} \cup \{p_k : 1 \le k \le Q\}$ is a basis for $V_X$.

Thus we can restate the interpolation problem using this basis.

Corollary 12.11 If $\Xi \subseteq X$ satisfies $x_j = \xi_j$ for $1 \le j \le Q$ then the interpolant can be written as
$$s_{f,X}(x) = \sum_{k=1}^{Q} \beta_k\, p_k(x) + \sum_{j=Q+1}^{N} \alpha_j\, \kappa(x, x_j)$$
and the coefficients are again determined by $s_{f,X}(x_j) = f(x_j)$, $1 \le j \le N$.
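Corollary 12.11 translates directly into a linear system for $(\beta, \alpha)$. The sketch below (again an illustration with assumed thin-plate-spline example data) assembles the collocation matrix for the basis $\{p_k\} \cup \{\kappa(\cdot, x_j) : j > Q\}$ and confirms that, because $p_k(\xi_i) = \delta_{k,i}$ and $\kappa(\xi_i, \cdot) = 0$, the first $Q$ interpolation conditions determine $\beta$ directly:

```python
import numpy as np

def phi(x, y):
    r2 = float(np.sum((x - y) ** 2))  # thin-plate spline r^2 log r
    return 0.0 if r2 == 0.0 else 0.5 * r2 * np.log(r2)

# Assumed example data: P-unisolvent set Xi and cardinal linear polynomials.
xi = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
p = [lambda z: 1.0 - z[0] - z[1], lambda z: z[0], lambda z: z[1]]
Q = 3

def kappa(x, y):
    # Kernel (12.4).
    return (phi(x, y)
            - sum(p[k](x) * phi(xi[k], y) for k in range(Q))
            - sum(p[l](y) * phi(x, xi[l]) for l in range(Q))
            + sum(p[k](x) * p[l](y) * phi(xi[k], xi[l])
                  for k in range(Q) for l in range(Q)))

# x_j = xi_j for j <= Q, followed by further centers.
X = xi + [np.array(t) for t in [(0.5, 0.5), (0.8, 0.3), (0.3, 0.8)]]
N = len(X)
f = np.array([1.0 + 2.0 * x[0] - x[1] for x in X])  # linear test data

# Collocation matrix: columns p_1..p_Q, then kappa(., x_j) for j > Q.
M = np.zeros((N, N))
for i, x in enumerate(X):
    for k in range(Q):
        M[i, k] = p[k](x)
    for j in range(Q, N):
        M[i, j] = kappa(x, X[j])
coef = np.linalg.solve(M, f)
beta, alpha = coef[:Q], coef[Q:]

# The first Q rows reduce to the identity block: beta is the data on Xi.
print(np.allclose(beta, f[:Q]))
```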

Since the $\{p_\ell\}$ form a Lagrangian basis for $\Xi$ and since $\kappa$ vanishes if one of its arguments is an element of $\Xi$, the interpolation conditions lead to the matrix equation
$$\begin{pmatrix} I & O \\ P & C \end{pmatrix} \begin{pmatrix} \beta \\ \alpha \end{pmatrix} = f|_X$$
with the $\mathbb{R}^{Q \times Q}$ identity matrix $I$ and the $\mathbb{R}^{(N-Q) \times Q}$ matrix $P = (p_j(x_i))$, where $i$ runs over