$$|s_{f,X}|_{\mathcal{N}_\Phi(\Omega)} = \min\bigl\{\,|s|_{\mathcal{N}_\Phi(\Omega)} : s \in \mathcal{N}_\Phi(\Omega) \text{ with } s(x_j) = f_j,\ 1 \le j \le N\,\bigr\}.$$

Proof  The interpolant has a representation $s_{f,X} = \lambda^x \Phi(\cdot, x) + q$, where $\lambda = \sum_{j=1}^N \alpha_j \delta_{x_j}$ satisfies $\lambda(p) = 0$ for all $p \in \mathcal{P}$ and where $q \in \mathcal{P}$. Moreover, any $s \in \mathcal{N}_\Phi(\Omega)$ can be expressed by

$$s(x) = \Pi_{\mathcal{P}} s(x) + (s, G(\cdot, x))_{\mathcal{N}_\Phi(\Omega)},$$

where $G$ is the function from (10.4). Since $\lambda^x \Phi(\cdot, x) = \lambda^x G(\cdot, x)$ we have

$$\begin{aligned}
(v, s_{f,X})_{\mathcal{N}_\Phi(\Omega)} &= \Bigl(v, \sum_{j=1}^N \alpha_j \Phi(\cdot, x_j)\Bigr)_{\mathcal{N}_\Phi(\Omega)} = (v, \lambda^x \Phi(\cdot, x))_{\mathcal{N}_\Phi(\Omega)} \\
&= \lambda(\Pi_{\mathcal{P}} v) + (v, \lambda^x G(\cdot, x))_{\mathcal{N}_\Phi(\Omega)} \\
&= \lambda(v) = \sum_{j=1}^N \alpha_j v(x_j) = 0
\end{aligned}$$


for every $v \in \mathcal{N}_\Phi(\Omega)$ with $v(x_j) = 0$ for $1 \le j \le N$. But this means that

$$|s_{f,X}|^2_{\mathcal{N}_\Phi(\Omega)} = (s_{f,X},\, s_{f,X} - s + s)_{\mathcal{N}_\Phi(\Omega)} = (s_{f,X}, s)_{\mathcal{N}_\Phi(\Omega)} \le |s_{f,X}|_{\mathcal{N}_\Phi(\Omega)}\, |s|_{\mathcal{N}_\Phi(\Omega)}$$

for all $s \in \mathcal{N}_\Phi(\Omega)$ with $s(x_j) = f_j$ for $1 \le j \le N$.
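As a quick numerical illustration of this norm-minimality (a sketch of my own, not from the text): with a positive definite kernel such as a Gaussian, the native-space norm of an interpolant $s = \sum_j c_j \Phi(\cdot, x_j)$ is $(c^{\mathsf T} A c)^{1/2}$ with $A_{ij} = \Phi(x_i, x_j)$. Since the interpolant on a superset $Y \supseteq X$ of the data sites also interpolates the data on $X$, the theorem predicts $|s_{f,X}| \le |s_{f,Y}|$. The kernel shape, point sets, and function below are illustrative assumptions.

```python
import numpy as np

# Gaussian kernel: positive definite, so no polynomial part is needed (P = {0}).
phi = lambda r: np.exp(-(3.0 * r) ** 2)

def norm_of_interpolant(X, fvals):
    # Solve A c = f and return the native-space norm sqrt(c^T A c) of the
    # interpolant s = sum_j c_j phi(|. - x_j|).
    A = phi(np.abs(X[:, None] - X[None, :]))
    c = np.linalg.solve(A, fvals)
    return float(np.sqrt(c @ A @ c))

f = lambda x: np.sin(3.0 * x)
X = np.linspace(0.0, 1.0, 5)   # data sites
Y = np.linspace(0.0, 1.0, 9)   # superset: contains every point of X

norm_X = norm_of_interpolant(X, f(X))
norm_Y = norm_of_interpolant(Y, f(Y))

# s_{f,Y} also interpolates the data on X, so norm-minimality of s_{f,X}
# forces norm_X <= norm_Y.
print(norm_X <= norm_Y + 1e-9)
```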

It is interesting to see what this implies for the thin-plate splines, in particular for those specified in (10.11). For example, in the bivariate case we could consider the function $\phi(r) = r^2 \log r$, which is conditionally positive definite of order 2. Hence, the radial basis function interpolant

$$s_{f,X}(x) = \sum_{j=1}^N c_j\, \phi(\|x - x_j\|_2) + p(x),$$

where $p$ is a bivariate linear polynomial, minimizes the semi-norm

$$|f|_{BL_2(\mathbb{R}^2)} = \left( \int_{\mathbb{R}^2} \left( \frac{\partial^2 f}{\partial x_1^2}(x) \right)^2 + 2 \left( \frac{\partial^2 f}{\partial x_1 \partial x_2}(x) \right)^2 + \left( \frac{\partial^2 f}{\partial x_2^2}(x) \right)^2 dx \right)^{1/2}$$

under all interpolants from the Beppo Levi space $BL_2(\mathbb{R}^2)$. This was the initial starting point for investigating thin-plate splines. Note that for space dimensions $d \ge 3$ one actually has to consider the thin-plate splines (10.11) as functions of order $k$ instead of order $m = k - \lceil d/2 \rceil + 1$, in order to state the minimization problem in the Beppo Levi space $BL_k(\mathbb{R}^d)$. For example, the function $\phi(r) = r$ is conditionally positive definite of order $m = 1$. Hence, only a constant has to be added to the radial sum to guarantee a unique interpolant. But if the interpolant should minimize the Beppo Levi semi-norm $|\cdot|_{BL_2(\mathbb{R}^3)}$ under all Beppo Levi functions, it has to be formed with an additional linear polynomial.
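The bivariate construction above can be sketched numerically (the sites, data, and helper names are my own assumptions, not from the text): solve the usual augmented system for the coefficients of $\phi(r) = r^2 \log r$ together with a linear polynomial, subject to the moment conditions $\sum_j c_j p(x_j) = 0$ for all linear $p$. When the data themselves come from a linear polynomial, the minimal semi-norm is zero, so the kernel part of the solution vanishes.

```python
import numpy as np

def tps(r):
    # phi(r) = r^2 log r, with the continuous extension phi(0) = 0
    return np.where(r > 0.0, r**2 * np.log(np.maximum(r, 1e-300)), 0.0)

def fit_tps(X, fvals):
    # Augmented system [A P; P^T 0][c; d] = [f; 0], A_ij = phi(|x_i - x_j|),
    # P the basis 1, x_1, x_2 of bivariate linear polynomials.
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    P = np.hstack([np.ones((len(X), 1)), X])
    M = np.block([[tps(r), P], [P.T, np.zeros((3, 3))]])
    sol = np.linalg.solve(M, np.concatenate([fvals, np.zeros(3)]))
    return sol[:len(X)], sol[len(X):]          # kernel coeffs c, poly coeffs d

def eval_tps(X, c, d, pts):
    r = np.linalg.norm(pts[:, None, :] - X[None, :, :], axis=-1)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return tps(r) @ c + P @ d

rng = np.random.default_rng(1)
X = rng.uniform(size=(12, 2))                  # scattered sites in the unit square
fvals = np.sin(2.0 * X[:, 0]) * X[:, 1]

c, d = fit_tps(X, fvals)
reproduces_data = np.allclose(eval_tps(X, c, d, X), fvals)

# Linear data: the semi-norm minimizer is the polynomial itself, so c = 0.
c_lin, d_lin = fit_tps(X, 1.0 + 2.0 * X[:, 0] - X[:, 1])
print(reproduces_data, np.allclose(c_lin, 0.0, atol=1e-7))
```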

Finally, let us come to the third minimal property of the radial basis function interpolant.

This one is connected with the power function.

Remember that we can rewrite the interpolant $s_{f,X}$ using the cardinal functions $\{u_j^*\}$ as

$$s_{f,X}(x) = \sum_{j=1}^N u_j^*(x)\, f(x_j).$$

Thus we could ask the question whether these coefficients $\{u_j^*(x)\}$ are the best possible, or, equivalently, what is the solution of

$$\inf_{u \in \mathbb{R}^N :\, \sum_j u_j p(x_j) = p(x),\ p \in \mathcal{P}} \ \ \sup_{f \in \mathcal{N}_\Phi(\Omega),\, |f|_{\mathcal{N}_\Phi(\Omega)} = 1} \Bigl| f(x) - \sum_{j=1}^N u_j f(x_j) \Bigr|. \qquad (13.2)$$

Theorem 13.3  Suppose that $\Phi \in C(\Omega \times \Omega)$ is a conditionally positive definite kernel with respect to the finite-dimensional space $\mathcal{P} \subseteq C(\Omega)$. Suppose further that $X$ is $\mathcal{P}$-unisolvent and $x \in \Omega$ is fixed. Then the solution vector to (13.2) is given by the cardinal functions


$u_j^*(x)$, $1 \le j \le N$; i.e., we have

$$\sup_{f \in \mathcal{N}_\Phi(\Omega),\, |f|_{\mathcal{N}_\Phi(\Omega)} = 1} |f(x) - s_{f,X}(x)| \ \le \ \sup_{f \in \mathcal{N}_\Phi(\Omega),\, |f|_{\mathcal{N}_\Phi(\Omega)} = 1} \Bigl| f(x) - \sum_{j=1}^N u_j f(x_j) \Bigr|$$

for all choices $u_1, \ldots, u_N \in \mathbb{R}$ with $\sum_{j=1}^N u_j p(x_j) = p(x)$, $p \in \mathcal{P}$.

Proof  Using again the representation for $f \in \mathcal{N}_\Phi(\Omega)$ from Theorem 10.17 leads us to

$$\begin{aligned}
f(x) - \sum_{j=1}^N u_j f(x_j) &= \Pi_{\mathcal{P}} f(x) + (f, G(\cdot, x))_{\mathcal{N}_\Phi(\Omega)} - \sum_{j=1}^N u_j \Pi_{\mathcal{P}} f(x_j) - \sum_{j=1}^N u_j (f, G(\cdot, x_j))_{\mathcal{N}_\Phi(\Omega)} \\
&= \Bigl( f,\ G(\cdot, x) - \sum_{j=1}^N u_j G(\cdot, x_j) \Bigr)_{\mathcal{N}_\Phi(\Omega)},
\end{aligned}$$

since the polynomial terms cancel: $\Pi_{\mathcal{P}} f \in \mathcal{P}$ and the admissible weights satisfy $\sum_{j=1}^N u_j p(x_j) = p(x)$ for every $p \in \mathcal{P}$.

This shows that the norm of the pointwise error functional is given by

$$\sup_{f \in \mathcal{N}_\Phi(\Omega),\, |f|_{\mathcal{N}_\Phi(\Omega)} = 1} \Bigl| f(x) - \sum_{j=1}^N u_j f(x_j) \Bigr| = \Bigl| G(\cdot, x) - \sum_{j=1}^N u_j G(\cdot, x_j) \Bigr|_{\mathcal{N}_\Phi(\Omega)} = Q^{1/2}(u),$$

where $Q$ is the quadratic form (here for $\alpha = 0$) from Lemma 11.3 and Theorem 11.5. But Theorem 11.5 simply states that $Q(u^*(x)) \le Q(u)$ for all admissible $u \in \mathbb{R}^N$.

Obviously, a stronger result holds as well when derivatives are included. We leave the

details, which are simple, to the reader.
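The optimality in Theorem 13.3 can be checked numerically in the positive definite case, where $\mathcal{P} = \{0\}$ and every $u \in \mathbb{R}^N$ is admissible (the kernel, sites, and evaluation point below are illustrative assumptions). Then $Q(u) = \Phi(x,x) - 2\sum_j u_j \Phi(x, x_j) + \sum_{j,k} u_j u_k \Phi(x_j, x_k)$, and the cardinal weight vector $u^*(x) = A^{-1} b$ with $b_j = \Phi(x, x_j)$ minimizes it.

```python
import numpy as np

phi = lambda r: np.exp(-(3.0 * r) ** 2)   # Gaussian: positive definite, P = {0}

X = np.linspace(0.0, 1.0, 7)              # data sites
x = 0.37                                  # fixed evaluation point

A = phi(np.abs(X[:, None] - X[None, :]))  # A_jk = phi(|x_j - x_k|)
b = phi(np.abs(x - X))                    # b_j = phi(|x - x_j|)

def Q(u):
    # Squared worst-case error of the functional f -> f(x) - sum_j u_j f(x_j)
    return phi(0.0) - 2.0 * (u @ b) + u @ A @ u

u_star = np.linalg.solve(A, b)            # cardinal weights u_j^*(x)

# Perturbed weight vectors never beat the cardinal weights.
rng = np.random.default_rng(2)
others_larger = all(
    Q(u_star) <= Q(u_star + rng.normal(scale=0.1, size=7)) + 1e-12
    for _ in range(5)
)
print(others_larger)
```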

Later on, the results of this section will be generalized to a setting where functionals

other than point evaluations are involved. The more general setting can be described as

follows. Suppose that we know of an unknown function $f \in \mathcal{N}_\Phi(\Omega)$ only the values $\lambda_j(f)$, $1 \le j \le N$, where $\lambda_j$ is a continuous linear functional on the native space. Suppose further that we have another functional $\lambda$ and that we are interested in finding the unknown value $\lambda(f)$ using only the values $\lambda_j(f)$. We will call such a problem a generalized interpolation

problem. It will be discussed extensively in Chapter 16.
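To make the setting concrete, here is a small generalized interpolation sketch of my own (the kernel, sites, and functionals are assumptions, not from the text): with the Gaussian kernel $\Phi(x,y) = e^{-(x-y)^2}$, take three point evaluations and one derivative evaluation as the data functionals $\lambda_j$, form the Gram matrix $G_{ij} = \lambda_i^x \lambda_j^y \Phi(x, y)$, and check that the generalized interpolant $s = \sum_j c_j \lambda_j^y \Phi(\cdot, y)$ reproduces all four data values.

```python
import numpy as np

# Kernel Phi(x, y) = exp(-(x - y)^2) and the derivatives that the derivative
# functional requires (computed by hand from the Gaussian).
K   = lambda x, y: np.exp(-(x - y)**2)
Ky  = lambda x, y: 2.0*(x - y)*np.exp(-(x - y)**2)            # d/dy Phi
Kx  = lambda x, y: -2.0*(x - y)*np.exp(-(x - y)**2)           # d/dx Phi
Kxy = lambda x, y: (2.0 - 4.0*(x - y)**2)*np.exp(-(x - y)**2) # d^2/dxdy Phi

t = np.array([0.0, 0.5, 1.0, 0.25])
f, df = np.sin, np.cos
data = np.array([f(t[0]), f(t[1]), f(t[2]), df(t[3])])  # lam_j(f)

# Gram matrix G_ij = lam_i^x lam_j^y Phi(x, y); index 3 is the derivative.
G = np.empty((4, 4))
for i in range(4):
    for j in range(4):
        li_d, lj_d = (i == 3), (j == 3)
        k = Kxy if (li_d and lj_d) else Kx if li_d else Ky if lj_d else K
        G[i, j] = k(t[i], t[j])

c = np.linalg.solve(G, data)

# s = sum_j c_j lam_j^y Phi(., y): three kernel translates plus one derivative.
s  = lambda x: sum(c[j]*K(x, t[j]) for j in range(3)) + c[3]*Ky(x, t[3])
ds = lambda x: sum(c[j]*Kx(x, t[j]) for j in range(3)) + c[3]*Kxy(x, t[3])

ok_repro = np.allclose([s(t[0]), s(t[1]), s(t[2]), ds(t[3])], data)
print(ok_repro)
```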

13.2 Abstract optimal recovery

The idea of optimally reconstructing functions can be put in a much more general framework,

which we will now describe. Moreover, we will show how this general framework applies

to radial basis function interpolation.

Let $U$, $V$, and $W$ be three normed linear spaces. Let $K$ be a subset of $U$. We assume that we have some information on the elements of $K$, given by the linear mapping $J : U \to V$. The mapping $J$ is called the information operator. Moreover, we have another linear operator $T : U \to W$, which we want to call the target operator.


Our task is to reconstruct for every $x \in K$ the target $Tx \in W$ from the information $Jx \in V$. This can be done by a mapping $A : J(K) \subseteq V \to W$, which need not be linear. Such a mapping $A$ is in this context called an algorithm. The whole situation can be visualized in the following diagram.

                J
    K ⊆ U ───────────→ J(K) ⊆ V
          \               /
        T  \             /  A
            ↘           ↙
                  W
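In this notation, radial basis function interpolation is one instance (a sketch under assumed choices: $U$ the native space of a Gaussian kernel on $[0,1]$, $V = \mathbb{R}^N$, $W = \mathbb{R}$): $J$ samples $f$ at the data sites, $T$ evaluates $f$ at a fixed point $x$, and the algorithm $A$ applies kernel interpolation to the sampled values.

```python
import numpy as np

phi = lambda r: np.exp(-(3.0 * r) ** 2)   # assumed Gaussian kernel
X = np.linspace(0.0, 1.0, 8)              # information sites
x = 0.4                                   # target site

J = lambda f: f(X)   # information operator J : U -> V,  Jf = (f(x_j))_j
T = lambda f: f(x)   # target operator      T : U -> W,  Tf = f(x)

def A(v):
    # algorithm A : J(K) -> W: kernel interpolation of the data v, evaluated at x
    Amat = phi(np.abs(X[:, None] - X[None, :]))
    return float(phi(np.abs(x - X)) @ np.linalg.solve(Amat, v))

f = np.sin
err = abs(A(J(f)) - T(f))
print(err < 1e-3)    # A recovers Tf from the information Jf up to a small error
```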