
where φ(r) = r^3, r ≥ 0, and p ∈ π_1(R). The coefficients {α_j} have to satisfy (1.6). Vice versa, for every set X = {x_1, ..., x_N} ⊆ R of pairwise distinct points and for every f ∈ R^N there exists a function s of the form (1.7) with (1.6) that interpolates the data, i.e. s(x_j) = f_j, 1 ≤ j ≤ N.

This is our first result on radial basis functions. The resulting interpolant is, up to a low-degree polynomial, a linear combination of shifts of a radial function Φ = φ(|·|). The function Φ is called radial because it is the composition of a univariate function with the Euclidean norm on R. In this sense it generalizes to R^d, where the name "radial" becomes even more apparent: a radial function is constant on spheres in R^d. A straightforward generalization is to build interpolants of the form

$$s(x) = \sum_{j=1}^{N} \alpha_j \,\phi(\|x - x_j\|_2) + p(x), \qquad x \in \mathbb{R}^d, \tag{1.8}$$

where φ : [0, ∞) → R is a fixed univariate function and p ∈ π_{m−1}(R^d) is a low-degree d-variate polynomial. The additional conditions on the coefficients now become

$$\sum_{j=1}^{N} \alpha_j \, q(x_j) = 0 \qquad \text{for all } q \in \pi_{m-1}(\mathbb{R}^d). \tag{1.9}$$
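To make the system behind (1.8) and (1.9) concrete, here is a minimal numerical sketch, assuming NumPy; the helper name `cubic_rbf_interpolant`, the point set, and the data are illustrative, not from the text. It uses φ(r) = r^3 with m = 2 (a linear polynomial tail) in R^2, and enforces the interpolation conditions together with (1.9) through the standard saddle-point system [A P; Pᵀ 0][α; c] = [f; 0], whose lower block rows Pᵀα = 0 are exactly the side conditions.

```python
import numpy as np

def cubic_rbf_interpolant(X, f):
    """Coefficients of s in (1.8) with phi(r) = r^3 and a linear
    polynomial tail (m = 2), subject to the side conditions (1.9)."""
    N, d = X.shape
    # Interpolation block: (phi(||x_k - x_j||_2))_{1 <= k, j <= N}
    R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    A = R**3
    # Polynomial block: the columns 1, x^(1), ..., x^(d) span pi_1(R^d)
    P = np.hstack([np.ones((N, 1)), X])
    # Saddle-point system [A P; P^T 0] [alpha; c] = [f; 0]
    Z = np.zeros((d + 1, d + 1))
    M = np.block([[A, P], [P.T, Z]])
    rhs = np.concatenate([f, np.zeros(d + 1)])
    sol = np.linalg.solve(M, rhs)
    alpha, c = sol[:N], sol[N:]

    def s(x):
        r = np.linalg.norm(X - x, axis=1)
        return alpha @ r**3 + c[0] + c[1:] @ x

    return s, alpha

rng = np.random.default_rng(0)
X = rng.random((8, 2))           # pairwise distinct points in R^2
f = np.sin(X[:, 0]) + X[:, 1]    # arbitrary data values
s, alpha = cubic_rbf_interpolant(X, f)
```

The same block structure applies to other basis functions that are conditionally positive definite of order at most m; only the matrix A changes.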

In many cases it is possible to get along without the additional polynomial in (1.8), so that one does not need the additional conditions (1.9). In this particular case the interpolation problem boils down to the question of whether the matrix A_{φ,X} = (φ(‖x_k − x_j‖_2))_{1≤k,j≤N} is nonsingular. To be more precise, we could even ask:

Does there exist a function φ : [0, ∞) → R such that for all d ∈ N, for all N ∈ N, and for all pairwise distinct x_1, ..., x_N ∈ R^d the matrix

$$A_{\phi,X} := \big( \phi(\|x_i - x_j\|_2) \big)_{1 \le i, j \le N}$$

is nonsingular?

Astonishingly, the answer is affirmative. Examples are the Gaussians φ(r) = e^{−αr^2}, α > 0, the inverse multiquadric φ(r) = 1/√(c^2 + r^2), and the multiquadric φ(r) = √(c^2 + r^2), c > 0. In the first two cases it is even true that the matrix A_{φ,X} is always positive definite. We will give characterizations of φ that ensure this property.
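These claims are easy to probe numerically. A small sketch, assuming NumPy; the parameter values α = 2 and c = 0.5 and the point set are arbitrary choices. For a random point set, the Gaussian and inverse multiquadric matrices have only positive eigenvalues, while the multiquadric matrix is nonsingular without being positive definite (its eigenvalues have mixed signs).

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((10, 3))                  # pairwise distinct points in R^3
R = np.linalg.norm(X[:, None] - X[None, :], axis=2)

a, c = 2.0, 0.5
A_gauss = np.exp(-a * R**2)              # Gaussian, phi(r) = exp(-alpha r^2)
A_imq = 1.0 / np.sqrt(c**2 + R**2)       # inverse multiquadric
A_mq = np.sqrt(c**2 + R**2)              # multiquadric

eig_gauss = np.linalg.eigvalsh(A_gauss)  # all positive: positive definite
eig_imq = np.linalg.eigvalsh(A_imq)      # all positive: positive definite
eig_mq = np.linalg.eigvalsh(A_mq)        # nonzero, but of mixed signs
```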

Having the interpolant (1.8) with a huge N in mind, it would be even more useful to have a compactly supported basis function, in order to reduce the number of necessary evaluations. Hence, almost impudently, we reformulate the previous question as:

Does there exist a function φ : [0, ∞) → R which satisfies all the properties mentioned in the last question and which, in addition, has compact support?

This time the answer is negative. But if we sacrifice the dimension, i.e. if the function does not have to work for all dimensions but only for a fixed one, then there also exist compactly supported functions yielding a positive definite interpolation matrix A_{φ,X}.
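For a fixed dimension this can be demonstrated directly. A sketch, assuming NumPy and using one compactly supported function of the kind meant here, φ(r) = (1 − r)_+^4 (4r + 1), which yields positive definite matrices for d ≤ 3; the point set is arbitrary. Because φ vanishes for r ≥ 1, many entries of A_{φ,X} are exactly zero, which is what makes such functions attractive for huge N.

```python
import numpy as np

def phi(r):
    """Compactly supported function (1 - r)_+^4 (4r + 1): support radius 1,
    positive definite on R^d for d <= 3 (but not for all d)."""
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

rng = np.random.default_rng(2)
X = 3.0 * rng.random((30, 3))          # points in [0, 3]^3, support radius 1
R = np.linalg.norm(X[:, None] - X[None, :], axis=2)
A = phi(R)

eigs = np.linalg.eigvalsh(A)           # all positive: positive definite
zero_fraction = np.mean(A == 0.0)      # sparsity caused by the compact support
```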


Much of the theory we have to develop to prove the statements just made is independent of the form of the basis function. Several results hold if we replace φ(‖x − x_j‖_2) in (1.8) by Φ(x − x_j) with Φ : R^d → R, or even by Φ(x, x_j) with Φ : Ω × Ω → R. Of course, the latter case can only work if X ⊆ Ω.

Even though we started with natural cubic splines, we have not yet used the minimal norm property. Instead, we have used the idea of shifting or translating a single function, which is also motivated by splines. So it remains to shed some light on how the minimal norm property can be carried over to a multivariate setting. We start by computing the L_2[a, b] inner product of the second-order derivatives of two natural cubic splines. Obviously, the polynomial part does not play a role, because differentiating it twice annihilates it.

Proposition 1.2 Let φ(r) = r^3, r ≥ 0. Suppose that s_X = Σ_{j=1}^N α_j φ(|· − x_j|) + p_1 and s_Y = Σ_{k=1}^M β_k φ(|· − y_k|) + p_2 are two natural cubic splines on [a, b]. Then

$$(s_X'', s_Y'')_{L_2[a,b]} = 12 \sum_{j=1}^{N} \sum_{k=1}^{M} \alpha_j \beta_k \,\phi(|x_j - y_k|).$$

Proof Using |x| = 2x_+ − x and (1.6) leads to

$$s_X''(x) = 6 \sum_{j=1}^{N} \alpha_j |x - x_j| = 12 \sum_{j=1}^{N} \alpha_j (x_j - x)_+ \tag{1.10}$$

and a corresponding result for s_Y. Next, on the one hand, we can employ Taylor's formula for a function f ∈ C^2[a, b] in the form

$$f(x) = f(a) + f'(a)(x - a) + \int_a^b f''(t)\,(x - t)_+ \, dt.$$

Setting f(x) = (y − x)_+^3 with a fixed y ∈ [a, b] yields

$$(y - x)_+^3 = (y - a)^3 - 3(y - a)^2 (x - a) + 6 \int_a^b (y - t)_+ (x - t)_+ \, dt.$$

On the other hand, we can use the representation (y − x)_+^3 = (|y − x|^3 + (y − x)^3)/2 to derive

$$|y - x|^3 = 2(y - a)^3 - 6(y - a)^2 (x - a) - (y - x)^3 + 12 \int_a^b (y - t)_+ (x - t)_+ \, dt,$$

which leads to

$$\sum_{j=1}^{N} \sum_{k=1}^{M} \alpha_j \beta_k \,\phi(|x_j - y_k|)
= 2 \sum_{j,k} \alpha_j \beta_k (y_k - a)^3
- \sum_{j,k} \alpha_j \beta_k (y_k - x_j)^3
- 6 \sum_{j,k} \alpha_j \beta_k (x_j - a)(y_k - a)^2
+ 12 \int_a^b \Big( \sum_{j} \alpha_j (x_j - t)_+ \Big) \Big( \sum_{k} \beta_k (y_k - t)_+ \Big) \, dt.$$


All sums on the right-hand side except those under the integral are zero, because of the annihilation effect (1.6) of the coefficients {α_j} and {β_k}. Hence, recalling (1.10) gives the stated result.
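Proposition 1.2 and the identity behind it can be checked numerically. A sketch, assuming NumPy; the helper names and point sets are illustrative. Random coefficient vectors are projected so that the side conditions (1.6) hold, the left-hand side 12 ΣΣ α_j β_k φ(|x_j − y_k|) is evaluated directly, and the right-hand side (s_X'', s_Y'')_{L_2} is evaluated via (1.10), using the closed form of the integral of (x_j − t)_+(y_k − t)_+.

```python
import numpy as np

def project(alpha0, x):
    """Project alpha0 so that the side conditions (1.6) hold:
    sum_j alpha_j = 0 and sum_j alpha_j x_j = 0."""
    C = np.vstack([np.ones_like(x), x])
    return alpha0 - C.T @ np.linalg.solve(C @ C.T, C @ alpha0)

def int_plus(x, y, a):
    """Closed form of the integral of (x - t)_+ (y - t)_+ over [a, b],
    valid for a <= x, y <= b: integrate (x - t)(y - t) up to min(x, y)."""
    m = min(x, y)
    if m <= a:
        return 0.0
    F = lambda t: x * y * t - (x + y) * t**2 / 2.0 + t**3 / 3.0
    return F(m) - F(a)

rng = np.random.default_rng(3)
a = 0.0                                      # the interval is [a, b] = [0, 1]
xj, yk = np.sort(rng.random(6)), np.sort(rng.random(5))
alpha = project(rng.standard_normal(6), xj)
beta = project(rng.standard_normal(5), yk)

# Left: 12 sum_jk alpha_j beta_k phi(|x_j - y_k|) as in Proposition 1.2.
lhs = 12.0 * sum(aj * bk * abs(x - y)**3
                 for aj, x in zip(alpha, xj) for bk, y in zip(beta, yk))
# Right: (s_X'', s_Y'')_{L_2} via (1.10), s_X''(t) = 12 sum_j alpha_j (x_j - t)_+.
rhs = 144.0 * sum(aj * bk * int_plus(x, y, a)
                  for aj, x in zip(alpha, xj) for bk, y in zip(beta, yk))
```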

This result has the interesting consequence that we can introduce the linear space

$$\mathcal{F}_\phi[a,b] := \Big\{ \sum_{j=1}^{N} \alpha_j \,\phi(|\cdot - x_j|) \;:\; N \in \mathbb{N},\ \alpha \in \mathbb{R}^N,\ X = \{x_j\} \subseteq [a,b],\ \text{with } \sum_{j=1}^{N} \alpha_j \, p(x_j) = 0 \text{ for all } p \in \pi_1(\mathbb{R}) \Big\}.$$

This space is linear if the sum of two different functions based on the point sets X and Y is defined on the refined point set X ∪ Y. Moreover, F_φ[a, b] carries the inner product

$$\Big( \sum_{j=1}^{N} \alpha_j \,\phi(|\cdot - x_j|), \; \sum_{k=1}^{M} \beta_k \,\phi(|\cdot - y_k|) \Big)_\phi := \sum_{j=1}^{N} \sum_{k=1}^{M} \alpha_j \beta_k \,\phi(|x_j - y_k|). \tag{1.11}$$

The reason that this is an inner product is that (s, s)_φ = 0 means that the linear spline s'' has to be zero, which is only the case if all coefficients are already zero. Of course, we assume as usual that the x_j are pairwise distinct. Finally, completing the space F_φ[a, b] with respect to ‖·‖_φ means completing the space of piecewise linear functions with respect to the classical L_2[a, b] inner product. Hence, standard arguments, which we will discuss thoroughly in Chapter 10, give the following characterization of H^2[a, b].

Corollary 1.3 Let φ(r) = r^3, r ≥ 0. The Sobolev space H^2[a, b] coincides with

$$\operatorname{clos}_{\|\cdot\|_\phi} \mathcal{F}_\phi[a,b] + \pi_1(\mathbb{R}).$$

Moreover, (1.11) defines a semi-inner product on H^2[a, b].