At this point it is not necessary that the data values $\{f_j\}$ actually stem from a function $f$, but we will keep this possibility in mind for reasons that will become clear later.

In the univariate case it is well known that $s$ can be chosen to be a polynomial $p$ of degree at most $N-1$, i.e. $p \in \pi_{N-1}(\mathbb{R})$. Or, more generally, if a Haar space $S \subseteq C(\mathbb{R})$ of dimension $N$ is fixed then it is always possible to find a unique interpolant $s \in S$. In this context the space $S$ has the remarkable property that it depends only on the number of points in $X$ and not on any other information about the data sites, let alone about the data values. Thus it would be reasonable to look for such spaces also in higher dimensions. Unfortunately, a famous theorem of Mairhuber and Curtis (Mairhuber [115], see also Chapter 2) states that this is impossible. Thus when working in space dimension $d \ge 2$ it is impossible to fix an $N$-dimensional function space beforehand that is appropriate for all sets of $N$ distinct data sites. However, probably no one with any experience in approximation theory would, even in the univariate case, try to interpolate a hundred thousand points with a polynomial.
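This difficulty can be watched in action with Runge's classical example, a standard illustration not taken from the text above. The sketch below, assuming NumPy and SciPy are available, interpolates a smooth function at equispaced nodes with a single high-degree polynomial and measures the error between the nodes:

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

# Runge's classical example: f(x) = 1/(1 + 25 x^2) interpolated at
# equispaced nodes by a single polynomial of matching degree.
def f(x):
    return 1.0 / (1.0 + 25.0 * x ** 2)

x_nodes = np.linspace(-1.0, 1.0, 21)        # 21 nodes -> degree-20 polynomial
p = BarycentricInterpolator(x_nodes, f(x_nodes))

x_fine = np.linspace(-1.0, 1.0, 2001)
max_err = np.max(np.abs(f(x_fine) - p(x_fine)))

assert np.allclose(p(x_nodes), f(x_nodes))  # interpolation itself succeeds ...
assert max_err > 1.0                        # ... but the error explodes between nodes
```

Raising the degree only makes the oscillations near the interval ends worse, which is one reason to let the approximation space depend on the data sites instead.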

The bottom line here is that for a successful interpolation scheme in $\mathbb{R}^d$ either conditions on the involved points have to be worked out, in such a way that a stable interpolation with polynomials is still possible, or the function space has to depend on the data sites. The latter concept is also well known in the univariate case. It is a well-established fact that a large data set is better dealt with by splines than by polynomials. In contrast to polynomials, the accuracy of the interpolation process using splines is not based on the polynomial degree but on the spacing of the data sites. Let us briefly review the properties of univariate splines in the special case of cubic splines. The set of cubic splines corresponding to a decomposition (1.2) is given by

$$S_3(X) = \{ s \in C^2[a,b] : s|_{[x_i, x_{i+1}]} \in \pi_3(\mathbb{R}),\ 0 \le i \le N \}, \qquad (1.3)$$

where $x_0 := a$, $x_{N+1} := b$. It consists of all twice continuously differentiable functions that coincide with cubic polynomials on the intervals given by $X$. The space $S_3(X)$ has dimension $\dim(S_3(X)) = N + 4$, so that the interpolation conditions $s(x_i) = f_i$, $1 \le i \le N$, do not suffice to guarantee a unique interpolant. Different strategies are possible to enforce uniqueness, and one of these is given by the concept of natural cubic splines. The set of natural cubic splines

$$NS_3(X) = \{ s \in S_3(X) : s|_{[a, x_1]},\ s|_{[x_N, b]} \in \pi_1(\mathbb{R}) \}$$

consists of all cubic splines that are linear polynomials on the outer intervals $[a, x_1]$ and $[x_N, b]$. It is easy to see that a cubic spline $s$ is a natural cubic spline if and only if it satisfies $s''(x_1) = s^{(3)}(x_1) = 0$ and $s''(x_N) = s^{(3)}(x_N) = 0$. Since we have imposed four additional conditions it is natural to expect that the dimension of $NS_3(X)$ is $\dim(NS_3(X)) = N$, which is indeed true. Even more, it can be shown that the initial interpolation problem has a unique solution in $NS_3(X)$. For this and all the other results on splines we refer the reader to Greville's article [75] or to the books by Schumaker [175] and de Boor [43].
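The uniqueness of the natural cubic spline interpolant can be checked numerically. The sketch below, assuming SciPy is available, uses `scipy.interpolate.CubicSpline` with `bc_type='natural'`, which enforces $s''(x_1) = s''(x_N) = 0$ and therefore agrees on $[x_1, x_N]$ with the natural cubic splines defined above:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Made-up data sites x_1 < ... < x_N and data values f_j.
x = np.array([0.0, 1.0, 2.5, 3.0, 4.5])
f = np.array([1.0, -1.0, 0.5, 2.0, 0.0])

# bc_type='natural' imposes vanishing second derivatives at both ends,
# i.e. the end conditions characterizing natural cubic splines.
s = CubicSpline(x, f, bc_type='natural')

assert np.allclose(s(x), f)        # the interpolation conditions hold
assert abs(s(x[0], 2)) < 1e-10     # s''(x_1) = 0
assert abs(s(x[-1], 2)) < 1e-10    # s''(x_N) = 0
```

Note that SciPy works on $[x_1, x_N]$ only; the text's definition additionally extends the spline linearly to the outer intervals, which does not change it on $[x_1, x_N]$.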

1.4 Learning from splines

This is not the end of the story, however; splines have several important properties and

we state some of them for the cubic case.

(1) They are piecewise polynomials.

(2) An interpolating natural cubic spline satisfies a minimal norm property. This can be formulated as follows. Suppose $f$ comes from the Sobolev space $H^2[a,b]$, i.e. $f \in C[a,b]$ has weak first- and second-order derivatives also in $L_2[a,b]$. (We will give a precise definition of this later on.) Assume further that $f$ satisfies $f(x_j) = f_j$, $1 \le j \le N$. If $s_{f,X}$ denotes the natural cubic spline interpolant then
$$( f'' - s''_{f,X},\ s''_{f,X} )_{L_2[a,b]} = 0.$$
This leads immediately to the Pythagorean equation
$$\| f'' - s''_{f,X} \|^2_{L_2[a,b]} + \| s''_{f,X} \|^2_{L_2[a,b]} = \| f'' \|^2_{L_2[a,b]},$$
which means that the natural cubic spline interpolant is that function from $H^2[a,b]$ that minimizes the semi-norm $\| f'' \|_{L_2[a,b]}$ under the conditions $f(x_j) = f_j$, $1 \le j \le N$.

(3) They possess a local basis (B-splines). These basis functions can be defined in various ways: by recursion, by divided differences, or by convolution.
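The minimal norm property (2) can be observed numerically. In the sketch below, assuming SciPy is available, the bending energy $\|s''\|^2_{L_2[a,b]}$ of the natural interpolant is compared with that of a clamped spline through the same data, which is just one other $H^2$ interpolant of the same values:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.interpolate import CubicSpline

# Made-up interpolation data for illustration.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
f = np.array([0.0, 2.0, -1.0, 1.0, 0.0])

def bending_energy(spline, a, b, n=20001):
    """Approximate the semi-norm ||s''||^2_{L2[a,b]} on a fine grid."""
    t = np.linspace(a, b, n)
    s2 = spline(t, 2)                 # second derivative of the spline
    return trapezoid(s2 * s2, t)

natural = CubicSpline(x, f, bc_type='natural')
clamped = CubicSpline(x, f, bc_type='clamped')  # same data, other end conditions

e_nat = bending_energy(natural, x[0], x[-1])
e_cla = bending_energy(clamped, x[0], x[-1])

# The natural interpolant minimizes the energy over all H^2 interpolants,
# so in particular it cannot exceed the clamped spline's energy.
assert e_nat <= e_cla + 1e-8
```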

Of course, this list gives only a few properties of splines. For more information, we refer

the interested reader to the previously cited sources on splines.
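For property (3), the local support of a single cubic B-spline basis function can be inspected directly. A minimal sketch, assuming SciPy's `BSpline.basis_element`; the uniform knot vector is chosen for illustration only:

```python
import numpy as np
from scipy.interpolate import BSpline

# The cubic "cardinal" B-spline living on the simple knots 0, 1, 2, 3, 4.
b = BSpline.basis_element([0, 1, 2, 3, 4])

# Known values of the cardinal cubic B-spline at its interior knots.
assert np.isclose(b(1.0), 1.0 / 6.0)
assert np.isclose(b(2.0), 2.0 / 3.0)
assert np.isclose(b(3.0), 1.0 / 6.0)

# It vanishes at the ends of its knot span: each basis function
# influences only a few neighbouring intervals (local support).
assert np.isclose(b(0.0), 0.0)
```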

The most dominant feature of splines, which has contributed most to their success, is that they are piecewise polynomials. This feature together with a local basis not only allows the efficient computation and evaluation of spline functions but also is the key ingredient for a simple error analysis. Hence, the natural way of extending splines to the multivariate setting is based on this property. To this end, a bounded region $\Omega \subseteq \mathbb{R}^d$ is partitioned into essentially disjoint subregions $\{\Omega_j\}_{j=1}^N$. Then the spline space consists simply of those functions $s$ that are piecewise polynomials on each patch $\Omega_j$ and that have

smooth connections on the boundaries of two adjacent patches. In two dimensions the most

popular partition of a polygonal region is based on a triangulation. Even in this simplest case,

however, the dimension of the spline space is in general unknown (see Schumaker [176]).

Moreover, when coming to higher dimensions it is not at all clear what an appropriate

replacement for the triangulation would be. Hence, even if substantial progress has been

made in the two-dimensional setting, the method is not suited for general dimensions.

Another possible generalization to the multivariate setting is based on the third property. In particular, a construction based on convolution has led to the theory of box splines (see de Boor et al. [44]). Again, even the two-dimensional setting is tough to handle, let alone the higher-dimensional cases.

Hence, we want to take the second property as the motivation for a framework in higher dimensions. This approach leads to a remarkably beautiful theory, where all space dimensions can be handled in the same way. Since the resulting approximation spaces no longer consist of piecewise polynomials, we do not want to call the functions splines. The buzz phrase, which has become popular in this field, is radial basis functions.


To get an idea of radial basis functions let us stick a little longer with natural cubic splines. It is well known that the set $S_3(X)$ has the basis $(\cdot - x_j)_+^3$, $1 \le j \le N$, plus an arbitrary basis for $\pi_3(\mathbb{R})$. Here, $x_+$ takes the value of $x$ for nonnegative $x$ and zero in the other case. Hence, every $s \in NS_3(X)$ has a representation of the form
$$s(x) = \sum_{j=1}^{N} \alpha_j (x - x_j)_+^3 + \sum_{j=0}^{3} \beta_j x^j, \qquad x \in [a,b]. \qquad (1.4)$$

Because $s$ is a natural spline we have the additional information that $s$ is linear on the two outer intervals. On $[a, x_1]$ it has the representation $s(x) = \sum_{j=0}^{3} \beta_j x^j$, so that necessarily $\beta_2 = \beta_3 = 0$. Thus, (1.4) becomes
$$s(x) = \sum_{j=1}^{N} \alpha_j (x - x_j)_+^3 + \beta_0 + \beta_1 x, \qquad x \in [a,b]. \qquad (1.5)$$

To derive the representation of $s$ on $[x_N, b]$ we simply have to remove all subscripts $+$ on the functions $(\cdot - x_j)_+^3$ in (1.5). Expanding these cubics and rearranging the sums leads to
$$s(x) = \sum_{\ell=0}^{3} \sum_{j=1}^{N} \alpha_j \binom{3}{\ell} (-1)^{3-\ell} x_j^{3-\ell}\, x^{\ell} + \beta_0 + \beta_1 x, \qquad x \in [x_N, b].$$
Since $s$ is linear on $[x_N, b]$, the coefficients of $x^3$ and $x^2$ in this expansion have to vanish; that is, for $s$ to be a natural spline its coefficients have to satisfy
$$\sum_{j=1}^{N} \alpha_j = \sum_{j=1}^{N} \alpha_j x_j = 0. \qquad (1.6)$$
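The effect of (1.6) can be checked numerically: any coefficient vector $\alpha$ orthogonal to $\mathbf{1}$ and to $(x_1, \ldots, x_N)$ makes $\sum_j \alpha_j (x - x_j)^3$ collapse to a linear function. A small sketch with made-up sites and coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
x_j = np.sort(rng.uniform(0.0, 1.0, 6))        # data sites x_1 < ... < x_N

# Enforce (1.6): subtract from a random vector its least-squares
# projection onto span{1, x_j}; the residual satisfies both sums = 0.
A = np.vstack([np.ones_like(x_j), x_j])        # the two constraint rows
alpha = rng.standard_normal(6)
alpha -= A.T @ np.linalg.lstsq(A.T, alpha, rcond=None)[0]
assert abs(alpha.sum()) < 1e-12 and abs(alpha @ x_j) < 1e-12

# To the right of the last site, (x - x_j)_+^3 = (x - x_j)^3; with (1.6)
# the cubic and quadratic terms cancel, so the sum is linear there.
def tail(x):
    return float(np.sum(alpha * (x - x_j) ** 3))

v = [tail(t) for t in (1.5, 2.0, 2.5)]         # points beyond x_N = max(x_j)
assert abs(v[1] - 0.5 * (v[0] + v[2])) < 1e-10 # equispaced values are collinear
```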

This is a first characterization of natural cubic splines. But we can do more. Using the identity $x_+^3 = (|x|^3 + x^3)/2$ leads, because of (1.6), to
$$
\begin{aligned}
s(x) &= \sum_{j=1}^{N} \frac{\alpha_j}{2} |x - x_j|^3 + \sum_{j=1}^{N} \frac{\alpha_j}{2} (x - x_j)^3 + \beta_0 + \beta_1 x \\
&= \sum_{j=1}^{N} \frac{\alpha_j}{2} |x - x_j|^3 + \sum_{\ell=0}^{3} \sum_{j=1}^{N} \frac{\alpha_j}{2} \binom{3}{\ell} (-1)^{3-\ell} x_j^{3-\ell}\, x^{\ell} + \beta_0 + \beta_1 x \\
&= \sum_{j=1}^{N} \widetilde{\alpha}_j |x - x_j|^3 + \widetilde{\beta}_0 + \widetilde{\beta}_1 x,
\end{aligned}
$$
with $\widetilde{\alpha}_j = \frac{1}{2} \alpha_j$, $1 \le j \le N$, and $\widetilde{\beta}_0 = \beta_0 - \frac{1}{2} \sum_j \alpha_j x_j^3$, $\widetilde{\beta}_1 = \beta_1 + \frac{3}{2} \sum_j \alpha_j x_j^2$.
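The last representation suggests computing the natural spline interpolant directly in "radial" form: solve for $\widetilde{\alpha}_j$, $\widetilde{\beta}_0$, $\widetilde{\beta}_1$ from the interpolation conditions together with the side conditions (1.6) as one $(N+2) \times (N+2)$ linear system. A sketch, assuming SciPy is available for the comparison against a standard natural spline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Made-up data sites and values.
x = np.array([0.0, 0.7, 1.3, 2.2, 3.0])
f = np.array([1.0, -0.5, 0.3, 2.0, 0.0])
N = len(x)

# Interpolation matrix for s(t) = sum_j a_j |t - x_j|^3 + b0 + b1 t,
# bordered by the side conditions sum(a_j) = sum(a_j x_j) = 0.
Phi = np.abs(x[:, None] - x[None, :]) ** 3
P = np.vstack([np.ones(N), x]).T                  # the linear part [1, x_j]
A = np.block([[Phi, P], [P.T, np.zeros((2, 2))]])
coef = np.linalg.solve(A, np.concatenate([f, np.zeros(2)]))
a, b0, b1 = coef[:N], coef[N], coef[N + 1]

def s(t):
    t = np.asarray(t, dtype=float)
    return (np.abs(t[:, None] - x[None, :]) ** 3) @ a + b0 + b1 * t

# The result coincides with the natural cubic spline interpolant on [x_1, x_N].
nat = CubicSpline(x, f, bc_type='natural')
t = np.linspace(x[0], x[-1], 200)
assert np.allclose(s(t), nat(t))
```

This is exactly the shape that radial basis function interpolants will take: a translation-invariant kernel, here $|x - y|^3$, plus a low-degree polynomial with matching side conditions.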

Proposition 1.1 Every natural cubic spline s has a representation of the form

N