1.7 Notes and comments

The examples shown in this chapter use data sets that have courteously been provided by the following persons and institutions. The glacier data set can be found at Richard Franke's homepage. The dragon comes from the homepage of Stanford University. The aircraft model and data are due to the European Aeronautic Defence and Space Company (EADS).

It seems that the implicit reconstruction of a surface from unorganized points goes hand in hand with radial basis function interpolation and approximation. The first contributions in this direction dealt with Gaussians and are known under the name blobby surfaces or metaballs. They were invented independently by Blinn [27] and Hishimura et al. [84] in the early eighties. Then Turk and O'Brien together with various coauthors used thin-plate splines for modeling implicit surfaces; see for example [184–186, 206]. Owing to the global character of these basis functions, however, all these examples were restricted to rather small data sets. The remedy to this problem has been twofold. On the one hand, Morse et al. [137] used the compactly supported radial basis functions devised by the present author [190]. On the other hand, fast evaluation methods based on far-field expansions have been developed for most of the globally supported radial basis functions. Results based on these methods can be found in a paper by Carr et al. [37]. Other more recent contributions are by Ohtake et al. [151, 152]. The idea of computing and orienting surface normals goes back to Hoppe; see [86, 87]. In [153], Pasko and Savchenko describe smooth ways for several CSG operations.

The idea of treating a fluid–structure interaction problem in a coupled-field formulation, so that the interaction is restricted to the exchange of boundary conditions, has been employed by Farhat [52] and Kutler [100]. Among others, Farhat and coworkers [51, 52],

Beckert [19], and Cebral and Löhner [38] have used different interpolation strategies to exchange the boundary conditions in an iterative or staggered procedure. Hounjet and Meijer [88] were the first to use thin-plate splines in this context. Beckert and Wendland [20] extended their approach to general radial basis functions, in particular to compactly supported ones.

The theory of semi-Lagrangian methods for the numerical solution of advection equations was described and analyzed in [50] by Falcone and Ferretti. Behrens and Iske [21, 76] were the first to use radial basis functions in the spatial reconstruction process. Other methods based on (generalized) scattered data interpolation for hyperbolic conservation laws have been investigated by Sonar [179] and Lorentz et al. [109].

2 Haar spaces and multivariate polynomials

Every book on numerical analysis has a chapter on univariate polynomial interpolation. For given data sites x_1 < x_2 < ··· < x_N and function values f_1, . . . , f_N there exists exactly one polynomial p_f ∈ π_{N−1}(R^1) that interpolates the data at the data sites. One of the interesting aspects of polynomial interpolation is that the space π_{N−1}(R^1) depends neither on the data sites nor on the function values but only on the number of points. While the latter is intuitively clear, it is quite surprising that the space can also be chosen independently of the data sites. In approximation theory the concept whereby the space is independent of the data has been generalized and we will shortly review the necessary results. Unfortunately, it will turn out that the situation is not as favorable in the multivariate case.
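The univariate statement is easy to check numerically: for distinct data sites the collocation matrix in the monomial basis is the (nonsingular) Vandermonde matrix, so the interpolation system has exactly one solution. A minimal sketch in Python, assuming NumPy is available; the data sites and values are illustrative choices, not from the text:

```python
import numpy as np

# Distinct data sites x_1 < ... < x_N and arbitrary values f_1, ..., f_N.
x = np.array([0.0, 0.5, 1.2, 2.0])
f = np.array([1.0, -2.0, 0.5, 3.0])
N = len(x)

# Columns x^0, ..., x^(N-1): the monomial basis of pi_{N-1}(R^1).
A = np.vander(x, N, increasing=True)

# Nonsingular for distinct nodes, so the interpolant p_f is unique.
assert abs(np.linalg.det(A)) > 1e-12
coeffs = np.linalg.solve(A, f)
p = np.polynomial.polynomial.Polynomial(coeffs)

assert np.allclose(p(x), f)  # p_f interpolates the given data
```

Note that the space π_{N−1}(R^1) used here was fixed before looking at the data sites, exactly as the text describes.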

2.1 The Mairhuber–Curtis theorem

Motivated by the univariate polynomials we give

Definition 2.1 Suppose that Ω ⊆ R^d contains at least N points. Let V ⊆ C(Ω) be an N-dimensional linear space. Then V is called a Haar space of dimension N on Ω if for arbitrary distinct points x_1, . . . , x_N ∈ Ω and arbitrary f_1, . . . , f_N ∈ R there exists exactly one function s ∈ V with s(x_i) = f_i, 1 ≤ i ≤ N.

In the sense of this definition, V = π_{N−1}(R^1) is an N-dimensional Haar space for any subset of R that contains at least N points. Haar spaces, which are sometimes also called Chebyshev spaces, can be characterized in many ways. Let us collect some alternative characterizations straight away.

Theorem 2.2 Under the conditions of Definition 2.1 the following statements are equivalent:

(1) V is an N-dimensional Haar space.
(2) Every u ∈ V \ {0} has at most N − 1 zeros.
(3) For any distinct points x_1, . . . , x_N ∈ Ω and any basis u_1, . . . , u_N of V we have det(u_j(x_i)) ≠ 0.


Proof Suppose that V is an N-dimensional Haar space and u ∈ V \ {0} has N zeros, say x_1, . . . , x_N. In this case u and the zero function both interpolate zero on these N points. From the uniqueness we can conclude that u ≡ 0, in contrast with our assumption.

Next, let us assume that the second property is satisfied. If det A = 0 with A = (u_j(x_i)) then there exists a vector α ∈ R^N \ {0} with Aα = 0, i.e. \sum_{j=1}^{N} α_j u_j(x_i) = 0 for 1 ≤ i ≤ N. This means that the function u := \sum_{j=1}^{N} α_j u_j has N zeros and must therefore be identically zero. This is impossible since α ≠ 0.

Finally, if the third property is satisfied then we can make the ansatz u = \sum_{j=1}^{N} α_j u_j for the interpolant. Obviously, the interpolation conditions become

\sum_{j=1}^{N} \alpha_j u_j(x_i) = f_i, \qquad 1 \le i \le N.

Now the coefficient vector, and hence u, is uniquely determined because A = (u_j(x_i)) is nonsingular.
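The equivalences can be illustrated numerically: by uniqueness, the interpolant does not depend on which basis of the Haar space is used in the ansatz. A short Python sketch, assuming NumPy; the nodes, values, and the choice of π_3(R^1) with monomial versus Chebyshev bases are illustrative, not from the text:

```python
import numpy as np

# Distinct nodes in [-1, 1] and arbitrary data; pi_3(R^1) is a Haar space.
x = np.array([-0.9, -0.3, 0.4, 0.8])
f = np.array([0.2, 1.0, -0.5, 0.7])
N = len(x)

mono = lambda j, t: t ** j                      # monomial basis
cheb = lambda j, t: np.cos(j * np.arccos(t))    # Chebyshev basis T_j on [-1, 1]

t = np.linspace(-1, 1, 200)
interpolants = []
for basis in (mono, cheb):
    # Collocation matrix A = (u_j(x_i)); nonsingular by Theorem 2.2(3).
    A = np.array([[basis(j, xi) for j in range(N)] for xi in x])
    assert abs(np.linalg.det(A)) > 1e-12
    alpha = np.linalg.solve(A, f)               # the ansatz u = sum alpha_j u_j
    interpolants.append(sum(a * basis(j, t) for j, a in enumerate(alpha)))

# Uniqueness: both bases yield the same interpolant.
assert np.allclose(interpolants[0], interpolants[1])
```

The determinant check corresponds to property (3), and the agreement of the two interpolants reflects the uniqueness asserted in property (1).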

After this characterization of Haar spaces we turn to the question whether Haar spaces exist in higher space dimensions. The next result shows that this is the case only in simple situations.

Theorem 2.3 (Mairhuber–Curtis) Suppose that Ω ⊆ R^d, d ≥ 2, contains an interior point. Then there exists no Haar space on Ω of dimension N ≥ 2.

Proof Suppose that U = span{u_1, . . . , u_N} is a Haar space on Ω. As Ω contains an interior point there exists a ball B(x_0, δ) ⊆ Ω with radius δ > 0, and we can fix pairwise distinct x_3, . . . , x_N ∈ B(x_0, δ). Next we choose two continuous curves x_1(t), x_2(t), t ∈ [0, 1], such that x_1(0) = x_2(1), x_1(1) = x_2(0), and such that the curves neither have any other points of intersection nor have any common points with {x_3, . . . , x_N}. This is possible since d ≥ 2. Then on the one hand, since U is assumed to be a Haar space on Ω, the function

D(t) := \det\bigl((u_j(x_k(t)))_{1 \le j,k \le N}\bigr), \qquad \text{where } x_k(t) := x_k \text{ for } 3 \le k \le N,

is continuous and does not change sign. On the other hand D(1) = −D(0), because only the first two rows of the involved matrices are exchanged. Thus D must change sign, which is a contradiction.
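The sign-flip argument can be observed concretely. In the following Python sketch (assuming NumPy; the curves and the two-dimensional candidate space are illustrative choices, not from the text), two antipodal points travel along the unit circle in R^2 and swap places without ever colliding, which is possible only because d ≥ 2:

```python
import numpy as np

# Candidate 2-dimensional space on R^2: u_1 = 1, u_2 = first coordinate.
u = [lambda p: 1.0, lambda p: p[0]]

# Continuous swap: x_1(0) = x_2(1) and x_1(1) = x_2(0), no collision.
def x1(t):
    return np.array([np.cos(np.pi * t), np.sin(np.pi * t)])

def x2(t):
    return -x1(t)  # antipodal point

def D(t):
    # D(t) = det((u_j(x_k(t)))); here a 2x2 determinant.
    pts = [x1(t), x2(t)]
    return np.linalg.det(np.array([[uj(p) for uj in u] for p in pts]))

# Swapping the two rows flips the sign of the determinant ...
assert np.isclose(D(1.0), -D(0.0))
# ... and since D is continuous it must vanish in between,
# so the space cannot be a Haar space (here D(1/2) = 0).
assert abs(D(0.5)) < 1e-12
```

In this example D(t) = −2 cos(πt), which makes the intermediate zero at t = 1/2 explicit.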

2.2 Multivariate polynomials

So far we know that, in contrast with their univariate siblings, multivariate polynomials cannot form a Haar space. Hence, it is impossible to interpolate all kinds of data at any set of data sites X = {x_1, . . . , x_Q}, Q = dim π_m(R^d), by polynomials from π_m(R^d). Nonetheless, polynomial interpolation plays an important role even in the multivariate setting. But we have to restrict the sets of data sites for which interpolation is possible. This, together with some other elementary facts about multivariate polynomials, is the subject of this section.


Lemma 2.4 For d = 1, 2, 3, . . . and m = 0, 1, 2, . . . we have

\sum_{k=0}^{m} \binom{k+d}{d} = \binom{m+1+d}{d+1}.

Proof We use induction on m. For m = 0 there is nothing to prove. Now suppose that we have already proven the assertion for m and want to conclude it for m + 1. This is done by writing

\sum_{k=0}^{m+1} \binom{k+d}{d} = \sum_{k=0}^{m} \binom{k+d}{d} + \binom{m+1+d}{d}
  = \binom{m+1+d}{d+1} + \binom{m+1+d}{d}
  = \frac{(m+1+d)!}{m!\,(d+1)!} + \frac{(m+1+d)!}{d!\,(m+1)!}
  = \frac{(m+1+d)!\,(m+d+2)}{(m+1)!\,(d+1)!}
  = \binom{m+2+d}{d+1}.
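The identity of Lemma 2.4 is easy to verify numerically, together with the dimension count Q = dim π_m(R^d) = C(m+d, d) that appears in the text above. A short sketch using only the Python standard library (the ranges of d and m are illustrative):

```python
from math import comb

# Check Lemma 2.4: sum_{k=0}^{m} C(k+d, d) = C(m+1+d, d+1)
# over a range of dimensions d and degrees m.
for d in range(1, 6):
    for m in range(0, 10):
        lhs = sum(comb(k + d, d) for k in range(m + 1))
        rhs = comb(m + 1 + d, d + 1)
        assert lhs == rhs

# Related dimension formula: dim pi_m(R^d) = C(m+d, d),
# e.g. dim pi_2(R^3) = C(5, 3) = 10.
assert comb(2 + 3, 3) == 10
```

Such a finite check is of course no substitute for the inductive proof, but it is a convenient sanity test when implementing polynomial bases of π_m(R^d).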