Hence, if we fix a $\gamma \in (0, 1)$ and use the Chebychev norm on $X$, we then have to choose $X_j$ in such a way that the maximum of $f_{j-1}$ on $X_j$ is at least $\gamma$ times the maximum of $f_{j-1}$ on the whole set $X$. Unfortunately, convergence is not linear in this $\gamma$ but rather in $\tilde{\gamma} = 1 - (c_1 \gamma / C_1)^2$, and the values of $c_1$ and $C_1$ can cause this constant to be rather close to one, making convergence quite slow, at least theoretically.
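To see what a contraction factor close to one means in practice, the number of steps needed to reach a prescribed accuracy can be computed directly. The snippet below is only an illustration with made-up numbers, not taken from the text: for a linear contraction factor $q$ per step, the error $e_0 q^j$ falls below $\epsilon$ once $j \ge \log(\epsilon/e_0)/\log q$.

```python
import math

def steps_needed(e0, eps, q):
    # Smallest j with e0 * q**j <= eps, for a linear contraction factor q in (0, 1).
    return math.ceil(math.log(eps / e0) / math.log(q))

# A comfortable rate halves the error in every step:
print(steps_needed(1.0, 1e-2, 0.5))      # 7 steps

# A rate close to one, as caused by unfavourable c1 and C1, is dramatically slower:
print(steps_needed(1.0, 1e-2, 0.999))
```

Even a modest target accuracy thus costs thousands of steps when the contraction factor is of the order $1 - 10^{-3}$.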

But if we use orthogonality again, we find
$$
\|f\|^2_{\mathcal{N}_\Phi(\Omega)} - \|f_k\|^2_{\mathcal{N}_\Phi(\Omega)}
= \sum_{j=1}^{k} \left( \|f_{j-1}\|^2_{\mathcal{N}_\Phi(\Omega)} - \|f_j\|^2_{\mathcal{N}_\Phi(\Omega)} \right)
= \sum_{j=1}^{k} \|P_j f_{j-1}\|^2_{\mathcal{N}_\Phi(\Omega)},
$$

showing that $\|P_j f_{j-1}\|^2_{\mathcal{N}_\Phi(\Omega)}$ converges to zero at least as fast as $1/j$, and thus so do $\|f_{j-1}\|^2_{\mathcal{N}_\Phi(\Omega)}$ and $\|f_{j-1}\|^2_{L_p(X)}$. This does not amount to much, but it does at least allow us to give more accurate estimates on the number of steps necessary to bring the approximation error below a certain level.
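The norm identity in the display above can be checked numerically whenever the initial residual lies in the span of kernel translates, since then every native-space norm is a quadratic form in the coefficient vector. The sketch below is a hypothetical setup, not from the text: a Gaussian kernel on five points in one dimension, with one-point projections, verifying that $\|f\|^2 - \|f_k\|^2$ equals the accumulated $\|P_j f_{j-1}\|^2$.

```python
import math

# Hypothetical kernel choice: a Gaussian Phi(x, y) on points in R^1.
def kernel(x, y):
    return math.exp(-2.0 * (x - y) ** 2)

X = [0.0, 1.0, 2.0, 3.0, 4.0]
N = len(X)
A = [[kernel(x, y) for y in X] for x in X]     # Gram matrix

def native_sq(c):
    # ||sum_i c_i Phi(., x_i)||^2 in the native space equals c^T A c.
    return sum(c[i] * A[i][j] * c[j] for i in range(N) for j in range(N))

c = [1.0, -0.5, 2.0, 0.3, -1.2]                # coefficients of f (arbitrary)
start = native_sq(c)

accumulated = 0.0
for j in range(10):
    r = [sum(A[i][m] * c[m] for m in range(N)) for i in range(N)]  # f_{j-1}(x_i)
    k = max(range(N), key=lambda i: abs(r[i])) # point of largest residual
    beta = r[k] / A[k][k]                      # one-point projection coefficient
    accumulated += beta * r[k]                 # ||P_j f_{j-1}||^2 = beta^2 Phi(x_k,x_k)
    c[k] -= beta                               # f_j = f_{j-1} - beta Phi(., x_k)

# Telescoping identity: ||f||^2 - ||f_k||^2 = sum_j ||P_j f_{j-1}||^2
assert abs((start - native_sq(c)) - accumulated) < 1e-10
print("identity verified")
```

The identity holds exactly in exact arithmetic, for any choice of points, because each projection is orthogonal in the native space.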

Since we have to solve a linear system at each step we want to keep the number of points in $X_j$ small. It is interesting to see that Proposition 15.24 and its corollary give linear convergence, even if we choose $X_j = \{x_{k_j}\}$ to consist of only one point, if we simply make sure that
$$
|f_{j-1}(x_{k_j})| \ge \gamma \, \|f_{j-1}\|_{L_\infty(X)}.
$$

For example, we can choose $x_{k_j}$ to be the point where $f_{j-1}$ attains its maximum. Moreover, in this situation we do not even have a linear system to solve; the new interpolant is known explicitly from
$$
P_j f_{j-1} = \frac{f_{j-1}(x_{k_j})}{\Phi(x_{k_j}, x_{k_j})} \, \Phi(\cdot, x_{k_j}).
$$


We summarize this simple approach in Algorithm 16. Our analysis so far ensures linear convergence, and we now investigate the complexity of Algorithm 16.

Algorithm 16 Greedy one-point algorithm
Input: Set $X$ of centers, data values $f|_X$, accuracy $\epsilon > 0$.
Output: Approximation $s$.
Set $f_0 = f|_X$, $s_0 = 0$, $\mathrm{max} = \infty$, $j = 1$.
Choose $x_{k_1} \in X$.
while $\mathrm{max} > \epsilon$ do
    Set $\beta = f_{j-1}(x_{k_j}) / \Phi(x_{k_j}, x_{k_j})$.
    for $1 \le i \le N$ do
        Replace $f_{j-1}(x_i)$ by $f_j(x_i) = f_{j-1}(x_i) - \beta \, \Phi(x_i, x_{k_j})$.
    Replace $s_{j-1}$ by $s_j = s_{j-1} + \beta \, \Phi(\cdot, x_{k_j})$.
    Keep track of the maximum $\mathrm{max}$ of the new residual $f_j$ and the data site $x_{k_{j+1}}$ where it is attained.
    Replace $j$ by $j + 1$.
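A direct transcription of Algorithm 16 might look as follows. This is only a sketch under assumptions the text leaves open: it works in one dimension and takes a Gaussian for the positive definite kernel $\Phi$; the data and parameter values are purely illustrative.

```python
import math

def kernel(x, y):
    # Assumed positive definite kernel Phi; here a Gaussian in one dimension.
    return math.exp(-2.0 * (x - y) ** 2)

def greedy_one_point(X, f, eps, max_iter=2000):
    """Algorithm 16: greedy one-point approximation.

    Returns the terms (beta_j, k_j) of s = sum_j beta_j Phi(., x_{k_j})
    together with the final residual values on X.
    """
    N = len(X)
    resid = list(f)                                    # f_0 = f|_X
    terms = []                                         # representation of s
    k = max(range(N), key=lambda i: abs(resid[i]))     # choose x_{k_1}
    for _ in range(max_iter):
        if abs(resid[k]) <= eps:                       # max <= eps: stop
            break
        beta = resid[k] / kernel(X[k], X[k])
        for i in range(N):                             # update residual on X
            resid[i] -= beta * kernel(X[i], X[k])
        terms.append((beta, k))
        k = max(range(N), key=lambda i: abs(resid[i])) # next data site
    return terms, resid

def evaluate(terms, X, x):
    # Evaluate the approximation s at an arbitrary point x.
    return sum(beta * kernel(x, X[k]) for beta, k in terms)

X = [0.0, 1.0, 2.0, 3.0, 4.0]
f = [math.sin(x) for x in X]
terms, resid = greedy_one_point(X, f, eps=1e-8)
print(len(terms), max(abs(r) for r in resid))
```

Note that each pass of the loop costs O(N) kernel evaluations, which matches the complexity discussion that follows.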

It is obvious that one iteration of the "while" loop takes O(N) time if no additional information such as compact support or a far-field expansion is used. With this information, however, we can reduce the time needed to O(1) or O(log N), making the algorithm very fast. Examples show that it produces reasonable approximations even when the number of points is much less than the initial number N. Hence, the algorithm can also be used for compressing data.

We end this section with a short discussion of how additional benefit can be drawn from using a compactly supported basis function with different support radii. To this end we fix a radial function $\Phi = \phi(\|\cdot\|_2)$ having support in the unit ball and scale it via $\Phi_\delta = \Phi(\cdot/\delta)$. Next we fix certain constants $\alpha, \epsilon > 0$ for recording the accuracy, further constants $0 < \gamma < \beta < 1$ for influencing the support radius, and a constant $\sigma > 1$ for counting points. Finally, we fix a discrete norm on $X$.

In what follows, a successful try in $K$ steps is defined as a run of $K$ steps of Algorithm 16 at a fixed scale $\delta$ such that the discrete norm is reduced by a factor of at least $\alpha$.

The new algorithm can be described as follows. There is an outer loop that runs over successful tries until the norm of the residuals falls under $\epsilon$. The middle loop uses larger and larger numbers of iterations $K = \sigma, \sigma^2, \ldots$ and the inner loop uses support radii $c, c\beta, c\beta^2, \ldots, c\gamma$ until a successful try is found.

Since we know that Algorithm 16 converges, a sufficiently large number $K$ will lead to a successful try. Hence the new algorithm has to converge. Note that it aims to use as few points as possible in each successful try. Moreover, it uses the largest possible support radius.
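The three nested loops can be sketched as follows. Everything concrete here is an assumption made for illustration, not fixed by the text: one-dimensional data, Wendland's compactly supported function $\phi(r) = (1-r)^4(4r+1)$ for $\Phi$, the maximum norm as the discrete norm, and arbitrarily chosen values for $\alpha$, $\beta$, $\gamma$, $\sigma$, and the initial radius $c$.

```python
def phi(r):
    # A compactly supported, positive definite radial function (Wendland's
    # C^2 function), supported in the unit ball; phi(0) = 1.
    return (1.0 - r) ** 4 * (4.0 * r + 1.0) if r < 1.0 else 0.0

def kernel(x, y, delta):
    return phi(abs(x - y) / delta)               # scaled version Phi_delta

def greedy_steps(X, resid, K, delta):
    # K steps of Algorithm 16 at fixed scale delta; returns the new residual.
    r = list(resid)
    N = len(X)
    for _ in range(K):
        k = max(range(N), key=lambda i: abs(r[i]))
        b = r[k] / kernel(X[k], X[k], delta)     # kernel(x, x, delta) = phi(0) = 1
        for i in range(N):
            r[i] -= b * kernel(X[i], X[k], delta)
    return r

def multiscale_greedy(X, f, eps=1e-6, alpha=0.5, beta=0.7, gamma=0.05,
                      sigma=2, c=4.0):
    resid = list(f)
    # Outer loop: run over successful tries until the norm falls under eps.
    while max(abs(v) for v in resid) > eps:
        target = alpha * max(abs(v) for v in resid)
        K, success = sigma, False
        # Middle loop: larger and larger numbers of iterations K.
        while not success:
            delta = c
            # Inner loop: support radii c, c*beta, c*beta^2, ..., down to c*gamma.
            while delta >= c * gamma:
                trial = greedy_steps(X, resid, K, delta)
                if max(abs(v) for v in trial) <= target:   # successful try
                    resid, success = trial, True
                    break
                delta *= beta
            K *= sigma
    return resid

X = [0.0, 1.0, 2.0, 3.0, 4.0]
f = [1.0, -2.0, 0.5, 3.0, -1.0]
resid = multiscale_greedy(X, f)
print(max(abs(v) for v in resid))
```

Once the support radius shrinks below the point separation distance, each step of the inner try decouples and removes one residual exactly, so a large enough $K$ always yields a successful try, mirroring the convergence argument above.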


Fig. 15.6 Greedy reconstruction of the Beethoven relief.

Using a large initial radius $c$ and small values for $1 - \beta$ and $\sigma - 1$ leads to a time-consuming optimization, which tries to reconstruct the data with as few centers as possible.

Hence, to speed up the procedure certain improvements have to be implemented. Numerical tests show that often the sequence of support radii is decreasing while at the same time the number of necessary iterations is increasing. Hence, instead of starting with the initial values for each middle and inner loop, one could start the new outer loop with the successful values for $c$ and $K$ from the previous step.

Figure 15.6 shows some results for the Beethoven data set. The initial support radius was the diameter of the data set. From left to right we show reconstructions using 10, 100, 1000, and 10 000 points. The final picture shows the distribution of the 10 000 points employed.

15.7 Concluding remarks

The right choice of basis function depends mainly on additional information on the target function, such as its smoothness. The right choice of reconstruction method depends also upon the application. For example, if on the one hand exact interpolation is necessary, then any method based on a far-field expansion is of only limited use, since such a method has only an approximate character in the far field. On the other hand, if the data set has holes which have to be filled (e.g. for mesh repair), then any purely local method would be the wrong choice; even the partition-of-unity approach would have to be handled with care here. A global method such as the approximate-Lagrangian method, the alternating-projection method, or the multilevel method would be more suitable in this case. In all other cases it seems that, particularly for very large data sets, the partition-of-unity method is fastest.

15.8 Notes and comments

Fast multipole methods have been promulgated mainly by Beatson and various collaborators; see for example [11–14, 16, 17]. They are crucial for almost every efficient algorithm employing globally supported basis functions. The representation given here is based particularly upon [13, 16]. The idea of using the fast Gauss transform for other basis functions comes from Roussos [158], whereas the fast Gauss transform itself was devised by


Greengard and Strain [74] in 1991. Unfortunately the estimate on the truncation error given in that paper is erroneous, as was recently pointed out by Roussos [158] and Baxter and Roussos [10]. In Theorem 15.6 we give the corrected version. The actual inequality of Cramer can be stated in a stronger version; see for example Hille [83] and, for a thorough discussion, Roussos [158]. However, the stronger version has not been necessary in our analysis.

The method of approximation of Lagrange functions goes back to Beatson and Powell [18]. The theoretical background was provided later, by Faul and Powell [59] and Faul [58]. The alternating-projection method given in the third section is based on a general result by Smith et al. [178], given here in Theorem 15.12. The adaptation to the radial basis function setting comes from Beatson et al. [15]. They also call their method a domain decomposition method, even though it is actually not the domain but the set of centers that is decomposed.

The partition-of-unity idea is also quite old. Possibly Maude [118] was the first to use this idea, in 1973, in the context of interpolation, at least in the univariate case. Franke [63] extended the method in 1977 to the multivariate case. He also tested different weight functions and different local approximation processes. One of these local approximation schemes was thin-plate spline interpolation (see [64]). Convergence investigation for the general class of radial basis functions was done by the present author in 2002 [198]. Partition-of-unity methods have recently gained attention again, in the context of meshless methods for partial differential equations; see Babuska and Melenk [7].

One can find the idea of the multilevel method in Schaback's paper [164], but it became publicly known through the work of Floater and Iske [60]. Finally, the greedy algorithm of the last section was initiated by Schaback and Wendland in [169, 170].
