determined by searching a first-level tree, which takes O(log N) time, and querying a logarithmic number of (d − 1)-dimensional range trees. Hence, it satisfies

$$ T_d(N) = O(\log N) + O\bigl(T_{d-1}(N) \log N\bigr). $$

Since T_1(N) = O(log N), this solves to give O(log^d N). Adding the time for reporting the points contained yields the following result.

Theorem 14.15 The range query algorithm for range trees needs, for rectangular query ranges, O((log N)^d + k) time, where k denotes the number of points reported.
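The exponent in this bound follows directly by unrolling the recurrence; assuming inductively that $T_{d-1}(N) = O(\log^{d-1} N)$:

$$ T_d(N) = O(\log N) + O(\log^{d-1} N) \cdot O(\log N) = O(\log^d N). $$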

This query time can be improved further to O((log N)^{d−1} + k) if a technique called fractional cascading is used. Details may be found in the book [42] by de Berg et al.

14.5 Notes and comments

This chapter differs from all the others since it is related more to computer science than to mathematics. But the results are crucial for the development of efficient algorithms for radial basis functions, which are the subject of Chapter 15. Moreover, it is becoming increasingly obvious that the success of any numerical method depends strongly on an efficient implementation and hence on the underlying data structure and the associated algorithms. This chapter gives further strong arguments for scattered data methods that are based solely on point sets.

In computer science, a run-time analysis of an algorithm is in general done in the worst-case or average-case setting. Here, however, the distribution of the points was taken into account more explicitly, and most of the analysis was done for quasi-uniform data sets.

The kd-trees that were the subject of the second section were created by Bentley (see for example [24, 67]) in 1975 for nearest-neighbor searching. They are an improvement over the classical quadtrees and octrees since they allow a dimension-independent implementation.
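As an illustration of this dimension independence, here is a minimal construction sketch in Python (not the ANN library's actual code): the splitting axis simply cycles with the depth, so the same routine handles any d.

```python
# Minimal kd-tree construction sketch. The only dimension-dependent
# quantity is the splitting axis, which cycles through the coordinates,
# so the same code works unchanged for any dimension d.
def build_kdtree(points, depth=0):
    """Recursively build a kd-tree from a list of d-dimensional tuples."""
    if not points:
        return None
    d = len(points[0])
    axis = depth % d                       # cycle through the coordinates
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                 # median split keeps the tree balanced
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
```

With median splits the tree has depth O(log N) and is built in O(N log^2 N) time with this naive re-sorting; presorting per coordinate improves this.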

A good source for all kinds of multidimensional trees is the book by Samet [161]. The presentation here borrows from the book [42] of de Berg et al. and especially from the work of Mount, Arya, and others [3–5, 139]. The examples in the cases of the kd- and bd-trees have been produced using their approximate nearest neighbors (ANN) library. The description of the bd-trees in the third section is based on the just-mentioned papers of Arya and Mount, who also created the bd-trees in the context of approximate range queries and nearest-neighbor searching.

Finally, range trees were independently discovered by various authors around 1980. For more details on them we refer the reader to the already-mentioned book by de Berg et al. [42] and to the comments and references therein.

15

Numerical methods

Now it is time to discuss efficient numerical methods for scattered data interpolation and approximation. So far, we have encountered the moving least squares approximation scheme as one of them. Combined with the data structures of the last chapter, it needs O(dN) space and time to build a data structure for N points in R^d, provided that the points are uniformly distributed. Moreover, it takes constant time to evaluate the approximant at a single point.

But what about radial basis functions? The naive approach leads to the problem of solving an N × N system, which costs O(N^3) time and O(N^2) space. Furthermore, every evaluation needs another O(N) time. For a very large N this is definitely too expensive. Thus more efficient methods are necessary. We will review some of the most promising approaches in this chapter.
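To make these costs concrete, here is a deliberately naive sketch (the helper names and the 1D Gaussian kernel are chosen for illustration, not taken from this chapter) showing where the O(N^2) storage, the O(N^3) solve, and the O(N)-per-point evaluation come from:

```python
import math

def gauss(r, eps=1.0):
    """A globally supported Gaussian basis function."""
    return math.exp(-(eps * r) ** 2)

def naive_rbf_interpolate(centers, values):
    """Solve the N x N interpolation system A c = f directly.

    Storing A costs O(N^2) memory and Gaussian elimination costs
    O(N^3) time -- exactly the costs quoted in the text.
    """
    n = len(centers)
    A = [[gauss(abs(centers[i] - centers[j])) for j in range(n)]
         for i in range(n)]
    c = list(values)
    # plain Gaussian elimination (the Gaussian kernel matrix is
    # positive definite, so no pivoting is needed here)
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            c[i] -= m * c[k]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        s = sum(A[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = (c[i] - s) / A[i][i]
    return coeffs

def evaluate(coeffs, centers, x):
    """Each evaluation is a full sum over all N centers: O(N) time."""
    return sum(a * gauss(abs(x - t)) for a, t in zip(coeffs, centers))

centers = [0.0, 0.5, 1.0]
values = [1.0, 2.0, 0.5]
coeffs = naive_rbf_interpolate(centers, values)
```

For N in the millions none of these three costs is acceptable, which motivates the methods of this chapter.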

15.1 Fast multipole methods

In the case of a globally supported basis function and a large number of centers it is impossible to use direct methods for solving the interpolation equations. Instead, iterative methods have to be employed. A popular choice would be the conjugate gradient method, or more generally a generalized minimum residual (GMRES) method. We will not describe these standard algorithms explicitly here; it suffices to note that the main operation in these methods is a matrix-by-vector multiplication, which in general takes O(N^2) operations.
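A minimal conjugate gradient sketch makes the point explicit: the matrix enters only through a matrix-vector product, so accelerating that single operation accelerates the whole solver. (The routine below is a generic textbook CG, not code from this chapter.)

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive definite A.

    A is accessed only through `matvec`; each iteration costs one
    matrix-vector product -- O(N^2) for a dense matrix, which is
    exactly the operation fast multipole methods accelerate.
    """
    n = len(b)
    x = [0.0] * n
    r = list(b)                  # residual b - A x for the start x = 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
matvec = lambda v: [sum(aij * vj for aij, vj in zip(row, v)) for row in A]
x = conjugate_gradient(matvec, [1.0, 2.0])
```

Because only `matvec` touches A, replacing the dense product by a fast approximate summation leaves the rest of the iteration untouched.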

In our situation, the matrix–vector multiplication boils down to the evaluation of N sums of the form

$$ s(x) = \sum_{j=1}^{N} \alpha_j \Phi(x, x_j) \qquad (15.1) $$

and we have to find efficient algorithms that are able to perform this evaluation in less than O(N) time, preferably in O(log N) or even constant time. This is the subject of this section.

Obviously we cannot achieve this goal if we want to reproduce the exact value of s at x. But since s is already an approximation to an unknown function, an additional error might be acceptable. Hence, we try only to approximate s up to a certain accuracy, say ε > 0.


Fig. 15.1 Evaluation and source regions.

In the following, we will call t in Φ(x, t) a source point and x an evaluation point. The idea of multipole expansions is based on a far-field expansion of Φ. Suppose that all sources are situated in a certain region, called a panel, which is centered at a point t_0. Suppose further that we want to evaluate the function (15.1) at a point x that is sufficiently far from the source panel. Figure 15.1 illustrates the situation.

If we can expand Φ in the form

$$ \Phi(x, t) = \sum_{k=1}^{p} \phi_k(x)\,\psi_k(t) + R(x, t) \qquad (15.2) $$

with a remainder R that tends to zero for $\|x - t_0\|_2 \to \infty$, or for $p \to \infty$ if $\|x - t_0\|_2$ is sufficiently large, then we call (15.2) a far-field expansion for Φ around the source t_0. If we

use (15.2) to evaluate (15.1) then we can compute

$$
\begin{aligned}
s(x) &= \sum_{j=1}^{N} \alpha_j \Phi(x, x_j) \\
     &= \sum_{j=1}^{N} \alpha_j \sum_{k=1}^{p} \phi_k(x)\psi_k(x_j) + \sum_{j=1}^{N} \alpha_j R(x, x_j) \\
     &= \sum_{k=1}^{p} \phi_k(x) \sum_{j=1}^{N} \alpha_j \psi_k(x_j) + \sum_{j=1}^{N} \alpha_j R(x, x_j) \\
     &=: \sum_{k=1}^{p} \beta_k \phi_k(x) + \sum_{j=1}^{N} \alpha_j R(x, x_j).
\end{aligned}
$$

Hence, if we use the approximation $\tilde{s}(x) = \sum_{k=1}^{p} \beta_k \phi_k(x)$ we have the error bound

$$ |s(x) - \tilde{s}(x)| \le \|\alpha\|_1 \max_{1 \le j \le N} |R(x, x_j)|, $$

which is small if x is far enough away from the sources x_j. Moreover, each coefficient β_k can be computed in advance in linear time. Hence, if p is much smaller than N, we can
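A concrete instance of this scheme, sketched for the kernel Φ(x, t) = 1/(x − t) in one dimension (a standard far-field example chosen purely for illustration, not an expansion taken from this chapter): the coefficients β_k are computed once in O(Np) time, after which each far evaluation costs only O(p) instead of O(N).

```python
def farfield_coeffs(alphas, sources, t0, p):
    """Precompute beta_k = sum_j alpha_j * (x_j - t0)^k in O(N p) time.

    This uses the separated (far-field) expansion
        1/(x - t) = sum_{k=0}^{p-1} (t - t0)^k / (x - t0)^(k+1) + R(x, t),
    i.e. phi_k(x) = (x - t0)^-(k+1) and psi_k(t) = (t - t0)^k,
    whose remainder decays like (|t - t0| / |x - t0|)^p.
    """
    return [sum(a * (s - t0) ** k for a, s in zip(alphas, sources))
            for k in range(p)]

def farfield_eval(betas, t0, x):
    """Approximate s(x) with p terms instead of N kernel evaluations."""
    return sum(b / (x - t0) ** (k + 1) for k, b in enumerate(betas))

def exact_eval(alphas, sources, x):
    """Direct O(N) evaluation of s(x) = sum_j alpha_j / (x - x_j)."""
    return sum(a / (x - s) for a, s in zip(alphas, sources))

sources = [0.1, 0.2, 0.3, 0.4]     # all sources inside a panel around t0
alphas = [1.0, -2.0, 0.5, 1.5]
t0 = 0.25
betas = farfield_coeffs(alphas, sources, t0, p=8)
x = 5.0                            # evaluation point far from the panel
err = abs(exact_eval(alphas, sources, x) - farfield_eval(betas, t0, x))
```

With the panel radius 0.15 and distance |x − t_0| = 4.75, the ratio is about 0.03, so already p = 8 terms reproduce s(x) to far better than single-precision accuracy.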

