have to solve a system like (8.10). This shows that $\alpha^T A_{\Phi,X} \alpha = \alpha^T (f|_X)$ and hence
\[
\|\alpha\|_2 \le \frac{\|f|_X\|_2}{\lambda_{\min}(A_{\Phi,X})}.
\]
So, we know more about the accuracy of our solution vector $\alpha$ if we know more about lower bounds for $\lambda_{\min}(A_{\Phi,X})$.
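As a quick numerical illustration of this estimate (a sketch, not part of the text: the Gaussian kernel, the random centers, and the data vector are illustrative choices), the following assembles a kernel matrix and checks the bound on the coefficient vector:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(12, 2))       # centers x_1, ..., x_N in R^2
f = np.sin(X[:, 0]) * np.cos(X[:, 1])         # data vector f|_X

# Gaussian kernel matrix A_{Phi,X}, with Phi(x, y) = exp(-||x - y||^2)
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.exp(-D2)

alpha = np.linalg.solve(A, f)                 # interpolation coefficients
lam_min = np.linalg.eigvalsh(A)[0]            # smallest eigenvalue of the SPD matrix

# stability estimate: ||alpha||_2 <= ||f|_X||_2 / lambda_min(A_{Phi,X})
assert np.linalg.norm(alpha) <= np.linalg.norm(f) / lam_min * (1 + 1e-8)
```

The smaller $\lambda_{\min}$ becomes, the weaker this control on the coefficients is, which is precisely why the lower bounds discussed below matter.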


Stability 207

Fig. 12.1 The power function (dotted line) and the smallest eigenvalue (solid line) for integer $n$, $1 \le n \le 20$.

Moreover let us briefly discuss the condition number as a whole. If $\Phi$ is a symmetric positive definite kernel, $\lambda_{\min}(A_{\Phi,X})$ coincides with the norm $\|A_{\Phi,X}^{-1}\|_2^{-1}$ and the condition number of $A_{\Phi,X}$ is given by
\[
\operatorname{cond}(A_{\Phi,X}) = \|A_{\Phi,X}\|_2 \, \|A_{\Phi,X}^{-1}\|_2 = \frac{\lambda_{\max}(A_{\Phi,X})}{\lambda_{\min}(A_{\Phi,X})},
\]

where $\lambda_{\max}$ denotes the maximum eigenvalue. The condition number of an interpolation matrix gives information on the numerical stability of the interpolation process. Hence we actually have to investigate both the maximum and the minimum eigenvalue. Fortunately $\lambda_{\max}$ behaves nicely compared to the smallest eigenvalue; to see this, we invoke Gershgorin's theorem, which gives for our matrix an index $j \in \{1, \ldots, N\}$ with
\[
|\lambda_{\max}(A_{\Phi,X}) - \Phi(x_j, x_j)| \le \sum_{\substack{k=1 \\ k \ne j}}^{N} |\Phi(x_j, x_k)|,
\]
so that
\[
\lambda_{\max}(A_{\Phi,X}) \le N \, \|\Phi(\cdot, \cdot)\|_{L_\infty(X \times X)},
\]
which becomes, in the case of a positive definite function,
\[
\lambda_{\max}(A_{\Phi,X}) \le N \, \Phi(0).
\]
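The Gershgorin bound and the eigenvalue form of the condition number are easy to check numerically. The following sketch (not part of the text; the Gaussian kernel and random centers are illustrative choices, with $\Phi(0) = 1$) does both:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(20, 2))       # N = 20 random centers

# Gaussian kernel matrix, Phi(x, y) = exp(-||x - y||^2), so Phi(0) = 1
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.exp(-D2)

lam = np.linalg.eigvalsh(A)                   # eigenvalues in ascending order
lam_min, lam_max = lam[0], lam[-1]
N = len(X)

assert lam_max <= N * 1.0 + 1e-10             # lambda_max(A) <= N * Phi(0)
# for an SPD matrix, cond_2(A) equals the eigenvalue ratio
assert np.isclose(np.linalg.cond(A), lam_max / lam_min)
```

As the text notes, the interesting quantity is the denominator: the bound on $\lambda_{\max}$ is benign, so the condition number is driven by $\lambda_{\min}$.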

Hence if $X$ is quasi-uniformly distributed then $\lambda_{\max}(A_{\Phi,X})$ grows at most like $h_{X,\Omega}^{-d}$, and this upper bound can also be established, in the case of the Gaussians and the compactly supported radial functions, for non-quasi-uniform data sets. Even if this upper bound appears already to be worse than expected, numerical tests show that the maximum eigenvalue indeed causes no problems. The same is true in the case of a conditionally positive definite function.


We will see that the minimum eigenvalue, as a function of the number of data sites or their separation distance, decays much faster, in the case of multiquadrics and Gaussians even exponentially. This is the reason for the badly conditioned interpolation matrices. But before we deal with this problem let us return to the connection with the power function mentioned in the opening example. This connection will be made in the next section.
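The collapse of the smallest eigenvalue is already visible in small examples. The following sketch (not part of the text; the Gaussian kernel and equispaced 1D points are illustrative choices) computes $\lambda_{\min}$ for shrinking separation distances:

```python
import numpy as np

def lam_min_gauss(q):
    # 8 equispaced 1D points with spacing 2q, so the separation distance is q;
    # Gaussian kernel Phi(x, y) = exp(-(x - y)^2)
    x = np.arange(8) * 2.0 * q
    A = np.exp(-(x[:, None] - x[None, :]) ** 2)
    return np.linalg.eigvalsh(A)[0]

vals = [lam_min_gauss(q) for q in (0.5, 0.35, 0.2)]
# the smallest eigenvalue shrinks rapidly as the centers move closer together
assert vals[0] > vals[1] > vals[2] > 0.0
```

For even smaller $q$ the computed $\lambda_{\min}$ quickly approaches machine precision, which is the numerical face of the exponential decay discussed above.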

12.1 Trade-off principle

Our main result in this section can be formulated for general conditionally positive definite kernels. As pointed out in the introduction to the chapter, there is a connection between the smallest eigenvalue of the main part of the interpolation matrix given by $A_{\Phi,X} = (\Phi(x_i, x_j))_{1 \le i,j \le N}$ and the squared power function $P_{\Phi,X}^2(x)$. But since the power function depends on the point $x \in \Omega$ at which it is evaluated, and the smallest eigenvalue of $A_{\Phi,X}$ does not, the setting has to be slightly modified. This is done by adding the point $x$ to the set of centers $X$. Let us define $x_0 = x$.

Theorem 12.1 If $u_j^*(x)$, $1 \le j \le N$, denote the cardinal functions from Theorem 11.1 then we have for all $x \in \Omega \setminus X$
\[
[\lambda_{\min}(A_{\Phi, X \cup \{x\}})]^{-1} P_{\Phi,X}^2(x) \ge 1 + \sum_{j=1}^{N} [u_j^*(x)]^2.
\]

Proof Define $u_0^*(x) = -1$ and $x_0 = x$. Then, the definition of the power function immediately gives
\[
P_{\Phi,X}^2(x) = \sum_{j,k=0}^{N} u_j^*(x) u_k^*(x) \Phi(x_j, x_k) \ge \lambda_{\min}(A_{\Phi, X \cup \{x\}}) \sum_{j=0}^{N} [u_j^*(x)]^2.
\]
Since $\sum_{j=0}^{N} [u_j^*(x)]^2 = 1 + \sum_{j=1}^{N} [u_j^*(x)]^2$, dividing by $\lambda_{\min}(A_{\Phi, X \cup \{x\}})$ proves the stated inequality.
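Theorem 12.1 can be tested directly in the positive definite case, where the cardinal functions are $u^*(x) = A_{\Phi,X}^{-1} b(x)$ with $b_j(x) = \Phi(x, x_j)$ and $P_{\Phi,X}^2(x) = \Phi(x,x) - b(x)^T A_{\Phi,X}^{-1} b(x)$. The following sketch (not part of the text; the Gaussian kernel, the random centers, and the evaluation point are illustrative choices) checks the inequality:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(8, 2))        # centers
x = np.array([0.37, 0.61])                    # evaluation point x, not in X

def gauss(P, Q):
    # Gaussian kernel matrix (Phi(p_i, q_j))_{i,j}
    D2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-D2)

A = gauss(X, X)
b = gauss(X, x[None, :])[:, 0]                # vector (Phi(x_j, x))_j
u = np.linalg.solve(A, b)                     # cardinal functions u_j*(x)
P2 = 1.0 - b @ u                              # squared power function; Phi(x, x) = 1

Xx = np.vstack([X, x])                        # augmented center set X ∪ {x}
lam_min = np.linalg.eigvalsh(gauss(Xx, Xx))[0]

# Theorem 12.1: P^2(x) >= lambda_min(A_{X ∪ {x}}) * (1 + sum_j u_j*(x)^2)
assert P2 >= lam_min * (1.0 + (u ** 2).sum()) - 1e-12
```

The check makes the trade-off concrete: a small power function at $x$ forces a small eigenvalue once $x$ is added to the centers, and vice versa.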

This theorem can be interpreted in at least two different ways. On the one hand we have
\[
\lambda_{\min}(A_{\Phi,X}) \le \min_{1 \le k \le N} P_{\Phi, X \setminus \{x_k\}}^2(x_k),
\]
giving both lower bounds for the power function and upper bounds for the eigenvalue. On

the other hand we have
\[
1 + \sum_{j=1}^{N} [u_j^*(x)]^2 \le \frac{P_{\Phi,X}^2(x)}{\lambda_{\min}(A_{\Phi, X \cup \{x\}})}, \qquad x \in \Omega \setminus X,
\]
which is an upper bound for the Lebesgue functions.
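The first of these interpretations, the leave-one-out bound on the eigenvalue, can also be verified numerically. This sketch (not part of the text; the Gaussian kernel and random centers are illustrative choices) removes each center in turn and evaluates the power function of the reduced set at the removed point:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(8, 2))

def gauss(P, Q):
    # Gaussian kernel matrix (Phi(p_i, q_j))_{i,j}
    D2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-D2)

lam_min = np.linalg.eigvalsh(gauss(X, X))[0]

def p2_without(k):
    # squared power function of X \ {x_k}, evaluated at the removed center x_k
    Y = np.delete(X, k, axis=0)
    b = gauss(Y, X[k][None, :])[:, 0]
    return 1.0 - b @ np.linalg.solve(gauss(Y, Y), b)

# lambda_min(A_{Phi,X}) <= min_k P^2_{Phi, X \ {x_k}}(x_k)
assert lam_min <= min(p2_without(k) for k in range(len(X))) + 1e-12
```

In particular, one nearly redundant center, whose leave-one-out power function is tiny, already forces $\lambda_{\min}$ to be tiny.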


12.2 Lower bounds for $\lambda_{\min}$

For $\lambda_{\min}$ we use the following idea. Suppose that $\Phi$ is the conditionally positive definite kernel of interest. Suppose further that there exists a positive definite kernel $\Psi$ such that
\[
\sum_{j,k=1}^{N} \alpha_j \alpha_k \Phi(x_j, x_k) \ge \sum_{j,k=1}^{N} \alpha_j \alpha_k \Psi(x_j, x_k) \ge \lambda \, \|\alpha\|_2^2.
\]
Then $\lambda$ is obviously a lower bound for $\lambda_{\min}$. But what do we gain from this? Now we have to do the estimates for $\Psi$ instead of $\Phi$, and of course $\Psi$ has to depend on $\Phi$. So how does $\Psi$ depend on $\Phi$? Before we answer these questions, we want to discuss the terms in which the lower bounds have to be expressed.
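The mechanism behind this idea is a quadratic-form comparison: if the form of $\Phi$ dominates that of $\Psi$, then the smallest eigenvalue of $A_{\Phi,X}$ dominates that of $A_{\Psi,X}$. The following sketch (not part of the text) illustrates this with a toy stand-in for $A_{\Psi,X}$, obtained by subtracting a small positive semi-definite perturbation rather than by constructing an actual minorant kernel:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(0.0, 1.0, size=(10, 2))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A_phi = np.exp(-D2)                    # Gram matrix of the kernel of interest

# Toy stand-in for the Gram matrix of a minorant: subtract a small PSD matrix E,
# so that alpha^T A_phi alpha >= alpha^T A_psi alpha for every alpha.
M = rng.standard_normal((10, 10))
E = 1e-3 * (M @ M.T)                   # positive semi-definite
A_psi = A_phi - E

lam = np.linalg.eigvalsh(A_psi)[0]     # candidate lower bound lambda
alpha = rng.standard_normal(10)
assert alpha @ A_phi @ alpha >= alpha @ A_psi @ alpha     # form domination
assert lam <= np.linalg.eigvalsh(A_phi)[0] + 1e-12        # lambda <= lambda_min(A_phi)
```

The real work, done in the remainder of the section, is of course to construct a genuine kernel $\Psi$ for which the dominated form still admits an explicit lower bound $\lambda$.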

The approximation error between function and interpolant was expressed in terms of the fill distance $h_{X,\Omega}$, because the fill distance is a measure of how well the centers $X$ cover the region $\Omega$. But it is not a good measure of the stability. A point set $X$ might have quite a big fill distance and the interpolation process is nonetheless badly conditioned. The reason for this is simply that only two points from $X$ have to be very close. Thus it would seem to be more natural to express lower bounds on $\lambda_{\min}$ in terms of the separation distance, which has already been defined to be
\[
q_X := \frac{1}{2} \min_{i \ne j} \|x_i - x_j\|_2.
\]
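The contrast with the fill distance is easy to see in code. This sketch (not part of the text; the random point set is an illustrative choice) computes $q_X$ and shows that a single near-duplicate point collapses it, even though the coverage of the region barely changes:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(0.0, 1.0, size=(15, 2))

def sep_dist(P):
    # q_X = (1/2) * min_{i != j} ||x_i - x_j||_2
    D = np.sqrt(((P[:, None, :] - P[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(D, np.inf)        # ignore the zero diagonal
    return 0.5 * D.min()

q_X = sep_dist(X)
assert q_X > 0.0

# one near-duplicate point collapses q_X while the fill distance is essentially unchanged
X_bad = np.vstack([X, X[0] + 1e-6])
assert sep_dist(X_bad) < 1e-2 * q_X
```

This is exactly the situation described above: the ill-conditioning caused by the two close points is invisible to $h_{X,\Omega}$ but is captured by $q_X$.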

The separation distance gives the maximum radius $r > 0$ such that all balls $\{x \in \mathbb{R}^d :$