Next let us have a look at (inverse) multiquadrics. We know from Theorem 8.15 that $\Phi(x) = (c^2 + \|x\|_2^2)^\beta$, $\beta \in \mathbb{R}\setminus\mathbb{N}_0$, has, up to a sign, the generalized Fourier transform
$$\widehat{\Phi}(\omega) = \frac{2^{1+\beta}}{\Gamma(-\beta)} \left(\frac{\|\omega\|_2}{c}\right)^{-\beta-d/2} K_{d/2+\beta}(c\|\omega\|_2), \qquad \omega \neq 0.$$

Moreover, we know from Corollary 5.12 that $r \mapsto r^{-\beta-d/2} K_{d/2+\beta}(r)$ is nonincreasing and that
$$K_{d/2+\beta}(r) \ge C(d,\beta)\, \frac{e^{-r}}{\sqrt{r}}$$
for $r \ge 1$ with an explicitly known constant $C(d,\beta)$. The restriction $r \ge 1$ is necessary only if $|d + 2\beta| < 1$. In any case we have
$$\varphi_0(M) \ge C(d,c,\beta)\, \frac{e^{-2cM}}{M^{\beta+(d+1)/2}}.$$
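To see how this estimate turns into the eigenvalue bound of Corollary 12.5 below, one can insert it into the general lower bound of this chapter. The following is only a sketch of the substitution: the exact constants of the general theorem are not reproduced here, and we assume it has the form $\lambda_{\min}(A_{\Phi,X}) \ge C_d\,\varphi_0(M)\,M^{d}$ with $M = M_d/q_X$, which is consistent with all the corollaries in this section:
$$\lambda_{\min}(A_{\Phi,X}) \;\ge\; C_d\,C(d,c,\beta)\,\frac{e^{-2cM}}{M^{\beta+(d+1)/2}}\,M^{d} \;=\; C'\,M^{(d-1)/2-\beta}\,e^{-2cM} \;=\; C''\,q_X^{\beta-(d-1)/2}\,e^{-2cM_d/q_X},$$
with constants $C'$, $C''$ depending only on $d$, $c$, and $\beta$; this is precisely the rate stated in Corollary 12.5.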

Corollary 12.5 Using $\Phi(x) = (c^2 + \|x\|_2^2)^\beta$, $\beta \in \mathbb{R}\setminus\mathbb{N}_0$, as the basis function results in
$$\lambda_{\min}(A_{\Phi,X}) \ge C(d,\beta,c)\, q_X^{\beta - d/2 + 1/2}\, e^{-2cM_d/q_X}$$
with an explicitly known constant $C(d,\beta,c)$.
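The exponential factor in this bound can be observed numerically. The following sketch (not from the source; the shape parameter $c = 0.2$, the inverse multiquadric exponent $\beta = -1/2$, and the grids in $[0,1]^2$ are ad-hoc choices) builds the interpolation matrix for grids of decreasing separation distance and tracks its smallest eigenvalue, which shrinks rapidly as $q_X \to 0$:

```python
import numpy as np

def inverse_mq_matrix(X, c, beta):
    # A_{i,j} = (c^2 + ||x_i - x_j||_2^2)^beta for centers X (rows of X)
    r2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    return (c**2 + r2)**beta

def separation_distance(X):
    # q_X = half the smallest distance between two distinct centers
    r = np.sqrt(np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1))
    np.fill_diagonal(r, np.inf)
    return 0.5 * r.min()

c, beta = 0.2, -0.5                     # inverse multiquadric in d = 2
for n in (3, 4, 5, 6):
    ax = np.linspace(0.0, 1.0, n)
    X = np.array([(x, y) for x in ax for y in ax])
    lam = np.linalg.eigvalsh(inverse_mq_matrix(X, c, beta)).min()
    q = separation_distance(X)
    print(f"N={n*n:2d}  q_X={q:.4f}  lambda_min={lam:.3e}")
```

Note that the constant $M_d$ in the corollary is large, so the bound is very pessimistic quantitatively; the experiment only illustrates the qualitative behaviour.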

After this more complicated example we turn to functions of finite smoothness. Thin-plate splines, powers, and compactly supported functions of minimal degree can all be treated in the same way. Let us start with thin-plate splines $\Phi(x) = (-1)^{k+1}\|x\|_2^{2k}\log\|x\|_2$. According to Theorem 8.17 they have the generalized Fourier transform
$$\widehat{\Phi}(\omega) = c_k \|\omega\|_2^{-d-2k}$$
with a constant $c_k$ specified in that theorem. Hence our theory yields for thin-plate splines


Corollary 12.6 In the case where $\Phi(x) = (-1)^{k+1}\|x\|_2^{2k}\log\|x\|_2$, we can bound the minimum eigenvalue $\lambda_{\min}(A_{\Phi,X})$ as follows:
$$\lambda_{\min}(A_{\Phi,X}) \ge C_d\, c_k\, (2M_d)^{-d-2k}\, q_X^{2k}.$$

Next, the powers $\Phi(x) = (-1)^{\lceil\beta/2\rceil}\|x\|_2^\beta$, $x \in \mathbb{R}^d$, with $\beta > 0$, $\beta \notin 2\mathbb{N}$, have the generalized Fourier transform $\widehat{\Phi}(\omega) = c_\beta\|\omega\|_2^{-d-\beta}$ by Theorem 8.16. Again, the constant $c_\beta$ is specified in the theorem. This Fourier transform leads to

Corollary 12.7 In the case where $\Phi(x) = (-1)^{\lceil\beta/2\rceil}\|x\|_2^\beta$, $\beta > 0$, $\beta \notin 2\mathbb{N}$, we have
$$\lambda_{\min}(A_{\Phi,X}) \ge C_d\, c_\beta\, (2M_d)^{-d-\beta}\, q_X^{\beta}.$$

Finally, let us have a look at the compactly supported radial basis functions $\Phi_{d,k} = \phi_{d,k}(\|\cdot\|_2)$ of minimal degree defined in Definition 9.11. Even if we do not know the Fourier transform of these functions explicitly, we know from Theorem 10.35 that
$$\widehat{\Phi}_{d,k}(\omega) \ge C\,\|\omega\|_2^{-d-2k-1}$$
for sufficiently large $\|\omega\|_2$, with the possible exception of $d = 1, 2$ in the case $k = 0$. The constant $C$ depends only on $d$ and $k$. Since $\widehat{\Phi}_{d,k}$ is continuous and positive on $\mathbb{R}^d$ we obtain

Corollary 12.8 In the case of the compactly supported radial basis functions of minimal degree of Section 9.4, the smallest eigenvalue of the interpolation matrix can be bounded as follows:
$$\lambda_{\min}(A_{\Phi,X}) \ge C\, q_X^{2k+1}.$$
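The predicted order $q_X^{2k+1}$ can be checked in a small experiment. The sketch below (not from the source) uses the concrete function $\phi_{3,1}(r) = (1-r)_+^4(4r+1)$, which is positive definite for $d \le 3$; the choice of grids in $[0,1]^2$ is ad hoc. For $k = 1$ the corollary predicts $\lambda_{\min} \ge C q_X^3$, so the ratio $\lambda_{\min}/q_X^3$ should stay bounded away from zero as $q_X \to 0$:

```python
import numpy as np

def phi_31(r):
    # Wendland function phi_{3,1}(r) = (1-r)^4_+ (4r+1), pos. def. for d <= 3
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

def lambda_min_and_q(n):
    # smallest eigenvalue and separation distance for an n x n grid in [0,1]^2
    ax = np.linspace(0.0, 1.0, n)
    X = np.array([(x, y) for x in ax for y in ax])
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    lam = np.linalg.eigvalsh(phi_31(r)).min()
    return lam, 0.5 / (n - 1)           # q_X = h/2 on a uniform grid

for n in (5, 9, 17):
    lam, q = lambda_min_and_q(n)
    print(f"q_X={q:.4f}  lambda_min={lam:.3e}  lambda_min/q_X^3={lam / q**3:.3f}")
```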

As in the case of error estimates we summarize our results in the following form. For every basis function we have found a function $G$ such that
$$\lambda_{\min}(A_{\Phi,X}) \ge G(q_X).$$
Table 12.1 contains the functions $G$ for the various basis functions up to a constant factor that depends only on $\Phi$ and $d$ but not on $X$.
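The entries of Table 12.1 translate directly into code. The following is a minimal sketch (not from the source): the constant factors depending on $\Phi$ and $d$ are suppressed, and the numeric constants 40.71 and 12.76 are the ones appearing in the table.

```python
import math

# G(q) from Table 12.1, up to a constant factor depending only on Phi and d.
def G_gaussian(q, d, alpha):
    return q**(-d) * math.exp(-40.71 * d**2 / (alpha * q**2))

def G_multiquadric(q, d, beta, c):
    return q**(beta - (d - 1) / 2) * math.exp(-12.76 * d * c / q)

def G_power(q, beta):
    return q**beta

def G_thin_plate_spline(q, k):
    return q**(2 * k)

def G_compactly_supported(q, k):
    return q**(2 * k + 1)
```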

Let us come back to the trade-off principle. If we use the function $G$ that we have just introduced and the function $F$ from Table 11.1, which gave a bound on the squared power function, we see that
$$G(q_X) \le \lambda_{\min}(A_{\Phi,X}) \le P^2_{\Phi,X\setminus\{x_j\}}(x_j) \le F(h_{X\setminus\{x_j\},\Omega})$$
for every $x_j \in X$. Hence, if $X$ and $X\setminus\{x_j\}$ are quasi-uniform then the separation distance $q_X$ and the fill distance $h_{X\setminus\{x_j\},\Omega}$ are of the same size. Since in the case of basis functions of finite smoothness the functions $F$ and $G$ differ only by a constant factor and have the same exponent, this means in particular that the estimates of both the upper bounds for the power function and the lower bounds for the smallest eigenvalue are sharp concerning the order.

We see also that in the case of Gaussians there is a substantial gap between $G$ and $F$, while in the case of the (inverse) multiquadrics, better results concerning the involved constants are all that is necessary.

Table 12.1 Lower bounds on $\lambda_{\min}$ in terms of $q$

$\Phi(x) = \phi(r)$, $r = \|x\|_2$:

Gaussians: $\phi(r) = e^{-\alpha r^2}$, $\alpha > 0$; $G(q) = q^{-d}\, e^{-M_d^2/(\alpha q^2)} = q^{-d}\, e^{-40.71\, d^2/(\alpha q^2)}$

(inverse) multiquadrics: $\phi(r) = (-1)^{\lceil\beta\rceil}(c^2 + r^2)^{\beta}$, $\beta \in \mathbb{R}\setminus\mathbb{N}_0$; $G(q) = q^{\beta-(d-1)/2}\, e^{-2M_d c/q} = q^{\beta-(d-1)/2}\, e^{-12.76\, dc/q}$

powers: $\phi(r) = (-1)^{\lceil\beta/2\rceil} r^{\beta}$, $\beta > 0$, $\beta \notin 2\mathbb{N}$; $G(q) = q^{\beta}$

thin-plate splines: $\phi(r) = (-1)^{k+1} r^{2k} \log r$, $k \in \mathbb{N}$; $G(q) = q^{2k}$

compactly supported functions: $\phi(r) = \phi_{d,k}(r)$; $G(q) = q^{2k+1}$
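The middle inequality of the chain, $\lambda_{\min}(A_{\Phi,X}) \le P^2_{\Phi,X\setminus\{x_j\}}(x_j)$, can also be verified numerically. The sketch below (not from the source) uses the inverse multiquadric $\phi(r) = (c^2 + r^2)^{-1/2}$ with the ad-hoc choice $c = 0.2$, for which $\Phi(0) = 1/c = 5$, and evaluates the squared power function via $P^2(x_j) = \Phi(0) - b^\top A_j^{-1} b$ with $b_i = \Phi(x_j - x_i)$ over $X\setminus\{x_j\}$:

```python
import numpy as np

def kernel(P, Q, c=0.2):
    # inverse multiquadric: (c^2 + ||p - q||_2^2)^(-1/2)
    r2 = np.sum((P[:, None, :] - Q[None, :, :])**2, axis=-1)
    return (c**2 + r2)**-0.5

ax = np.linspace(0.0, 1.0, 6)
X = np.array([(x, y) for x in ax for y in ax])      # 36 grid centers in R^2
lam_min = np.linalg.eigvalsh(kernel(X, X)).min()

for j in (0, 14, 35):
    Y = np.delete(X, j, axis=0)                     # X \ {x_j}
    b = kernel(X[j:j + 1], Y)[0]
    p2 = 5.0 - b @ np.linalg.solve(kernel(Y, Y), b) # Phi(0) = 1/c = 5
    print(f"j={j:2d}  lambda_min={lam_min:.3e}  P^2(x_j)={p2:.3e}")
```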

12.3 Change of basis

We have seen that expressing the radial basis function interpolant $s_{f,X}$ of a function $f$ in the standard basis can lead to badly conditioned interpolation matrices. The condition number depends more on the separation distance than on the number $N$ of centers.
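This dependence is easy to observe. In the sketch below (not from the source; the Gaussian with the ad-hoc shape parameter $\alpha = 30$ and the point locations are illustrative choices) the number of centers stays fixed at $N = 9$, and only one center is moved ever closer to another, so that $q_X \to 0$ while $N$ is unchanged; the condition number nevertheless explodes:

```python
import numpy as np

def gaussian_matrix(points, alpha=30.0):
    # A_{i,j} = exp(-alpha (x_i - x_j)^2) for centers on the real line
    diff = points[:, None] - points[None, :]
    return np.exp(-alpha * diff**2)

base = np.linspace(0.0, 1.0, 8)      # 8 well-separated centers in [0,1]
for eps in (1e-1, 1e-2, 1e-3):
    X = np.append(base, eps)         # one extra center near x_1 = 0
    cond = np.linalg.cond(gaussian_matrix(X))
    print(f"N=9  q_X={eps / 2:.0e}  cond(A)={cond:.2e}")
```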

Of course, if in particular the basis function is positive definite, leading to a positive definite interpolation matrix, we have all the preconditioning methods known from classical linear algebra to hand. The most promising methods seem to be the preconditioned conjugate gradient method and the incomplete Cholesky factorization. But since these methods are described in good books on numerical linear algebra we will skip the details here. There is a lack of preconditioning methods specially tailored to the radial basis function situation. The few existing methods seem to be inferior even to the classical preconditioners just mentioned.

In the first place, a bad condition number is a result of the naturally chosen basis, namely $\Phi(\cdot, x_1), \ldots, \Phi(\cdot, x_N)$ (plus a basis for $P$), and we might be interested in finding a better basis for the subspace