$$ s = \sum_{j=1}^{N} c_j \Phi(\cdot, x_j). $$

The interpolation conditions would then lead to an interpolation matrix with entries of the form

$$ D_1^{\alpha^{(i)}} \Phi(x_i, x_j). $$

This matrix is obviously not symmetric; moreover, it may not be invertible. It is possible to construct a counter-example that leads to a singular matrix; interestingly, though, such counter-examples are found rather seldom and only with difficulty. Nonetheless, this easier approach is doomed to fail in the general setting. To save it, one could start to discuss conditions on the functionals that ensure invertibility of the generalized interpolation matrix, but we do not want to pursue this further here.

Instead, we want to discuss the use of conditionally positive definite kernels. As mentioned earlier, we could use the kernel $K(\cdot,\cdot)$ from Theorem 10.20 instead of the initial kernel $\Phi(\cdot,\cdot)$. Since $K$ is positive definite, we can form an interpolant using (16.5) with interpolation matrix (16.6), simply by replacing $\Phi$ by $K$.

But in the case of these specific functionals we want to introduce another possibility, which might be more natural. We start by forming an interpolant of the form

$$ s = \sum_{j=1}^{N} c_j D_2^{\alpha^{(j)}} \Phi(\cdot, x_j) + \sum_{\ell=1}^{Q} d_\ell\, p_\ell, \tag{16.8} $$

where $\{p_1, \ldots, p_Q\}$ denotes the usual basis of $P$. The interpolation conditions

$$ D^{\alpha^{(i)}} s(x_i) = f_i, \qquad 1 \le i \le N, \tag{16.9} $$

together with the additional conditions

$$ \sum_{j=1}^{N} c_j D^{\alpha^{(j)}} p_\ell(x_j) = 0, \qquad 1 \le \ell \le Q, \tag{16.10} $$

lead to a system

$$ \begin{pmatrix} A_{\Phi,\lambda} & \Lambda(P) \\ \Lambda(P)^T & 0 \end{pmatrix} \begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}, \tag{16.11} $$


with matrices $A_{\Phi,\lambda} = (\lambda_i^x \lambda_j^y \Phi(x, y)) \in \mathbb{R}^{N \times N}$ and $\Lambda(P) = (\lambda_i(p_\ell)) \in \mathbb{R}^{N \times Q}$. This system is very similar to the classical interpolation system (8.10), and we have to check whether or when it is uniquely solvable.

Theorem 16.5 Suppose that $\Phi \in C^{2k}(\Omega \times \Omega)$ is a conditionally positive definite kernel with respect to $P \subseteq C^k(\Omega)$. Suppose further that the functionals $\lambda_j = \delta_{x_j} \circ D^{\alpha^{(j)}}$ are linearly independent and that $\lambda_j(p) = 0$ for all $1 \le j \le N$ and $p \in P$ implies that $p = 0$. Then there exists exactly one interpolant of the form (16.8) that satisfies the conditions (16.9) and (16.10).

Proof Since we are working in a finite-dimensional setting, it suffices to prove that the matrix in (16.11) is injective. So let us assume that

$$ A_{\Phi,\lambda}\, c + \Lambda(P)\, d = 0, \qquad \Lambda(P)^T c = 0. $$

Multiplying the first equation by $c^T$ and using the second yields $c^T A_{\Phi,\lambda}\, c + c^T \Lambda(P)\, d = c^T A_{\Phi,\lambda}\, c = 0$. Now it is easy to see that

$$ c^T A_{\Phi,\lambda}\, c = \sum_{j,\ell=1}^{N} c_j c_\ell\, D_1^{\alpha^{(j)}} D_2^{\alpha^{(\ell)}} \Phi(x_j, x_\ell) = \sum_{j,\ell=1}^{N} c_j c_\ell\, D_1^{\alpha^{(j)}} D_2^{\alpha^{(\ell)}} K(x_j, x_\ell) $$

for all $c$ with $c^T \Lambda(P) = 0$; the second equality holds because every additional term in $K$ carries a factor $p_\nu$ in its first or second argument, and these contributions vanish on account of $\sum_j c_j \lambda_j(p_\nu) = 0$. But we already know that the last quadratic form is positive definite and hence equals zero if and only if $c = 0$. The additional conditions imposed on the $\lambda_j$ now lead to $d = 0$ also.
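To make the construction concrete, here is a minimal numerical sketch (our illustration, not from the text; all function names are ours): a one-dimensional Hermite–Birkhoff interpolant built with the kernel $\Phi(x, y) = |x - y|^3$, which is conditionally positive definite of order 2 on $\mathbb{R}$, i.e. with respect to $P = \operatorname{span}\{1, x\}$, so $Q = 2$. Each node carries an order $\alpha^{(j)} \in \{0, 1\}$ prescribing either a function value or a first derivative.

```python
import numpy as np

def cubic_blocks(r):
    """blocks[(a, b)] = D_1^a D_2^b Phi at x - y = r, for Phi(x, y) = |x - y|^3."""
    return {
        (0, 0): np.abs(r) ** 3,
        (1, 0): 3.0 * r * np.abs(r),   # d/dx |x - y|^3
        (0, 1): -3.0 * r * np.abs(r),  # d/dy |x - y|^3
        (1, 1): -6.0 * np.abs(r),      # d^2/(dx dy) |x - y|^3
    }

def hermite_birkhoff(nodes, orders, data):
    """Assemble and solve the saddle-point system (16.11),
    [A, Lam; Lam^T, 0] [c; d] = [f; 0], with
    A_ij = D_1^{a_i} D_2^{a_j} Phi(x_i, x_j) and Lam_il = lambda_i(p_l)
    for the basis {1, x} of P."""
    x = np.asarray(nodes, dtype=float)
    N, Q = len(x), 2
    B = cubic_blocks(x[:, None] - x[None, :])
    A = np.empty((N, N))
    for i, ai in enumerate(orders):
        for j, aj in enumerate(orders):
            A[i, j] = B[(ai, aj)][i, j]
    # lambda_i(p) = D^{a_i} p(x_i) for p = 1 and p = x
    Lam = np.array([[1.0, xi] if a == 0 else [0.0, 1.0]
                    for xi, a in zip(x, orders)])
    M = np.zeros((N + Q, N + Q))
    M[:N, :N], M[:N, N:], M[N:, :N] = A, Lam, Lam.T
    sol = np.linalg.solve(M, np.concatenate([np.asarray(data, float), np.zeros(Q)]))
    c, d = sol[:N], sol[N:]

    def s(t, deriv=0):
        """Evaluate s (deriv=0) or s' (deriv=1) at the scalar t."""
        b = cubic_blocks(t - x)
        poly = d[0] + d[1] * t if deriv == 0 else d[1]
        return sum(c[j] * b[(deriv, orders[j])][j] for j in range(N)) + poly
    return s, c
```

By construction, applying the functionals $\lambda_i$ to $s$ reproduces the rows of the system, so the conditions (16.9) and (16.10) hold up to solver round-off.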

In the case of pure interpolation (i.e. all $\alpha^{(j)} = 0$) we know that the classical interpolant and the interpolant formed using the kernel $K$ are the same if $\Xi \subseteq X$ (see Corollary 12.10).

This is no longer true if derivatives come into play. Since

$$ K(x, y) = \Phi(x, y) - \sum_{j=1}^{Q} p_j(x) \Phi(\xi_j, y) - \sum_{\ell=1}^{Q} p_\ell(y) \Phi(x, \xi_\ell) + \sum_{j,\ell=1}^{Q} p_j(x) p_\ell(y) \Phi(\xi_j, \xi_\ell) + \sum_{j=1}^{Q} p_j(x) p_j(y), $$

the interpolant built from the basis functions $D_2^{\alpha} K(\cdot, y)$ is contained in

$$ \operatorname{span}\bigl(\{D_2^{\alpha^{(j)}} \Phi(\cdot, x_j) : 1 \le j \le N\} \cup \{\Phi(\cdot, \xi_\ell) : 1 \le \ell \le Q\}\bigr) + P, $$

whereas the interpolant (16.8) obviously comes from the space

$$ \operatorname{span}\{D_2^{\alpha^{(j)}} \Phi(\cdot, x_j) : 1 \le j \le N\} + P. $$


16.3 Solving PDEs by collocation

The previous section allows us to consider a general class of boundary-value problems. Suppose that $\Omega \subseteq \mathbb{R}^d$ is an open and bounded region. We want to solve a problem of the type

$$ Lu = f \quad \text{in } \Omega, \qquad Bu = g \quad \text{on } \partial\Omega. $$

Here, the right-hand sides f, g, the partial differential operator L, and the boundary

operator B are given and we are looking for the unknown solution u.

Definition 16.6 An operator $L : C^k(\Omega) \to C(\Omega)$ is called a linear differential operator of order $k$ if it has the form

$$ L = \sum_{|\alpha| \le k} c_\alpha D^\alpha $$

with $c_\alpha \in C(\Omega)$. $L$ is said to be a linear differential operator with constant coefficients if additionally all the functions $c_\alpha$ are constant.

Note that a linear differential operator is indeed linear by definition.
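As a simple illustration (ours, not from the text), the Laplacian fits this definition with $k = 2$ and constant coefficients:

$$ \Delta = \sum_{i=1}^{d} \frac{\partial^2}{\partial x_i^2} = \sum_{|\alpha| \le 2} c_\alpha D^\alpha, \qquad c_\alpha = \begin{cases} 1, & \alpha = 2e_i \text{ for some } 1 \le i \le d, \\ 0, & \text{otherwise}, \end{cases} $$

where $e_i$ denotes the $i$-th unit multi-index; hence $\Delta$ is a linear differential operator of order 2 with constant coefficients.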

The ideas of the last section suggest the following approach. Suppose that $L$ is a linear differential operator of order $k$; then we choose a positive definite function $\Phi \in C^{2k}(\Omega \times \Omega)$ such that $L : \mathcal{N}_\Phi(\Omega) \to C(\Omega)$. We choose points $X = X_1 \cup X_2$ with $X_1 = \{x_1, \ldots, x_n\} \subseteq \Omega$ and $X_2 = \{x_{n+1}, \ldots, x_N\} \subseteq \partial\Omega$, and use the functionals

$$ \lambda_j = \begin{cases} \delta_{x_j} \circ L, & 1 \le j \le n, \\ \delta_{x_j} \circ B, & n+1 \le j \le N. \end{cases} $$

Even if this has not been specified so far, we are assuming that the boundary operator $B$ behaves nicely in the sense that it is well defined on $\mathcal{N}_\Phi(\Omega)$. The reader might think of Dirichlet boundary conditions, i.e. $Bu = u$, as a prototype. But more complicated boundary operators, given by Neumann or mixed boundary conditions, are also possible.
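The resulting symmetric collocation scheme can be sketched numerically. The following is our sketch, not the book's code: the general setting is specialized to $Lu = u''$ on $\Omega = (0, 1)$ with Dirichlet boundary operator $Bu = u$, and to the Gaussian kernel $\Phi(x, y) = e^{-\varepsilon^2 (x-y)^2}$, which is positive definite and smooth enough for all required derivatives; function names and the parameter choices are hypothetical.

```python
import numpy as np

def symmetric_collocation_1d(f, g, n_interior=9, eps=6.0):
    """Symmetric collocation for u'' = f on (0, 1), u = g on {0, 1}, with the
    Gaussian Phi(x, y) = exp(-eps^2 (x - y)^2).  Functionals:
    lambda_j = delta_{x_j} o L at interior points, delta_{x_j} on the boundary;
    trial function: u = sum_j c_j (lambda_j applied to the second argument of Phi)."""
    a = eps ** 2
    phi = lambda r: np.exp(-a * r ** 2)
    # d2/dx2 Phi = d2/dy2 Phi and d4/(dx2 dy2) Phi, as functions of r = x - y
    d2 = lambda r: (4 * a**2 * r**2 - 2 * a) * phi(r)
    d4 = lambda r: (16 * a**4 * r**4 - 48 * a**3 * r**2 + 12 * a**2) * phi(r)

    x_int = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]     # interior points
    x = np.concatenate([x_int, [0.0, 1.0]])                 # plus boundary points
    is_L = np.array([True] * n_interior + [False, False])   # True: PDE functional

    # A_ij = lambda_i^x lambda_j^y Phi(x_i, x_j): d4 for two PDE functionals,
    # d2 for one PDE and one boundary functional, phi for two boundary ones
    R = x[:, None] - x[None, :]
    A = np.where(is_L[:, None] & is_L[None, :], d4(R),
        np.where(is_L[:, None] ^ is_L[None, :], d2(R), phi(R)))
    c = np.linalg.solve(A, np.concatenate([f(x_int), g(np.array([0.0, 1.0]))]))

    def u(t):
        # u(t) = sum_j c_j lambda_j^y Phi(t, x_j)
        return float(c @ np.where(is_L, d2(t - x), phi(t - x)))
    return u
```

Note that the collocation matrix $A$ here is symmetric, which is exactly the advantage of this Hermite-type ansatz over the non-symmetric approach discussed at the beginning of Section 16.2.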