Up to now we have dealt only with the problem of recovering an unknown function from certain known function values. But sometimes it might be desirable to recover the function also from other types of data. For example, the function's derivatives might be known at certain points, but not the function itself. This becomes interesting if partial differential equations are considered.

In this chapter we deal with a more general problem than those we have discussed so far. Our approach includes in particular collocation and Galerkin methods for solving partial differential equations. But the methods we will derive are at the present time only able to compete with classical methods to a certain extent. In any case, whenever large data sets are considered one has to combine the methods introduced below with the fast-evaluation ideas of Chapter 15.

In this sense, this chapter should be seen as a unified introduction to a general class of recovery problems.

16.1 Optimal recovery in Hilbert spaces

We start by generalizing results from Chapters 11 and 13. In this section we restrict ourselves for simplicity to the Hilbert space setting, even though everything also works in the case of semi-Hilbert function spaces.

Let $H$ be a Hilbert space and denote its dual by $H^*$. Suppose that $\Lambda = \{\lambda_1, \ldots, \lambda_N\} \subseteq H^*$ is a set of linearly independent functionals on $H$ and that $f_1, \ldots, f_N \in \mathbb{R}$ are certain given values. Then a generalized recovery problem would seek to find a function $s \in H$ such that $\lambda_j(s) = f_j$, $1 \le j \le N$. We will call $s$ a generalized interpolant.

The optimal-recovery problem searches for the norm-minimal generalized interpolant, i.e. we have to find $s^* \in H$ such that

$$\|s^*\| = \min\{\|s\| : s \in H,\ \lambda_j(s) = f_j,\ 1 \le j \le N\}. \qquad (16.1)$$

Obviously, this problem is a straightforward generalization of the recovery problem dealt with in Theorem 13.2. There, the solution was given by the (radial) basis function interpolant. Here, the solution can also be given explicitly. To this end let us recall Riesz' representation



theorem, which allows us to represent each functional $\lambda_j$ by a unique element $v_j \in H$ via $\lambda_j = (\cdot, v_j)$.

Theorem 16.1 Let $H$ be a Hilbert space, $\lambda_1, \ldots, \lambda_N \in H^*$ linearly independent functionals with Riesz representers $v_j$, $1 \le j \le N$, and let $f_1, \ldots, f_N \in \mathbb{R}$ be given. The unique solution of (16.1) is given by

$$s^* = \sum_{j=1}^{N} \alpha_j v_j, \qquad (16.2)$$

where the coefficients $\{\alpha_j\}$ are determined by the interpolation conditions

$$\lambda_j(s^*) = f_j, \quad 1 \le j \le N. \qquad (16.3)$$

Proof The proof is very similar to the proof of Theorem 13.2. First of all note that $s^*$ from (16.2) is well defined by the conditions (16.3). Since the $\lambda_j$ are supposed to be linearly independent, so also are the $v_j$, showing that the interpolation matrix with entries $(\lambda_i(v_j)) = ((v_i, v_j)) = ((\lambda_i, \lambda_j)_{H^*})$ is positive definite. Next, $s^*$ thus defined is indeed a solution of the optimal-recovery problem (16.1). To see this, assume that $v \in H$ satisfies $\lambda_j(v) = 0$, $1 \le j \le N$, which implies

$$(s^*, v) = \sum_{j=1}^{N} \alpha_j (v, v_j) = \sum_{j=1}^{N} \alpha_j \lambda_j(v) = 0.$$

This shows that

$$\|s^*\|^2 = (s^*, s^* - s + s) = (s^*, s) \le \|s^*\| \, \|s\|$$

for every $s \in H$ with $\lambda_j(s) = f_j$, $1 \le j \le N$.

The solution of our optimal-recovery problem is unique by a classical argument from approximation theory. If $s$ is any solution of (16.1) then necessarily $(s, \tilde{s}) = 0$ for all $\tilde{s} \in H$ with $\lambda_j(\tilde{s}) = 0$ for $1 \le j \le N$; otherwise we could form $t = s + \alpha \tilde{s}$ and would see that $\|t\|^2 = \|s\|^2 + \alpha^2 \|\tilde{s}\|^2 + 2\alpha (s, \tilde{s}) < \|s\|^2$ for an appropriate choice of $\alpha$. Thus, if $s_1$ and $s_2$ are both solutions to the recovery problem then we have $(s_1, s_1 - s_2) = 0$ and $(s_2, s_1 - s_2) = 0$, showing that $s_1 - s_2 = 0$, i.e. $s_1 = s_2$.
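To make Theorem 16.1 concrete, here is a small numerical sketch (not from the text; all specific choices are illustrative). It takes for $H$ the native space of the Gaussian kernel $K(x,y) = e^{-(x-y)^2/2}$ on $\mathbb{R}$, mixes point-evaluation functionals with first-derivative functionals, generates data from $f = \sin$, and solves the positive definite system (16.3) for the coefficients in (16.2):

```python
import numpy as np

def K(x, y):     # Gaussian kernel K(x, y) = exp(-(x - y)^2 / 2)
    return np.exp(-0.5 * (x - y) ** 2)

def Ky(x, y):    # d/dy K(x, y) = (x - y) K(x, y)
    return (x - y) * K(x, y)

def Kx(x, y):    # d/dx K(x, y) = (y - x) K(x, y)
    return (y - x) * K(x, y)

def Kxy(x, y):   # d^2/(dx dy) K(x, y) = (1 - (x - y)^2) K(x, y)
    return (1.0 - (x - y) ** 2) * K(x, y)

xp = np.array([0.0, 1.0, 2.0])   # functionals delta_x: prescribed values
xd = np.array([0.5, 1.5])        # functionals delta_x o d/dx: prescribed slopes
f, df = np.sin, np.cos           # data generated by f = sin

# Gram matrix (lambda_i(v_j)) = (lambda_i^x lambda_j^y K(x, y)); it is
# symmetric positive definite since the functionals are linearly independent.
A = np.block([
    [K(xp[:, None], xp[None, :]),   Ky(xp[:, None], xd[None, :])],
    [Kx(xd[:, None], xp[None, :]),  Kxy(xd[:, None], xd[None, :])],
])
b = np.concatenate([f(xp), df(xd)])   # right-hand side (f_1, ..., f_N)
alpha = np.linalg.solve(A, b)         # coefficients in (16.2) via (16.3)

def s(x):        # generalized interpolant s*(x) = sum_j alpha_j v_j(x)
    x = np.atleast_1d(np.asarray(x, dtype=float))[:, None]
    return np.hstack([K(x, xp), Ky(x, xd)]) @ alpha

def ds(x):       # derivative of s*, used to check the slope conditions
    x = np.atleast_1d(np.asarray(x, dtype=float))[:, None]
    return np.hstack([Kx(x, xp), Kxy(x, xd)]) @ alpha
```

The Riesz representer of a point evaluation is a kernel translate $K(\cdot, x_j)$, while that of a derivative functional is the corresponding kernel derivative, which is why derivatives of $K$ appear both in the Gram matrix and in the evaluation of $s^*$.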

Now let us assume that the data values $\{f_j\}$ come from an unknown function $f \in H$, i.e. $f_j = \lambda_j(f)$, $1 \le j \le N$ (it might also be possible to assume that other functionals define the values, i.e. $f_j = \mu_j(f)$, but we do not want to pursue this any further).

Corollary 16.2 If in addition $f_j = \lambda_j(f)$, $1 \le j \le N$, for an $f \in H$, then $s^*$ is the best approximation from $V = \mathrm{span}\{v_1, \ldots, v_N\}$ to $f$, i.e.

$$\|f - s^*\| = \min\{\|f - s\| : s \in V\}.$$

Proof This follows directly from $0 = \lambda_j(f) - \lambda_j(s^*) = (f - s^*, v_j)$.
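Corollary 16.2 can be checked numerically whenever $\|f - \sum_j u_j v_j\|$ is computable in closed form. The sketch below (an illustration, not from the text) uses point-evaluation functionals with a Gaussian kernel, so the $v_j$ are kernel translates, and takes $f = K(\cdot, c)$: the reproducing property turns the squared distance into an explicit quadratic in $u$, whose minimizer should be the vector of interpolation coefficients.

```python
import numpy as np

def K(x, y):                      # Gaussian kernel; its native space plays H
    return np.exp(-0.5 * (x - y) ** 2)

X = np.array([0.0, 0.7, 1.3, 2.0])   # lambda_j = delta_{x_j}, v_j = K(., x_j)
A = K(X[:, None], X[None, :])        # Gram matrix (v_i, v_j) = K(x_i, x_j)

c = 0.4                              # f = K(., c) lies in H but not in V
k = K(X, c)                          # data f_j = lambda_j(f) = K(x_j, c)

def dist2(u):
    # ||f - sum_j u_j v_j||^2 expanded via the reproducing property
    # (K(., a), K(., b))_H = K(a, b):
    return K(c, c) - 2.0 * u @ k + u @ A @ u

u_star = np.linalg.solve(A, k)       # interpolation coefficients: s* = sum u*_j v_j

# s* beats every perturbed candidate from V, as Corollary 16.2 predicts
rng = np.random.default_rng(0)
for _ in range(100):
    u = u_star + rng.normal(scale=0.1, size=len(X))
    assert dist2(u_star) <= dist2(u) + 1e-12
```

The check works because $\mathrm{dist}^2(u) - \mathrm{dist}^2(u^*) = (u - u^*)^{\mathsf T} A (u - u^*) \ge 0$ with $A$ positive definite.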


So far, we have restated two of the three optimality properties of the (radial) basis function interpolant in this more general setting. The third was a reformulation of error estimates with power functions or, to be more precise, of the optimality of the power function, and we will generalize this reformulation now. It answers the question of how well the optimal recovery $s^*$ matches the initial function $f$.

Theorem 16.3 Set $\Lambda := \mathrm{span}\{\lambda_1, \ldots, \lambda_N\}$ and $V := \mathrm{span}\{v_1, \ldots, v_N\}$. Then for every linear and continuous functional $\lambda : H \to \mathbb{R}$ the estimate

$$|\lambda(f - s^*)| \le \inf_{\mu \in \Lambda} \|\lambda - \mu\|_{H^*} \, \inf_{s \in V} \|f - s\| \qquad (16.4)$$

is satisfied.

Proof Since every $\lambda_j$ obviously satisfies $\lambda_j(f - s^*) = 0$, so does every $\mu \in \Lambda$. Thus

$$|\lambda(f - s^*)| = |(\lambda - \mu)(f - s^*)| \le \|\lambda - \mu\|_{H^*} \, \|f - s^*\|.$$

The rest follows from Corollary 16.2.

It is interesting to analyze the term on the right-hand side of (16.4). The first factor,

$$P_\Lambda(\lambda) = \inf_{\mu \in \Lambda} \|\lambda - \mu\|_{H^*},$$

is a generalized power function that describes how well the evaluation functional $\lambda$ can be approximated by the given functionals from $\Lambda$. Furthermore, the second factor in (16.4) describes how well the unknown function $f$ can be approximated by the functions from $V$, and it is obviously bounded by $\|f\|$.
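For concreteness, with illustrative choices throughout (point-evaluation functionals, a Gaussian kernel, and $f = K(\cdot, c)$; none of this is from the text), the generalized power function of $\lambda = \delta_x$ has the closed form $P_\Lambda(\delta_x)^2 = K(x,x) - k(x)^{\mathsf T} A^{-1} k(x)$ with $k(x)_j = K(x, x_j)$, and for this particular $f$ the second factor in (16.4) equals $P_\Lambda(\delta_c)$, so the whole bound can be verified pointwise:

```python
import numpy as np

def K(x, y):                           # Gaussian kernel, K(x, x) = 1
    return np.exp(-0.5 * (x - y) ** 2)

X = np.array([0.0, 0.7, 1.3, 2.0])     # Lambda spanned by the delta_{x_j}
A = K(X[:, None], X[None, :])          # Gram matrix of the functionals

def P(x):
    # generalized power function: P(delta_x)^2 = K(x, x) - k(x)^T A^{-1} k(x),
    # the squared H*-distance from delta_x to span{delta_{x_j}}
    x = np.atleast_1d(np.asarray(x, dtype=float))[:, None]
    kx = K(x, X)
    q = np.einsum('ij,ij->i', kx, np.linalg.solve(A, kx.T).T)
    return np.sqrt(np.maximum(1.0 - q, 0.0))   # clip roundoff below zero

c = 0.4                                # f = K(., c), so inf_{s in V} ||f - s|| = P(c)

def s_star(x):                         # interpolant of f on X (best approx. from V)
    x = np.atleast_1d(np.asarray(x, dtype=float))[:, None]
    return K(x, X) @ np.linalg.solve(A, K(X, c))

xs = np.linspace(-0.5, 2.5, 201)
err = np.abs(K(xs, c) - s_star(xs))    # left-hand side |delta_x(f - s*)| of (16.4)
bound = P(xs) * P(c)                   # right-hand side of (16.4)
assert np.all(err <= bound + 1e-10)    # the estimate holds pointwise
```

Note that, as expected, the power function vanishes at the data sites and grows in between them.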

We end this general section by showing that in the case of interpolation by positive definite kernels we have done nothing new.

Suppose that $\Phi \in C(\Omega \times \Omega)$ is a positive definite kernel. For the set of distinct points $X = \{x_1, \ldots, x_N\} \subseteq \Omega$ let $\Lambda = \{\delta_{x_j} : x_j \in X\}$ and for $x \in \Omega$ let $\lambda = \delta_x$. Finally, let $H$ be the native Hilbert space $\mathcal{N}_\Phi(\Omega)$. Since this space is a reproducing-kernel Hilbert space, the functionals $\delta_{x_j}$ have the Riesz representers $\Phi(\cdot, x_j)$. Thus the optimal-recovery problems and their solutions from Theorems 13.2 and 16.1, respectively, coincide. Moreover, since any $\mu \in \Lambda$ has a representation $\mu = \sum_j u_j \delta_{x_j}$, the new power function

$$P_\Lambda(\lambda) = \min_{u \in \mathbb{R}^N} \Big\| \delta_x - \sum_{j=1}^{N} u_j \delta_{x_j} \Big\|_{H^*} = \min_{u \in \mathbb{R}^N} \Big\| \Phi(\cdot, x) - \sum_{j=1}^{N} u_j \Phi(\cdot, x_j) \Big\|_{\mathcal{N}_\Phi(\Omega)}$$

coincides with the old one, $P_{\Phi,X}(x)$, by Section 11.1.

Let us point out once more what we have to do to recover a function $f$ that is known only from $\lambda_j(f)$, $1 \le j \le N$, in the case where we are given a positive definite kernel $\Phi \in C(\Omega \times \Omega)$ and where $\lambda_j \in \mathcal{N}_\Phi(\Omega)^*$. We start by assuming an interpolant of the form

$$s = \sum_{j=1}^{N} \alpha_j \lambda_j^y \Phi(\cdot, y).$$


Then, the interpolation conditions $\lambda_k(s) = f_k$, $1 \le k \le N$, lead to the interpolation matrix

$$A_{\Lambda,\Phi} = \big(\lambda_k^x \lambda_j^y \Phi(x, y)\big)_{1 \le j,k \le N},$$