The pattern for the rest of the terms should be obvious, as should the proof.

As observed above, the inverse of a matrix is the reciprocal of the determinant of the matrix multiplied by the transposed matrix of the co-factors. So, if Dµν is the co-factor of the term in D(λ) associated with Kνµ, then the solution of the equation

$$(I + \lambda K)\mathbf{x} = \mathbf{b} \qquad (9.118)$$

is

$$x_\mu = \frac{D_{\mu 1} b_1 + D_{\mu 2} b_2 + \cdots + D_{\mu n} b_n}{D(\lambda)}. \qquad (9.119)$$
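Equation (9.119) is the transposed-cofactor (adjugate) form of Cramer's rule, and in finite dimensions it is easy to check numerically. A minimal Python sketch (the 2×2 matrix, the value of λ, and all variable names here are our own illustration, not part of the text):

```python
# Solve (I + lam*K) x = b via the transposed-cofactor formula (9.119).
lam = 0.5
K = [[1.0, 2.0],
     [3.0, 4.0]]
b = [1.0, 1.0]

# M = I + lam * K
M = [[1.0 + lam * K[0][0], lam * K[0][1]],
     [lam * K[1][0], 1.0 + lam * K[1][1]]]

# D(lam) = det M; for a 2x2 matrix the cofactor C[i][j] of entry M[i][j]
# is a single entry of M, with the checkerboard sign (-1)**(i+j).
D = M[0][0] * M[1][1] - M[0][1] * M[1][0]
C = [[ M[1][1], -M[1][0]],
     [-M[0][1],  M[0][0]]]

# x_mu = sum_nu C[nu][mu] * b[nu] / D  -- cofactors transposed, as in the text
x = [sum(C[nu][mu] * b[nu] for nu in range(2)) / D for mu in range(2)]

# check: multiplying back should reproduce b
residuals = [sum(M[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]
```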

If µ ≠ ν we have

$$D_{\mu\nu} = \lambda K_{\mu\nu} + \lambda^2 \sum_{i} \begin{vmatrix} K_{\mu\nu} & K_{\mu i} \\ K_{i\nu} & K_{ii} \end{vmatrix} + \frac{\lambda^3}{2!} \sum_{i_1 i_2} \begin{vmatrix} K_{\mu\nu} & K_{\mu i_1} & K_{\mu i_2} \\ K_{i_1 \nu} & K_{i_1 i_1} & K_{i_1 i_2} \\ K_{i_2 \nu} & K_{i_2 i_1} & K_{i_2 i_2} \end{vmatrix} + \cdots. \qquad (9.120)$$

278 CHAPTER 9. INTEGRAL EQUATIONS

When µ = ν we have

$$D_{\mu\nu} = \delta_{\mu\nu} \tilde{D}(\lambda) \qquad (9.121)$$

where D̃(λ) is the expression analogous to D(λ), but with the µ-th row and column deleted.

These elementary results suggest the definition of the Fredholm determinant of the integral kernel K(x, y), a < x, y < b, as

$$D(\lambda) = \operatorname{Det}|I + \lambda K| \equiv \sum_{m=0}^{\infty} \frac{\lambda^m}{m!} A_m, \qquad (9.122)$$

where A₀ = 1, A₁ = Tr K ≡ ∫ₐᵇ K(x, x) dx,

$$A_2 = \int_a^b\!\!\int_a^b \begin{vmatrix} K(x_1, x_1) & K(x_1, x_2) \\ K(x_2, x_1) & K(x_2, x_2) \end{vmatrix} dx_1\, dx_2,$$

$$A_3 = \int_a^b\!\!\int_a^b\!\!\int_a^b \begin{vmatrix} K(x_1, x_1) & K(x_1, x_2) & K(x_1, x_3) \\ K(x_2, x_1) & K(x_2, x_2) & K(x_2, x_3) \\ K(x_3, x_1) & K(x_3, x_2) & K(x_3, x_3) \end{vmatrix} dx_1\, dx_2\, dx_3, \qquad (9.123)$$

etc. We also define

$$D(x, y, \lambda) = \lambda K(x, y) + \lambda^2 \int_a^b \begin{vmatrix} K(x, y) & K(x, \xi) \\ K(\xi, y) & K(\xi, \xi) \end{vmatrix} d\xi + \frac{\lambda^3}{2!} \int_a^b\!\!\int_a^b \begin{vmatrix} K(x, y) & K(x, \xi_1) & K(x, \xi_2) \\ K(\xi_1, y) & K(\xi_1, \xi_1) & K(\xi_1, \xi_2) \\ K(\xi_2, y) & K(\xi_2, \xi_1) & K(\xi_2, \xi_2) \end{vmatrix} d\xi_1\, d\xi_2 + \cdots, \qquad (9.124)$$

and then

$$\varphi(x) = f(x) + \frac{1}{D(\lambda)} \int_a^b D(x, y, \lambda) f(y)\, dy \qquad (9.125)$$

is the solution of the equation

$$\varphi(x) + \lambda \int_a^b K(x, y)\, \varphi(y)\, dy = f(x). \qquad (9.126)$$

If |K(x, y)| < M in [a, b] × [a, b], the Fredholm series for D(λ) and D(x, y, λ) converge for all λ, and define entire functions. In this respect the Fredholm solution is unlike the Neumann series, which has a finite radius of convergence.

9.7. SERIES SOLUTIONS 279

The proof of these claims follows from the identity

$$D(x, y, \lambda) + \lambda D(\lambda) K(x, y) + \lambda \int_a^b D(x, \xi, \lambda) K(\xi, y)\, d\xi = 0, \qquad (9.127)$$

or, more compactly, with G(x, y) = D(x, y, λ)/D(λ),

$$(I + G)(I + \lambda K) = I. \qquad (9.128)$$

For details see Whittaker and Watson §11.2.

Example: The equation

$$\varphi(x) = x + \lambda \int_0^1 xy\, \varphi(y)\, dy \qquad (9.129)$$

gives us

$$D(\lambda) = 1 - \frac{1}{3}\lambda, \qquad D(x, y, \lambda) = \lambda xy, \qquad (9.130)$$

and so

$$\varphi(x) = \frac{3x}{3 - \lambda}. \qquad (9.131)$$

(We have seen this equation and solution before.)
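Because D(λ) = 1 − λ/3 is entire, the solution (9.131) should satisfy (9.129) for any λ ≠ 3, even outside the Neumann radius |λ| < 3. A quick numerical sketch in Python (the quadrature scheme, the test point, and the function names are our own, not the text's):

```python
def phi(x, lam):
    # claimed solution (9.131): phi(x) = 3x / (3 - lambda)
    return 3.0 * x / (3.0 - lam)

def rhs(x, lam, n=2000):
    # right-hand side of (9.129): x + lam * int_0^1 x*y*phi(y) dy,
    # with the integral done by the composite midpoint rule on n panels
    h = 1.0 / n
    integral = sum((k + 0.5) * h * phi((k + 0.5) * h, lam) * h for k in range(n))
    return x + lam * x * integral

# lam = 5 lies outside the Neumann radius |lam| < 3, yet (9.131) still works
residual = abs(rhs(0.7, 5.0) - phi(0.7, 5.0))
```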

Exercise: Show that the equation

$$\varphi(x) = x + \lambda \int_0^1 (xy + y^2)\, \varphi(y)\, dy$$

gives

$$D(\lambda) = 1 - \frac{2}{3}\lambda - \frac{1}{72}\lambda^2$$

and

$$D(x, y, \lambda) = \lambda(xy + y^2) + \lambda^2\left(\frac{1}{2}xy^2 - \frac{1}{3}xy - \frac{1}{3}y^2 + \frac{1}{4}y\right).$$
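The claimed D(λ) can be checked numerically: the kernel xy + y² has rank two, so the coefficients Aₘ vanish for m ≥ 3 and the first two determine D(λ) exactly. A Python sketch (our own quadrature and names; the signs follow the convention of the worked example, where the equation reads φ = f + λKφ):

```python
def K(x, y):
    # kernel of the exercise
    return x * y + y * y

n = 400
h = 1.0 / n
xs = [(k + 0.5) * h for k in range(n)]   # midpoint nodes on [0, 1]

A1 = sum(K(x, x) for x in xs) * h        # Tr K; exact value 2/3
A2 = sum(K(x, x) * K(y, y) - K(x, y) * K(y, x)
         for x in xs for y in xs) * h * h  # exact value -1/36

lam = 1.0
D_num = 1.0 - lam * A1 + (lam ** 2 / 2.0) * A2
D_exact = 1.0 - (2.0 / 3.0) * lam - (1.0 / 72.0) * lam ** 2
```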


Appendix A

Elementary Linear Algebra

In solving the differential equations of physics we have to work with infinite dimensional vector spaces. Navigating these spaces is much easier if you have a sound grasp of the theory of finite dimensional spaces. Most physics students have studied this as undergraduates, but not always in a systematic way. In this appendix we gather together and review those parts of linear algebra that we will find useful in the main text.

A.1 Vector Space

A.1.1 Axioms

A vector space V over a field F is a set with two binary operations, vector addition which assigns to each pair of elements x, y ∈ V a third element denoted by x + y, and scalar multiplication which assigns to an element x ∈ V and λ ∈ F a new element λx ∈ V. There is also a distinguished element 0 ∈ V such that the following axioms are obeyed:

1) Vector addition is commutative: x + y = y + x.

2) Vector addition is associative: (x + y) + z = x + (y + z).

3) Additive identity: 0 + x = x.

4) Existence of additive inverse: ∀x ∈ V, ∃(−x) ∈ V, such that x + (−x) = 0.

5) Scalar distributive law i) λ(x + y) = λx + λy.

6) Scalar distributive law ii) (λ + µ)x = λx + µx.