conjugating the entries.

Exercise: When the basis is not orthonormal, show that

(A† )ρσ = (gσµ Aµν g νρ )∗ . (A.34)
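The exercise can be checked numerically. The sketch below (not from the text) takes the inner product ⟨x, y⟩ = x̄ᵀ g y defined by a Hermitian Gram matrix g, and verifies that the matrix form of (A.34), A† = g⁻¹ A∗ᵀ g, satisfies ⟨x, Ay⟩ = ⟨A† x, y⟩; the names `inner` and `A_dag` are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# A Hermitian, positive-definite Gram matrix g for a non-orthonormal basis.
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
g = M.conj().T @ M + 3 * np.eye(3)

A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

def inner(x, y):
    """Inner product <x, y> = x-bar^T g y in the non-orthonormal basis."""
    return x.conj() @ g @ y

# Matrix form of (A.34): A-dagger = g^{-1} A^{*T} g.
A_dag = np.linalg.inv(g) @ A.conj().T @ g

x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

# <x, A y> should equal <A-dagger x, y>.
print(np.allclose(inner(x, A @ y), inner(A_dag @ x, y)))  # True
```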

A.4 Inhomogeneous Linear Equations

Suppose we wish to solve the system of linear equations

a11 y1 + a12 y2 + · · · + a1n yn = b1
a21 y1 + a22 y2 + · · · + a2n yn = b2
        ⋮
am1 y1 + am2 y2 + · · · + amn yn = bm

or, in matrix notation,

Ay = b, (A.35)

where A is the m × n matrix with entries aij . Faced with such a problem,

we should start by asking ourselves the questions:

i) Does a solution exist?

ii) If a solution does exist, is it unique?

These issues are best addressed by considering the matrix A as a linear

operator A : V → W , where V is n-dimensional and W is m-dimensional.

The natural language is then that of the range and nullspaces of A. There

is no solution to the equation Ay = b when Im A is not the whole of W

and b does not lie in Im A. Similarly, the solution will not be unique if

there are distinct vectors x1 , x2 such that Ax1 = Ax2 . This means that

A(x1 − x2 ) = 0, or (x1 − x2 ) ∈ Ker A. These situations are linked, as we

have seen, by the range null-space theorem:

dim (Ker A) + dim (Im A) = dim V. (A.36)

Thus, if m > n there are bound to be some vectors b for which no solution

exists. When m < n the solution cannot be unique.
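Both questions can be answered in practice with ranks: a solution exists precisely when appending b as an extra column does not raise the rank of A, and it is unique precisely when rank A = n. A minimal numpy sketch (the function name `solvability` is ours, not the text's):

```python
import numpy as np

def solvability(A, b):
    """Classify A y = b.

    A solution exists iff b lies in Im A, i.e. appending b as an extra
    column does not raise the rank.  It is unique iff Ker A = {0},
    i.e. rank A equals the number of unknowns n.
    """
    n = A.shape[1]
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    return bool(rank_A == rank_Ab), bool(rank_A == n)

# m = 3 equations, n = 2 unknowns: more equations than unknowns, so
# some right-hand sides b must be unreachable.
A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])

print(solvability(A, A @ np.array([1., 1.])))   # (True, True): b in Im A
print(solvability(A, np.array([1., 0., 0.])))   # (False, True): no solution,
                                                # though one would be unique
```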

290 APPENDIX A. ELEMENTARY LINEAR ALGEBRA

Suppose V ≡ W (so m = n and the matrix is square) and we choose an

inner product, ⟨x, y⟩, on V . Then x ∈ Ker A implies that, for all y

0 = ⟨y, Ax⟩ = ⟨A† y, x⟩, (A.37)

or that x is perpendicular to the range of A† . Conversely, let x be perpendicular to the range of A† ; then

⟨x, A† y⟩ = 0, ∀y ∈ V, (A.38)

which means that

⟨Ax, y⟩ = 0, ∀y ∈ V, (A.39)

and, by the non-degeneracy of the inner product, this means that Ax = 0.

The net result is that

Ker A = (Im A† )⊥ . (A.40)

Similarly

Ker A† = (Im A)⊥ . (A.41)

Now

dim (Ker A) + dim (Im A) = dim V,

dim (Ker A† ) + dim (Im A† ) = dim V, (A.42)

but

dim (Ker A) = dim (Im A† )⊥

= dim V − dim (Im A† )

= dim (Ker A† ).

Thus, for finite-dimensional square matrices, we have

dim (Ker A) = dim (Ker A† ).

In particular, the row and column rank of a square matrix coincide.

Example: Consider the matrix

    ⎛ 1 2 3 ⎞
A = ⎜ 1 1 1 ⎟ .
    ⎝ 2 3 4 ⎠


Clearly, the number of linearly independent rows is two, since the third row

is the sum of the other two. The number of linearly independent columns is

also two, although less obviously so, because

  ⎛ 1 ⎞     ⎛ 2 ⎞   ⎛ 3 ⎞
− ⎜ 1 ⎟ + 2 ⎜ 1 ⎟ = ⎜ 1 ⎟ .
  ⎝ 2 ⎠     ⎝ 3 ⎠   ⎝ 4 ⎠
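The row and column counts above are easy to confirm numerically; a small numpy check (not part of the text):

```python
import numpy as np

# The example matrix from the text.
A = np.array([[1., 2., 3.],
              [1., 1., 1.],
              [2., 3., 4.]])

# Row rank and column rank coincide (A is real, so A-dagger = A^T).
print(np.linalg.matrix_rank(A))    # 2
print(np.linalg.matrix_rank(A.T))  # 2

# The column relation quoted in the text: -(col 1) + 2 (col 2) = col 3.
print(bool(np.allclose(-A[:, 0] + 2 * A[:, 1], A[:, 2])))  # True
```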

Warning: The equality dim (Ker A) = dim (Ker A† ) need not hold in infinite-dimensional spaces. Consider the space with basis e1 , e2 , e3 , . . . indexed by the positive integers. Define Ae1 = e2 , Ae2 = e3 , and so on. This operator has dim (Ker A) = 0. The adjoint with respect to the natural inner product has A† e1 = 0, A† e2 = e1 , A† e3 = e2 , and so on. Thus Ker A† = span{e1 }, and dim (Ker A† ) = 1. The difference dim (Ker A) − dim (Ker A† ) is called the index of the operator. The index of an operator is often related to topological properties of the space on which it acts, and in this way appears in physics as the origin of anomalies in quantum field theory.
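One way to see why the warning is intrinsically infinite-dimensional: any n-by-n truncation of the shift necessarily drops A en , which creates a kernel and forces the index to zero. A numpy illustration of this point (the helper `kernel_dim` is ours):

```python
import numpy as np

def kernel_dim(M):
    """dim Ker M = n - rank M, by the range-nullspace theorem."""
    return M.shape[1] - np.linalg.matrix_rank(M)

# n-by-n truncation of the shift A e_i = e_{i+1} (the image of e_n
# is necessarily dropped, so column n-1 is zero).
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i + 1, i] = 1.0  # column i of A is e_{i+1}

# Both kernels of the truncation are one dimensional, so the index
# dim Ker A - dim Ker A-dagger is forced to 0: the nonzero index of
# the infinite-dimensional shift is invisible at any finite n.
print(kernel_dim(A), kernel_dim(A.T))  # 1 1
```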

A.4.1 Fredholm Alternative

The results of the previous section can be summarized as saying that the Fredholm Alternative holds for finite square matrices. The Fredholm Alternative is the set of statements

I. Either

i) Ax = b has a unique solution,

or

ii) Ax = 0 has a solution.

II. If Ax = 0 has n linearly independent solutions, then so does A† x = 0.

III. If alternative ii) holds, then Ax = b has no solution unless b is perpendicular to all solutions of A† x = 0.

It should be obvious that this is a recasting of the statements that

dim (Ker A) = dim (Ker A† ),

and

(Ker A† )⊥ = Im A. (A.43)

Notice that finite-dimensionality is essential here. Neither of these statements is guaranteed to be true in infinite-dimensional spaces.
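Statement III can be illustrated numerically with the singular 3-by-3 example from earlier: Ax = b is solvable precisely when b is orthogonal to Ker A† (which equals Ker Aᵀ for a real matrix). A sketch using the SVD to obtain a basis of Ker Aᵀ (the helper names are ours):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [1., 1., 1.],
              [2., 3., 4.]])

# A basis for Ker A^T from the SVD: the left singular vectors with
# zero singular value span (Im A)-perp = Ker A^T.
u, s, vt = np.linalg.svd(A)
null_left = u[:, s < 1e-10]  # one vector here, since rank A = 2

def solvable(b, tol=1e-10):
    """Fredholm test: A x = b is solvable iff b is perpendicular
    to every solution of A^T x = 0."""
    return bool(np.all(np.abs(null_left.T @ b) < tol))

print(solvable(A @ np.array([1., 0., 0.])))  # True: b = A x lies in Im A
print(solvable(np.array([1., 0., 0.])))      # False: b has a component
                                             # along Ker A^T
```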


A.5 Determinants

A.5.1 Skew-symmetric n-linear Forms

You should be familiar with the elementary definition of the determinant of

an n-by-n matrix A having entries aij . We have

        | a11 a12 · · · a1n |
        | a21 a22 · · · a2n |
det A ≡ |  ⋮    ⋮   ⋱    ⋮  | = εi1 i2 ...in a1i1 a2i2 · · · anin . (A.44)
        | an1 an2 · · · ann |

Here, εi1 i2 ...in is the Levi-Civita symbol, which is skew-symmetric in all its indices and ε12...n = 1. From this definition we see that the determinant

changes sign if any pair of its rows are interchanged, and that it is linear in

each row. In other words

| λa11 + µb11  λa12 + µb12  · · ·  λa1n + µb1n |
|     c21          c22      · · ·      c2n     |
|      ⋮            ⋮        ⋱          ⋮      |
|     cn1          cn2      · · ·      cnn     |

    | a11 a12 · · · a1n |     | b11 b12 · · · b1n |
    | c21 c22 · · · c2n |     | c21 c22 · · · c2n |
= λ |  ⋮   ⋮    ⋱   ⋮  | + µ |  ⋮   ⋮    ⋱   ⋮  | .
    | cn1 cn2 · · · cnn |     | cn1 cn2 · · · cnn |
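Definition (A.44) can be turned directly into (inefficient) code: sum over all permutations of the column indices, weighting each product of entries by the sign of the permutation, which is the value of the Levi-Civita symbol. A sketch comparing against np.linalg.det (the name `levi_civita_det` is ours):

```python
import numpy as np
from itertools import permutations

def levi_civita_det(a):
    """det A as the Levi-Civita sum of (A.44):
    sum over i1...in of eps_{i1...in} a_{1 i1} a_{2 i2} ... a_{n in}.
    Exponential cost in n -- for illustration only."""
    n = a.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # Sign of the permutation = value of the Levi-Civita symbol,
        # computed by counting inversions.
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = sign
        for row, col in enumerate(perm):
            term *= a[row, col]
        total += term
    return total

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
print(bool(np.isclose(levi_civita_det(A), np.linalg.det(A))))  # True
```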