$$\begin{vmatrix}
\vdots & \vdots &        & \vdots \\
c_{n1} & c_{n2} & \cdots & c_{nn}
\end{vmatrix}$$

If we consider each row as being the components of a vector in an n-dimensional

vector space V , we may regard the determinant as being a skew-symmetric

n-linear form, i.e. a map

$$\omega : \underbrace{V \times V \times \cdots \times V}_{n\ \text{factors}} \to F \tag{A.45}$$

which is linear in each slot,

$$\omega(\lambda a + \mu b, c_2, \ldots, c_n) = \lambda\, \omega(a, c_2, \ldots, c_n) + \mu\, \omega(b, c_2, \ldots, c_n), \tag{A.46}$$

and changes sign when any two arguments are interchanged,

$$\omega(\ldots, a_i, \ldots, a_j, \ldots) = -\,\omega(\ldots, a_j, \ldots, a_i, \ldots). \tag{A.47}$$

A.5. DETERMINANTS 293

We will denote the space of skew-symmetric n-linear forms on V by the symbol $\bigwedge^n(V^*)$. Let ω be an arbitrary skew-symmetric n-linear form in $\bigwedge^n(V^*)$, and let {e1, e2, . . . , en} be a basis for V. If ai = aij ej (i = 1, . . . , n) is a collection of n vectors⁵, we compute

$$\begin{aligned}
\omega(a_1, a_2, \ldots, a_n) &= a_{1i_1} a_{2i_2} \cdots a_{ni_n}\, \omega(e_{i_1}, e_{i_2}, \ldots, e_{i_n}) \\
&= a_{1i_1} a_{2i_2} \cdots a_{ni_n}\, \epsilon_{i_1 i_2 \ldots i_n}\, \omega(e_1, e_2, \ldots, e_n).
\end{aligned} \tag{A.48}$$

In the first line we have exploited the linearity of ω in each slot, and in going from the first to the second line we have used skew-symmetry to rearrange the basis vectors in their canonical order. We deduce that all skew-symmetric n-forms are proportional to the determinant

$$\omega(a_1, a_2, \ldots, a_n) \propto \begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{vmatrix},$$

and that the proportionality factor is the number ω(e1 , e2 , . . . , en ). When

the number of its slots is equal to the dimension of the vector space, there is

therefore essentially only one skew-symmetric multilinear form, and $\bigwedge^n(V^*)$

is a one-dimensional vector space.
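The permutation expansion (A.48) can be checked numerically. The sketch below (Python, with illustrative helper names of my own choosing) builds a skew-symmetric n-linear form directly from the Levi-Civita expansion; the `scale` parameter plays the role of the overall factor ω(e1, . . . , en):

```python
from itertools import permutations

def levi_civita_sign(perm):
    """Sign of a permutation: +1 if even, -1 if odd (cycle-sort method)."""
    sign = 1
    perm = list(perm)
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            sign = -sign
    return sign

def omega(rows, scale=1.0):
    """Skew-symmetric n-linear form evaluated on n row vectors:
    scale * sum over permutations p of sign(p) * rows[0][p0] * ... * rows[n-1][p_{n-1}].
    With scale = 1 this is exactly the determinant."""
    n = len(rows)
    total = 0.0
    for p in permutations(range(n)):
        term = levi_civita_sign(p)
        for i in range(n):
            term *= rows[i][p[i]]
        total += term
    return scale * total

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
print(omega(A))                   # the determinant of A, here 8.0
print(omega(A, scale=5))          # proportional: 5 * det A = 40.0
print(omega([A[1], A[0], A[2]]))  # swapping two rows flips the sign: -8.0
```

The last line illustrates the skew-symmetry (A.47): interchanging two arguments changes only the sign.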

Exercise: Let ω be a skew-symmetric n-linear form on an n-dimensional

vector space. Assuming that ω does not vanish identically, show that a set

of n vectors x1 , x2 , . . . , xn is linearly independent, and hence forms a basis,

if, and only if, ω(x1, x2, . . . , xn) ≠ 0.
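The content of the exercise is easy to see concretely: for the determinant form, ω vanishes exactly when the vectors are linearly dependent. A small Python illustration (using the standard Leibniz expansion; the example matrices are my own):

```python
from itertools import permutations

def det(rows):
    """Determinant via the permutation (Leibniz) expansion."""
    n = len(rows)
    total = 0.0
    for p in permutations(range(n)):
        # parity of the permutation via its inversion count
        sign = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        prod = 1.0
        for i in range(n):
            prod *= rows[i][p[i]]
        total += sign * prod
    return total

independent = [[1.0, 0.0, 2.0],
               [0.0, 1.0, 1.0],
               [1.0, 1.0, 0.0]]
dependent = [[1.0, 2.0, 3.0],
             [2.0, 4.0, 6.0],   # twice the first row
             [0.0, 1.0, 1.0]]
print(det(independent))  # nonzero (-3.0): the rows form a basis
print(det(dependent))    # 0.0: the rows are linearly dependent
```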

Now we use the notion of skew-symmetric n-linear forms to give a powerful definition of the determinant of an endomorphism, i.e. a linear map A : V → V. Let ω be a non-zero skew-symmetric n-linear form. The object

$$\omega_A(x_1, x_2, \ldots, x_n) = \omega(Ax_1, Ax_2, \ldots, Ax_n) \tag{A.49}$$

is also a skew-symmetric n-linear form. Since there is only one such object

up to multiplicative constants, we must have

$$\omega(Ax_1, Ax_2, \ldots, Ax_n) \propto \omega(x_1, x_2, \ldots, x_n). \tag{A.50}$$

⁵The index j on $a_{ij}$ should really be a superscript, since $a_i^{\,j}$ is the j-th contravariant component of the vector $a_i$. We are writing it as a subscript only for compatibility with other equations in this section.

294 APPENDIX A. ELEMENTARY LINEAR ALGEBRA

We define "det A" to be the constant of proportionality. Thus
$$\omega(Ax_1, Ax_2, \ldots, Ax_n) = \det(A)\, \omega(x_1, x_2, \ldots, x_n). \tag{A.51}$$

By writing this out in a basis where the linear map A is represented by the matrix $\mathbf{A}$, we easily see that
$$\det A = \det \mathbf{A}. \tag{A.52}$$

The new definition is therefore compatible with the old one. The advantage of this more sophisticated definition is that it makes no appeal to a basis, and so shows that the determinant of an endomorphism is a basis-independent concept. A byproduct is an easy proof that det(AB) = det(A) det(B), a result that is not so easy to establish with the elementary definition. We write

$$\begin{aligned}
\det(AB)\, \omega(x_1, x_2, \ldots, x_n) &= \omega(ABx_1, ABx_2, \ldots, ABx_n) \\
&= \omega(A(Bx_1), A(Bx_2), \ldots, A(Bx_n)) \\
&= \det(A)\, \omega(Bx_1, Bx_2, \ldots, Bx_n) \\
&= \det(A) \det(B)\, \omega(x_1, x_2, \ldots, x_n).
\end{aligned} \tag{A.53}$$

Cancelling the common factor of ω(x1 , x2 , . . . , xn ) completes the proof.
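Both the basis independence and the product rule are easy to check numerically. A minimal Python sketch for 2 × 2 matrices (exact rational arithmetic, so there are no rounding issues; the example matrices are arbitrary choices of mine):

```python
from fractions import Fraction as F

def matmul(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    d = det2(m)
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

A = [[F(2), F(1)], [F(4), F(3)]]
B = [[F(1), F(5)], [F(2), F(3)]]

# det(AB) = det(A) det(B)
print(det2(matmul(A, B)), det2(A) * det2(B))   # -14 -14

# the determinant is basis-independent: det(P^-1 A P) = det(A)
P = [[F(1), F(1)], [F(1), F(2)]]               # an invertible change of basis
similar = matmul(matmul(inv2(P), A), P)
print(det2(similar), det2(A))                  # 2 2
```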

A.5.2 The Adjugate Matrix

Given a matrix
$$A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix} \tag{A.54}$$

and an element $a_{ij}$, we define the corresponding minor $M_{ij}$ to be the determinant of the (n − 1) × (n − 1) matrix constructed by deleting from A the row and column containing $a_{ij}$. The number
$$A_{ij} = (-1)^{i+j} M_{ij} \tag{A.55}$$
is then called the co-factor of the element $a_{ij}$. (It is traditional to use uppercase letters to denote co-factors.) The basic result involving co-factors is that
$$\sum_j a_{ij} A_{i'j} = \delta_{ii'} \det A. \tag{A.56}$$


When i = i′, this is simply the elementary definition of the determinant (although some signs need checking if i ≠ 1). We get zero when i ≠ i′ because we are effectively expanding out a determinant with two equal rows.

We now define the adjugate matrix⁶, Adj A, to be the transposed matrix of the co-factors:
$$(\mathrm{Adj}\, A)_{ij} = A_{ji}. \tag{A.57}$$

In terms of this we have
$$A\,(\mathrm{Adj}\, A) = (\det A)\, I. \tag{A.58}$$
In other words
$$A^{-1} = \frac{1}{\det A}\, \mathrm{Adj}\, A. \tag{A.59}$$

Each entry in the adjugate matrix is a polynomial of degree n − 1 in the

entries of the original matrix. Thus, no division is required to form it, and

the adjugate matrix exists even if the inverse matrix does not.

Cayley's Theorem

You should be familiar with the observation that the possible eigenvalues of the n × n matrix A are given by the roots of its characteristic equation
$$0 = \det(A - \lambda I) = (-1)^n \left\{ \lambda^n - \mathrm{tr}\,(A)\, \lambda^{n-1} + \cdots + (-1)^n \det(A) \right\}, \tag{A.60}$$
and with Cayley's theorem, which asserts that every matrix obeys its own characteristic equation:
$$A^n - \mathrm{tr}\,(A)\, A^{n-1} + \cdots + (-1)^n \det(A)\, I = 0. \tag{A.61}$$
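For a 2 × 2 matrix the theorem reduces to A² − tr(A) A + det(A) I = 0, which can be verified directly. A minimal Python check (the example matrix is an arbitrary choice):

```python
def matmul(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3, 1],
     [2, 5]]
tr = A[0][0] + A[1][1]                        # trace = 8
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant = 13
A2 = matmul(A, A)
# Cayley's theorem for n = 2: A^2 - tr(A) A + det(A) I = 0
zero = [[A2[i][j] - tr * A[i][j] + det * (1 if i == j else 0)
         for j in range(2)] for i in range(2)]
print(zero)  # [[0, 0], [0, 0]]
```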

The proof of Cayley's theorem involves the adjugate matrix. We write
$$\det(A - \lambda I) = (-1)^n \lambda^n + \alpha_1 \lambda^{n-1} + \cdots + \alpha_n \tag{A.62}$$
and observe that
$$\det(A - \lambda I)\, I = (A - \lambda I)\, \mathrm{Adj}\,(A - \lambda I). \tag{A.63}$$
Now Adj (A − λI) is a matrix-valued polynomial in λ of degree n − 1, and it can be written
$$\mathrm{Adj}\,(A - \lambda I) = C_0 \lambda^{n-1} + C_1 \lambda^{n-2} + \cdots + C_{n-1}, \tag{A.64}$$

⁶Some authors rather confusingly call this the adjoint matrix.


for some matrix coefficients $C_i$. On multiplying out the equation
$$\left( (-1)^n \lambda^n + \alpha_1 \lambda^{n-1} + \cdots + \alpha_n \right) I = (A - \lambda I)\left( C_0 \lambda^{n-1} + C_1 \lambda^{n-2} + \cdots + C_{n-1} \right) \tag{A.65}$$
and comparing like powers of λ, we find the relations