In this list 1, λ, μ ∈ F and x, y, 0 ∈ V .


APPENDIX A. ELEMENTARY LINEAR ALGEBRA

7) Scalar multiplicative associativity: (λμ)x = λ(μx).

8) Multiplicative identity: 1x = x.

The elements of V are called vectors.

In the sequel, we will only consider vector spaces over the field of the real numbers, F = R, or the complex numbers, F = C.

A.1.2 Bases and Components

Let V be a vector space over F . For the moment, this space has no additional structure beyond that of the previous section: no inner product, and so no notion of what it means for two vectors to be orthogonal. There is still much that can be done, though. Here are the most basic concepts and properties that you should understand:

i) A set of vectors {e1 , e2 , . . . , en } is linearly dependent iff there exist λμ ∈ F , not all zero, such that

λ1 e1 + λ2 e2 + · · · + λn en = 0. (A.1)

ii) A set of vectors {e1 , e2 , . . . , en } is linearly independent iff

λ1 e1 + λ2 e2 + · · · + λn en = 0 ⇒ λμ = 0, ∀μ. (A.2)

iii) A set of vectors {e1 , e2 , . . . , en } is a spanning set iff for any x ∈ V there

are numbers xµ such that x can be written (not necessarily uniquely)

as

x = x1 e1 + x2 e2 + · · · + xn en . (A.3)

A vector space is finite dimensional iff a finite spanning set exists.

iv) A set of vectors {e1 , e2 , . . . , en } is said to be a basis if it is a maximal

linearly independent set (i.e. adding any other vector makes the set

linearly dependent). An alternative definition declares a basis to be a

minimal spanning set (i.e. deleting any vector destroys the spanning

property). Exercise: Show that these two definitions are equivalent.

v) If {e1 , e2 , . . . , en } is a basis then any x ∈ V can be written

x = x1 e1 + x2 e2 + · · · + xn en , (A.4)

where the xµ , the components of the vector, are unique in that two

vectors coincide iff they have the same components.


vi) Fundamental Theorem: If the sets {e1 , e2 , . . . , en } and {f1 , f2 , . . . , fm } are both bases for the space V then m = n. This invariant number is the dimension, dim (V ), of the space. For a proof (not difficult) see a mathematics text such as Birkhoff and Mac Lane's Survey of Modern Algebra, or Halmos's Finite Dimensional Vector Spaces.
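Items i)–iii) can be illustrated concretely. The following is a minimal sketch in Python (NumPy is my choice of tool here, not something the text prescribes): a set of vectors is linearly independent iff the matrix having them as columns has rank equal to the number of vectors, and spans F^n iff that rank equals n.

```python
import numpy as np

def is_linearly_independent(vectors):
    # lambda^1 e_1 + ... + lambda^n e_n = 0 forces every lambda^mu = 0
    # exactly when the matrix with the vectors as columns has full column rank.
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

def spans(vectors, dim):
    # Every x can be written x = x^mu e_mu iff the columns have rank dim.
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == dim

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
e3 = np.array([0.0, 0.0, 1.0])

assert is_linearly_independent([e1, e2, e3])           # a basis of R^3
assert not is_linearly_independent([e1, e2, e1 + e2])  # e1 + e2 is a combination
assert not spans([e1, e2], 3)                          # two vectors cannot span R^3
assert spans([e1, e2, e3], 3)
```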

Suppose that {e1 , e2 , . . . , en } and {e′1 , e′2 , . . . , e′n } are both bases, and that

e_ν = a^μ_ν e′_μ , (A.5)

where the spanning properties and linear independence demand that a^μ_ν be an invertible matrix. (Note that we are, as usual, using the Einstein summation convention that repeated indices are to be summed over.) The components x′^μ of x in the new basis are then found from

x = x′^μ e′_μ = x^ν e_ν = (x^ν a^μ_ν ) e′_μ (A.6)

as x′^μ = a^μ_ν x^ν , or equivalently, x^ν = (a^{−1})^ν_μ x′^μ . Note how the e_μ and the x^μ transform in opposite directions. The components x^μ are therefore said to transform contravariantly.
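The change-of-basis rule can be checked numerically. In this sketch (NumPy; the matrix a is an arbitrary invertible example, not taken from the text) the old basis e_ν is expressed through the new basis via e_ν = a^μ_ν e′_μ, while the components transform the opposite way, x′^μ = a^μ_ν x^ν, so the vector itself is unchanged.

```python
import numpy as np

# Fixed background coordinates; old basis vectors e_nu as columns of E.
E = np.eye(3)

# An invertible change-of-basis matrix a^mu_nu (rows mu, columns nu).
a = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 3.0]])

# e_nu = a^mu_nu e'_mu means E = E_new @ a, so the new basis is E @ inv(a).
E_new = E @ np.linalg.inv(a)

x_old = np.array([1.0, 2.0, 3.0])  # components x^nu in the old basis
x_new = a @ x_old                  # contravariant rule: x'^mu = a^mu_nu x^nu

# The vector x^nu e_nu = x'^mu e'_mu is basis-independent.
assert np.allclose(E @ x_old, E_new @ x_new)

# Inverting recovers the old components: x^nu = (a^-1)^nu_mu x'^mu.
assert np.allclose(np.linalg.inv(a) @ x_new, x_old)
```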

A.2 Linear Maps

Let V and W be vector spaces. A linear map, or linear operator, A is a

function A : V → W with the property that

A(λx + μy) = λA(x) + μA(y). (A.7)

It is an object that exists independently of any basis. Given bases {eµ } for

V and {fν } for W , however, it may be represented by a matrix. We obtain

this matrix A, having entries Aν µ , by looking at the action of the map on

the basis elements:

A(eµ ) = fν Aν µ . (A.8)

The “backward” wiring of the indices is deliberate.² It is set up so that if

y = A(x), then

y ≡ y ν fν = A(x) = A(xµ eµ ) = xµ A(eµ ) = xµ (fν Aν µ ) = (Aν µ xµ )fν . (A.9)

² You will have seen this “backward” action before in quantum mechanics. If we use Dirac notation |n⟩ for an orthonormal basis, and insert a complete set of states, |m⟩⟨m|, then A|n⟩ = |m⟩⟨m|A|n⟩, and so the matrix ⟨m|A|n⟩ representing the operator A naturally appears to the right of the vector on which it acts.


Comparing coefficients of fν , we have

y ν = Aν µ xµ , (A.10)

which is the usual matrix multiplication y = Ax.
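A small numerical illustration of (A.8)–(A.10), with a made-up 2×3 matrix (NumPy; the specific numbers are mine, not the text's):

```python
import numpy as np

# A : V -> W with dim V = 3, dim W = 2. The matrix A^nu_mu holds the
# components of A(e_mu) in the basis {f_nu} as its mu-th column (A.8).
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])

x = np.array([1.0, 1.0, 1.0])  # components x^mu of a vector in V
y = A @ x                      # y^nu = A^nu_mu x^mu, equation (A.10)
assert np.allclose(y, [3.0, 4.0])

# Column mu of the matrix is the image of the basis vector e_mu.
e2 = np.array([0.0, 1.0, 0.0])
assert np.allclose(A @ e2, A[:, 1])
```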

A.2.1 Range-Nullspace Theorem

Given a linear map A : V → W , we can define two important subspaces:

i) The kernel or nullspace is defined by

Ker A = {x ∈ V : A(x) = 0}. (A.11)

It is a subspace of V .

ii) The range or image space is defined by

Im A = {y ∈ W : y = A(x), x ∈ V }. (A.12)

It is a subspace of the target space W .

The key result linking these spaces is the range-nullspace theorem, which states that

dim (Ker A) + dim (Im A) = dim V.

It is proved by taking a basis, nμ , for Ker A and extending it to a basis for the whole of V by appending (dim V − dim (Ker A)) extra vectors, eν . It is easy to see that the vectors A(eν ) are linearly independent and span Im A ⊆ W . Note that this result is meaningless unless V is finite dimensional.

If dim V = n and dim W = m, then the linear map will be represented by an m × n matrix. The number dim (Im A) is the number of linearly independent columns in the matrix, and is often called the (column) rank of the matrix.
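Both the theorem and the column-rank statement can be checked on a small example (a NumPy sketch; the matrix is purely illustrative):

```python
import numpy as np

# A map A : R^3 -> R^2 whose 2x3 matrix has rank 1
# (the second row is twice the first).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

n = A.shape[1]                   # dim V
rank = np.linalg.matrix_rank(A)  # dim (Im A): linearly independent columns
nullity = n - rank               # dim (Ker A), by the range-nullspace theorem
assert rank == 1 and nullity == 2

# An explicit kernel vector: A maps it to zero.
v = np.array([2.0, -1.0, 0.0])
assert np.allclose(A @ v, 0.0)
```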

A.2.2 The Dual Space

Associated with the vector space V is its dual space, V ∗ , which is the set of linear maps f : V → F . In other words, it is the set of linear functions f ( ) that take in a vector and return a number. These functions are often called covectors. (Mathematicians often stick the prefix co in front of a word to indicate a dual class of objects, which is always the set of structure-preserving maps of the objects into the field over which they are defined.)


Using linearity we have

f (x) = f (xµ eµ ) = xµ f (eµ ) = xµ fµ . (A.13)

The numbers fμ = f (eμ ) are the components of the covector f ∈ V ∗ .

If e_ν = a^μ_ν e′_μ then

f_ν = f (e_ν ) = f (a^μ_ν e′_μ ) = a^μ_ν f (e′_μ ) = a^μ_ν f′_μ . (A.14)

Thus f_ν = a^μ_ν f′_μ , and the fμ components transform in the same direction as the basis. They are therefore said to transform covariantly.
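The opposite "covariant" versus "contravariant" behaviour can be verified numerically. A sketch (NumPy; a is an arbitrary invertible matrix of my choosing, not from the text): covector components carry a, vector components carry a^{-1}, and the pairing f(x) = fμ x^μ is left unchanged.

```python
import numpy as np

a = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 3.0]])  # invertible change-of-basis matrix a^mu_nu

x_old = np.array([1.0, 2.0, 3.0])
x_new = a @ x_old                # contravariant: x'^mu = a^mu_nu x^nu

f_old = np.array([4.0, 5.0, 6.0])
# Covariant rule f_nu = a^mu_nu f'_mu, i.e. f_old = a.T @ f_new:
f_new = np.linalg.inv(a).T @ f_old
assert np.allclose(a.T @ f_new, f_old)

# The scalar f(x) = f_mu x^mu is the same in either basis.
assert np.isclose(f_old @ x_old, f_new @ x_new)
```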

Given a basis e_μ of V , we can define a dual basis for V ∗ as the set of covectors e∗μ ∈ V ∗ such that

e∗μ (e_ν ) = δ^μ_ν . (A.15)

It is clear that this is a basis for V ∗ , and that f can be expanded as

f = fμ e∗μ . (A.16)
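Concretely, if the basis vectors e_μ are stored as the columns of an invertible matrix, the dual-basis covectors e∗μ are the rows of its inverse: row μ of the inverse applied to column ν gives δ^μ_ν, which is condition (A.15). A sketch (NumPy; the basis and the functional are made-up examples):

```python
import numpy as np

# Basis vectors e_mu as columns of an invertible matrix.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

# Dual basis covectors e*mu as rows of the inverse: e*mu(e_nu) = delta^mu_nu.
E_dual = np.linalg.inv(E)
assert np.allclose(E_dual @ E, np.eye(3))

# An arbitrary linear functional f, and its components f_mu = f(e_mu).
c = np.array([1.0, 2.0, 3.0])
f = lambda v: c @ v
f_components = np.array([f(E[:, mu]) for mu in range(3)])

# The expansion f = f_mu e*mu of (A.16) reproduces f on any vector.
v = np.array([4.0, 5.0, 6.0])
assert np.isclose(f_components @ (E_dual @ v), f(v))
```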