algorithm, we can simply “read off” bases for both the image and the kernel, as well as the inverse (if it exists), of a linear map represented as a matrix with respect to some ordered bases. Also note that this procedure provides a “constructive” version of Theorem 14.29.

Example 15.5. Continuing with Examples 15.3 and 15.4, we see that the vectors (1, 0, 2) and (0, 1, 1) form a basis for the row space of A, while the vector (2, 2, 1) is a basis for the row null space of A. □
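As a concrete sketch (the matrix A of Example 15.3 is not reproduced in this excerpt), the following Python code runs extended Gaussian elimination over a small prime field on a hypothetical matrix chosen to be consistent with the bases quoted above, and reads off the two bases from B and M:

```python
# Extended Gaussian elimination over GF(p): track M with M*A = B, where B is
# the reduced row echelon form of A. The matrix A below is a hypothetical
# stand-in over F = Z_3 (Example 15.3's matrix is not shown in this excerpt).

def rref_extended(A, p):
    """Return (B, M, pivots) with M*A = B over GF(p), p prime."""
    m, n = len(A), len(A[0])
    B = [[x % p for x in row] for row in A]
    M = [[int(i == j) for j in range(m)] for i in range(m)]  # identity
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if B[i][c]), None)
        if piv is None:
            continue
        B[r], B[piv] = B[piv], B[r]
        M[r], M[piv] = M[piv], M[r]
        inv = pow(B[r][c], p - 2, p)          # inverse mod p (Fermat)
        B[r] = [x * inv % p for x in B[r]]
        M[r] = [x * inv % p for x in M[r]]
        for i in range(m):
            if i != r and B[i][c]:
                f = B[i][c]
                B[i] = [(x - f * y) % p for x, y in zip(B[i], B[r])]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return B, M, pivots

p = 3
A = [[1, 0, 2],
     [0, 1, 1],
     [1, 1, 0]]           # third row = sum of the first two (mod 3)
B, M, pivots = rref_extended(A, p)
r = len(pivots)
row_space_basis = B[:r]   # nonzero rows of B
row_null_basis = M[r:]    # last m - r rows of M
print(row_space_basis)    # [[1, 0, 2], [0, 1, 1]]
print(row_null_basis)     # [[2, 2, 1]]
```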

Solving linear systems of equations

Suppose that in addition to the matrix A, we are given w ∈ W, and want to find a solution v (or perhaps describe all solutions v) to the equation

vA = w.     (15.1)

Equivalently, we can phrase the problem as finding an element (or describing all elements) of the set ρ⁻¹(w).

Now, if there exists a solution at all, say ṽ ∈ V, then since ρ(v) = ρ(ṽ) if and only if v ≡ ṽ (mod ker(ρ)), it follows that the set of all solutions to (15.1) is equal to the coset ṽ + ker(ρ). Thus, given a basis for ker(ρ) and any solution ṽ to (15.1), we have a complete and concise description of the set of solutions to (15.1).
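The coset description can be illustrated with a small hypothetical system over Z_3 (the matrix, the particular solution, and the kernel basis below are assumed data for illustration, not taken from the text's examples):

```python
# Given one solution v0 of vA = w and a basis of ker(rho), the full solution
# set is the coset v0 + ker(rho). Hypothetical data over F = Z_3.

from itertools import product

p = 3
A = [[1, 0, 2],
     [0, 1, 1],
     [1, 1, 0]]
w = [1, 1, 0]                 # the third row of A, so (15.1) is solvable
v0 = [0, 0, 1]                # one particular solution: v0 * A = w
kernel_basis = [[2, 2, 1]]    # a basis of ker(rho) for this A

def vec_mat(v, X, p):
    """Row vector times matrix over GF(p)."""
    return [sum(v[i] * X[i][j] for i in range(len(v))) % p
            for j in range(len(X[0]))]

assert vec_mat(v0, A, p) == w

# Enumerate the coset v0 + span(kernel_basis): p**dim(ker) solutions in all.
solutions = []
for coeffs in product(range(p), repeat=len(kernel_basis)):
    v = v0[:]
    for c, k in zip(coeffs, kernel_basis):
        v = [(x + c * y) % p for x, y in zip(v, k)]
    solutions.append(v)

assert all(vec_mat(v, A, p) == w for v in solutions)
print(len(solutions))   # 3 = p ** dim(ker)
```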

As we have discussed above, the last m − r rows of M give us a basis for ker(ρ), so it suffices to determine if w ∈ img(ρ), and if so, determine a single pre-image v of w.

Also as we discussed, img(ρ), that is, the row space of A, is equal to the row space of B, and because of the special form of B, we can quickly and easily determine if the given w is in the row space of B, as follows. By definition, w is in the row space of B iff there exists a vector v̄ ∈ V such that v̄B = w. We may as well assume that all but the first r entries of v̄ are zero. Moreover, v̄B = w implies that for i = 1, …, r, the ith entry of v̄ is equal to the p_i-th entry of w.

15.5 Applications of Gaussian elimination 331

Thus, the vector v̄, if it exists, is completely determined by the entries of w at positions p_1, …, p_r. We can construct v̄ satisfying these conditions, and then test if v̄B = w. If not, then we may conclude that (15.1) has no solutions; otherwise, setting v := v̄M, we see that vA = (v̄M)A = v̄(MA) = v̄B = w, and so v is a solution to (15.1).

One easily verifies that if we implement the above procedure as an algorithm, the work done in addition to running the extended Gaussian elimination algorithm amounts to O(m(n + m)) operations in F.
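A minimal sketch of this procedure in Python, assuming B, M, and the pivot positions have already been produced by extended Gaussian elimination (here they are hardcoded for a hypothetical 3 × 3 matrix over Z_3):

```python
# Solve v*A = w over GF(p), given B (reduced row echelon form), M (invertible,
# with M*A = B), and the pivot positions. All data below is hypothetical.

p = 3
A = [[1, 0, 2], [0, 1, 1], [1, 1, 0]]
B = [[1, 0, 2], [0, 1, 1], [0, 0, 0]]   # RREF of A
M = [[1, 0, 0], [0, 1, 0], [2, 2, 1]]   # M*A = B
pivots = [0, 1]                          # pivot columns p_1, p_2 (0-based)

def vec_mat(v, X, p):
    """Row vector times matrix over GF(p)."""
    return [sum(v[i] * X[i][j] for i in range(len(v))) % p
            for j in range(len(X[0]))]

def solve(w, B, M, pivots, p):
    """Return some v with v*A = w (mod p), or None if w is not in the
    row space of A."""
    m = len(B)
    vbar = [0] * m
    for i, pi in enumerate(pivots):   # vbar is determined by the entries
        vbar[i] = w[pi] % p           # of w at the pivot positions
    if vec_mat(vbar, B, p) != [x % p for x in w]:
        return None                   # w is not in the row space
    return vec_mat(vbar, M, p)        # v := vbar * M, so v*A = vbar*B = w

v = solve([1, 1, 0], B, M, pivots, p)
assert v is not None and vec_mat(v, A, p) == [1, 1, 0]
assert solve([1, 0, 0], B, M, pivots, p) is None   # not in the row space
```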

A special case of the above procedure is when m = n and A is invertible, in which case (15.1) has a unique solution, namely, v := wM, since in this case, M = A⁻¹.
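For the invertible case, a small sketch over Z_5 (the matrix is hypothetical, and the elimination routine is the standard one rather than code from the text):

```python
# When m = n and A is invertible over GF(p), extended elimination yields
# B = I, so M = A^{-1} and the unique solution of v*A = w is v = w*M.

def rref_extended(A, p):
    """Return (B, M, pivots) with M*A = B over GF(p), p prime."""
    m, n = len(A), len(A[0])
    B = [[x % p for x in row] for row in A]
    M = [[int(i == j) for j in range(m)] for i in range(m)]
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if B[i][c]), None)
        if piv is None:
            continue
        B[r], B[piv] = B[piv], B[r]
        M[r], M[piv] = M[piv], M[r]
        inv = pow(B[r][c], p - 2, p)
        B[r] = [x * inv % p for x in B[r]]
        M[r] = [x * inv % p for x in M[r]]
        for i in range(m):
            if i != r and B[i][c]:
                f = B[i][c]
                B[i] = [(x - f * y) % p for x, y in zip(B[i], B[r])]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return B, M, pivots

p = 5
A = [[1, 2], [3, 4]]                 # det = -2 = 3 (mod 5): invertible
B, M, pivots = rref_extended(A, p)
assert B == [[1, 0], [0, 1]]         # B = I, hence M = A^{-1}

w = [1, 1]
v = [sum(w[i] * M[i][j] for i in range(2)) % p for j in range(2)]  # v = w*M
assert [sum(v[i] * A[i][j] for i in range(2)) % p for j in range(2)] == w
```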

The rank of a matrix

Define the row rank of A to be the dimension of its row space, which is dim_F(img(ρ)), and define the column rank of A to be the dimension of its column space, that is, the space spanned by the columns of A.

Now, the column space of A may not be the same as the column space of B, but from the relation B = MA, and the fact that M is invertible, it easily follows that these two subspaces are isomorphic (via the isomorphism that sends v to Mv), and hence have the same dimension. Moreover, by Theorem 15.5, the column rank of B is r, which is the same as the row rank of A.

So we may conclude: The column rank and row rank of A are the same. Because of this, we define the rank of a matrix to be the common value of its row and column rank.
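This equality is easy to check computationally; the sketch below computes the rank of a hypothetical matrix and of its transpose over Z_3 by forward elimination:

```python
# Row rank equals column rank: the rank of A and of its transpose agree.
# Computed here by forward elimination over GF(p) on a hypothetical matrix.

def rank_mod_p(A, p):
    """Rank of A over GF(p), p prime, by forward elimination."""
    A = [[x % p for x in row] for row in A]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], p - 2, p)
        A[r] = [x * inv % p for x in A[r]]
        for i in range(r + 1, m):
            f = A[i][c]
            A[i] = [(x - f * y) % p for x, y in zip(A[i], A[r])]
        r += 1
    return r

p = 3
A = [[1, 0, 2], [0, 1, 1], [1, 1, 0]]      # third row = sum of first two
At = [list(col) for col in zip(*A)]        # transpose of A
assert rank_mod_p(A, p) == rank_mod_p(At, p) == 2
```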

The orthogonal complement of a subspace

So as to give equal treatment to rows and columns, one can also define the column null space of A to be the kernel of the linear map defined by multiplication on the left by A. By applying the results above to the transpose of A, we see that the column null space of A has dimension n − r, where r is the rank of A.

Let U ⊆ W denote the row space of A, and let Ū ⊆ W denote the set of all vectors ū ∈ W whose transpose belongs to the column null space of A. Now, U is a subspace of W of dimension r and Ū is a subspace of W of dimension n − r.

Moreover, if U ∩ Ū = {0}, then by Theorem 14.13 we have an isomorphism of U × Ū with U + Ū, and since U × Ū has dimension n, it must be the case that U + Ū = W. It follows that every element of W can be expressed uniquely as u + ū, where u ∈ U and ū ∈ Ū.

Now, all of the conclusions in the previous paragraph hinged on the assumption that U ∩ Ū = {0}. The space Ū consists precisely of all vectors ū ∈ W which are “orthogonal” to all vectors u ∈ U, in the sense that the “inner product” u · ū is zero. For this reason, Ū is sometimes called the “orthogonal complement of U.” The condition U ∩ Ū = {0} is equivalent to saying that U contains no non-zero “self-orthogonal vectors” u such that u · u = 0_F. If F is the field of real numbers, then of course there are no non-zero self-orthogonal vectors, since u · u is the sum of the squares of the entries of u. However, for other fields, there may very well be non-zero self-orthogonal vectors. As an example, if F = Z_2, then any vector u with an even number of 1-entries is self-orthogonal.
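A quick check of this fact in Python (the vector below is an arbitrary illustration):

```python
# Over F = Z_2, x*x = x for each entry, so the "inner product" u.u equals the
# number of 1-entries of u modulo 2: any non-zero vector with an even number
# of ones is self-orthogonal.

def dot_mod2(u, v):
    """Dot product of two vectors over Z_2."""
    return sum(x * y for x, y in zip(u, v)) % 2

u = [1, 1, 0, 0]              # two 1-entries: self-orthogonal
assert dot_mod2(u, u) == 0    # u.u = 0 in Z_2
assert any(u)                 # yet u is non-zero

# Consequently U and its "orthogonal complement" can intersect non-trivially:
# with U spanned by u itself, u is orthogonal to every element of U, so u is
# a non-zero element of the intersection.
assert all(dot_mod2(u, [c * x % 2 for x in u]) == 0 for c in range(2))
```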

So we see that while much of the theory of vector spaces and matrices carries over without change from familiar ground fields, like the real numbers, to arbitrary ground fields F, not everything does. In particular, the usual decomposition of a vector space into a subspace and its orthogonal complement breaks down, as does any other procedure that relies on properties specific to “inner product spaces.”

For the following three exercises, as above, A is an arbitrary m × n matrix over a field F, and MA = B, where M is an invertible m × m matrix, and B is in reduced row echelon form.

Exercise 15.9. Show that the column null space of A is the same as the

column null space of B.

Exercise 15.10. Show how to compute a basis for the column null space of A using O(r(n − r)) operations in F, given A and B.

Exercise 15.11. Show that the matrix B is uniquely determined by A; more precisely, show that if M′A = B′, where M′ is an invertible m × m matrix, and B′ is in reduced row echelon form, then B′ = B.

In the following two exercises, the theory of determinants could be used; however, they can all be solved directly, without too much difficulty, using just the ideas developed so far in the text.

Exercise 15.12. Let p be a prime. A matrix A ∈ Z^{m×m} is called invertible modulo p if and only if there exists a matrix X ∈ Z^{m×m} such that AX ≡ XA ≡ I (mod p), where I is the m × m integer identity matrix. Here, two matrices are considered congruent with respect to a given modulus if and only if their corresponding entries are congruent. Show that A is invertible modulo p if and only if
• A is invertible over Q, and
• the entries of A⁻¹ lie in Q(p) (see Example 9.23).

Exercise 15.13. You are given a matrix A ∈ Z^{m×m} and a prime p such that A is invertible modulo p. Suppose that you are also given w ∈ Z^{1×m}.
(a) Show how to efficiently compute a vector v ∈ Z^{1×m} such that vA ≡ w (mod p), and show that v is uniquely determined modulo p.
(b) Given a vector v as in part (a), along with an integer e ≥ 1, show how to efficiently compute v′ ∈ Z^{1×m} such that v′A ≡ w (mod p^e), and