Canonical Forms of Matrices

This section discusses the reduction of square matrices into matrices that have simpler

forms, such as diagonal, bidiagonal, or triangular. Reduction means a transformation that

preserves the eigenvalues of a matrix.

Definition. Two matrices A and B are said to be similar if there is a nonsingular matrix X such that

A = X B X^{-1}.

The mapping B → A is called a similarity transformation.

It is clear that similarity is an equivalence relation. Similarity transformations preserve the eigenvalues of matrices. An eigenvector u of B is transformed into the eigenvector v = X u of A. In effect, a similarity transformation amounts to representing the matrix B in a different basis.
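As a quick numerical illustration, the following NumPy sketch (the matrices B and X below are arbitrary illustrative choices, not taken from the text) checks that A = X B X^{-1} has the same spectrum as B and that an eigenvector u of B is mapped to the eigenvector X u of A:

```python
import numpy as np

# An arbitrary upper triangular matrix B (eigenvalues 3, 2, 1 on the
# diagonal) and an arbitrary nonsingular matrix X.
B = np.array([[3.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])
X = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Similarity transformation: A = X B X^{-1}.
A = X @ B @ np.linalg.inv(X)

# The spectra agree, up to ordering and roundoff.
assert np.allclose(np.sort(np.linalg.eigvals(A)),
                   np.sort(np.linalg.eigvals(B)))

# An eigenvector u of B is mapped to the eigenvector v = X u of A.
lam, U = np.linalg.eig(B)
u = U[:, 0]
v = X @ u
assert np.allclose(A @ v, lam[0] * v)
```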

We now introduce some terminology.

An eigenvalue λ of A has algebraic multiplicity μ if it is a root of multiplicity μ of the characteristic polynomial.

If an eigenvalue is of algebraic multiplicity one, it is said to be simple. A nonsimple

eigenvalue is multiple.

The geometric multiplicity γ of an eigenvalue λ of A is the maximum number of independent eigenvectors associated with it. In other words, the geometric multiplicity γ is the dimension of the eigenspace Null(A - λI).

A matrix is derogatory if the geometric multiplicity of at least one of its eigenvalues

is larger than one.


An eigenvalue is semisimple if its algebraic multiplicity is equal to its geometric

multiplicity. An eigenvalue that is not semisimple is called defective.
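To make the distinction concrete, here is a small NumPy sketch (the matrix is an illustrative choice, not from the text): the matrix below has the single eigenvalue 2 as a double root of its characteristic polynomial, so its algebraic multiplicity is 2, yet the eigenspace Null(A - 2I) is one-dimensional, so the eigenvalue is defective.

```python
import numpy as np

# A 2x2 Jordan block: characteristic polynomial (t - 2)^2,
# so the eigenvalue 2 has algebraic multiplicity 2.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

lam = 2.0
M = A - lam * np.eye(2)

# Geometric multiplicity = dim Null(A - lam*I) = n - rank(A - lam*I).
geometric_mult = 2 - np.linalg.matrix_rank(M)
print(geometric_mult)  # 1: smaller than the algebraic multiplicity 2,
                       # so the eigenvalue is defective (not semisimple).
```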

Often, λ_1, λ_2, ..., λ_p (p ≤ n) are used to denote the distinct eigenvalues of A. It is easy to show that the characteristic polynomials of two similar matrices are identical; see Exercise 9. Therefore, the eigenvalues of two similar matrices are equal and so are their algebraic multiplicities. Moreover, if u is an eigenvector of B, then X u is an eigenvector of A and, conversely, if v is an eigenvector of A, then X^{-1} v is an eigenvector of B. As a result, the number of independent eigenvectors associated with a given eigenvalue is the same for two similar matrices, i.e., their geometric multiplicity is also the same.
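This invariance can be checked numerically. In the sketch below (an illustrative example, not from the text), B contains a Jordan block for the eigenvalue 2, and X is a hand-picked nonsingular matrix whose exact inverse is written out so that A = X B X^{-1} is computed without roundoff; the geometric multiplicity of the repeated eigenvalue is 1 for both matrices.

```python
import numpy as np

# B has eigenvalue 2 with geometric multiplicity 1 (a Jordan block),
# plus a simple eigenvalue 5.
B = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

# A unit upper triangular X and its exact inverse (all small integers,
# so the products below are exact in floating point).
X = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
Xinv = np.array([[1.0, -1.0, 1.0],
                 [0.0, 1.0, -1.0],
                 [0.0, 0.0, 1.0]])
A = X @ B @ Xinv

def geometric_mult(M, lam, n=3):
    # dim Null(M - lam*I) = n - rank(M - lam*I)
    return n - np.linalg.matrix_rank(M - lam * np.eye(n))

# The similar matrices A and B agree on the geometric multiplicity.
assert geometric_mult(B, 2.0) == 1
assert geometric_mult(A, 2.0) == 1
```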

Reduction to the Diagonal Form

The simplest form in which a matrix can be reduced is undoubtedly the diagonal form.

Unfortunately, this reduction is not always possible. A matrix that can be reduced to the

diagonal form is called diagonalizable. The following theorem characterizes such matrices.

Theorem. A matrix A of dimension n is diagonalizable if and only if it has n linearly independent eigenvectors.

Proof. A matrix A is diagonalizable if and only if there exist a nonsingular matrix X and a diagonal matrix D such that A = X D X^{-1}, or equivalently A X = X D, where D is a diagonal matrix. This is equivalent to saying that n linearly independent vectors exist, the n column-vectors of X, such that A x_i = d_i x_i. Each of these column-vectors is an eigenvector of A.
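The factorization in the proof can be reproduced numerically. The following sketch (assuming NumPy, with an arbitrary diagonalizable matrix chosen for illustration) builds X from the eigenvectors returned by numpy.linalg.eig and checks both A X = X D and A = X D X^{-1}:

```python
import numpy as np

# An arbitrary symmetric matrix with three distinct eigenvalues,
# hence diagonalizable.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eig returns the eigenvalues d_i and a matrix X whose columns are
# the corresponding eigenvectors: A x_i = d_i x_i.
d, X = np.linalg.eig(A)
D = np.diag(d)

# Column by column, A X = X D; equivalently A = X D X^{-1}.
assert np.allclose(A @ X, X @ D)
assert np.allclose(A, X @ D @ np.linalg.inv(X))
```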

A matrix that is diagonalizable has only semisimple eigenvalues. Conversely, if all the eigenvalues of a matrix A are semisimple, then A has n eigenvectors. It can be easily

shown that these eigenvectors are linearly independent; see Exercise 2. As a result, we

have the following proposition.

Proposition. A matrix is diagonalizable if and only if all its eigenvalues are semisimple.

Since every simple eigenvalue is semisimple, an immediate corollary of the above result is: when A has n distinct eigenvalues, then it is diagonalizable.
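Both sides of this picture can be observed numerically. In the sketch below (illustrative matrices, not from the text), a matrix with distinct eigenvalues yields a nonsingular eigenvector matrix, while a defective matrix with a repeated eigenvalue yields a numerically singular one, since its two computed eigenvectors are nearly parallel:

```python
import numpy as np

# Three distinct eigenvalues (1, 2, 3 on the diagonal): the computed
# eigenvector matrix has full rank, confirming diagonalizability.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])
_, X = np.linalg.eig(A)
assert np.linalg.matrix_rank(X) == 3

# A defective matrix: the eigenvalue 2 is repeated but admits only one
# independent eigenvector, so the computed eigenvector matrix is
# numerically singular.
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])
_, XJ = np.linalg.eig(J)
assert abs(np.linalg.det(XJ)) < 1e-6
```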

The Jordan Canonical Form

From the theoretical viewpoint, one of the most important canonical forms of matrices is

the well known Jordan form. A full development of the steps leading to the Jordan form

is beyond the scope of this book. Only the main theorem is stated. Details, including the