towards αj. In this case 2 is the maximum integer k such that αi + kαj is a root. In all other cases, this maximum integer k is one if the nodes are connected (and zero if they are not).

3.6 Low dimensional coincidences.

We have already seen that o(4) ∼ sl(2) ⊕ sl(2). We also have

o(6) ∼ sl(4).


[Diagram: chains of nodes for type A; type B with a double edge, for ℓ ≥ 3; type C with a double edge, for ℓ ≥ 2; type D ending in a fork, for ℓ ≥ 4.]

Figure 3.1: Dynkin diagrams of the classical simple algebras.

Both algebras are fifteen dimensional and both are simple. So to realize this isomorphism we need only find an orthogonal representation of sl(4) on a six dimensional space. If we let V = C⁴ with the standard representation of sl(4), we get a representation of sl(4) on Λ²(V) which is six dimensional. So we must describe a non-degenerate bilinear form on Λ²V which is invariant under the action of sl(4). We have a map, wedge product, of

Λ²V ⊗ Λ²V → Λ⁴V.

Furthermore this map is symmetric, and invariant under the action of gl(4). However sl(4) preserves a basis (a non-zero element) of Λ⁴V and so we may identify Λ⁴V with C. It is easy to check that the bilinear form so obtained is non-degenerate.
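This non-degeneracy can be confirmed numerically. The sketch below (an illustration, not part of the text; it uses numpy, a hand-rolled permutation sign, and the lexicographic basis e_i ∧ e_j of Λ²C⁴) computes the Gram matrix of the wedge-product form, reading B(e_i∧e_j, e_k∧e_l) off as the coefficient of e_0∧e_1∧e_2∧e_3:

```python
import numpy as np
from itertools import combinations

def perm_sign(p):
    """Sign of a tuple of indices: 0 if an index repeats, else the parity."""
    if len(set(p)) != len(p):
        return 0
    sign = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

# Basis of Lambda^2(C^4): e_i ^ e_j with 0 <= i < j <= 3 (six elements).
basis = list(combinations(range(4), 2))

# B(e_i^e_j, e_k^e_l) = coefficient of e_0^e_1^e_2^e_3 in the wedge product,
# i.e. the sign of (i, j, k, l) as a permutation of (0, 1, 2, 3).
B = np.array([[perm_sign(p + q) for q in basis] for p in basis])

print(B)                              # symmetric, a single +/-1 in each row
print(int(round(np.linalg.det(B))))   # nonzero, so the form is non-degenerate
```

Each basis vector pairs non-trivially with exactly one other (its complementary pair of indices), which is why the Gram matrix is a signed permutation matrix and visibly invertible.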

We also have the identification

sp(4) ∼ o(5)

both algebras being ten dimensional. To see this let V = C⁴ with an antisymmetric form ω preserved by Sp(4). Then ω ⊗ ω induces a symmetric bilinear form on V ⊗ V as we have seen. Sitting inside V ⊗ V as an invariant subspace is Λ²V, which is six dimensional. But Λ²V is not irreducible as a representation of sp(4). Indeed, ω ∈ Λ²V∗ is invariant, and hence its kernel is a five dimensional subspace of Λ²V which is invariant under sp(4). We thus get a non-zero homomorphism sp(4) → o(5) which must be an isomorphism since sp(4) is simple.
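Both dimension counts are easy to verify by machine. In the sketch below (illustrative, not from the text) each algebra is realized as the kernel of the linear map X ↦ XᵀG + GX on gl(d), with G the standard antisymmetric form on C⁴ for sp(4) and the identity form on C⁵ for o(5); the kernel dimension is read off from a matrix rank:

```python
import numpy as np

J = np.zeros((4, 4))          # standard antisymmetric form on C^4
J[0, 1] = J[2, 3] = 1
J[1, 0] = J[3, 2] = -1

def algebra_dim(G, size):
    """Dimension of {X in gl(size) : X^T G + G X = 0}."""
    columns = []
    for k in range(size * size):
        X = np.zeros((size, size))
        X[divmod(k, size)] = 1                      # matrix unit E_ij
        columns.append((X.T @ G + G @ X).ravel())   # image of E_ij
    A = np.array(columns).T
    return size * size - np.linalg.matrix_rank(A)   # rank-nullity

print(algebra_dim(J, 4))           # dim sp(4) = 10
print(algebra_dim(np.eye(5), 5))   # dim o(5)  = 10
```

The counts agree with the general formulas dim sp(2n) = n(2n + 1) and dim o(m) = m(m − 1)/2.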

58 CHAPTER 3. THE CLASSICAL SIMPLE ALGEBRAS.

These coincidences can be seen in the diagrams. If we were to allow ℓ = 2 in the diagram for B it would be indistinguishable from C₂. If we were to allow ℓ = 3 in the diagram for D it would be indistinguishable from A₃.

3.7 Extended diagrams.

It follows from Jacobi's identity that in the decomposition (3.10), we have

[gα, gβ] ⊂ gα+β (3.15)

with the understanding that the right hand side is zero if α + β is not a root.
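Relation (3.15) can be seen concretely in sl(n + 1), where the root space for Li − Lj is spanned by the matrix unit Eij. The following sketch (illustrative, using numpy) checks two cases in sl(3): a pair of roots whose sum is a root, and a pair whose sum is not:

```python
import numpy as np

def E(i, j, n=3):
    """Matrix unit E_ij in gl(n) (1-indexed), spanning g_{L_i - L_j}."""
    M = np.zeros((n, n))
    M[i - 1, j - 1] = 1
    return M

def bracket(X, Y):
    return X @ Y - Y @ X

# (L1 - L2) + (L2 - L3) = L1 - L3 is a root: the bracket lands in g_{L1-L3}.
print(np.array_equal(bracket(E(1, 2), E(2, 3)), E(1, 3)))   # True

# (L1 - L2) + (L1 - L3) is not a root: the bracket vanishes.
print(np.allclose(bracket(E(1, 2), E(1, 3)), 0))            # True
```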

In each of the cases examined above, every positive root is a linear combination of the simple roots with non-negative integer coefficients. Since the algebra is finite dimensional, there must be a maximal positive root β in the sense that β + αi is not a root for any simple root αi. For example, in the case of An = sl(n + 1), the root β := L1 − Ln+1 is maximal. The corresponding gβ consists of all (n + 1) × (n + 1) matrices with zeros everywhere except in the upper right hand corner. We can also consider the minimal root which is the negative of the maximal root, so

α0 := −β = Ln+1 − L1

in the case of An. Continuing to study this case, let

h0 := hn+1 − h1.

Then we have

αi(hi) = 2, i = 0, . . . , n

and

α0(h1) = α0(hn) = −1, α0(hi) = 0 for i ≠ 0, 1, n.

This means that if we write out the (n + 1) × (n + 1) matrix whose entries are αi(hj), i, j = 0, . . . , n we obtain a matrix of the form

2I − M

where Mij = 1 if and only if j = i ± 1, with the understanding that n + 1 = 0 and −1 = n, i.e. we do the subscript arithmetic mod n + 1. In other words, M is the adjacency matrix of the cyclic graph with n + 1 vertices labeled 0, . . . , n.

we have

h0 + h1 + · · · + hn = 0.

If we apply αi to this equation for i = 0, . . . , n we obtain

(2I − M)1 = 0,

where 1 is the column vector all of whose entries are 1. We can write this equation as

M1 = 2 · 1.


In other words, 1 is an eigenvector of M with eigenvalue 2.
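This eigenvalue equation is easy to check directly. The sketch below (illustrative; the choice n = 4 is arbitrary) builds the adjacency matrix M of the cyclic graph on n + 1 vertices and verifies M1 = 2 · 1:

```python
import numpy as np

n = 4                      # work in A_4 = sl(5)
N = n + 1                  # vertices 0, ..., n of the cyclic graph

# Adjacency matrix of the cycle: M[i, j] = 1 iff j = i +/- 1 mod N.
M = np.zeros((N, N), dtype=int)
for i in range(N):
    M[i, (i + 1) % N] = 1
    M[i, (i - 1) % N] = 1

one = np.ones(N, dtype=int)
print(M @ one)                           # [2 2 2 2 2]
print(np.array_equal(M @ one, 2 * one))  # True: eigenvector with eigenvalue 2
```

Each vertex of a cycle has exactly two neighbors, so every row of M sums to 2, which is all the eigenvalue equation says.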

In the chapters that follow we shall see that any finite dimensional simple Lie algebra has roots, simple roots, maximal roots etc. giving rise to a matrix M with integer entries which is irreducible (in the sense of non-negative matrices; definition later on) and which has an eigenvector with positive (integer) entries with eigenvalue 2. This will allow us to classify the simple (finite dimensional) Lie algebras.


Chapter 4

Engel-Lie-Cartan-Weyl

We return to the general theory of Lie algebras. Many of the results in this chapter are valid over arbitrary fields; indeed, if we use the axioms to define a Lie algebra over a ring many of the results are valid in this generality. But some of the results depend heavily on the ring being an algebraically closed field of characteristic zero. As a compromise, throughout this chapter we deal with fields, and will assume that all vector spaces and all Lie algebras which appear are finite dimensional. We will indicate the necessary additional assumptions on the ground field as they occur. The treatment here follows Serre pretty closely.

4.1 Engel's theorem

Define a Lie algebra g to be nilpotent if:

∃n | [x1, [x2, [· · · [xn, xn+1] · · · ]]] = 0 ∀ x1, . . . , xn+1 ∈ g.

Example: n+ := n+ (gl(d)) := all strictly upper triangular matrices. Notice

that the product of any d + 1 such matrices is zero.
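A quick numerical check (a sketch, not part of the text): each factor in such a product pushes the nonzero entries at least one diagonal further above the main diagonal, so the product of d strictly upper triangular d × d matrices, and a fortiori of d + 1, vanishes.

```python
import numpy as np

d = 4
rng = np.random.default_rng(0)

def strict_upper(d):
    """A random element of n+ : a strictly upper triangular d x d matrix."""
    return np.triu(rng.standard_normal((d, d)), k=1)

# Multiply d random elements of n+ together; the result is the zero matrix.
P = np.eye(d)
for _ in range(d):
    P = P @ strict_upper(d)
print(np.allclose(P, 0))   # True
```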

The claim is that all nilpotent Lie algebras are essentially like n+ .

We can reformulate the definition of nilpotent as saying that the product of any n operators ad xi vanishes. One version of Engel's theorem is

Theorem 3 g is nilpotent if and only if ad x is a nilpotent operator for each

x ∈ g.

This follows (taking V = g and the adjoint representation) from