numbers, experience shows that the classical algorithms are definitely not the best: Karatsuba's multiplication algorithm, and related algorithms for division, start to perform significantly better than the classical algorithms on inputs of a thousand bits or so (the exact crossover depends on myriad implementation details). The even "faster" algorithms discussed above are typically not interesting unless the numbers involved are truly huge, of bit length around 10^5 to 10^6. Thus, the reader should bear in mind that for serious computations involving very large numbers, the faster algorithms are very important, even though this text does not discuss them at great length.

For a good survey of asymptotically fast algorithms for integer arithmetic,

see Chapter 9 of Crandall and Pomerance [30], as well as Chapter 4 of Knuth

[54].

4

Euclid's algorithm

In this chapter, we discuss Euclid's algorithm for computing greatest common divisors. It turns out that Euclid's algorithm has a number of very nice properties, and has applications far beyond that purpose.

4.1 The basic Euclidean algorithm

We consider the following problem: given two non-negative integers a and

b, compute their greatest common divisor, gcd(a, b). We can do this using

the well-known Euclidean algorithm, also called Euclid's algorithm.

The basic idea of Euclid's algorithm is the following. Without loss of generality, we may assume that a ≥ b ≥ 0. If b = 0, then there is nothing to do, since in this case, gcd(a, 0) = a. Otherwise, if b > 0, we can compute the integer quotient q := ⌊a/b⌋ and remainder r := a mod b, where 0 ≤ r < b.

From the equation

a = bq + r,

it is easy to see that if an integer d divides both b and r, then it also divides

a; likewise, if an integer d divides a and b, then it also divides r. From

this observation, it follows that gcd(a, b) = gcd(b, r), and so by performing

a division, we reduce the problem of computing gcd(a, b) to the “smaller”

problem of computing gcd(b, r).
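The reduction gcd(a, b) = gcd(b, r) can be applied repeatedly until the second argument reaches zero. As a minimal sketch, this is a direct recursive rendering in Python (the function name euclid_gcd is our own choice):

```python
def euclid_gcd(a, b):
    # Assumes a >= b >= 0.
    if b == 0:
        return a            # gcd(a, 0) = a
    # Otherwise reduce to the "smaller" problem gcd(b, a mod b).
    return euclid_gcd(b, a % b)

print(euclid_gcd(100, 35))  # 5
```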

The following theorem develops this idea further:

Theorem 4.1. Let a, b be integers, with a ≥ b ≥ 0. Using the division with remainder property, define the integers r_0, r_1, ..., r_{ℓ+1}, and q_1, ..., q_ℓ, where ℓ ≥ 0, as follows:

    a = r_0,
    b = r_1,
    r_0 = r_1 q_1 + r_2            (0 < r_2 < r_1),
        ⋮
    r_{i−1} = r_i q_i + r_{i+1}    (0 < r_{i+1} < r_i),
        ⋮
    r_{ℓ−2} = r_{ℓ−1} q_{ℓ−1} + r_ℓ    (0 < r_ℓ < r_{ℓ−1}),
    r_{ℓ−1} = r_ℓ q_ℓ              (r_{ℓ+1} = 0).

Note that by definition, ℓ = 0 if b = 0, and ℓ > 0 otherwise.

Then we have r_ℓ = gcd(a, b). Moreover, if b > 0, then ℓ ≤ log b / log φ + 1, where φ := (1 + √5)/2 ≈ 1.62.

Proof. For the first statement, one sees that for i = 1, ..., ℓ, we have r_{i−1} = r_i q_i + r_{i+1}, from which it follows that the common divisors of r_{i−1} and r_i are the same as the common divisors of r_i and r_{i+1}, and hence gcd(r_{i−1}, r_i) = gcd(r_i, r_{i+1}). From this, it follows that

    gcd(a, b) = gcd(r_0, r_1) = ··· = gcd(r_ℓ, r_{ℓ+1}) = gcd(r_ℓ, 0) = r_ℓ.

To prove the second statement, assume that b > 0, and hence ℓ > 0. If ℓ = 1, the statement is obviously true, so assume ℓ > 1. We claim that for i = 0, ..., ℓ − 1, we have r_{ℓ−i} ≥ φ^i. The statement will then follow by setting i = ℓ − 1 and taking logarithms.

We now prove the above claim. For i = 0 and i = 1, we have

    r_ℓ ≥ 1 = φ^0 and r_{ℓ−1} ≥ r_ℓ + 1 ≥ 2 ≥ φ^1.

For i = 2, ..., ℓ − 1, using induction and applying the fact that φ^2 = φ + 1, we have

    r_{ℓ−i} ≥ r_{ℓ−(i−1)} + r_{ℓ−(i−2)} ≥ φ^{i−1} + φ^{i−2} = φ^{i−2}(1 + φ) = φ^i,

which proves the claim. □
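The bound ℓ ≤ log b / log φ + 1 is easy to check empirically by counting division steps. A small Python sketch (function and variable names are our own):

```python
import math

def division_steps(a, b):
    # Count the number of divisions l performed by Euclid's
    # algorithm on input a >= b >= 0.
    steps = 0
    while b > 0:
        a, b = b, a % b
        steps += 1
    return steps

phi = (1 + math.sqrt(5)) / 2
# Verify the bound of Theorem 4.1 for all 1 <= b <= a < 200.
for a in range(1, 200):
    for b in range(1, a + 1):
        assert division_steps(a, b) <= math.log(b) / math.log(phi) + 1
```

The pairs that come closest to the bound are consecutive Fibonacci numbers, for which every quotient q_i equals 1.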

Example 4.1. Suppose a = 100 and b = 35. Then the numbers appearing in Theorem 4.1 are easily computed as follows:

    i      0    1    2    3    4
    r_i    100  35   30   5    0
    q_i         2    1    6

So we have gcd(a, b) = r_3 = 5. □
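The remainder and quotient sequences of this example can be reproduced with a few lines of Python (a sketch; the names are our own):

```python
def remainder_sequence(a, b):
    # Return the lists [r_0, r_1, ..., r_{l+1}] and [q_1, ..., q_l]
    # defined in Theorem 4.1, assuming a >= b >= 0.
    rs, qs = [a, b], []
    while rs[-1] != 0:
        q, r = divmod(rs[-2], rs[-1])
        qs.append(q)
        rs.append(r)
    return rs, qs

rs, qs = remainder_sequence(100, 35)
print(rs)  # [100, 35, 30, 5, 0]
print(qs)  # [2, 1, 6]
```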

We can easily turn the scheme described in Theorem 4.1 into a simple

algorithm, taking as input integers a, b, such that a ≥ b ≥ 0, and producing

as output d = gcd(a, b):

    r ← a, r′ ← b
    while r′ ≠ 0 do
        r′′ ← r mod r′
        (r, r′) ← (r′, r′′)
    d ← r
    output d
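The pseudocode above translates almost line for line into Python; a minimal sketch (the name gcd and the variable r2, standing in for r′, are our own):

```python
def gcd(a, b):
    # Iterative Euclidean algorithm; assumes a >= b >= 0.
    r, r2 = a, b
    while r2 != 0:
        # One division step: (r, r') <- (r', r mod r')
        r, r2 = r2, r % r2
    return r

print(gcd(100, 35))  # 5
```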

We now consider the running time of Euclid's algorithm. Naively, one could estimate this as follows. Suppose a and b are k-bit numbers. The algorithm performs O(k) divisions on numbers with at most k bits. As each such division takes time O(k^2), this leads to a bound on the running time of O(k^3). However, as the following theorem shows, this cubic running time bound is well off the mark.

Theorem 4.2. Euclid's algorithm runs in time O(len(a) len(b)).

Proof. We may assume that b > 0. The running time is O(τ), where τ := Σ_{i=1}^{ℓ} len(r_i) len(q_i). Since r_i ≤ b for i = 1, ..., ℓ, we have

    τ ≤ len(b) Σ_{i=1}^{ℓ} len(q_i) ≤ len(b) Σ_{i=1}^{ℓ} (log_2 q_i + 1) = len(b)(ℓ + log_2 Π_{i=1}^{ℓ} q_i).

Note that

    a = r_0 ≥ r_1 q_1 ≥ r_2 q_2 q_1 ≥ ··· ≥ r_ℓ q_ℓ ··· q_1 ≥ q_ℓ ··· q_1.

We also have ℓ ≤ log b / log φ + 1. Combining this with the above, we have

    τ ≤ len(b)(log b / log φ + 1 + log_2 a) = O(len(a) len(b)),

which proves the theorem. □

Exercise 4.1. This exercise looks at an alternative algorithm for computing gcd(a, b), called the binary gcd algorithm. This algorithm avoids complex operations, such as division and multiplication; instead, it relies only on division and multiplication by powers of 2, which, assuming a binary representation of integers (as we are), can be very efficiently implemented using "right shift" and "left shift" operations. The algorithm takes positive integers a and b as input, and runs as follows:


    r ← a, r′ ← b, e ← 0
    while 2 | r and 2 | r′ do r ← r/2, r′ ← r′/2, e ← e + 1
    repeat
        while 2 | r do r ← r/2
        while 2 | r′ do r′ ← r′/2
        if r′ < r then (r, r′) ← (r′, r)
        r′ ← r′ − r
    until r′ = 0