             (carry, r_{i+j}) ← divmod(tmp, B)
9.       r_{i+ℓ} ← r_{i+ℓ} + carry
10.      while r_{i+ℓ} < 0 do
11.          carry ← 0
12.          for j ← 0 to ℓ − 1 do
13.              tmp ← r_{i+j} + b_j + carry
14.              (carry, r_{i+j}) ← divmod(tmp, B)
15.          r_{i+ℓ} ← r_{i+ℓ} + carry
16.          qi ← qi − 1
17.  output the quotient q = (q_{k−ℓ} · · · q_0)_B
18.      and the remainder r = (r_{ℓ−1} · · · r_0)_B

Fig. 3.1. Division with Remainder Algorithm

Finally, consider the general case, where b may not be normalized. We multiply both a and b by an appropriate value 2^{w'}, with 0 ≤ w' < w, obtaining a' := a·2^{w'} and b' := b·2^{w'}, where b' is normalized; alternatively, we can use a more efficient, special-purpose "left shift" algorithm to achieve the same effect. We then compute q and r' such that a' = b'·q + r', using the above division algorithm for the normalized case. Observe that q = ⌊a'/b'⌋ = ⌊a/b⌋, and r' = r·2^{w'}, where r = a mod b. To recover r, we simply divide r' by 2^{w'}, which we can do either using the above "single precision" division algorithm, or by using a special-purpose "right shift" algorithm. All of this normalizing and denormalizing takes time O(k + ℓ). Thus, the total running time for division with remainder is still O(ℓ · (k − ℓ + 1)).

Exercise 3.14. Work out the details of algorithms for arithmetic on signed

integers, using the above algorithms for unsigned integers as subroutines.

3.3 Basic integer arithmetic 45

You should give algorithms for addition, subtraction, multiplication, and division with remainder of arbitrary signed integers (for division with remainder, your algorithm should compute ⌊a/b⌋ and a mod b). Make sure

your algorithm correctly computes the sign bit of the result, and also strips

leading zero digits from the result.

Exercise 3.15. Work out the details of an algorithm that compares two

signed integers a and b, determining which of a < b, a = b, or a > b holds.

Exercise 3.16. Suppose that we run the division with remainder algorithm in Fig. 3.1 for ℓ > 1 without normalizing b, but instead, we compute the value qi in line 4 as follows:

    qi ← ⌊(r_{i+ℓ} B^2 + r_{i+ℓ−1} B + r_{i+ℓ−2}) / (b_{ℓ−1} B + b_{ℓ−2})⌋.

Show that qi is either equal to the correct quotient digit, or the correct quotient digit plus 1. Note that a limitation of this approach is that the numbers involved in the computation are larger than B^2.

Exercise 3.17. Work out the details for an algorithm that shifts a given

unsigned integer a to the left by a specified number of bits s (i.e., computes b := a · 2^s). The running time of your algorithm should be linear in the

number of digits of the output.

Exercise 3.18. Work out the details for an algorithm that shifts a given

unsigned integer a to the right by a specified number of bits s (i.e., computes b := ⌊a/2^s⌋). The running time of your algorithm should be linear in the number of digits of the output. Now modify your algorithm so that it correctly computes ⌊a/2^s⌋ for signed integers a.

Exercise 3.19. This exercise is for C/Java programmers. Evaluate the C/Java expressions

(-17) % 4; (-17) & 3;

and compare these values with (−17) mod 4. Also evaluate the C/Java expressions

(-17) / 4; (-17) >> 2;

and compare with ⌊−17/4⌋. Explain your findings.

Exercise 3.20. This exercise is also for C/Java programmers. Suppose that values of type int are stored using a 32-bit 2's complement representation, and that all basic arithmetic operations are computed correctly modulo 2^32, even if an "overflow" happens to occur.

46 Computing with large integers

Also assume that double precision floating point has 53 bits of precision, and that all basic arithmetic operations give a result with a relative error of at most 2^{−53}. Also assume that conversion from type int to double is exact, and that conversion from double to int truncates the fractional part. Now, suppose we are given int variables a, b, and n, such that 1 < n < 2^30, 0 ≤ a < n, and 0 ≤ b < n.

Show that after the following code sequence is executed, the value of r is

equal to (a · b) mod n:

    int q;
    q = (int) ((((double) a) * ((double) b)) / ((double) n));
    r = a*b - q*n;
    if (r >= n)
        r = r - n;
    else if (r < 0)
        r = r + n;

3.3.5 Summary

We now summarize the results of this section. For an integer a, we define len(a) to be the number of bits in the binary representation of |a|; more precisely,

    len(a) := ⌊log2 |a|⌋ + 1   if a ≠ 0,
              1                if a = 0.

Notice that for a > 0, if ℓ := len(a), then we have log2 a < ℓ ≤ log2 a + 1, or equivalently, 2^{ℓ−1} ≤ a < 2^ℓ.

Assuming that arbitrarily large integers are represented as described at

the beginning of this section, with a sign bit and a vector of base-B digits,

where B is a constant power of 2, we may state the following theorem.

Theorem 3.3. Let a and b be arbitrary integers.

(i) We can compute a ± b in time O(len(a) + len(b)).

(ii) We can compute a · b in time O(len(a) len(b)).

(iii) If b ≠ 0, we can compute the quotient q := ⌊a/b⌋ and the remainder r := a mod b in time O(len(b) len(q)).

Note the bound O(len(b) len(q)) in part (iii) of this theorem, which may be

significantly less than the bound O(len(a) len(b)). A good way to remember

this bound is as follows: the time to compute the quotient and remainder is

roughly the same as the time to compute the product bq appearing in the

equality a = bq + r.

This theorem does not explicitly refer to the base B in the underlying implementation. The choice of B affects the values of the implied big-O constants; while in theory, this is of no significance, it does have a significant impact in practice.

From now on, we shall (for the most part) not worry about the implementation details of long-integer arithmetic, and will just refer directly to this theorem. However, we will occasionally exploit some trivial aspects of our data structure for representing large integers. For example, it is clear that in constant time, we can determine the sign of a given integer a, the bit length of a, and any particular bit of the binary representation of a; moreover, as discussed in Exercises 3.17 and 3.18, multiplications and divisions by powers of 2 can be computed in linear time via "left shifts" and "right shifts." It is also clear that we can convert between the base-2 representation of a given integer and our implementation's internal representation in linear time (other conversions may take longer; see Exercise 3.25).

A note on notation: "len" and "log." In expressing the running times of algorithms, we generally prefer to write, for example, O(len(a) len(b)), rather than O((log a)(log b)). There are two reasons for this. The first is esthetic: the function "len" stresses the fact that running times should be expressed in terms of the bit length of the inputs. The second is technical: big-O estimates involving expressions containing several independent parameters, like O(len(a) len(b)), should be valid for all possible values of the parameters, since the notion of "sufficiently large" does not make sense in this setting; because of this, it is very inconvenient to have functions, like log, that vanish or are undefined on some inputs.