output x

To analyze this procedure, suppose that α = γ^x with 0 ≤ x < q. Now, x can be written in a unique way as x = vm + u, where u and v are integers with 0 ≤ u < m and 0 ≤ v ≤ q/m. In the jth loop iteration, for j = 0, 1, . . . , we have

    β = αγ^{−mj} = γ^{(v−j)m+u}.

So we will detect i ≠ −1 precisely when j = v, in which case i = u. Thus, the output will be correct, and the total running time of the algorithm (for both the “baby steps” and “giant steps” parts) is easily seen to be O(q^{1/2} len(p)^2).

While this algorithm is much faster than brute-force search, it has the drawback that it requires a table of Θ(q^{1/2}) elements of Z_p^*. Of course, there is a “time/space trade-off” here: by choosing m smaller, we get a table of size O(m), but the running time will be proportional to O(q/m). In §11.2.5 below, we discuss an algorithm that runs (at least heuristically) in time O(q^{1/2} len(q) len(p)^2), but which requires space for only a constant number of elements of Z_p^*.
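To make the procedure and the trade-off concrete, here is one way the baby step/giant step method might be sketched in Python (our own illustration, not from the text; the function name and the use of a dictionary for the lookup table are our choices, and m defaults to roughly q^{1/2}):

```python
import math

def baby_step_giant_step(p, q, gamma, alpha, m=None):
    # Find x with gamma^x = alpha (mod p), where gamma has order q in Z_p^*.
    # Choosing m ~ q^(1/2) balances the O(m) table against O(q/m) giant steps;
    # a smaller m trades time for space, as discussed in the text.
    if m is None:
        m = math.isqrt(q - 1) + 1              # m = ceil(q^(1/2))
    # Baby steps: table of gamma^u for 0 <= u < m.
    table = {}
    g = 1
    for u in range(m):
        table.setdefault(g, u)
        g = (g * gamma) % p
    # Giant steps: beta = alpha * gamma^(-mj) for j = 0, 1, ...
    gamma_inv_m = pow(gamma, q - m, p)         # gamma^(-m), since gamma^q = 1
    beta = alpha % p
    for v in range(q // m + 1):
        if beta in table:                      # beta = gamma^u, so x = vm + u
            return (v * m + table[beta]) % q
        beta = (beta * gamma_inv_m) % p
    return None                                # alpha not in the subgroup <gamma>
```

For example, with p = 23 and q = 11, the element γ = 2 generates the order-11 subgroup of Z_23^*, and baby_step_giant_step(23, 11, 2, 13) recovers 7, since 2^7 ≡ 13 (mod 23).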

11.2.3 Groups of order q^e

Suppose that γ ∈ Z_p^* generates a subgroup G of Z_p^* of order q^e, where q > 1 and e ≥ 1, and we are given p, q, e, γ, and α ∈ G, and wish to compute log_γ α.

There is a simple algorithm that allows one to reduce this problem to the problem of computing discrete logarithms in the subgroup of Z_p^* of order q.

It is perhaps easiest to describe the algorithm recursively. The base case is when e = 1, in which case, we use an algorithm for the subgroup of Z_p^* of order q. For this, we might employ the algorithm in §11.2.2, or if q is very small, the algorithm in §11.2.1.

Suppose now that e > 1. We choose an integer f with 0 < f < e. Different strategies for choosing f yield different algorithms; we discuss this below. Suppose α = γ^x, where 0 ≤ x < q^e. Then we can write x = q^f v + u, where


u and v are integers with 0 ≤ u < q^f and 0 ≤ v < q^{e−f}. Therefore,

    α^{q^{e−f}} = γ^{q^{e−f} u}.

Note that γ^{q^{e−f}} has multiplicative order q^f, and so if we recursively compute the discrete logarithm of α^{q^{e−f}} to the base γ^{q^{e−f}}, we obtain u.

Having obtained u, observe that

    α/γ^u = γ^{q^f v}.

Note also that γ^{q^f} has multiplicative order q^{e−f}, and so if we recursively compute the discrete logarithm of α/γ^u to the base γ^{q^f}, we obtain v, from which we then compute x = q^f v + u.

Let us put together the above ideas succinctly in a recursive procedure RDL(p, q, e, γ, α) that runs as follows:

    if e = 1 then
        return log_γ α    // base case: use a different algorithm
    else
        select f ∈ {1, . . . , e − 1}
        u ← RDL(p, q, f, γ^{q^{e−f}}, α^{q^{e−f}})    // 0 ≤ u < q^f
        v ← RDL(p, q, e − f, γ^{q^f}, α/γ^u)    // 0 ≤ v < q^{e−f}
        return q^f v + u
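A Python transcription of RDL might look as follows (a sketch, not from the text: the base case is handled by a brute-force search standing in for the algorithms of §11.2.1 or §11.2.2, and a balanced split f ≈ e/2 is hard-wired):

```python
def dlog_order_q(p, q, gamma, alpha):
    # Base case (e = 1): gamma has order q mod p. Brute-force search for
    # brevity; baby step/giant step (§11.2.2) would be used for larger q.
    beta = 1
    for i in range(q):
        if beta == alpha % p:
            return i
        beta = (beta * gamma) % p
    raise ValueError("alpha not in the subgroup generated by gamma")

def RDL(p, q, e, gamma, alpha):
    # gamma has order q^e in Z_p^*; returns log_gamma(alpha).
    if e == 1:
        return dlog_order_q(p, q, gamma, alpha)
    f = e // 2                                  # balanced split f ~ e/2
    qf, qef = q**f, q**(e - f)
    # u = log of alpha^(q^(e-f)) to the base gamma^(q^(e-f)); 0 <= u < q^f
    u = RDL(p, q, f, pow(gamma, qef, p), pow(alpha, qef, p))
    # alpha/gamma^u = gamma^(q^f v); here gamma^(-u) = gamma^(q^e - u),
    # since gamma has order q^e
    alpha2 = (alpha * pow(gamma, (q**e - u) % q**e, p)) % p
    # v = log of alpha/gamma^u to the base gamma^(q^f); 0 <= v < q^(e-f)
    v = RDL(p, q, e - f, pow(gamma, qf, p), alpha2)
    return qf * v + u
```

For instance, with p = 19, the element γ = 4 has order 9 = 3^2 in Z_19^*, and RDL(19, 3, 2, 4, 6) returns 7, since 4^7 ≡ 6 (mod 19).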

To analyze the running time of this recursive algorithm, note that the running time of the body of one recursive invocation (not counting the running time of the recursive calls it makes) is O(e len(q) len(p)^2). To calculate the

total running time, we have to sum up the running times of all the recursive

calls plus the running times of all the base cases.

Regardless of the strategy for choosing f, the total number of base case invocations is e. Note that all the base cases compute discrete logarithms to the base γ^{q^{e−1}}. Assuming we implement the base case using the baby step/giant step algorithm in §11.2.2, the total running time for all the base cases is therefore O(e q^{1/2} len(p)^2).

The total running time for the recursion (not including the base case

computations) depends on the strategy used to choose the split f .

• If we always choose f = 1 or f = e − 1, then the total running time for the recursion is O(e^2 len(q) len(p)^2). Note that if f = 1, then the

algorithm is essentially tail recursive, and so may be easily converted

to an iterative algorithm without the need for a stack.

• If we use a “balanced” divide-and-conquer strategy, choosing

f ≈ e/2, then the total running time of the recursion is


O(e len(e) len(q) len(p)^2). To see this, note that the depth of the “recursion tree” is O(len(e)), while the running time per level of the recursion tree is O(e len(q) len(p)^2).

Assuming we use the faster, balanced recursion strategy, the total running

time, including both the recursion and base cases, is:

    O((e q^{1/2} + e len(e) len(q)) · len(p)^2).

11.2.4 Discrete logarithms in Z_p^*

Suppose that we are given a prime p, along with the prime factorization

    p − 1 = ∏_{i=1}^{r} q_i^{e_i},

a generator γ for Z_p^*, and α ∈ Z_p^*. We wish to compute log_γ α.

Suppose that α = γ^x, where 0 ≤ x < p − 1. Then for i = 1, . . . , r, we have

    α^{(p−1)/q_i^{e_i}} = γ^{((p−1)/q_i^{e_i}) x}.

Note that γ^{(p−1)/q_i^{e_i}} has multiplicative order q_i^{e_i}, and if x_i is the discrete logarithm of α^{(p−1)/q_i^{e_i}} to the base γ^{(p−1)/q_i^{e_i}}, then we have 0 ≤ x_i < q_i^{e_i} and

    x ≡ x_i (mod q_i^{e_i}).

Thus, if we compute the values x_1, . . . , x_r, using the algorithm in §11.2.3, we can obtain x using the algorithm of the Chinese remainder theorem (see Theorem 4.5). If we define q := max{q_1, . . . , q_r}, then the running time of this algorithm will be bounded by q^{1/2} len(p)^{O(1)}.
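The whole reduction, including the Chinese remainder step, might be sketched as follows (our own illustration: the prime-power logarithms are found by brute force for brevity, standing in for the algorithm of §11.2.3, and the factorization of p − 1 must be supplied by the caller):

```python
def dlog_prime_power(p, q, e, gamma, alpha):
    # log of alpha to the base gamma, where gamma has order q^e mod p.
    # Brute force for brevity; the recursion of §11.2.3 would be used instead.
    beta = 1
    for x in range(q**e):
        if beta == alpha % p:
            return x
        beta = (beta * gamma) % p
    raise ValueError("alpha not in the subgroup generated by gamma")

def dlog(p, factors, gamma, alpha):
    # factors: the factorization of p - 1 as a list of (q_i, e_i) pairs.
    # gamma: a generator of Z_p^*. Returns x = log_gamma(alpha).
    x, n = 0, 1
    for qi, ei in factors:
        ni = qi**ei
        expo = (p - 1) // ni
        # x_i = log of alpha^((p-1)/q_i^e_i) to the base gamma^((p-1)/q_i^e_i)
        xi = dlog_prime_power(p, qi, ei, pow(gamma, expo, p), pow(alpha, expo, p))
        # Chinese remainder step: merge x = x_i (mod q_i^e_i) into x mod n.
        t = ((xi - x) * pow(n, -1, ni)) % ni    # pow(n, -1, ni): Python 3.8+
        x, n = x + n * t, n * ni
    return x
```

With p = 19 and p − 1 = 2 · 3^2, the generator γ = 2 gives dlog(19, [(2, 1), (3, 2)], 2, 3) = 13, since 2^13 ≡ 3 (mod 19). As the text notes, the cost is dominated by the largest prime q_i.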

We conclude that

    the difficulty of computing discrete logarithms in Z_p^* is determined by the size of the largest prime dividing p − 1.

11.2.5 A space-efficient square-root time algorithm

We present a more space-efficient alternative to the algorithm in §11.2.2, the

analysis of which we leave as a series of exercises for the reader.

The algorithm makes a somewhat heuristic assumption that we have a

function that “behaves” for all practical purposes like a random function.

Such functions can indeed be constructed using cryptographic techniques

under reasonable intractability assumptions; however, for the particular ap-

plication here, one can get by in practice with much simpler constructions.

Let p be a prime, q a prime dividing p − 1, γ an element of Z_p^* that generates a subgroup G of Z_p^* of order q, and α ∈ G. Let F be a function


mapping elements of G to {0, . . . , q − 1}. Define H : G → G to be the function that sends β to βαγ^{F(β)}.

The algorithm runs as follows:

    i ← 1
    x ← 0, β ← α
    x′ ← F(β), β′ ← H(β)
    while β ≠ β′ do
        x ← (x + F(β)) mod q, β ← H(β)