x_new := x_old − [f′(x_old)]^(−1) f(x_old)

gives

    |x_new − z| ≤ (δ/ρ) ∫₀¹ t dt |x_old − z|² = (δ/2ρ) |x_old − z|².

From here on the argument is the same as in the one dimensional case.
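Before turning to implementation, the quadratic contraction expressed by this estimate can be observed numerically. The following Python sketch (Python rather than the text's MATLAB; the function f(x) = x² − 2 and the starting guess are purely illustrative) tabulates the errors of the Newton iteration and the ratio of each error to the square of the previous one:

```python
import math

# Newton iteration x_new = x_old - f(x_old)/f'(x_old) applied to the
# illustrative function f(x) = x^2 - 2, whose positive zero is sqrt(2).
f = lambda x: x * x - 2
fprime = lambda x: 2 * x

x = 2.0                      # illustrative starting guess
errors = []
for _ in range(5):
    x = x - f(x) / fprime(x)
    errors.append(abs(x - math.sqrt(2)))

# Quadratic convergence: each new error is about a constant times the
# square of the previous one (here the ratios settle near 1/(2*sqrt(2))).
for e_prev, e_next in zip(errors, errors[1:]):
    if e_prev > 1e-8:        # below this, rounding noise dominates
        print(e_next / e_prev ** 2)
```

The printed ratios stabilize, which is exactly the statement that the error is squared (up to a constant) at every step.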

1.2.3 Implementation.

We return to the one dimensional case.

In numerical practice we have to deal with two problems: it may not be easy to encode the derivative, and we may not be able to tell in advance whether the conditions for Newton's method to work are indeed fulfilled.

In case f is a polynomial, MATLAB has an efficient command "polyder" for computing the derivative of f. Otherwise we replace the derivative by the slope of the secant, which requires the input of two initial values, call them x₋ and x_c, and replaces the derivative in Newton's method by

    f_app(x_c) = (f(x_c) − f(x₋)) / (x_c − x₋).

So at each stage of the Newton iteration we carry along two values of x, the "current value" denoted say by x_c and the "old value" denoted by x₋. We also carry along two values of f, the value of f at x_c denoted by f_c and the value of f at x₋ denoted by f₋. So the Newton iteration will look like

fpc = (fc - fm)/(xc - xm);        % secant slope; xm, fm hold the old values x-, f-
xnew = xc - fc/fpc;               % Newton step with the approximate derivative
xm = xc; fm = fc;                 % current values become the old values
xc = xnew; fc = feval(fname, xc);

In the last line, the command feval is the MATLAB function-evaluation command: if fname is a "script" (that is, an expression enclosed in ' ') giving the name of a function, then feval(fname,x) evaluates the function at the point x.
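For readers without MATLAB, the same loop can be wrapped as a complete function in Python (the function f, the starting values, and the name secant_newton are illustrative choices, not from the text):

```python
def secant_newton(f, xm, xc, tol=1e-12, maxit=50):
    # xm, xc: the two required starting values (the text's x- and x_c);
    # fm, fc: the corresponding function values.
    fm, fc = f(xm), f(xc)
    for _ in range(maxit):
        fpc = (fc - fm) / (xc - xm)   # slope of the secant, replacing f'(xc)
        xnew = xc - fc / fpc          # Newton step with the approximate slope
        xm, fm = xc, fc               # current values become the old values
        xc, fc = xnew, f(xnew)
        if abs(fc) < tol:             # stop once f is essentially zero
            break
    return xc

# Illustrative use: the positive zero of x^2 - 2, starting from 1 and 2.
root = secant_newton(lambda x: x * x - 2, 1.0, 2.0)
```

Note that only one new function evaluation is needed per step, since f(x₋) is carried over from the previous iteration.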

The second issue, that of deciding whether Newton's method should be used at all, is handled as follows: if the zero in question is a critical point, so that f′(z) = 0, there is no chance of Newton's method working. So let us assume that f′(z) ≠ 0, which means that f changes sign at z, a fact that we can verify by looking at the graph of f. So assume that we have found an interval [a, b] containing the zero we are looking for, and such that f takes on opposite signs at the end-points:

    f(a)f(b) < 0.

A sure but slow method of narrowing in on a zero of f contained in this interval is the "bisection method": evaluate f at the midpoint ½(a + b). If this value has a sign opposite to that of f(a), replace b by ½(a + b). Otherwise replace a by ½(a + b). This produces an interval of half the length of [a, b] containing a zero.
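One bisection step, exactly as just described, can be sketched in Python (the text gives no code for this; the test function and bracket below are illustrative):

```python
def bisect_step(f, a, b):
    # Assumes f(a) * f(b) < 0.  Returns an interval of half the length
    # that still contains a zero of f.
    m = 0.5 * (a + b)            # the midpoint (1/2)(a + b)
    if f(m) * f(a) < 0:          # f(m) has sign opposite to f(a): replace b
        return a, m
    return m, b                  # otherwise replace a

# Illustrative use on f(x) = x^2 - 2 with the bracket [1, 2]:
a, b = 1.0, 2.0
for _ in range(40):
    a, b = bisect_step(lambda x: x * x - 2, a, b)
```

After 40 steps the bracket has length 2^(−40), so the midpoint pins down the zero to about twelve decimal places, slowly but with certainty.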

The idea now is to check at each stage whether Newton's method leaves us in the interval, in which case we apply it, or else we apply the bisection method.
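One way to realize this combined strategy is sketched below in Python (a sketch under the assumption f(a)f(b) < 0; the function, its derivative, and the name safeguarded_newton are illustrative, not the book's own code):

```python
def safeguarded_newton(f, fprime, a, b, tol=1e-12, maxit=100):
    # Maintain a bracketing interval [a, b] with f(a)*f(b) < 0.  Use the
    # Newton step whenever it lands inside the interval; otherwise bisect.
    x = 0.5 * (a + b)
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x_newton = x - fx / fprime(x)
        x = x_newton if a < x_newton < b else 0.5 * (a + b)
        # Shrink the bracket so it still straddles the sign change.
        if f(a) * f(x) < 0:
            b = x
        else:
            a = x
    return x

root = safeguarded_newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0, 2.0)
```

The bracket guarantees that we can never escape the interval, while the Newton steps supply the fast convergence whenever they are safe to take.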

We now turn to the more difficult existence problem.

1.2.4 The existence theorem.

For the purposes of the proof, in order to simplify the notation, let us assume that we have "shifted our coordinates" so as to take x_0 = 0. Also let

    B = {x : |x| ≤ 1}.

We need to assume that P′(x) is nowhere zero, and that P′′(x) is bounded. In fact, we assume that there is a constant K such that

    |P′(x)^(−1)| ≤ K,   |P′′(x)| ≤ K,   ∀x ∈ B.    (1.13)

Proposition 1.2.1 Let τ = 3/2 and choose the K in (1.13) so that

    K ≥ 2^(3/4).

Let

    c = (8/3) ln K.

Then if

    |P(0)| ≤ K^(−5)    (1.14)

the recursion (1.5) starting with x_0 = 0 satisfies

    x_n ∈ B   ∀n    (1.15)

and

    |x_n − x_{n−1}| ≤ e^(−cτ^n).    (1.16)

In particular, the sequence {x_n} converges to a zero of P.


Proof. In fact, we will prove a somewhat more general result. So we will let τ be any real number satisfying

    1 < τ < 2

and we will choose c in terms of K and τ to make the proof work. First of all we notice that (1.15) is a consequence of (1.16) if c is sufficiently large. In fact,

    x_j = (x_j − x_{j−1}) + · · · + (x_1 − x_0)

so

    |x_j| ≤ |x_j − x_{j−1}| + · · · + |x_1 − x_0|.

Using (1.16) for each term on the right gives

    |x_j| ≤ Σ_{n=1}^{j} e^(−cτ^n) < Σ_{n=1}^{∞} e^(−cτ^n) < Σ_{n=1}^{∞} e^(−cn(τ−1)) = e^(−c(τ−1)) / (1 − e^(−c(τ−1))).

Here the third inequality follows from writing

    τ = 1 + (τ − 1)

so by the binomial formula

    τ^n = 1 + n(τ − 1) + · · · > n(τ − 1)

since τ > 1. The equality is obtained by summing the geometric series. So if we choose c sufficiently large that

    e^(−c(τ−1)) / (1 − e^(−c(τ−1))) ≤ 1    (1.17)

(1.15) follows from (1.16). This choice of c is conditioned by our choice of τ.
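A quick numerical sanity check of these constants (a Python computation, not part of the text): with τ = 3/2, the proposition's choice c = (8/3) ln K together with the requirement K ≥ 2^(3/4) gives c ≥ 2 ln 2, so e^(−c(τ−1)) ≤ 1/2 and condition (1.17) holds, with equality exactly at the smallest allowed K:

```python
import math

tau = 1.5                         # the proposition's tau = 3/2
K = 2 ** 0.75                     # smallest K the proposition allows
c = (8.0 / 3.0) * math.log(K)     # c = (8/3) ln K, which equals 2 ln 2 here

r = math.exp(-c * (tau - 1))      # ratio of the geometric series; 1/2 here
lhs = r / (1 - r)                 # left-hand side of (1.17)
print(c, r, lhs)                  # lhs is 1 at this borderline K (up to rounding)
```

Any larger K only increases c, shrinks the ratio r, and makes (1.17) hold with room to spare.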

But at least we now know that if we can arrange that (1.16) holds, then by

choosing a possibly smaller value of c (so that (1.16) continues to hold) we can

guarantee that the algorithm keeps going.

So let us try to prove (1.16) by induction. If we assume it is true for n, we may write

    |x_{n+1} − x_n| = |S_n P(x_n)|

where we set

    S_n = P′(x_n)^(−1).    (1.18)

We use the first inequality in (1.13) and the definition (1.5) for the case n − 1 (which says that x_n = x_{n−1} − S_{n−1}P(x_{n−1})) to get

    |S_n P(x_n)| ≤ K |P(x_{n−1} − S_{n−1}P(x_{n−1}))|.    (1.19)

Taylor's formula with remainder says that for any twice continuously differentiable function f,

    f(y + h) = f(y) + f′(y)h + R(y, h)   where   |R(y, h)| ≤ (1/2) sup_z |f′′(z)| h²

where the supremum is taken over the interval between y and y + h. If we use Taylor's formula with remainder with

    f = P,   y = x_{n−1},   and   h = −S_{n−1}P(x_{n−1}) = x_n − x_{n−1}