PROPOSITION  The residual vector of the approximate solution x_m computed by the FOM Algorithm is such that

    b - A x_m = - h_{m+1,m} (e_m^T y_m) v_{m+1}

and, therefore,

    || b - A x_m ||_2 = h_{m+1,m} | e_m^T y_m |.

Proof. We have the relations

    b - A x_m = b - A (x_0 + V_m y_m)
              = r_0 - A V_m y_m
              = β v_1 - V_m H_m y_m - h_{m+1,m} (e_m^T y_m) v_{m+1}.

By the definition of y_m, H_m y_m = β e_1, and so β v_1 - V_m H_m y_m = 0, from which the result
follows immediately.
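The residual identity above can be verified numerically. The sketch below (my own illustration, using NumPy; the test matrix and sizes are arbitrary) runs m steps of the Arnoldi procedure, forms the FOM iterate x_m, and compares the directly computed residual norm with h_{m+1,m} |e_m^T y_m|:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 10
A = rng.standard_normal((n, n)) + n * np.eye(n)  # shifted so A is far from singular
b = rng.standard_normal(n)
x0 = np.zeros(n)

# Arnoldi: build orthonormal basis V and the (m+1) x m Hessenberg matrix H.
r0 = b - A @ x0
beta = np.linalg.norm(r0)
V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
V[:, 0] = r0 / beta
for j in range(m):
    w = A @ V[:, j]
    for i in range(j + 1):            # Gram-Schmidt against previous vectors
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]

# FOM iterate: y_m = H_m^{-1}(beta e_1), x_m = x0 + V_m y_m.
e1 = np.zeros(m)
e1[0] = beta
y = np.linalg.solve(H[:m, :m], e1)
x = x0 + V[:, :m] @ y

true_norm = np.linalg.norm(b - A @ x)    # ||b - A x_m||_2 computed directly
formula = H[m, m - 1] * abs(y[m - 1])    # h_{m+1,m} |e_m^T y_m|
print(true_norm, formula)                # the two values agree to rounding
```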

A rough estimate of the cost of each step of the algorithm is determined as follows. If
Nz(A) is the number of nonzero elements of A, then m steps of the Arnoldi procedure will
require m matrix-vector products at a cost of 2 m Nz(A). Each of the Gram-Schmidt
steps costs approximately 4 j n operations at step j, which brings the total over the m steps to
approximately 2 m^2 n. Thus, on average, a step of FOM costs approximately

    2 Nz(A) + 2 m n.

Regarding storage, m vectors of length n are required to save the basis V_m. Additional
vectors must be used to keep the current solution and right-hand side, and a scratch vector
for the matrix-vector product. In addition, the Hessenberg matrix H_m must be saved. The
total is therefore roughly

    (m + 3) n + m^2 / 2.

In most situations m is small relative to n, so this cost is dominated by the first term.
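To make these estimates concrete, the following sketch evaluates the per-step work and the storage for a hypothetical problem with n = 100,000 unknowns, Nz(A) = 1,000,000 nonzeros, and m = 20 (all numbers illustrative, not from the text):

```python
n = 100_000        # problem size (hypothetical)
nz = 1_000_000     # nonzeros in A (hypothetical)
m = 20             # Krylov subspace dimension

# Average cost of one FOM step: matrix-vector product plus Gram-Schmidt.
flops_per_step = 2 * nz + 2 * m * n

# Storage: m + 3 vectors of length n, plus the Hessenberg matrix.
storage = (m + 3) * n + m**2 / 2

print(flops_per_step)   # 6,000,000 operations per step on average
print(storage)          # 2,300,200 numbers; the (m + 3) n term dominates
```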

Variation 1: Restarted FOM

Consider now the algorithm from a practical viewpoint. As m increases, the computational
cost increases at least as O(m^2 n) because of the Gram-Schmidt orthogonalization. The
memory cost increases as O(m n). For large n this limits the largest value of m that can
be used. There are two remedies. The first is to restart the algorithm periodically and the
second is to "truncate" the orthogonalization in the Arnoldi algorithm. In this section we
consider the first of these two options, which is described below.

ALGORITHM: Restarted FOM (FOM(m))

1. Compute r_0 = b - A x_0, β = || r_0 ||_2, and v_1 = r_0 / β.
2. Generate the Arnoldi basis V_m and the matrix H_m using the Arnoldi algorithm
3.    starting with v_1.
4. Compute y_m = H_m^{-1} (β e_1) and x_m = x_0 + V_m y_m. If satisfied then Stop.
5. Set x_0 := x_m and go to 1.
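The restart loop above can be sketched as follows; this is a minimal NumPy illustration under my own naming and stopping conventions, not the author's implementation:

```python
import numpy as np

def fom_restarted(A, b, x0, m, tol=1e-8, max_restarts=100):
    """Restarted FOM, FOM(m): run m Arnoldi steps from the current
    residual, form the FOM iterate, and restart until convergence."""
    n = A.shape[0]
    x = x0.copy()
    for _ in range(max_restarts):
        r = b - A @ x                        # step 1: r_0, beta, v_1
        beta = np.linalg.norm(r)
        if beta <= tol:                      # step 4's "if satisfied then Stop"
            break
        V = np.zeros((n, m + 1))             # steps 2-3: Arnoldi starting from v_1
        H = np.zeros((m + 1, m))
        V[:, 0] = r / beta
        k = m
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):           # Gram-Schmidt orthogonalization
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:          # lucky breakdown: K_j is invariant
                k = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(k)                     # step 4: y = H^{-1}(beta e_1)
        e1[0] = beta
        y = np.linalg.solve(H[:k, :k], e1)
        x = x + V[:, :k] @ y                 # step 5: restart from x_m
    return x

rng = np.random.default_rng(1)
n = 80
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)
x = fom_restarted(A, b, np.zeros(n), m=10)
print(np.linalg.norm(b - A @ x))  # residual is below the tolerance
```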

There are many possible variations to this basic scheme. One that is generally more
economical in practice is based on the observation that sometimes a small m is sufficient
for convergence and sometimes the largest possible m is necessary. Hence, the idea of
averaging over different values of m. Start the algorithm with m = 1 and increment m by
one in line 5 until a certain m_max is reached, after which m is reset to one, or kept the
same. These variations will not be considered here.
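The incrementing schedule just described can be made precise with a tiny generator (a hypothetical helper, not part of the text's algorithm):

```python
from itertools import islice

def m_schedule(m_max, reset=True):
    """Yield the Krylov dimension for each restart cycle: m = 1, 2, ...,
    m_max, then either reset to 1 or stay at m_max."""
    m = 1
    while True:
        yield m
        if m < m_max:
            m += 1
        elif reset:
            m = 1

print(list(islice(m_schedule(4), 10)))              # [1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
print(list(islice(m_schedule(4, reset=False), 6)))  # [1, 2, 3, 4, 4, 4]
```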

Example. Table 6.1 shows the results of applying the FOM algorithm with no pre-
conditioning to three of the test problems described in Section 3.7.

Matrix Iters Kflops Residual Error

F2DA 109 4442 0.36E-03 0.67E-04

F3D 66 11664 0.87E-03 0.35E-03

ORS 300 13558 0.26E+00 0.71E-04

Table 6.1: A test run of FOM with no preconditioning.