Define
$$\rho_k \equiv \frac{\sigma_k}{\sigma_{k+1}}, \qquad k = 1, 2, \ldots,$$
and note that $\sigma_{k-1}/\sigma_k = \rho_{k-1}$. If the three-term recurrence $\sigma_{k+1} = 2\sigma_1 \sigma_k - \sigma_{k-1}$ is divided through by $\sigma_k$, then $\sigma_{k+1}/\sigma_k = 2\sigma_1 - \rho_{k-1}$, and $\sigma_{k+1}/\sigma_k = 1/\rho_k$. Therefore the relation (12.9) translates into the recurrence
$$\rho_{k+1} = \left( 2\sigma_1 - \rho_k \right)^{-1}, \qquad k = 0, 1, \ldots,$$
with $\rho_0 = \sigma_0/\sigma_1 = 1/\sigma_1$.
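This recurrence is easy to verify numerically against the definition $\rho_k = \sigma_k/\sigma_{k+1}$, using the fact that $\sigma_k = C_k(\sigma_1) = \cosh(k \,\mathrm{arccosh}\, \sigma_1)$ when $\sigma_1 > 1$. A minimal sketch in NumPy (the value $\sigma_1 = 2$ is an arbitrary choice of ours):

```python
import numpy as np

sigma1 = 2.0  # example value of theta/delta; any sigma1 > 1 works
# sigma_k = C_k(sigma1), evaluated via cosh since sigma1 > 1
sigma = lambda k: np.cosh(k * np.arccosh(sigma1))

rho = 1.0 / sigma1                    # rho_0 = sigma_0 / sigma_1
for k in range(1, 8):
    rho = 1.0 / (2.0 * sigma1 - rho)  # recurrence rho_{k+1} = (2 sigma1 - rho_k)^{-1}
    # agrees with the definition rho_k = sigma_k / sigma_{k+1}
    assert np.isclose(rho, sigma(k) / sigma(k + 1))
```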
Finally, the following algorithm is obtained.
ALGORITHM 12.1: Chebyshev Acceleration
1. $r_0 = b - A x_0$; $\sigma_1 = \theta/\delta$;
2. $\rho_0 = 1/\sigma_1$; $d_0 = \frac{1}{\theta}\, r_0$;
3. For $k = 0, \ldots,$ until convergence Do:
4.     $x_{k+1} = x_k + d_k$
5.     $r_{k+1} = r_k - A d_k$
6.     $\rho_{k+1} = (2\sigma_1 - \rho_k)^{-1}$;
7.     $d_{k+1} = \rho_{k+1} \rho_k\, d_k + \frac{2 \rho_{k+1}}{\delta}\, r_{k+1}$
8. EndDo
Lines 7 and 4 can also be recast into one single update of the form
$$x_{k+1} = x_k + \rho_k \left[ \rho_{k-1} \left( x_k - x_{k-1} \right) + \frac{2}{\delta}\, r_k \right].$$
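As an illustration, the algorithm above can be sketched in NumPy. This is only a sketch: the function name, the convergence test on $\|r_k\|$, and the iteration cap are our own choices, and bounds $\alpha \le \lambda_{\min}$, $\beta \ge \lambda_{\max}$ on the spectrum of the symmetric positive definite matrix $A$ are assumed to be supplied.

```python
import numpy as np

def chebyshev_acceleration(A, b, x0, alpha, beta, maxiter=200, tol=1e-10):
    """Chebyshev acceleration for Ax = b, given bounds [alpha, beta]
    on the spectrum of the SPD matrix A."""
    theta = (beta + alpha) / 2.0   # center of the interval
    delta = (beta - alpha) / 2.0   # half-width of the interval
    sigma1 = theta / delta
    x = x0.copy()
    r = b - A @ x                  # line 1: r_0 = b - A x_0
    rho = 1.0 / sigma1             # line 2: rho_0 = 1/sigma_1
    d = r / theta                  # line 2: d_0 = (1/theta) r_0
    for _ in range(maxiter):       # line 3
        x = x + d                  # line 4
        r = r - A @ d              # line 5
        rho_next = 1.0 / (2.0 * sigma1 - rho)                  # line 6
        d = rho_next * rho * d + (2.0 * rho_next / delta) * r  # line 7
        rho = rho_next
        if np.linalg.norm(r) < tol:
            break
    return x
```

For example, on a diagonal system with spectrum $\{1, 2, 3\}$ and the exact bounds $\alpha = 1$, $\beta = 3$, the iteration reduces the residual by roughly a constant factor per step and converges in a few dozen iterations.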
It can be shown that when $\alpha = \lambda_{\min}$ and $\beta = \lambda_{\max}$, the resulting preconditioned matrix minimizes the condition number of the preconditioned matrices of the form $A s(A)$ over all polynomials $s$ of degree $\le k-1$. However, when used in conjunction with the Conjugate Gradient method, it is observed that the polynomial which minimizes the total number of Conjugate Gradient iterations is far from being the one which minimizes the condition number. If instead of taking $\alpha = \lambda_{\min}$ and $\beta = \lambda_{\max}$, the interval $[\alpha, \beta]$ is chosen to be slightly inside the interval $[\lambda_{\min}, \lambda_{\max}]$, a much faster convergence might be achieved. The true optimal parameters, i.e., those that minimize the number of iterations of the polynomial preconditioned Conjugate Gradient method, are difficult to determine in practice.
There is a slight disadvantage to the approaches described above. The parameters $\alpha$ and $\beta$, which approximate the smallest and largest eigenvalues of $A$, are usually not available beforehand and must be obtained in some dynamic way. This may be a problem mainly because a software code based on Chebyshev acceleration could become quite complex.
To remedy this, one may ask whether the values provided by an application of Gershgorin's theorem can be used for $\alpha$ and $\beta$. Thus, in the symmetric case, the parameter $\alpha$, which estimates the smallest eigenvalue of $A$, may be nonpositive even when $A$ is a positive definite matrix. However, when $\alpha \le 0$, the problem of minimizing (12.5) is not well defined, since it does not have a unique solution due to the non strict-convexity of the uniform norm. An alternative uses the $L_2$-norm on $[\alpha, \beta]$ with respect to some weight function $w(\lambda)$. This "least-squares" polynomials approach is considered next.
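The Gershgorin issue mentioned above is easy to reproduce numerically. The 1-D Poisson matrix $\mathrm{tridiag}(-1, 2, -1)$ (our choice of example) is symmetric positive definite, yet Gershgorin's theorem only yields the lower bound $0$:

```python
import numpy as np

n = 5
# 1-D Poisson matrix tridiag(-1, 2, -1): symmetric positive definite
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Gershgorin lower bound: min_i ( a_ii - sum_{j != i} |a_ij| )
radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
gersh_lower = np.min(np.diag(A) - radii)

eigs = np.linalg.eigvalsh(A)
print(gersh_lower)     # 0.0 -- the estimate of alpha is nonpositive
print(eigs.min() > 0)  # True -- A is nevertheless positive definite
```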
12.3.3 Least-Squares Polynomials