that encloses an approximate convex hull of the spectrum. Consider an ellipse centered
at c, and with focal distance d. Then as seen in Chapter 6, the shifted and scaled
Chebyshev polynomials defined by

    T_k(λ) = C_k( (c − λ)/d ) / C_k( c/d )
are nearly optimal. The use of these polynomials leads again to an attractive three-term
recurrence and to an algorithm similar to Algorithm 12.1. In fact, the recurrence is
identical, except that the scalars involved can now be complex to accommodate cases
where the ellipse has foci not necessarily located on the real axis. However, when A is
real, the symmetry of the foci with respect to the real axis can be exploited and the
algorithm can still be written in real arithmetic.
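As a small illustration, the three-term Chebyshev recurrence C_{j+1}(z) = 2 z C_j(z) − C_{j−1}(z) works unchanged when its argument is complex. The sketch below (the function name is ours; it evaluates the shifted and scaled polynomial at a point, rather than implementing the full iteration of Algorithm 12.1) shows the polynomial's defining property: it equals 1 at the origin and is small on the interval between the foci.

```python
def scaled_chebyshev(lmbda, k, c, d):
    """Evaluate T_k(lmbda) = C_k((c - lmbda)/d) / C_k(c/d) with the
    three-term recurrence C_{j+1}(z) = 2 z C_j(z) - C_{j-1}(z).
    The scalars c, d (ellipse center and focal distance) may be complex."""
    def cheb(z):
        prev, cur = 1.0 + 0.0j, z
        if k == 0:
            return prev
        for _ in range(k - 1):
            prev, cur = cur, 2.0 * z * cur - prev
        return cur
    return cheb((c - lmbda) / d) / cheb(c / d)

# By the normalization, T_k(0) = 1, while T_k stays small in modulus on
# the interval [c - d, c + d] (the degenerate ellipse with real foci):
t_origin = scaled_chebyshev(0.0, k=8, c=2.0, d=1.0)   # equals 1 by construction
t_inside = scaled_chebyshev(2.5, k=8, c=2.0, d=1.0)   # small in modulus
```

Because the recurrence is carried out in complex arithmetic, the same code covers ellipses whose foci are off the real axis; exploiting the symmetry mentioned above would let a real matrix be handled entirely in real arithmetic.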
An alternative to Chebyshev polynomials over ellipses employs a polygon H that
contains σ(A). Polygonal regions may better represent the shape of an arbitrary
spectrum. The best polynomial for the infinity norm is not known explicitly, but it may
be computed by an algorithm known in approximation theory as the Remez algorithm. It may
be simpler to use an L²-norm instead of the infinity norm, i.e., to solve (12.11) where
w is some weight function defined on the boundary of the polygon H.
Now here is a sketch of an algorithm based on this approach. We use an L²-norm
associated with Chebyshev weights on the edges of the polygon. If the contour of H
consists of k edges each with center c_i and half-length d_i, then the weight on each
edge is defined by

    w_i(λ) = (2/π) |d_i² − (λ − c_i)²|^(−1/2),    i = 1, ..., k.
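As a quick numerical check of this weight (a sketch using NumPy; the function name and default node count are ours), the change of variables λ = c_i + d_i cos θ turns integration against w_i into a plain average over Chebyshev angles, so each edge carries total weight 2:

```python
import numpy as np

def cheb_edge_integral(f, c, d, n=16):
    """Integrate f against w(l) = (2/pi) |d^2 - (l - c)^2|^(-1/2) over the
    edge [c - d, c + d].  After l = c + d*cos(theta) the integral becomes
    (2/pi) * int_0^pi f(c + d*cos(theta)) dtheta, which the Gauss-Chebyshev
    rule below computes exactly when f is a polynomial of degree < 2n."""
    theta = (2.0 * np.arange(1, n + 1) - 1.0) * np.pi / (2.0 * n)
    return (2.0 / n) * np.sum(f(c + d * np.cos(theta)))

# The weight itself integrates to 2 on every edge:
total = cheb_edge_integral(lambda l: np.ones_like(l), c=1.0, d=0.5)
```

The same rule applied to low-degree polynomials returns the exact moments, e.g. the integral of λ against w on [c − d, c + d] is 2c, which is the mechanism behind the finite procedure discussed next.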
Using the power basis to express the best polynomial is not a safe practice. It is
preferable to use the Chebyshev polynomials associated with the ellipse of smallest area
containing H. With the above weights, or any other Jacobi weights on the edges, there is
a finite procedure which does not require numerical integration to compute the best
polynomial. To do this, each of the polynomials of the basis (namely, the Chebyshev
polynomials associated with the ellipse of smallest area containing H) must be expressed
as a linear combination of the Chebyshev polynomials associated with the different
intervals [c_i − d_i, c_i + d_i]. This redundancy allows exact expressions for the
integrals involved in computing the least-squares solution to (12.11).
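To illustrate why exact expressions are available (a sketch; the function name and the node count tied to the degree are our choices), the Chebyshev-weighted integrals on one edge reduce, after λ = c_i + d_i cos θ, to discrete cosine sums that are exact whenever the integrand is a polynomial:

```python
import numpy as np

def cheb_coeffs_on_edge(p, c, d, deg):
    """Chebyshev expansion coefficients of p on the interval [c-d, c+d]:
    p(l) = a[0] + sum_{j>=1} a[j] T_j((l - c)/d).  The discrete cosine sums
    below equal the Chebyshev-weighted integrals exactly whenever p is a
    polynomial of degree <= deg, so no numerical integration is needed."""
    n = deg + 1
    theta = (2.0 * np.arange(1, n + 1) - 1.0) * np.pi / (2.0 * n)
    vals = p(c + d * np.cos(theta))
    a = np.array([(2.0 / n) * np.sum(vals * np.cos(j * theta)) for j in range(n)])
    a[0] /= 2.0
    return a

# p(l) = l on [0.5, 1.5] is exactly c*T_0 + d*T_1: coefficients [1.0, 0.5, 0, 0]
coeffs = cheb_coeffs_on_edge(lambda l: l, c=1.0, d=0.5, deg=3)
```

Expanding the ellipse-based basis polynomials in the interval-based polynomials of each edge, as described above, reduces every integral in (12.11) to finite sums of this kind.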
Next, the main lines of a preconditioned GMRES algorithm based on least-squares
polynomials are described. Eigenvalue estimates are obtained from a GMRES step at the
beginning of the outer loop. This adaptive GMRES step corrects the current solution, and
the eigenvalue estimates are used to update the current polygon H. Correcting the
solution at this stage is particularly important since it often results in a few orders
of magnitude
improvement. This is because the polygon H may be inaccurate and the residual vector is
dominated by components in one or two eigenvectors. The GMRES step will immediately
annihilate those dominating components. In addition, the eigenvalues associated with
these components will now be accurately represented by eigenvalues of the Hessenberg
matrix.
ALGORITHM 12.3  Polynomial Preconditioned GMRES
1.  Start or Restart:
2.     Compute the current residual vector r := b − Ax.
3.  Adaptive GMRES step:
4.     Run m1 steps of GMRES for solving Ad = r.
5.     Update x by x := x + d.
6.     Get eigenvalue estimates from the eigenvalues of the
7.     Hessenberg matrix.
8.  Compute new polynomial:
9.     Refine H from previous hull H and new eigenvalue estimates.
10.    Get new best polynomial s_k.
11. Polynomial Iteration:
12.    Compute the current residual vector r := b − Ax.
13.    Run m2 steps of GMRES applied to s_k(A)A d = s_k(A)r.
14.    Update x by x := x + d.
15.    Test for convergence.
16.    If solution converged then Stop; else GoTo 1.
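The adaptive GMRES step (lines 2-7) can be sketched as follows. This is a minimal dense-matrix illustration in NumPy, not the book's implementation: one Arnoldi-based GMRES cycle that both corrects x and returns the Ritz values of the Hessenberg matrix as eigenvalue estimates. All function names are ours, and the polygon refinement and best-polynomial construction (lines 8-10) are omitted.

```python
import numpy as np

def arnoldi(A, r, m):
    """m steps of the Arnoldi process: returns an orthonormal Krylov basis V
    and the (m+1) x m Hessenberg matrix H satisfying A V_m = V_{m+1} H."""
    n = r.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r / np.linalg.norm(r)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def adaptive_gmres_step(A, b, x, m):
    """One GMRES(m) cycle: corrects x and returns the Ritz values of the
    Hessenberg matrix as eigenvalue estimates (lines 2-7 of the algorithm)."""
    r = b - A @ x
    beta = np.linalg.norm(r)
    V, H = arnoldi(A, r, m)
    rhs = np.zeros(m + 1)
    rhs[0] = beta
    y, *_ = np.linalg.lstsq(H, rhs, rcond=None)   # GMRES least-squares problem
    ritz = np.linalg.eigvals(H[:m, :m])           # eigenvalue estimates
    return x + V[:, :m] @ y, ritz

# Demo on a small diagonal system with spectrum in [1, 10]:
rng = np.random.default_rng(0)
A = np.diag(np.linspace(1.0, 10.0, 20))
b = rng.standard_normal(20)
x1, ritz = adaptive_gmres_step(A, b, np.zeros(20), m=6)
```

In an outer loop following the algorithm, the returned Ritz values would be used to update the polygon H before constructing s_k, and the polynomial iteration (line 13) would apply GMRES to the preconditioned operator s_k(A)A.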
Example 12.1
Table 12.1 shows the results of applying GMRES(20) with polynomial
preconditioning to the first four test problems described in Section 3.7.

    Matrix    Iters    Kflops    Residual    Error
    F2DA         56      2774    0.22E-05    0.51E-06