Chapter 2: Eigensystem Analysis

Usage Notes

This chapter includes routines for linear eigensystem analysis. Many of these are for matrices with special properties. Some routines compute just a portion of the eigensystem. Use of the appropriate routine can substantially reduce computing time and storage requirements compared to computing a full eigensystem for a general complex matrix.

An ordinary linear eigensystem problem is represented by the equation Ax = λx, where A denotes an n × n matrix. The value λ is an eigenvalue, and x ≠ 0 is the corresponding eigenvector. The eigenvector is determined only up to a scalar factor. In all routines, we have chosen this factor so that x has Euclidean length one, and the component of x of largest magnitude and smallest index is positive. If x is a complex vector, this component is real and positive.
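The normalization convention above can be sketched in a few lines of pure Python (a hypothetical helper, not IMSL code): scale the vector to unit Euclidean length, then flip its sign if the component of largest magnitude (ties broken by smallest index) is negative.

```python
import math

def normalize_eigenvector(x):
    """Scale x to unit Euclidean length with the component of largest
    magnitude (smallest index in case of ties) made positive, mirroring
    the convention described above.  Illustrative sketch, real case only."""
    norm = math.sqrt(sum(v * v for v in x))
    x = [v / norm for v in x]
    # index of the largest-magnitude component; smallest index wins ties
    k = max(range(len(x)), key=lambda i: (abs(x[i]), -i))
    if x[k] < 0:
        x = [-v for v in x]
    return x

# An eigenvector of [[2, 1], [1, 2]] for eigenvalue 3 is any multiple of (1, 1):
v = normalize_eigenvector([-3.0, -3.0])
```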


A generalized linear eigensystem problem is represented by Ax = λBx where A and B are n × n matrices. The value λ is an eigenvalue, and x is the corresponding eigenvector. The eigenvectors are normalized in the same manner as for the ordinary eigensystem problem. The linear eigensystem routines have names that begin with the letter “E”. The generalized linear eigensystem routines have names that begin with the letter “G”. This prefix is followed by a two-letter code for the type of analysis that is performed. That is followed by another two-letter suffix for the form of the coefficient matrix. The following tables summarize the names of the eigensystem routines.
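The naming scheme above is mechanical enough to decode programmatically. The following is a tiny hypothetical Python helper (not part of the library) that splits a routine name into its prefix, analysis code, and matrix-form code, using the codes that appear in the tables below.

```python
# Hypothetical decoder for the routine-naming scheme described above.
PREFIX = {"E": "ordinary", "G": "generalized"}
ANALYSIS = {
    "VL": "all eigenvalues",
    "VC": "all eigenvalues and eigenvectors",
    "VA": "extreme eigenvalues",
    "VE": "extreme eigenvalues and eigenvectors",
    "VB": "eigenvalues in an interval",
    "VF": "eigenvalues and eigenvectors in an interval",
    "PI": "performance index",
}
MATRIX = {
    "SF": "symmetric full", "SB": "symmetric band", "HF": "Hermitian full",
    "RG": "real general", "CG": "complex general",
    "RH": "real Hessenberg", "CH": "complex Hessenberg",
    "SP": "A symmetric, B positive definite",
}

def decode(name):
    """Split a routine name into (problem type, analysis, matrix form)."""
    return (PREFIX[name[0]], ANALYSIS[name[1:3]], MATRIX[name[3:5]])

decode("EVCSF")  # ('ordinary', 'all eigenvalues and eigenvectors', 'symmetric full')
```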

Symmetric and Hermitian Eigensystems

                                              Symmetric   Symmetric   Hermitian
                                              Full        Band        Full
All eigenvalues                               EVLSF       EVLSB       EVLHF
All eigenvalues and eigenvectors              EVCSF       EVCSB       EVCHF
Extreme eigenvalues                           EVASF       EVASB       EVAHF
Extreme eigenvalues and eigenvectors          EVESF       EVESB       EVEHF
Eigenvalues in an interval                    EVBSF       EVBSB       EVBHF
Eigenvalues and eigenvectors in an interval   EVFSF       EVFSB       EVFHF
Performance index                             EPISF       EPISB       EPIHF
General Eigensystems

                                   Real      Complex   Real        Complex
                                   General   General   Hessenberg  Hessenberg
All eigenvalues                    EVLRG     EVLCG     EVLRH       EVLCH
All eigenvalues and eigenvectors   EVCRG     EVCCG     EVCRH       EVCCH
Performance index                  EPIRG     EPICG     EPIRG       EPICG

Generalized Eigensystems Ax = λBx

                                   Real      Complex   A Symmetric
                                   General   General   B Positive Definite
All eigenvalues                    GVLRG     GVLCG     GVLSP
All eigenvalues and eigenvectors   GVCRG     GVCCG     GVCSP
Performance index                  GPIRG     GPICG     GPISP

Error Analysis and Accuracy

The remarks in this section are for the ordinary eigenvalue problem. Except in special cases, routines do not return the exact eigenvalue-eigenvector pair for the ordinary eigenvalue problem Ax = λx. The computed pair

    (λ̃, x̃)

is an exact eigenvalue-eigenvector pair for a “nearby” matrix A + E. Information about E is known only in terms of bounds of the form || E ||₂ ≤ ƒ(n) || A ||₂ ε. The value of ƒ(n) depends on the algorithm, but is typically a small fractional power of n. The parameter ε is the machine precision. By a theorem due to Bauer and Fike (see Golub and Van Loan [1989, page 342]),

    min { |λ̃ − λ| : λ ∈ σ(A) } ≤ κ(X) || E ||₂

where σ(A) is the set of all eigenvalues of A (called the spectrum of A), X is the matrix of eigenvectors, || ⋅ ||₂ is the 2-norm, and κ(X) is the condition number of X, defined as κ(X) = || X ||₂ || X⁻¹ ||₂. If A is a real symmetric or complex Hermitian matrix, then its eigenvector matrix X is respectively orthogonal or unitary. For these matrices, κ(X) = 1.

The eigenvalues λ̃ⱼ and eigenvectors x̃ⱼ computed by the EVC** routines can be checked by computing their performance index τ using the corresponding EPI** routine. The performance index is defined by Smith et al. (1976, pages 124–126) to be

    τ = max over 1 ≤ j ≤ n of  || A x̃ⱼ − λ̃ⱼ x̃ⱼ ||₁ / (10 n ε || A ||₁ || x̃ⱼ ||₁)

No significance should be attached to the factor of 10 used in the denominator. For a real vector x, the symbol || x ||₁ represents the usual 1-norm of x. For a complex vector x, the symbol || x ||₁ is defined by

    || x ||₁ = Σ from i = 1 to n of ( |Re xᵢ| + |Im xᵢ| )
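This 1-norm for complex vectors is easy to state directly in code. A minimal sketch (not IMSL code; for a real vector it reduces to the usual 1-norm, since the imaginary parts are zero):

```python
# 1-norm used by the performance index: for complex x,
# ||x||_1 = sum over i of |Re x_i| + |Im x_i|.
def one_norm(x):
    return sum(abs(v.real) + abs(v.imag) for v in x)

one_norm([3 - 4j, 1 + 0j])  # 3 + 4 + 1 = 8.0
```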
The performance index τ is related to the error analysis because

    || E x̃ⱼ ||₁ = || A x̃ⱼ − λ̃ⱼ x̃ⱼ ||₁ ≤ 10 n ε τ || A ||₁ || x̃ⱼ ||₁

where E is the “nearby” matrix discussed above.

While the exact value of τ is machine and precision dependent, the performance of an eigensystem analysis routine is defined as excellent if τ < 1, good if 1 ≤ τ ≤ 100, and poor if τ > 100. This is an arbitrary definition, but large values of τ can serve as a warning that there is a blunder in the calculation. There are also similar routines GPI** to compute the performance index for generalized eigenvalue problems.
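As an illustration of the definition (a pure-Python sketch for small real matrices, not the EPI** routines themselves), τ can be computed from the residuals A x̃ⱼ − λ̃ⱼ x̃ⱼ; for exact eigenpairs it is far below the "excellent" threshold of 1:

```python
import sys

def tau(A, lams, xs):
    """Performance index (real case): the maximum over j of
    ||A x_j - lam_j x_j||_1 / (10 n eps ||A||_1 ||x_j||_1)."""
    n = len(A)
    eps = sys.float_info.epsilon
    # ||A||_1 is the maximum absolute column sum
    norm_A = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))
    worst = 0.0
    for lam, x in zip(lams, xs):
        r = [sum(A[i][j] * x[j] for j in range(n)) - lam * x[i]
             for i in range(n)]
        res = sum(abs(v) for v in r)
        worst = max(worst,
                    res / (10 * n * eps * norm_A * sum(abs(v) for v in x)))
    return worst

# Exact eigenpairs of [[2, 1], [1, 2]]: eigenvalues 3 and 1.
s = 2.0 ** -0.5
t = tau([[2.0, 1.0], [1.0, 2.0]], [3.0, 1.0], [[s, s], [s, -s]])
```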

If the condition number κ(X) of the eigenvector matrix X is large, there can be large errors in the eigenvalues even if τ is small. In particular, it is often difficult to recognize near-multiple eigenvalues or unstable mathematical problems from numerical results. This facet of the eigenvalue problem is difficult to understand: a user often asks for the accuracy of an individual eigenvalue. This question can be answered approximately by computing the condition number of the individual eigenvalue. See Golub and Van Loan (1989, pages 344–345). For matrices A such that the computed array of normalized eigenvectors X is invertible, the condition number of λⱼ is κⱼ, the Euclidean length of row j of the inverse matrix X⁻¹. Users can compute this inverse with routine LINCG (see Chapter 1, Linear Systems). An approximate bound for the accuracy of a computed eigenvalue is then given by κⱼ ε || A ||. To obtain an approximate bound for the relative accuracy of an eigenvalue, divide this bound by |λⱼ|.
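A small sketch of this recipe for a 2 × 2 example (pure Python, not IMSL code; in practice X⁻¹ would come from LINCG): each κⱼ is the Euclidean length of row j of X⁻¹, and for a symmetric matrix, whose eigenvector matrix is orthogonal, every κⱼ is 1.

```python
import math

def eigen_condition_numbers(X):
    """kappa_j = Euclidean length of row j of X^{-1}, for a 2x2
    eigenvector matrix X (inverted here by the explicit 2x2 formula)."""
    a, b = X[0]
    c, d = X[1]
    det = a * d - b * c
    Xinv = [[d / det, -b / det], [-c / det, a / det]]
    return [math.hypot(*row) for row in Xinv]

# Orthogonal eigenvector matrix of a symmetric A: perfectly conditioned.
s = 2.0 ** -0.5
ks = eigen_condition_numbers([[s, s], [s, -s]])
```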

Reformulating Generalized Eigenvalue Problems

The generalized eigenvalue problem Ax = λBx is often difficult for users to analyze because it is frequently ill-conditioned. Occasionally, changes of variables can be performed on the given problem to ease this ill-conditioning. Suppose that B is singular but A is nonsingular. Define the reciprocal μ = λ⁻¹. Then the roles of A and B are interchanged, and the reformulated problem Bx = μAx is solved. The generalized eigenvalues μⱼ = 0 correspond to eigenvalues λⱼ = ∞; the remaining eigenvalues are λⱼ = μⱼ⁻¹. The generalized eigenvectors for λⱼ correspond to those for μⱼ. Other reformulations can be made: if B is nonsingular, the user can solve the ordinary eigenvalue problem Cx ≡ B⁻¹Ax = λx. This is not recommended as a computational algorithm for two reasons. First, it is generally less efficient than solving the generalized problem directly. Second, the matrix C is subject to perturbations due to ill-conditioning and rounding errors when computing B⁻¹A. Computing the condition numbers of the eigenvalues for C may, however, be helpful for analyzing the accuracy of results for the generalized problem.
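The reciprocal mapping above is simple enough to state directly; a minimal sketch (hypothetical helper, not IMSL code) of recovering the λⱼ from the computed μⱼ:

```python
# Change of variables mu = 1/lambda: solve Bx = mu*Ax, then map each
# computed mu back, with mu = 0 signalling an infinite eigenvalue of
# the original problem Ax = lambda*Bx.
def recover_lambdas(mus):
    return [float("inf") if mu == 0.0 else 1.0 / mu for mu in mus]

recover_lambdas([0.0, 0.5, 2.0])  # [inf, 2.0, 0.5]
```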

There is another method that users can consider to reduce the generalized problem to an alternate ordinary problem. This technique is based on first computing a matrix decomposition B = PQ, where both P and Q are matrices that are “simple” to invert. Then, the given generalized problem is equivalent to the ordinary eigenvalue problem Fy = λy, where F = P⁻¹AQ⁻¹ and y = Qx. The unnormalized eigenvectors of the generalized problem are given by x = Q⁻¹y. An example of this reformulation is used in the case where A and B are real and symmetric, with B positive definite. The IMSL routines GVLSP and GVCSP use P = Rᵀ and Q = R, where R is the upper triangular matrix obtained from the Cholesky decomposition B = RᵀR. The matrix F = R⁻ᵀAR⁻¹ is real and symmetric. Computation of the eigenvalue-eigenvector expansion for F is based on routine EVCSF.
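The Cholesky reformulation can be sketched for a 2 × 2 example in pure Python (illustrative only; GVLSP/GVCSP perform this internally in Fortran). The key point to observe is that F = R⁻ᵀAR⁻¹ comes out symmetric whenever A is:

```python
import math

def cholesky_2x2(B):
    """Upper triangular R with B = R^T R, for a 2x2 positive definite B."""
    r11 = math.sqrt(B[0][0])
    r12 = B[0][1] / r11
    r22 = math.sqrt(B[1][1] - r12 * r12)
    return [[r11, r12], [0.0, r22]]

def reduce_to_ordinary(A, B):
    """F = R^{-T} A R^{-1}, so that Ax = lambda*Bx becomes Fy = lambda*y."""
    R = cholesky_2x2(B)
    # explicit inverse of an upper-triangular 2x2
    Ri = [[1.0 / R[0][0], -R[0][1] / (R[0][0] * R[1][1])],
          [0.0, 1.0 / R[1][1]]]
    RiT = [[Ri[0][0], 0.0], [Ri[0][1], Ri[1][1]]]
    mat = lambda P, Q: [[sum(P[i][k] * Q[k][j] for k in range(2))
                         for j in range(2)] for i in range(2)]
    return mat(RiT, mat(A, Ri))

F = reduce_to_ordinary([[2.0, 1.0], [1.0, 2.0]], [[4.0, 2.0], [2.0, 3.0]])
# F is symmetric (up to rounding), so EVCSF-style machinery applies.
```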

Using ARPACK for Ordinary and Generalized Eigenvalue Problems

ARPACK consists of a set of Fortran 77 subroutines that use the Arnoldi method (Sorensen, 1992) to solve eigenvalue problems. ARPACK is well suited for large, structured eigenvalue problems, where structured means that a matrix-vector product w = Av requires O(n) rather than the usual O(n²) floating-point operations.
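A standard instance of such structure is a tridiagonal matrix, for which the product w = Av costs O(n) operations. A hypothetical pure-Python sketch (not ARPACK code) of that product:

```python
# O(n) matrix-vector product for a symmetric tridiagonal matrix with
# diagonal d (length n) and off-diagonal e (length n-1).
def tridiag_matvec(d, e, v):
    n = len(d)
    w = [d[i] * v[i] for i in range(n)]
    for i in range(n - 1):
        w[i] += e[i] * v[i + 1]      # superdiagonal contribution
        w[i + 1] += e[i] * v[i]      # subdiagonal contribution
    return w

tridiag_matvec([2.0, 2.0, 2.0], [-1.0, -1.0], [1.0, 1.0, 1.0])  # [1.0, 0.0, 1.0]
```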

The suite of features we have implemented from ARPACK is described in Lehoucq, Sorensen, and Yang, ARPACK Users’ Guide, SIAM Publications (1998). Users will find access to this Guide helpful. Due to the size of the package, we provide for the use of double precision real and complex arithmetic only.

The ARPACK computational algorithm computes a partial set of approximate eigenvalues or singular values for various classes of problems. This includes the ordinary problem, Ax = λx, the generalized problem, Ax = λBx, and the singular value decomposition, A = UΣVᵀ.

The original API for ARPACK is a reverse communication interface, which can be used as illustrated in the Guide. However, we also provide a Fortran 2003 interface to ARPACK that some users will prefer. This is a forward communication interface based on user-written functions for the matrix-vector products or linear equation-solving steps required by the algorithms in ARPACK. It is not necessary that the linear operators be expressed as dense or sparse matrices. That is permitted, but for some problems the best approach is simply the ability to form the product of the operator with a vector.
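The forward-communication idea, reduced to its essence: the solver receives a user-written product function rather than a stored matrix. As a minimal sketch (pure Python; power iteration stands in for ARPACK's Arnoldi process, and all names here are hypothetical):

```python
import math

def largest_eigenvalue(matvec, n, iters=200):
    """Power iteration driven entirely by a user-supplied matvec callable;
    the operator itself never appears as a dense or sparse matrix."""
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = matvec(v)
        lam = math.sqrt(sum(x * x for x in w))   # ||Av|| estimates |lambda_max|
        v = [x / lam for x in w]                 # renormalize for next step
    return lam

# Operator (2 1; 1 2) supplied only as a product function; lambda_max = 3.
lam = largest_eigenvalue(lambda v: [2.0 * v[0] + v[1], v[0] + 2.0 * v[1]], 2)
```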

The forward communication interface includes an argument of a user-extended derived type or class object. The intent of this argument is that an extended type gives the user-written product functions access to threaded user data or other required information, including procedure pointers. It also hides information that can often be ignored on first use.


