Subsection 9.5.2 Summary
¶Definition 9.5.2.1. Eigenvalue, eigenvector, and eigenpair.
Let \(A \in \C^{m \times m} \text{.}\) Then \(\lambda \in \C \) and nonzero \(x \in \C^{m} \) are said to be an eigenvalue and corresponding eigenvector if \(A x = \lambda x \text{.}\) The tuple \(( \lambda, x ) \) is said to be an eigenpair.
For \(A \in \mathbb C^{m \times m} \text{,}\) the following are equivalent statements:
\(A \) is nonsingular.
\(A \) has linearly independent columns.
There does not exist \(x \neq 0 \) such that \(A x = 0 \text{.}\)
\(\Null( A ) = \{ 0 \} \text{.}\) (The null space of \(A \) is trivial.)
\(\dim(\Null( A )) = 0 \text{.}\)
\(\det( A ) \neq 0 \text{.}\)
For \(A \in \mathbb C^{m \times m} \text{,}\) the following are equivalent statements:
\(\lambda \) is an eigenvalue of \(A \)
\(( \lambda I - A ) \) is singular.
\(( \lambda I - A ) \) has linearly dependent columns.
There exists \(x \neq 0 \) such that \(( \lambda I - A ) x = 0 \text{.}\)
The null space of \(\lambda I - A \) is nontrivial.
\(\dim(\Null( \lambda I - A )) \gt 0 \text{.}\)
\(\det( \lambda I - A ) = 0 \text{.}\)
Definition 9.5.2.2. Spectrum of a matrix.
The set of all eigenvalues of \(A \) is denoted by \(\Lambda( A ) \) and is called the spectrum of \(A \text{.}\)
Definition 9.5.2.3. Spectral radius.
The spectral radius of \(A \text{,}\) \(\rho( A ) \text{,}\) equals the magnitude of the eigenvalue that is largest in magnitude:
\begin{equation*} \rho( A ) = \max_{\lambda \in \Lambda( A )} \vert \lambda \vert \text{.} \end{equation*}
Theorem 9.5.2.4. Gershgorin Disk Theorem.
Let \(A \in \mathbb C^{m \times m} \text{,}\)
\begin{equation*} \rho_i( A ) = \sum_{j \neq i} \vert \alpha_{i,j} \vert , \end{equation*}
and
\begin{equation*} R_i( A ) = \{ x \mbox{ such that } \vert x - \alpha_{i,i} \vert \leq \rho_i( A ) \} \text{.} \end{equation*}
In other words, \(\rho_i(A) \) equals the sum of the absolute values of the off-diagonal elements of \(A \) in row \(i \) and \(R_i(A) \) equals the set of all points in the complex plane that are within a distance \(\rho_i( A ) \) of diagonal element \(\alpha_{i,i} \text{.}\) Then
\begin{equation*} \Lambda( A ) \subset \bigcup_{i=0}^{m-1} R_i( A ) \text{.} \end{equation*}
In other words, every eigenvalue lies in one of the disks of radius \(\rho_i(A) \) around diagonal element \(\alpha_{i,i} \text{.}\)
Corollary 9.5.2.5.
Let \(A \) and \(R_i(A) \) be as defined in Theorem 9.5.2.4. Let \(K \) and \(K^C \) be disjoint subsets of \(\{ 0, \ldots, m-1 \}\) such that \(K \cup K^C = \{ 0, \ldots, m-1 \} \text{.}\) In other words, let \(K \) be a subset of \(\{ 0, \ldots , m-1 \} \) and \(K^C \) its complement. If
\begin{equation*} \left( \bigcup_{k \in K} R_k( A ) \right) \cap \left( \bigcup_{j \in K^C} R_j( A ) \right) = \emptyset \end{equation*}
then \(\cup_{k \in K} R_k(A) \) contains exactly \(\vert K \vert \) eigenvalues of \(A \) (multiplicity counted). In other words, if \(\cup_{k \in K} R_k(A) \) does not intersect with any of the other disks, then it contains as many eigenvalues of \(A \) (multiplicity counted) as there are elements of \(K \text{.}\)
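As a numerical illustration of the Gershgorin Disk Theorem (a sketch with NumPy; the \(3 \times 3 \) matrix is made up for the example), one can check that every eigenvalue lies in at least one disk:

```python
import numpy as np

# A hypothetical 3x3 test matrix.
A = np.array([[10.0, 1.0, 0.5],
              [0.2, 4.0, 0.3],
              [0.1, 0.4, -2.0]])

# Radius of disk i: sum of the magnitudes of the off-diagonal entries in row i.
radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))

# Every eigenvalue must lie in at least one disk centered at a diagonal entry.
for lam in np.linalg.eigvals(A):
    in_some_disk = any(abs(lam - A[i, i]) <= radii[i] for i in range(A.shape[0]))
    assert in_some_disk
```

Because this matrix is diagonally dominant, the disks are small and each eigenvalue stays close to a diagonal entry.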
Some useful facts for \(A \in \C^{m \times m} \text{:}\)
\(0 \in \Lambda( A ) \) if and only if \(A \) is singular.
The eigenvectors corresponding to distinct eigenvalues are linearly independent.
Let \(A \) be nonsingular. Then \(( \lambda, x ) \) is an eigenpair of \(A \) if and only if \(( 1/\lambda, x ) \) is an eigenpair of \(A^{-1} \text{.}\)
\(( \lambda, x ) \) is an eigenpair of \(A \) if and only if \(( \lambda - \rho, x ) \) is an eigenpair of \(A - \rho I \text{.}\)
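The inversion and shift facts above are easy to check numerically; a sketch with NumPy (the matrix and the shift \(\rho \) are made up for illustration):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # nonsingular example matrix
rho = 1.5                            # an arbitrary shift

lams, X = np.linalg.eig(A)
lam, x = lams[0], X[:, 0]

# (1/lambda, x) is an eigenpair of A^{-1}.
assert np.allclose(np.linalg.inv(A) @ x, (1.0 / lam) * x)

# (lambda - rho, x) is an eigenpair of A - rho I.
assert np.allclose((A - rho * np.eye(2)) @ x, (lam - rho) * x)
```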
Some useful facts for Hermitian \(A \in \C^{m \times m} \text{:}\)
All eigenvalues are real-valued.
\(A \) is HPD if and only if all its eigenvalues are positive.
If \(( \lambda, x ) \) and \(( \mu, y ) \) are both eigenpairs of Hermitian \(A \) and \(\lambda \neq \mu \text{,}\) then \(x\) and \(y \) are orthogonal.
Definition 9.5.2.6.
The determinant of
\begin{equation*} A = \FlaTwoByTwo{ \alpha_{0,0} }{ \alpha_{0,1} }{ \alpha_{1,0} }{ \alpha_{1,1} } \end{equation*}
is given by
\begin{equation*} \det( A ) = \alpha_{0,0} \alpha_{1,1} - \alpha_{0,1} \alpha_{1,0} \text{.} \end{equation*}
The characteristic polynomial of this matrix
is given by
\begin{equation*} p_2( \lambda ) = \det( \lambda I - A ) = ( \lambda - \alpha_{0,0} ) ( \lambda - \alpha_{1,1} ) - \alpha_{0,1} \alpha_{1,0} \text{.} \end{equation*}
This is a second degree polynomial in \(\lambda \) and has two roots (multiplicity counted). The eigenvalues of \(A \) equal the roots of this characteristic polynomial.
The characteristic polynomial of \(A \in \mathbb C^{m \times m} \) is given by
\begin{equation*} p_m( \lambda ) = \det( \lambda I - A ) \text{.} \end{equation*}
This is a polynomial in \(\lambda \) of degree \(m \) and has \(m \) roots (multiplicity counted). The eigenvalues of \(A \) equal the roots of this characteristic polynomial. Hence, \(A \) has \(m\) eigenvalues (multiplicity counted).
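This correspondence can be checked numerically; a sketch with NumPy, where `np.poly` returns the coefficients of the characteristic polynomial \(\det( \lambda I - A ) \text{:}\)

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # example matrix with eigenvalues 1 and 3

coeffs = np.poly(A)          # coefficients of det(lambda I - A)
roots = np.roots(coeffs)     # its m roots (multiplicity counted)

# The roots of the characteristic polynomial are the eigenvalues.
assert np.allclose(sorted(roots), sorted(np.linalg.eigvals(A)))
```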
Definition 9.5.2.7. Algebraic multiplicity of an eigenvalue.
Let \(A \in \Cmxm \) and \(p_m( \lambda ) \) its characteristic polynomial. Then the (algebraic) multiplicity of eigenvalue \(\lambda_i \) equals the multiplicity of the corresponding root of the polynomial.
If
\begin{equation*} p_m( \lambda ) = \det( \lambda I - A ) \end{equation*}
and the distinct roots of \(p_m( \lambda ) \) are \(\lambda_0, \ldots, \lambda_{k-1} \) with algebraic multiplicities \(m_0, \ldots, m_{k-1} \text{,}\) then
\begin{equation*} p_m( \lambda ) = ( \lambda - \lambda_0 )^{m_0} ( \lambda - \lambda_1 )^{m_1} \cdots ( \lambda - \lambda_{k-1} )^{m_{k-1}} \quad \mbox{and} \quad m_0 + m_1 + \cdots + m_{k-1} = m \text{.} \end{equation*}
Corollary 9.5.2.8.
If \(A \in \R^{m \times m} \) is real-valued, then some or all of its eigenvalues may be complex-valued. If eigenvalue \(\lambda \) is complex-valued, then its conjugate, \(\bar \lambda \text{,}\) is also an eigenvalue. Indeed, the complex eigenvalues of a real-valued matrix come in conjugate pairs.
Corollary 9.5.2.9.
If \(A \in \R^{m \times m} \) is real-valued and \(m\) is odd, then at least one of the eigenvalues of \(A\) is real-valued.
Let \(\lambda \) be an eigenvalue of \(A \in \mathbb C^{m \times m} \) and
\begin{equation*} {\cal E}_\lambda( A ) = \{ x \in \C^m \mid A x = \lambda x \} \end{equation*}
be the set of all eigenvectors of \(A \) associated with \(\lambda \) plus the zero vector (which is not considered an eigenvector). This set is a subspace.
The elements on the diagonal of a diagonal matrix are its eigenvalues. The elements on the diagonal of a triangular matrix are its eigenvalues.
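A quick numerical check of the triangular case (NumPy sketch; the matrix is made up):

```python
import numpy as np

T = np.array([[3.0, 1.0, 4.0],
              [0.0, -1.0, 2.0],
              [0.0, 0.0, 5.0]])   # upper triangular example

# The eigenvalues of a triangular matrix are its diagonal entries.
assert np.allclose(sorted(np.linalg.eigvals(T)), sorted(np.diag(T)))
```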
Definition 9.5.2.10.
Given a nonsingular matrix \(Y \text{,}\) the transformation \(Y^{-1} A Y \) is called a similarity transformation (applied to matrix \(A \)).
Let \(A, B, Y \in \mathbb C^{m \times m} \text{,}\) where \(Y \) is nonsingular, \(B = Y^{-1} A Y \text{,}\) and \(( \lambda, x ) \) an eigenpair of \(A \text{.}\) Then \(( \lambda, Y^{-1} x ) \) is an eigenpair of \(B \text{.}\)
Theorem 9.5.2.11.
Let \(A,Y,B \in \C^{m \times m} \text{,}\) assume \(Y \) is nonsingular, and let \(B = Y^{-1} A Y \text{.}\) Then \(\Lambda( A ) = \Lambda( B ) \text{.}\)
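This theorem can be illustrated numerically (NumPy sketch; a random \(Y \) is generically nonsingular):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4))      # generically nonsingular
B = np.linalg.solve(Y, A @ Y)        # B = Y^{-1} A Y

# Similar matrices share the same spectrum.
assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                   np.sort_complex(np.linalg.eigvals(B)))
```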
Definition 9.5.2.12.
Given a unitary matrix \(Q \text{,}\) the transformation \(Q^H A Q \) is called a unitary similarity transformation (applied to matrix \(A \)).
Theorem 9.5.2.13. Schur Decomposition Theorem.
Let \(A \in \C^{m \times m} \text{.}\) Then there exist a unitary matrix \(Q \) and upper triangular matrix \(U\) such that \(A = Q U Q^H \text{.}\) This decomposition is called the Schur decomposition of matrix \(A\text{.}\)
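With SciPy, a complex Schur decomposition can be computed and verified (sketch; `scipy.linalg.schur` with `output='complex'` returns the triangular factor first):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

# Complex Schur form: A = Q U Q^H with Q unitary and U upper triangular.
U, Q = schur(A, output='complex')

assert np.allclose(Q @ U @ Q.conj().T, A)        # A = Q U Q^H
assert np.allclose(Q.conj().T @ Q, np.eye(4))    # Q is unitary
assert np.allclose(U, np.triu(U))                # U is upper triangular
```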
Theorem 9.5.2.14. Spectral Decomposition Theorem.
Let \(A \in \C^{m \times m} \) be Hermitian. Then there exist a unitary matrix \(Q \) and diagonal matrix \(D \in \Rmxm \) such that \(A = Q D Q^H \text{.}\) This decomposition is called the spectral decomposition of matrix \(A\text{.}\)
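For a Hermitian (here real symmetric) matrix, `numpy.linalg.eigh` produces exactly this decomposition, with real eigenvalues (sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                    # a real symmetric (hence Hermitian) matrix

# eigh returns real eigenvalues d and an orthogonal Q with A = Q diag(d) Q^H.
d, Q = np.linalg.eigh(A)

assert np.allclose(Q @ np.diag(d) @ Q.T, A)
assert np.allclose(Q.T @ Q, np.eye(4))
assert np.all(np.isreal(d))          # eigenvalues of a Hermitian matrix are real
```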
Theorem 9.5.2.15.
Let \(A \in \C^{m \times m} \) be of form \(A = \FlaTwoByTwo{ A_{TL} }{ A_{TR} }{ 0 }{ A_{BR} } \text{,}\) where \(A_{TL} \) and \(A_{BR} \) are square submatrices. Then \(\Lambda( A ) = \Lambda( A_{TL} ) \cup \Lambda( A_{BR} ) \text{.}\)
Definition 9.5.2.16. Diagonalizable matrix.
A matrix \(A \in \Cmxm \) is said to be diagonalizable if and only if there exists a nonsingular matrix \(X \) and diagonal matrix \(D \) such that \(X^{-1} A X = D \text{.}\)
Theorem 9.5.2.17.
A matrix \(A \in \Cmxm \) is diagonalizable if and only if it has \(m \) linearly independent eigenvectors.
If \(A \in \Cmxm \) has distinct eigenvalues, then it is diagonalizable.
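A sketch verifying diagonalization for a matrix with distinct eigenvalues (NumPy; the columns of the matrix returned by `eig` are the eigenvectors):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])           # distinct eigenvalues 1 and 3

d, X = np.linalg.eig(A)              # columns of X are eigenvectors

D = np.linalg.solve(X, A @ X)        # X^{-1} A X
assert np.allclose(D, np.diag(d))    # ... is diagonal
```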
Definition 9.5.2.18. Defective matrix.
A matrix \(A \in \Cmxm \) that does not have \(m \) linearly independent eigenvectors is said to be defective.
Corollary 9.5.2.19.
Matrix \(A \in \C^{m \times m} \) is diagonalizable if and only if it is not defective.
Definition 9.5.2.20. Geometric multiplicity.
Let \(\lambda \in \Lambda( A ) \text{.}\) Then the geometric multiplicity of \(\lambda \) is defined to be the dimension of \({\cal E}_\lambda( A ) \) defined by
\begin{equation*} {\cal E}_\lambda( A ) = \Null( \lambda I - A ) = \{ x \in \C^m \mid A x = \lambda x \} \text{.} \end{equation*}
In other words, the geometric multiplicity of \(\lambda \) equals the number of linearly independent eigenvectors that are associated with \(\lambda \text{.}\)
Definition 9.5.2.21. Jordan Block.
Define the \(k \times k \) Jordan block with eigenvalue \(\mu \) as
\begin{equation*} J_k( \mu ) = \left( \begin{array}{ccccc} \mu & 1 & & & \\ & \mu & 1 & & \\ & & \ddots & \ddots & \\ & & & \mu & 1 \\ & & & & \mu \end{array} \right) \text{.} \end{equation*}
Theorem 9.5.2.22. Jordan Canonical Form Theorem.
Let the eigenvalues of \(A \in \C^{m \times m} \) be given by \(\lambda_0, \lambda_1 , \cdots , \lambda_{k-1} \text{,}\) where an eigenvalue is listed as many times as its geometric multiplicity. There exists a nonsingular matrix \(X \) such that
\begin{equation*} X^{-1} A X = \left( \begin{array}{cccc} J_{m_0}( \lambda_0 ) & & & \\ & J_{m_1}( \lambda_1 ) & & \\ & & \ddots & \\ & & & J_{m_{k-1}}( \lambda_{k-1} ) \end{array} \right) \text{,} \end{equation*}
where \(J_{m_i}( \lambda_i ) \) is an \(m_i \times m_i \) Jordan block and \(m_0 + m_1 + \cdots + m_{k-1} = m \text{.}\)
A practical Power Method for finding the eigenvector associated with the largest eigenvalue (in magnitude):
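A minimal sketch of such a power iteration in Python (assuming NumPy; normalizing each iterate avoids overflow/underflow, and the function name is illustrative):

```python
import numpy as np

def power_method(A, num_iters=100, seed=0):
    """Approximate the eigenvector for the eigenvalue of A largest in magnitude."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        v = A @ v                    # apply A
        v /= np.linalg.norm(v)       # keep the iterate at unit length
    return v

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])           # dominant eigenvalue 5, eigenvector along (1, 1)
v = power_method(A)
lam = v @ A @ v                      # Rayleigh quotient estimates the eigenvalue
```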
Definition 9.5.2.23. Rayleigh quotient.
If \(A \in \Cmxm \) and \(x \neq 0 \in \Cm \text{,}\) then
\begin{equation*} \frac{x^H A x}{x^H x} \end{equation*}
is known as the Rayleigh quotient.
If \(x \) is an eigenvector of \(A \text{,}\) then
\begin{equation*} \lambda = \frac{x^H A x}{x^H x} \end{equation*}
is the associated eigenvalue.
Definition 9.5.2.24. Convergence of a sequence of scalars.
Let \(\alpha_0, \alpha_1 , \alpha_2, \ldots \in \R \) be an infinite sequence of scalars. Then \(\alpha_k \) is said to converge to \(\alpha \) if
\begin{equation*} \lim_{k \rightarrow \infty} \vert \alpha_k - \alpha \vert = 0 \text{.} \end{equation*}
Definition 9.5.2.25. Convergence of a sequence of vectors.
Let \(x_0, x_1 , x_2, \ldots \in \C^m \) be an infinite sequence of vectors. Then \(x_k \) is said to converge to \(x \) if for any norm \(\| \cdot \| \)
\begin{equation*} \lim_{k \rightarrow \infty} \| x_k - x \| = 0 \text{.} \end{equation*}
Definition 9.5.2.26. Rate of convergence.
Let \(\alpha_0, \alpha_1 , \alpha_2, \ldots \in \R \) be an infinite sequence of scalars that converges to \(\alpha \text{.}\) Then
-
\(\alpha_k \) is said to converge linearly to \(\alpha \) if for sufficiently large \(k \)
\begin{equation*} \vert \alpha_{k+1} - \alpha \vert \leq C \vert \alpha_{k} - \alpha \vert \end{equation*}for some constant \(C \lt 1 \text{.}\)
-
\(\alpha_k \) is said to converge superlinearly to \(\alpha \) if for sufficiently large \(k \)
\begin{equation*} \vert \alpha_{k+1} - \alpha \vert \leq C_k \vert \alpha_{k} - \alpha \vert \end{equation*}with \(C_k \rightarrow 0 \text{.}\)
-
\(\alpha_k \) is said to converge quadratically to \(\alpha \) if for sufficiently large \(k \)
\begin{equation*} \vert \alpha_{k+1} - \alpha \vert \leq C \vert \alpha_{k} - \alpha \vert ^2 \end{equation*}for some constant \(C \text{.}\)
-
\(\alpha_k \) is said to converge superquadratically to \(\alpha \) if for sufficiently large \(k \)
\begin{equation*} \vert \alpha_{k+1} - \alpha \vert \leq C_k \vert \alpha_{k} - \alpha \vert ^2 \end{equation*}with \(C_k \rightarrow 0 \text{.}\)
-
\(\alpha_k \) is said to converge cubically to \(\alpha \) if for large enough \(k \)
\begin{equation*} \vert \alpha_{k+1} - \alpha \vert \leq C \vert \alpha_{k} - \alpha \vert ^3 \end{equation*}for some constant \(C \text{.}\)
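The difference between linear and quadratic convergence can be illustrated with two made-up error sequences, \(e_{k+1} = 0.5\, e_k \) versus \(e_{k+1} = e_k^2 \text{:}\)

```python
# Linear convergence: the error shrinks by a constant factor each step.
e_lin = [0.5]
for _ in range(5):
    e_lin.append(0.5 * e_lin[-1])

# Quadratic convergence: the number of correct digits roughly doubles each step.
e_quad = [0.5]
for _ in range(5):
    e_quad.append(e_quad[-1] ** 2)

# After 5 steps the linear sequence reaches 0.5^6 while the quadratic
# sequence reaches 0.5^32.
```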
The convergence of the Power Method is linear.
A practical Inverse Power Method for finding the eigenvector associated with the smallest eigenvalue (in magnitude):
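A corresponding sketch of an inverse power iteration (assuming SciPy; \(A \) is factored once with `scipy.linalg.lu_factor` so that each iteration is a cheap triangular solve rather than a fresh factorization):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power_method(A, num_iters=100, seed=0):
    """Approximate the eigenvector for the eigenvalue of A smallest in magnitude."""
    lu_piv = lu_factor(A)            # factor once up front
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        v = lu_solve(lu_piv, v)      # apply A^{-1}
        v /= np.linalg.norm(v)       # keep the iterate at unit length
    return v

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])           # eigenvalues 5 and 2; smallest in magnitude is 2
v = inverse_power_method(A)
lam = v @ A @ v                      # Rayleigh quotient estimates the eigenvalue
```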
The convergence of the Inverse Power Method is linear.
A practical Rayleigh quotient iteration for computing an eigenpair of \(A \text{:}\)
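A minimal sketch of such a Rayleigh quotient iteration (NumPy; the shift is the current Rayleigh quotient, and the solve is guarded in case the shift lands exactly on an eigenvalue):

```python
import numpy as np

def rayleigh_quotient_iteration(A, v0, num_iters=10):
    """Refine an approximate eigenpair via shift-and-invert with the Rayleigh quotient."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(num_iters):
        rho = v @ A @ v                          # current Rayleigh quotient (real A)
        try:
            w = np.linalg.solve(A - rho * np.eye(A.shape[0]), v)
        except np.linalg.LinAlgError:
            break                                # shift hit an eigenvalue exactly
        v = w / np.linalg.norm(w)
    return v @ A @ v, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                       # symmetric example matrix
lam, v = rayleigh_quotient_iteration(A, np.array([1.0, 0.0]))
```

For this Hermitian example the iterate settles on the eigenpair nearest the initial Rayleigh quotient in just a few steps.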
The convergence of the Rayleigh Quotient Iteration is quadratic (eventually, the number of correct digits doubles in each iteration). If \(A \) is Hermitian, the convergence is cubic (eventually, the number of correct digits triples in each iteration).