Subsection 9.2.6 Jordan Canonical Form
Homework 9.2.6.1.
Compute the eigenvalues of the \(k \times k \) matrix
\begin{equation*}
J_k( \mu ) =
\left( \begin{array}{c c c c c}
\mu \amp 1 \amp 0 \amp \cdots \amp 0 \\
0 \amp \mu \amp 1 \amp \cdots \amp 0 \\
\vdots \amp \vdots \amp \ddots \amp \ddots \amp \vdots \\
0 \amp 0 \amp \cdots \amp \mu \amp 1 \\
0 \amp 0 \amp \cdots \amp 0 \amp \mu
\end{array} \right) \tag{9.2.1}
\end{equation*}
where \(k \gt 1 \text{.}\) For each eigenvalue, compute a basis for the subspace of its eigenvectors (including the zero vector, to make it a subspace).
How many linearly independent columns does \(\lambda I - J_k( \mu ) \) have?
What does this say about the dimension of the null space \(\Null( \lambda I - J_k( \mu ) ) \text{?}\)
Hint. You should be able to find the eigenvectors by examination.
Solution. Since the matrix is upper triangular and all entries on its diagonal equal \(\mu \text{,}\) its only eigenvalue is \(\mu \text{,}\) and it has algebraic multiplicity \(k \text{.}\) Now,
\begin{equation*}
\mu I - J_k( \mu ) =
\left( \begin{array}{c c c c c}
0 \amp -1 \amp 0 \amp \cdots \amp 0 \\
0 \amp 0 \amp -1 \amp \cdots \amp 0 \\
\vdots \amp \vdots \amp \ddots \amp \ddots \amp \vdots \\
0 \amp 0 \amp \cdots \amp 0 \amp -1 \\
0 \amp 0 \amp \cdots \amp 0 \amp 0
\end{array} \right)
\end{equation*}
has \(k-1 \) linearly independent columns and hence its null space is one dimensional: \(\dim( \Null( \mu I - J_k( \mu) ) ) = 1 \text{.}\) So, we are looking for one vector in the basis of \(\Null( \mu I - J_k( \mu) ) \text{.}\) By examination, \(J_k( \mu ) e_0 = \mu e_0 \) and hence \(e_0 \) is an eigenvector associated with the only eigenvalue \(\mu \text{.}\)
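As a quick sanity check, the following sketch verifies these claims numerically with NumPy (the helper jordan_block and the choices k = 5, mu = 2 are ours, for illustration only):
```python
import numpy as np

def jordan_block(k, mu):
    # The k x k matrix of (9.2.1): mu on the diagonal, 1 on the superdiagonal.
    return mu * np.eye(k) + np.diag(np.ones(k - 1), 1)

k, mu = 5, 2.0
J = jordan_block(k, mu)

# mu I - J_k(mu) has k-1 linearly independent columns ...
print(np.linalg.matrix_rank(mu * np.eye(k) - J))   # prints 4, i.e., k-1

# ... so its null space is one dimensional, spanned by e_0:
e0 = np.eye(k)[:, 0]
print(np.allclose(J @ e0, mu * e0))                # prints True: J_k(mu) e_0 = mu e_0
```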
The matrix in (9.2.1) is known as a Jordan block.
The point of the last exercise is to show that if \(A \) has an eigenvalue of algebraic multiplicity \(k \text{,}\) then it does not necessarily have \(k \) linearly independent eigenvectors. That, in turn, means there are matrices that do not have a full set of eigenvectors. We conclude that there are matrices that are not diagonalizable. We call such matrices defective.
Definition 9.2.6.1. Defective matrix.
A matrix \(A \in \Cmxm \) that does not have \(m \) linearly independent eigenvectors is said to be defective.
Corollary 9.2.6.2.
Matrix \(A \in \C^{m \times m} \) is diagonalizable if and only if it is not defective.
Proof.
This is an immediate consequence of Theorem 9.2.5.3.
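As a hedged numerical illustration of this corollary (NumPy; both example matrices are our own choices, and deciding defectiveness in floating-point arithmetic is delicate in general), one can compare the number of linearly independent computed eigenvectors to \(m \text{:}\)
```python
import numpy as np

def seems_defective(A, tol=1e-8):
    # A is defective iff it has fewer than m linearly independent eigenvectors;
    # here we estimate that count as the numerical rank of the eigenvector matrix.
    _, X = np.linalg.eig(A)
    return np.linalg.matrix_rank(X, tol=tol) < A.shape[0]

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])   # distinct eigenvalues, hence diagonalizable
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])   # the Jordan block J_2(2)

print(seems_defective(A))    # False: two independent eigenvectors
print(seems_defective(J))    # True: only one independent eigenvector
```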
Definition 9.2.6.3. Geometric multiplicity.
Let \(\lambda \in \Lambda( A ) \text{.}\) Then the geometric multiplicity of \(\lambda \) is defined to be the dimension of \({\cal E}_\lambda( A ) \) defined by
\begin{equation*}
{\cal E}_\lambda( A ) = \{ x \in \C^m \mid A x = \lambda x \} = \Null( \lambda I - A ) .
\end{equation*}
In other words, the geometric multiplicity of \(\lambda \) equals the number of linearly independent eigenvectors that are associated with \(\lambda \text{.}\)
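In a NumPy sketch, the geometric multiplicity can thus be computed as \(m \) minus the rank of \(\lambda I - A \) (the \(3 \times 3 \) example matrix below is ours; for it, \(\lambda = 2 \) has algebraic multiplicity 3 but geometric multiplicity 2):
```python
import numpy as np

def geometric_multiplicity(A, lam):
    # dim(Null(lam I - A)) = m - rank(lam I - A)
    m = A.shape[0]
    return m - np.linalg.matrix_rank(lam * np.eye(m) - A)

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
print(geometric_multiplicity(A, 2.0))   # prints 2
```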
Homework 9.2.6.2.
Let \(A \in \Cmxm \) have the form
\begin{equation*}
A = \left( \begin{array}{c | c}
A_{00} \amp 0 \\ \hline
0 \amp A_{11}
\end{array} \right),
\end{equation*}
where \(A_{00} \) and \(A_{11} \) are square. Show that
If \(( \lambda, x ) \) is an eigenpair of \(A_{00} \) then \(( \lambda, \left( \begin{array}{c} x \\ 0 \end{array} \right) ) \) is an eigenpair of \(A \text{.}\)
If \(( \mu, y ) \) is an eigenpair of \(A_{11} \) then \(( \mu, \left( \begin{array}{c} 0 \\ y \end{array} \right) ) \) is an eigenpair of \(A \text{.}\)
If \(( \lambda, \left( \begin{array}{c} x \\ y \end{array} \right) ) \) is an eigenpair of \(A \text{,}\) then \(A_{00} x = \lambda x \) and \(A_{11} y = \lambda y \text{.}\) In particular, if \(x \neq 0 \) then \(( \lambda, x ) \) is an eigenpair of \(A_{00} \text{,}\) and if \(y \neq 0 \) then \(( \lambda, y ) \) is an eigenpair of \(A_{11} \text{.}\)
\(\Lambda( A ) = \Lambda( A_{00} ) \cup \Lambda( A_{11} ) \text{.}\)
Solution.
If \(( \lambda, x ) \) is an eigenpair of \(A_{00} \) then \(( \lambda, \left( \begin{array}{c} x \\ 0 \end{array} \right)) \) is an eigenpair of \(A \text{.}\)
\begin{equation*} \left( \begin{array}{c | c} A_{00} \amp 0 \\ \hline 0 \amp A_{11} \end{array} \right) \left( \begin{array}{c} x \\ 0 \end{array} \right) = \left( \begin{array}{c} A_{00} x \\ 0 \end{array} \right) = \left( \begin{array}{c} \lambda x \\ 0 \end{array} \right) = \lambda \left( \begin{array}{c} x \\ 0 \end{array} \right) . \end{equation*}
If \(( \mu, y ) \) is an eigenpair of \(A_{11} \) then \(( \mu, \left( \begin{array}{c} 0 \\ y \end{array} \right) ) \) is an eigenpair of \(A \text{.}\)
\begin{equation*} \left( \begin{array}{c | c} A_{00} \amp 0 \\ \hline 0 \amp A_{11} \end{array} \right) \left( \begin{array}{c} 0 \\ y \end{array} \right) = \left( \begin{array}{c} 0 \\ A_{11} y \end{array} \right) = \left( \begin{array}{c} 0 \\ \mu y \end{array} \right) = \mu \left( \begin{array}{c} 0 \\ y \end{array} \right) . \end{equation*}
\(\left( \begin{array}{c | c} A_{00} \amp 0 \\ \hline 0 \amp A_{11} \end{array} \right) \left( \begin{array}{c} x \\ \hline y \end{array} \right) = \lambda \left( \begin{array}{c} x \\ \hline y \end{array} \right) \) implies that
\begin{equation*} \left( \begin{array}{c} A_{00} x \\ \hline A_{11} y \end{array} \right) = \left( \begin{array}{c} \lambda x \\ \hline \lambda y \end{array} \right), \end{equation*}and hence \(A_{00} x = \lambda x \) and \(A_{11} y = \lambda y \text{.}\) If \(x \neq 0 \text{,}\) this makes \(( \lambda, x ) \) an eigenpair of \(A_{00} \text{,}\) and if \(y \neq 0 \text{,}\) it makes \(( \lambda, y ) \) an eigenpair of \(A_{11} \text{.}\)
\(\Lambda( A ) = \Lambda( A_{00} ) \cup \Lambda( A_{11} ) \text{.}\)
This follows from the first three parts of this problem.
This last homework naturally extends to block diagonal matrices with any number of square diagonal blocks:
\begin{equation*}
\Lambda \left( \left( \begin{array}{c | c | c | c}
A_{00} \amp 0 \amp \cdots \amp 0 \\ \hline
0 \amp A_{11} \amp \cdots \amp 0 \\ \hline
\vdots \amp \vdots \amp \ddots \amp \vdots \\ \hline
0 \amp 0 \amp \cdots \amp A_{N-1,N-1}
\end{array} \right) \right)
=
\Lambda( A_{00} ) \cup \Lambda( A_{11} ) \cup \cdots \cup \Lambda( A_{N-1,N-1} ) .
\end{equation*}
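A small numerical check of this fact (NumPy; the two diagonal blocks are arbitrary examples of ours):
```python
import numpy as np

A00 = np.array([[1.0, 2.0],
                [0.0, 3.0]])   # eigenvalues 1, 3
A11 = np.array([[4.0, 0.0],
                [1.0, 5.0]])   # eigenvalues 4, 5

Z = np.zeros((2, 2))
A = np.block([[A00, Z],
              [Z, A11]])       # block diagonal matrix

print(np.sort(np.linalg.eigvals(A)))   # [1. 3. 4. 5.]: the union of the blocks' spectra
```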
The following is a classic result in linear algebra theory that characterizes the relationship between a matrix and its eigenvalues and eigenvectors:
Theorem 9.2.6.4. Jordan Canonical Form Theorem.
Let the eigenvalues of \(A \in \C^{m \times m} \) be given by \(\lambda_0, \lambda_1 , \cdots , \lambda_{k-1} \text{,}\) where an eigenvalue is listed as many times as its geometric multiplicity. There exists a nonsingular matrix \(X \) such that
\begin{equation*}
X^{-1} A X =
\left( \begin{array}{c | c | c | c}
J_{m_0}( \lambda_0 ) \amp 0 \amp \cdots \amp 0 \\ \hline
0 \amp J_{m_1}( \lambda_1 ) \amp \cdots \amp 0 \\ \hline
\vdots \amp \vdots \amp \ddots \amp \vdots \\ \hline
0 \amp 0 \amp \cdots \amp J_{m_{k-1}}( \lambda_{k-1} )
\end{array} \right) .
\end{equation*}
For our discussion, the sizes of the Jordan blocks \(J_{m_i}( \lambda_i ) \) are not particularly important. Indeed, this decomposition, known as the Jordan Canonical Form of matrix \(A \text{,}\) is not particularly interesting in practice. It is extremely sensitive to perturbation: even the smallest random change to a matrix will make it diagonalizable. As a result, there is no practical mathematical software library or tool that computes it. For this reason, we don't give its proof and don't discuss it further.
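The following sketch illustrates this sensitivity (NumPy; the size k = 8 and the perturbation level 1e-12 are our choices). Perturbing \(J_k( \mu ) \) by a matrix of norm \(\epsilon \) typically splits the single eigenvalue \(\mu \) into \(k \) distinct eigenvalues spread on the order of \(\epsilon^{1/k} \text{,}\) which is why the Jordan structure evaporates under even rounding-level perturbation:
```python
import numpy as np

rng = np.random.default_rng(0)
k, mu, eps = 8, 2.0, 1e-12

J = mu * np.eye(k) + np.diag(np.ones(k - 1), 1)   # the Jordan block J_k(mu)
E = rng.standard_normal((k, k))                   # random perturbation direction

evals = np.linalg.eigvals(J + eps * E)
# The eigenvalues scatter at a distance of roughly eps**(1/k) ~ 0.03 from mu,
# not eps = 1e-12: the perturbed matrix has k distinct eigenvalues and is
# therefore diagonalizable.
print(np.max(np.abs(evals - mu)))
```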