Subsection 9.2.6 Jordan Canonical Form

Homework 9.2.6.1.

Compute the eigenvalues of the \(k \times k \) matrix

\begin{equation} J_k( \mu ) = \left( \begin{array}{c c c c c c } \mu \amp 1 \amp 0 \amp \cdots \amp 0 \amp 0 \\ 0 \amp \mu \amp 1 \amp \ddots \amp 0 \amp 0 \\ \vdots \amp \ddots \amp \ddots \amp \ddots \amp \vdots \amp \vdots \\ 0 \amp 0 \amp 0 \amp \ddots \amp \mu \amp 1 \\ 0 \amp 0 \amp 0 \amp \cdots \amp0 \amp \mu \end{array} \right) \label{chapter09-jordan-block-eqn}\tag{9.2.1} \end{equation}

where \(k \gt 1 \text{.}\) For each eigenvalue compute a basis for the subspace of its eigenvectors (including the zero vector to make it a subspace).

Hint
  • How many linearly independent columns does \(\lambda I - J_k( \mu ) \) have?

  • What does this say about the dimension of the null space \(\Null( \lambda I - J_k( \mu ) ) \text{?}\)

  • You should be able to find eigenvectors by examination.

Solution

Since the matrix is upper triangular, its eigenvalues equal the entries on its diagonal, all of which equal \(\mu \text{.}\) Hence \(\mu \) is the only eigenvalue, and it has algebraic multiplicity \(k \text{.}\) Now,

\begin{equation*} \mu I - J_k( \mu)= \left( \begin{array}{c c c c c c } 0 \amp -1 \amp 0 \amp \cdots \amp 0 \amp 0 \\ 0 \amp 0 \amp -1 \amp \ddots \amp 0 \amp 0 \\ \vdots \amp \ddots \amp \ddots \amp \ddots \amp \vdots \amp \vdots \\ 0 \amp 0 \amp 0 \amp \ddots \amp 0 \amp -1 \\ 0 \amp 0 \amp 0 \amp \cdots \amp0 \amp 0 \end{array} \right) \end{equation*}

has \(k-1 \) linearly independent columns and hence its null space is one-dimensional: \(\dim( \Null( \mu I - J_k( \mu) ) ) = 1 \text{.}\) So, we are looking for a single vector that forms a basis for \(\Null( \mu I - J_k( \mu) ) \text{.}\) By examination, \(J_k( \mu ) e_0 = \mu e_0 \) and hence \(e_0 \) is an eigenvector associated with the only eigenvalue, \(\mu \text{.}\)
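The reasoning above can be checked numerically. The following is a minimal sketch using NumPy; the choices \(k = 5 \) and \(\mu = 3 \) are arbitrary illustrative values.

import numpy as np

# Build the k x k Jordan block J_k(mu): mu on the diagonal, ones on the superdiagonal.
k, mu = 5, 3.0                     # illustrative choices; any k > 1 and any mu will do
J = mu * np.eye(k) + np.diag(np.ones(k - 1), 1)

# mu I - J_k(mu) has k-1 linearly independent columns ...
print(np.linalg.matrix_rank(mu * np.eye(k) - J))     # prints 4, i.e., k-1

# ... so its null space is one-dimensional, spanned by e_0:
e0 = np.zeros(k)
e0[0] = 1.0
print(np.allclose(J @ e0, mu * e0))                  # prints True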

The matrix in (9.2.1) is known as a Jordan block.

The point of the last exercise is to show that if \(A \) has an eigenvalue of algebraic multiplicity \(k \text{,}\) then it does not necessarily have \(k \) linearly independent eigenvectors. That, in turn, means there are matrices that do not have a full set of eigenvectors. We conclude that there are matrices that are not diagonalizable. We call such matrices defective.

Definition 9.2.6.1. Defective matrix.

A matrix \(A \in \Cmxm \) that does not have \(m \) linearly independent eigenvectors is said to be defective.

Definition 9.2.6.3. Geometric multiplicity.

Let \(\lambda \in \Lambda( A ) \text{.}\) Then the geometric multiplicity of \(\lambda \) is defined to be the dimension of \({\cal E}_\lambda( A ) \) defined by

\begin{equation*} {\cal E}_\lambda( A ) = \{ x \in \C^m \vert A x = \lambda x \}. \end{equation*}

In other words, the geometric multiplicity of \(\lambda \) equals the number of linearly independent eigenvectors that are associated with \(\lambda \text{.}\)
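In exact arithmetic, this dimension equals \(m \) minus the rank of \(\lambda I - A \text{.}\) The following is a small computational sketch of that observation (using NumPy with its default rank tolerance, so it is only meaningful when \(\lambda \) is known accurately; the \(3 \times 3 \) Jordan block is an illustrative choice):

import numpy as np

def geometric_multiplicity(A, lam):
    # The dimension of the eigenspace of lam equals m - rank(lam*I - A).
    m = A.shape[0]
    return m - np.linalg.matrix_rank(lam * np.eye(m) - A)

# J_3(2): the eigenvalue 2 has algebraic multiplicity 3 ...
A = 2.0 * np.eye(3) + np.diag(np.ones(2), 1)
print(geometric_multiplicity(A, 2.0))    # ... but geometric multiplicity 1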

Homework 9.2.6.2.

Let \(A \in \Cmxm \) have the form

\begin{equation*} A = \left( \begin{array}{c | c} A_{00} \amp 0 \\ \hline 0 \amp A_{11} \end{array} \right) \end{equation*}

where \(A_{00} \) and \(A_{11} \) are square. Show that

  • If \(( \lambda, x ) \) is an eigenpair of \(A_{00} \) then \(( \lambda, \left( \begin{array}{c} x \\ 0 \end{array} \right) ) \) is an eigenpair of \(A \text{.}\)

  • If \(( \mu, y ) \) is an eigenpair of \(A_{11} \) then \(( \mu, \left( \begin{array}{c} 0 \\ y \end{array} \right) ) \) is an eigenpair of \(A \text{.}\)

  • If \(( \lambda, \left( \begin{array}{c} x \\ y \end{array} \right) ) \) is an eigenpair of \(A \) then \(( \lambda, x ) \) is an eigenpair of \(A_{00}\) and \(( \lambda, y ) \) is an eigenpair of \(A_{11} \text{.}\)

  • \(\Lambda( A ) = \Lambda( A_{00} ) \cup \Lambda( A_{11} ) \text{.}\)

Solution

  • If \(( \lambda, x ) \) is an eigenpair of \(A_{00} \) then \(( \lambda, \left( \begin{array}{c} x \\ 0 \end{array} \right)) \) is an eigenpair of \(A \text{.}\)

    \begin{equation*} \left( \begin{array}{c | c} A_{00} \amp 0 \\ \hline 0 \amp A_{11} \end{array} \right) \left( \begin{array}{c} x \\ 0 \end{array} \right) = \left( \begin{array}{c} A_{00} x \\ 0 \end{array} \right) = \left( \begin{array}{c} \lambda x \\ 0 \end{array} \right) = \lambda \left( \begin{array}{c} x \\ 0 \end{array} \right) . \end{equation*}
  • If \(( \mu, y ) \) is an eigenpair of \(A_{11} \) then \(( \mu, \left( \begin{array}{c} 0 \\ y \end{array} \right) ) \) is an eigenpair of \(A \text{.}\)

    \begin{equation*} \left( \begin{array}{c | c} A_{00} \amp 0 \\ \hline 0 \amp A_{11} \end{array} \right) \left( \begin{array}{c} 0 \\ y \end{array} \right) = \left( \begin{array}{c} 0 \\ A_{11} y \end{array} \right) = \left( \begin{array}{c} 0 \\ \mu y \end{array} \right) = \mu \left( \begin{array}{c} 0 \\ y \end{array} \right) . \end{equation*}
  • \(\left( \begin{array}{c | c} A_{00} \amp 0 \\ \hline 0 \amp A_{11} \end{array} \right) \left( \begin{array}{c} x \\ \hline y \end{array} \right) = \lambda \left( \begin{array}{c} x \\ \hline y \end{array} \right) \) implies that

    \begin{equation*} \left( \begin{array}{c} A_{00} x \\ \hline A_{11} y \end{array} \right) = \left( \begin{array}{c} \lambda x \\ \hline \lambda y \end{array} \right), \end{equation*}

    and hence \(A_{00} x = \lambda x \) and \(A_{11} y = \lambda y \text{.}\)

  • \(\Lambda( A ) = \Lambda( A_{00} ) \cup \Lambda( A_{11} ) \text{.}\)

    This follows from the first three parts of this problem.

This last homework naturally extends to block diagonal matrices with more than two diagonal blocks:

\begin{equation*} A = \left( \begin{array}{c | c | c | c } A_{00} \amp 0 \amp \cdots \amp 0\\ \hline 0 \amp A_{11} \amp \cdots \amp 0 \\ \hline \vdots \amp \vdots \amp \ddots \amp \vdots \\ \hline 0 \amp 0 \amp \cdots \amp A_{kk} \end{array} \right), \end{equation*}

for which \(\Lambda( A ) = \Lambda( A_{00} ) \cup \Lambda( A_{11} ) \cup \cdots \cup \Lambda( A_{kk} ) \text{.}\)
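A small numerical illustration of this (a sketch using NumPy and SciPy; the two random diagonal blocks below are arbitrary):

import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
A00 = rng.standard_normal((3, 3))        # arbitrary square diagonal blocks
A11 = rng.standard_normal((2, 2))
A = block_diag(A00, A11)                 # assemble the block diagonal matrix

# The spectrum of A is the union of the spectra of its diagonal blocks.
lam_A = np.sort_complex(np.linalg.eigvals(A))
lam_blocks = np.sort_complex(np.concatenate([np.linalg.eigvals(A00),
                                             np.linalg.eigvals(A11)]))
print(np.allclose(lam_A, lam_blocks))    # True, up to roundoff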

The following is a classic result in linear algebra that characterizes the relationship between a matrix and its eigenvectors: any matrix \(A \in \Cmxm \) can be factored as

\begin{equation*} A = X \left( \begin{array}{c | c | c | c } J_{m_0}( \lambda_0 ) \amp 0 \amp \cdots \amp 0 \\ \hline 0 \amp J_{m_1}( \lambda_1 ) \amp \cdots \amp 0 \\ \hline \vdots \amp \vdots \amp \ddots \amp \vdots \\ \hline 0 \amp 0 \amp \cdots \amp J_{m_{k-1}}( \lambda_{k-1} ) \end{array} \right) X^{-1}, \end{equation*}

where \(X \) is nonsingular, \(m_0 + m_1 + \cdots + m_{k-1} = m \text{,}\) each \(J_{m_i}( \lambda_i ) \) is a Jordan block of the form (9.2.1), and \(\lambda_0, \ldots, \lambda_{k-1} \) are the (not necessarily distinct) eigenvalues of \(A \text{.}\)

For our discussion, the sizes of the Jordan blocks \(J_{m_i}( \lambda_i ) \) are not particularly important. Indeed, this decomposition, known as the Jordan Canonical Form of matrix \(A \text{,}\) is of little use in practice: it is extremely sensitive to perturbation, since even the smallest random change to a matrix will make it diagonalizable. As a result, there is no practical mathematical software library or tool that computes it. For this reason, we don't give its proof and don't discuss it further.
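This sensitivity is easy to observe numerically. The sketch below (using NumPy; the size \(k = 8 \text{,}\) the eigenvalue \(\mu = 2 \text{,}\) and the perturbation size \(10^{-10} \) are arbitrary choices) perturbs a Jordan block by a tiny random matrix and reports how far its eigenvalues move:

import numpy as np

# J_8(2): one eigenvalue of algebraic multiplicity 8, geometric multiplicity 1.
k, mu = 8, 2.0
J = mu * np.eye(k) + np.diag(np.ones(k - 1), 1)

# Perturb by a tiny random matrix.  The eigenvalues scatter around mu at a
# distance of roughly (perturbation size)**(1/k), and the perturbed matrix
# generically has k distinct eigenvalues and hence is diagonalizable.
rng = np.random.default_rng(0)
E = 1e-10 * rng.standard_normal((k, k))
lam = np.linalg.eigvals(J + E)
print(np.max(np.abs(lam - mu)))    # on the order of (1e-10)**(1/8), far larger than 1e-10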