Subsection 9.2.3 More properties of eigenvalues and vectors
No video for this unit.
This unit reminds us of various properties of eigenvalues and eigenvectors through a sequence of homeworks.
Homework 9.2.3.1.
Let \(\lambda \) be an eigenvalue of \(A \in \mathbb C^{m \times m} \) and let
\begin{equation*} {\cal E}_\lambda( A ) = \{ x \mid A x = \lambda x \} \end{equation*}
be the set of all eigenvectors of \(A \) associated with \(\lambda \) plus the zero vector (which is not considered an eigenvector). Show that \({\cal E}_\lambda( A ) \) is a subspace.
A set \({\cal S} \subset \Cm \) is a subspace if and only if for all \(\alpha \in \C \) and \(x,y \in \Cm \) two conditions hold:
\(x \in {\cal S} \) implies that \(\alpha x \in {\cal S} \text{.}\)
\(x, y \in {\cal S} \) implies that \(x + y \in {\cal S} \text{.}\)
-
\(x \in {\cal E}_{\lambda}( A )\) implies \(\alpha x \in {\cal E}_{\lambda}( A )\text{:}\)
\(x \in {\cal E}_{\lambda}(A) \) means that \(A x = \lambda x \text{.}\) If \(\alpha \in \C \) then \(\alpha A x = \alpha \lambda x \) which, by commutativity and associativity, means that \(A ( \alpha x ) = \lambda ( \alpha x ) \text{.}\) Hence \((\alpha x) \in {\cal E}_{\lambda}(A) \text{.}\)
-
\(x,y \in {\cal E}_{\lambda}( A )\) implies \(x+y \in {\cal E}_{\lambda}( A )\text{:}\)
\begin{equation*} A( x + y ) = A x + A y = \lambda x + \lambda y = \lambda( x + y ) . \end{equation*}
While there are infinitely many eigenvectors associated with an eigenvalue, the fact that they form a subspace (provided the zero vector is added) means that they can be described by a finite number of vectors, namely a basis for that subspace.
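The two closure properties in the proof can be checked numerically. Below is a small sketch (not part of the original homework; the matrix and scalar are chosen arbitrarily) in which \(e_0\) and \(e_1\) are both eigenvectors of a diagonal matrix associated with \(\lambda = 2\text{,}\) so scalings and sums of them must again be eigenvectors for \(\lambda = 2\text{:}\)

```python
def matvec(A, x):
    """Multiply matrix A (a list of rows) by vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Hypothetical example: lambda = 2 has a two-dimensional eigenspace.
A = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 5.0]]
lam = 2.0
x = [1.0, 0.0, 0.0]   # eigenvector associated with lambda = 2
y = [0.0, 1.0, 0.0]   # another eigenvector for the same eigenvalue

alpha = -3.5
ax = [alpha * xi for xi in x]              # alpha * x
s = [xi + yi for xi, yi in zip(x, y)]      # x + y

assert matvec(A, ax) == [lam * v for v in ax]   # A (alpha x) = lambda (alpha x)
assert matvec(A, s) == [lam * v for v in s]     # A (x + y) = lambda (x + y)
print("both closure properties hold")
```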
Homework 9.2.3.2.
Let \(D \in \Cmxm \) be a diagonal matrix. Give all eigenvalues of \(D \text{.}\) For each eigenvalue, give a convenient eigenvector.
Let
\begin{equation*} D = \left( \begin{array}{cccc} \delta_0 \amp 0 \amp \cdots \amp 0 \\ 0 \amp \delta_1 \amp \cdots \amp 0 \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \cdots \amp \delta_{m-1} \end{array} \right) . \end{equation*}
Then
\begin{equation*} \lambda I - D = \left( \begin{array}{cccc} \lambda - \delta_0 \amp 0 \amp \cdots \amp 0 \\ 0 \amp \lambda - \delta_1 \amp \cdots \amp 0 \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \cdots \amp \lambda - \delta_{m-1} \end{array} \right) \end{equation*}
is singular if and only if \(\lambda = \delta_i \) for some \(i \in \{ 0, \ldots , m-1 \} \text{.}\) Hence \(\Lambda( D ) = \{ \delta_0, \delta_1, \ldots, \delta_{m-1} \} \text{.}\)
Now,
\begin{equation*} D e_j = \delta_j e_j \end{equation*}
and hence \(e_j \) is an eigenvector associated with \(\delta_j \text{.}\)
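A minimal numerical check of this solution (the diagonal entries below are chosen arbitrarily, and `matvec` is our own helper): each standard basis vector \(e_j\) satisfies \(D e_j = \delta_j e_j \text{.}\)

```python
def matvec(D, x):
    """Multiply matrix D (a list of rows) by vector x."""
    return [sum(d * xi for d, xi in zip(row, x)) for row in D]

deltas = [4.0, -1.0, 0.5]     # arbitrary diagonal entries delta_0, ..., delta_{m-1}
m = len(deltas)
D = [[deltas[i] if i == j else 0.0 for j in range(m)] for i in range(m)]

for j in range(m):
    e_j = [1.0 if i == j else 0.0 for i in range(m)]
    # D e_j = delta_j e_j: e_j is an eigenvector associated with delta_j.
    assert matvec(D, e_j) == [deltas[j] * v for v in e_j]
print("D e_j = delta_j e_j for all j")
```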
Homework 9.2.3.3.
Compute the eigenvalues and corresponding eigenvectors of
\begin{equation*} A = \left( \begin{array}{rrr} -2 \amp 3 \amp -7 \\ 0 \amp 1 \amp 1 \\ 0 \amp 0 \amp 2 \end{array} \right) . \end{equation*}
(Recall: the solution is not unique.)
The eigenvalues can be found on the diagonal: \(\{ -2, 1, 2 \} \text{.}\)
-
To find an eigenvector associated with \(-2\text{,}\) form
\begin{equation*} (-2) I - A = \left( \begin{array}{rrr} 0 \amp -3 \amp 7 \\ 0 \amp -3 \amp -1 \\ 0 \amp 0 \amp -4 \\ \end{array}\right) \end{equation*}and look for a vector in the null space of this matrix. By examination,
\begin{equation*} \left( \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right) \end{equation*}is in the null space of this matrix and hence an eigenvector of \(A \text{.}\)
-
To find an eigenvector associated with \(1\text{,}\) form
\begin{equation*} (1) I - A = \left(\begin{array}{rrr} 3 \amp -3 \amp 7 \\ 0 \amp 0 \amp -1 \\ 0 \amp 0 \amp -1 \\ \end{array}\right) \end{equation*}and look for a vector in the null space of this matrix. Given where the zero appears on the diagonal, we notice that a vector of the form
\begin{equation*} \left( \begin{array}{c} \chi_0 \\ 1 \\ 0 \end{array} \right) \end{equation*}is in the null space if \(\chi_0 \) is chosen appropriately. This means that
\begin{equation*} 3 \chi_0 - 3 (1) = 0 \end{equation*}and hence \(\chi_0 = 1 \) so that
\begin{equation*} \left( \begin{array}{r} 1 \\ 1 \\ 0 \end{array} \right) \end{equation*}is in the null space of this matrix and hence an eigenvector of \(A \text{.}\)
-
To find an eigenvector associated with \(2\text{,}\) form
\begin{equation*} (2) I - A = \left(\begin{array}{rrr} 4 \amp -3 \amp 7 \\ 0 \amp 1 \amp -1 \\ 0 \amp 0 \amp 0 \\ \end{array}\right) \end{equation*}and look for a vector in the null space of this matrix. Given where the zero appears on the diagonal, we notice that a vector of the form
\begin{equation*} \left( \begin{array}{c} \chi_0 \\ \chi_1 \\ 1 \end{array} \right) \end{equation*}is in the null space if \(\chi_0 \) and \(\chi_1\) are chosen appropriately. This means that
\begin{equation*} \chi_1 - 1(1) = 0 \end{equation*}and hence \(\chi_1 = 1 \text{.}\) Also,
\begin{equation*} 4 \chi_0 - 3 (1) + 7 ( 1) = 0 \end{equation*}so that \(\chi_0 = -1 \text{.}\) Hence
\begin{equation*} \left( \begin{array}{r} -1 \\ 1 \\ 1 \end{array} \right) \end{equation*}is in the null space of this matrix and hence an eigenvector of \(A \text{.}\)
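The three eigenpairs found above can be verified numerically. In this sketch (ours, not part of the original solution), the matrix \(A\) is read off from the shifted matrices \(\lambda I - A\) displayed in the solution:

```python
def matvec(A, x):
    """Multiply matrix A (a list of rows) by vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# A recovered from the shifts (-2)I - A, (1)I - A, and (2)I - A above.
A = [[-2.0, 3.0, -7.0],
     [ 0.0, 1.0,  1.0],
     [ 0.0, 0.0,  2.0]]

pairs = [(-2.0, [1.0, 0.0, 0.0]),
         ( 1.0, [1.0, 1.0, 0.0]),
         ( 2.0, [-1.0, 1.0, 1.0])]

for lam, x in pairs:
    assert matvec(A, x) == [lam * v for v in x]   # A x = lambda x
print("all three eigenpairs verified")
```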
Homework 9.2.3.4.
Let \(U \in \Cmxm \) be an upper triangular matrix. Give all eigenvalues of \(U \text{.}\) For each eigenvalue, give a convenient eigenvector.
Let
\begin{equation*} U = \left( \begin{array}{cccc} \upsilon_{0,0} \amp \upsilon_{0,1} \amp \cdots \amp \upsilon_{0,m-1} \\ 0 \amp \upsilon_{1,1} \amp \cdots \amp \upsilon_{1,m-1} \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \cdots \amp \upsilon_{m-1,m-1} \end{array} \right) . \end{equation*}
Then
\begin{equation*} \lambda I - U = \left( \begin{array}{cccc} \lambda - \upsilon_{0,0} \amp -\upsilon_{0,1} \amp \cdots \amp -\upsilon_{0,m-1} \\ 0 \amp \lambda - \upsilon_{1,1} \amp \cdots \amp -\upsilon_{1,m-1} \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \cdots \amp \lambda - \upsilon_{m-1,m-1} \end{array} \right) \end{equation*}
is singular if and only if \(\lambda = \upsilon_{i,i} \) for some \(i \in \{ 0, \ldots , m-1 \} \text{.}\) Hence \(\Lambda( U ) = \{ \upsilon_{0,0}, \upsilon_{1,1}, \ldots, \upsilon_{m-1,m-1} \} \text{.}\)
Let \(\lambda \) be an eigenvalue of \(U \text{.}\) Things get a little tricky if \(\lambda \) has multiplicity greater than one. Partition
\begin{equation*} U = \left( \begin{array}{c c c} U_{00} \amp u_{01} \amp U_{02} \\ 0 \amp \upsilon_{11} \amp u_{12}^T \\ 0 \amp 0 \amp U_{22} \end{array} \right), \end{equation*}
where \(\upsilon_{11} = \lambda \text{.}\) We are looking for \(x \neq 0 \) such that \(( \lambda I - U ) x = 0 \) or, partitioning \(x \) conformally,
\begin{equation*} \left( \begin{array}{c c c} \lambda I - U_{00} \amp -u_{01} \amp -U_{02} \\ 0 \amp 0 \amp -u_{12}^T \\ 0 \amp 0 \amp \lambda I - U_{22} \end{array} \right) \left( \begin{array}{c} x_0 \\ \chi_1 \\ x_2 \end{array} \right) = \left( \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right) . \end{equation*}
If we choose \(x_2 = 0 \) and \(\chi_1 = 1 \text{,}\) then
\begin{equation*} ( \lambda I - U_{00} ) x_0 - u_{01} = 0 \end{equation*}
and hence \(x_0 \) must satisfy
\begin{equation*} ( \upsilon_{11} I - U_{00} ) x_0 = u_{01} . \end{equation*}
If \(\upsilon_{11} I - U_{00} \) is nonsingular, then there is a unique solution to this equation, and
\begin{equation*} x = \left( \begin{array}{c} x_0 \\ 1 \\ 0 \end{array} \right) \end{equation*}
is the desired eigenvector. HOWEVER, this means that the partitioning must be such that \(\upsilon_{11} \) is the FIRST diagonal element that equals \(\lambda \text{,}\) so that \(\lambda \notin \Lambda( U_{00} ) \) and \(\upsilon_{11} I - U_{00} \) is indeed nonsingular.
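The procedure in this solution can be sketched in code: find the FIRST diagonal entry equal to \(\lambda \text{,}\) set \(\chi_1 = 1 \) and \(x_2 = 0 \text{,}\) and solve \(( \upsilon_{11} I - U_{00} ) x_0 = u_{01} \) by back substitution, since \(\upsilon_{11} I - U_{00} \) is itself upper triangular. The function name and exact-equality test on the diagonal are ours, and the example matrix is the triangular matrix from Homework 9.2.3.3:

```python
def matvec(U, x):
    """Multiply matrix U (a list of rows) by vector x."""
    return [sum(u * xi for u, xi in zip(row, x)) for row in U]

def triangular_eigvec(U, lam):
    """Eigenvector of upper triangular U associated with eigenvalue lam."""
    m = len(U)
    j = next(i for i in range(m) if U[i][i] == lam)  # FIRST diagonal entry == lam
    x = [0.0] * m
    x[j] = 1.0                                       # chi_1 = 1, x_2 = 0
    # Back substitution on (lam I - U00) x0 = u01, bottom row up:
    # (lam - U[i][i]) x[i] = U[i][j] + sum_{k=i+1}^{j-1} U[i][k] x[k].
    for i in range(j - 1, -1, -1):
        s = U[i][j] + sum(U[i][k] * x[k] for k in range(i + 1, j))
        x[i] = s / (lam - U[i][i])
    return x

U = [[-2.0, 3.0, -7.0],
     [ 0.0, 1.0,  1.0],
     [ 0.0, 0.0,  2.0]]
for lam in (-2.0, 1.0, 2.0):
    x = triangular_eigvec(U, lam)
    assert matvec(U, x) == [lam * v for v in x]      # U x = lambda x
print("eigenvectors recovered for all diagonal entries")
```

Note that it reproduces the eigenvectors found in Homework 9.2.3.3; a production routine would guard against repeated or nearly equal diagonal entries rather than compare them exactly.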
Next week, we will see that practical algorithms for computing the eigenvalues and eigenvectors of a square matrix morph that matrix into an upper triangular matrix via a sequence of transformations that preserve eigenvalues. The eigenvectors of that triangular matrix can then be computed using techniques similar to those in the solution to the last homework. Once those have been computed, they can be "back transformed" into the eigenvectors of the original matrix.