The algebraic eigenvalue problem, or, more simply, the computation of eigenvalues and eigenvectors, is often presented as the problem of diagonalizing a matrix. We make that link in this unit.
In other words, \(X^{-1} w \) is the vector of coefficients when \(w \) is written in terms of the basis that consists of the columns of \(X \text{.}\) Similarly, we can write \(v \) as a linear combination of the columns of \(X \text{:}\)
\begin{equation*}
v = X ( X^{-1} v ) = X \widetilde v, \quad \mbox{where we write } \widetilde v = X^{-1} v \mbox{ and, similarly, } \widetilde w = X^{-1} w .
\end{equation*}
We conclude that if we view the vectors in the right basis (namely the basis that consists of the columns of \(X \)), then the transformation \(w := A v \) simplifies to \(\widetilde w := D \widetilde v \text{.}\) This is a really big deal.
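To spell out that last step (using the relation \(X^{-1} A X = D \) assumed above):
\begin{equation*}
w = A v
\quad \Leftrightarrow \quad
X^{-1} w = X^{-1} A X \, ( X^{-1} v )
\quad \Leftrightarrow \quad
\widetilde w = D \widetilde v .
\end{equation*}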
Now assume that the eigenvalues of \(A \in \mathbb C^{m \times m} \) are given by \(\{ \lambda_0, \lambda_1, \ldots, \lambda_{m-1} \} \text{,}\) where eigenvalues are repeated according to their algebraic multiplicity. Assume that there are \(m \) linearly independent vectors \(\{ x_0, x_1, \ldots, x_{m-1} \} \) such that \(A x_j = \lambda_j x_j \text{.}\) Then
\begin{equation*}
A X = X D, \quad \mbox{where } X = \left( \begin{array}{c|c|c|c} x_0 & x_1 & \cdots & x_{m-1} \end{array} \right) \mbox{ and } D = \operatorname{diag}( \lambda_0, \lambda_1, \ldots, \lambda_{m-1} ),
\end{equation*}
and hence \(X^{-1} A X = D \text{:}\) such a matrix \(A \) is said to be diagonalizable.
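As a quick numerical illustration, here is a minimal Python/NumPy sketch; it assumes a randomly generated \(A \) whose eigenvectors are linearly independent (true with probability one). The routine numpy.linalg.eig returns the eigenvalues of \(A \) together with a matrix whose columns are corresponding eigenvectors, which we take as \(X \text{.}\)

```python
import numpy as np

# Diagonalize a (randomly generated) matrix A via its eigenvectors.
rng = np.random.default_rng(0)
m = 4
A = rng.standard_normal((m, m))

# Columns of X are eigenvectors; Lambda holds the corresponding eigenvalues.
Lambda, X = np.linalg.eig(A)
D = np.diag(Lambda)

# If the columns of X are linearly independent, then X^{-1} A X = D.
print(np.allclose(np.linalg.solve(X, A @ X), D))   # expect: True
```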
Here are some classes of matrices that are diagonalizable:
Diagonal matrices.
If \(A \) is diagonal, then choosing \(X = I \) and \(D = A \) yields \(X^{-1} A X = D \text{.}\)
Hermitian matrices.
If \(A \) is Hermitian, then the spectral decomposition theorem tells us that there exist a unitary matrix \(Q \) and a diagonal matrix \(D \) such that \(A = Q D Q^H \text{.}\) Choosing \(X = Q \) yields \(X^{-1} A X = D \text{.}\)
Triangular matrices with distinct diagonal elements.
If \(U \) is upper triangular and has distinct diagonal elements, then by Homework 9.2.3.4 we know we can find an eigenvector associated with each diagonal element, and by design those eigenvectors are linearly independent. Obviously, this observation extends to lower triangular matrices as well.
Matrices with distinct eigenvalues.
If \(A \) has distinct eigenvalues, let \(A = Q U Q^H \) be its Schur decomposition. Since \(U \) is upper triangular and has the same eigenvalues as \(A \text{,}\) it has distinct entries along its diagonal. Hence, by our earlier observations, there exists a nonsingular matrix \(X \) such that \(X^{-1} U X = D \text{,}\) a diagonal matrix. Now,
\begin{equation*}
X^{-1} Q^H A Q X = X^{-1} U X = D
\end{equation*}
and hence \(Y = Q X \) is a nonsingular matrix that diagonalizes \(A \text{:}\) since \(Q \) is unitary, \(Y^{-1} = X^{-1} Q^H \text{,}\) so that \(Y^{-1} A Y = D \text{.}\)
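The construction is easy to check numerically. Here is a minimal sketch using NumPy and SciPy; it assumes the randomly generated \(A \) has distinct eigenvalues (true with probability one), so that the eigenvector matrix of the triangular factor is nonsingular.

```python
import numpy as np
from scipy.linalg import schur

# The construction above: A = Q U Q^H (Schur), then diagonalize the triangular factor U.
rng = np.random.default_rng(1)
m = 5
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))

U, Q = schur(A, output='complex')   # A = Q U Q^H with U upper triangular
Lambda, X = np.linalg.eig(U)        # X^{-1} U X = diag(Lambda), since the diagonal entries of U are distinct
Y = Q @ X                           # Y^{-1} A Y = X^{-1} Q^H A Q X = X^{-1} U X

print(np.allclose(np.linalg.solve(Y, A @ Y), np.diag(Lambda)))   # expect: True
```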