Subsection 2.2.7 Why we love unitary matrices
In Subsection 1.4.1, we looked at how sensitive solving
\[
A x = b \quad \text{versus} \quad A ( x + \delta\!x ) = b + \delta\!b
\]
is to a change in the right-hand side when \(A \) is nonsingular. We concluded that
\[
\frac{\| \delta\!x \|}{\| x \|} \leq \kappa( A ) \frac{\| \delta\!b \|}{\| b \|}
\]
when an induced matrix norm is used. Let's look instead at how sensitive matrix-vector multiplication is.
Homework 2.2.7.1.
Let \(A \in \Cnxn \) be nonsingular and \(x \in \Cn \) a nonzero vector. Consider
\[
y = A x \quad \text{and} \quad y + \delta\!y = A ( x + \delta\!x ).
\]
Show that
\[
\frac{\| \delta\!y \|}{\| y \|} \leq \kappa( A ) \frac{\| \delta\!x \|}{\| x \|},
\]
where \(\| \cdot \| \) is an induced matrix norm.
Solution. Since \(x = A^{-1} y \) we know that
\[
\| x \| \leq \| A^{-1} \| \| y \|
\]
and hence
\[
\frac{1}{\| y \|} \leq \| A^{-1} \| \frac{1}{\| x \|}.
\]
Subtracting \(y = A x \) from \(y + \delta\!y = A ( x + \delta\!x ) \) yields
\[
\delta\!y = A \, \delta\!x
\]
and hence
\[
\| \delta\!y \| \leq \| A \| \| \delta\!x \|.
\]
Combining these two inequalities gives
\[
\frac{\| \delta\!y \|}{\| y \|} \leq \| A \| \| A^{-1} \| \frac{\| \delta\!x \|}{\| x \|} = \kappa( A ) \frac{\| \delta\!x \|}{\| x \|}.
\]
There are choices of \(x \) and \(\delta\!x \) for which the bound is tight.
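In the 2-norm, the bound is attained when \(x \) points in the direction of the right singular vector associated with the smallest singular value of \(A \) and \(\delta\!x \) in the direction of the one associated with the largest. The following sketch checks this numerically (it assumes NumPy; the random matrix and the perturbation size are illustrative choices, not from the text):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n))   # nonsingular with probability 1

    # A = U diag(s) V^T, with singular values s[0] >= ... >= s[-1].
    U, s, Vt = np.linalg.svd(A)
    kappa = s[0] / s[-1]              # kappa_2(A)

    x = Vt[-1, :]                     # right singular vector, smallest sigma
    dx = 1e-8 * Vt[0, :]              # perturb along the largest sigma
    y = A @ x
    dy = A @ (x + dx) - y

    lhs = np.linalg.norm(dy) / np.linalg.norm(y)
    rhs = kappa * np.linalg.norm(dx) / np.linalg.norm(x)
    print(lhs, rhs)                   # essentially equal: the bound is tight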
What does this mean? It means that if, as part of an algorithm, we use matrix-vector or matrix-matrix multiplication, we risk amplifying the relative error by the condition number of the matrix by which we multiply. Now, we saw in Section 1.4 that \(1 \leq \kappa( A ) \text{.}\) So, if there are algorithms that only use matrices for which \(\kappa( A ) = 1 \text{,}\) then those algorithms don't amplify relative error.
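As a concrete instance, a Householder reflector \(H = I - 2 u u^H \) with \(\| u \|_2 = 1 \) is unitary and hence has \(\kappa_2( H ) = 1 \text{.}\) A minimal sketch, assuming NumPy (the reflector is just one example of such a matrix, chosen here for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4

    # Householder reflector H = I - 2 u u^T with ||u||_2 = 1: unitary,
    # so kappa_2(H) = ||H||_2 ||H^{-1}||_2 = 1.
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    H = np.eye(n) - 2.0 * np.outer(u, u)
    print(np.linalg.cond(H, 2))       # 1.0 (up to roundoff)

    # Multiplication by H does not amplify the relative error:
    x = rng.standard_normal(n)
    dx = 1e-6 * rng.standard_normal(n)
    print(np.linalg.norm(dx) / np.linalg.norm(x),
          np.linalg.norm(H @ dx) / np.linalg.norm(H @ x))   # equal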
Remark 2.2.7.1.
We conclude that unitary matrices, which do not amplify the 2-norm of a vector or matrix, should be our tool of choice whenever practical.
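In the 2-norm this is easy to verify numerically: for unitary \(Q \) we have \(\| Q x \|_2 = \| x \|_2 \) and \(\| Q A \|_2 = \| A \|_2 \text{.}\) A minimal sketch, assuming NumPy (the unitary factor is taken from the QR factorization of a random matrix, an illustrative choice):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 4

    # A real orthogonal (unitary) Q from the QR factorization of a
    # random matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

    x = rng.standard_normal(n)
    A = rng.standard_normal((n, n))

    # Multiplication by Q preserves the 2-norm of vectors and matrices:
    print(np.linalg.norm(x), np.linalg.norm(Q @ x))        # equal
    print(np.linalg.norm(A, 2), np.linalg.norm(Q @ A, 2))  # equal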