In a future chapter, we will see that the QR factorization is used to solve the linear least-squares problem. To do so, we need to be able to compute \(\hat y = Q^H y \) where \(Q^H = H_{n-1} \cdots H_0 \text{.}\) Recall that applying the Householder transformation \(H_k \) to a vector leaves the first \(k \) elements unchanged, while the remaining elements are updated via
\begin{equation*}
\left( I - \frac{1}{\tau_1}
\left( \begin{array}{c} 1 \\ u_2 \end{array} \right)
\left( \begin{array}{c} 1 \\ u_2 \end{array} \right)^H
\right)
\left( \begin{array}{c} \psi_1 \\ y_2 \end{array} \right)
=
\left( \begin{array}{c} \psi_1 - \omega_1 \\ y_2 - \omega_1 u_2 \end{array} \right),
\end{equation*}
where \(\omega_1 = ( \psi_1 + u_2^H y_2 ) / \tau_1 \text{.}\) This motivates the algorithm in Figure 3.3.6.1 for computing \(y := H_{n-1} \cdots H_0 y \) given the output matrix \(A \) and vector \(t \) from routine \({\rm HQR\_unb\_var1} \text{.}\)
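The algorithm in Figure 3.3.6.1 translates into a few lines of NumPy. The following is a minimal sketch, assuming the storage convention just described: the Householder vector \(u_2 \) for step \(k \) is stored in A[k+1:, k] with its unit first element implicit, t is a one-dimensional array with t[k] \(= \tau_k \text{,}\) and the name apply_QH is illustrative rather than taken from the text.
\begin{verbatim}
import numpy as np

def apply_QH(A, t, y):
    """Compute Q^H y = H_{n-1} ... H_0 y from the compact QR output.

    Assumes the Householder vector u_2 for step k is stored in
    A[k+1:, k] (unit first element implicit) and t[k] = tau_k,
    with H_k = I - (1/tau_k) u u^H.  (A sketch, not the text's code.)
    """
    m, n = A.shape
    y = np.asarray(y).astype(np.result_type(A, y, 1.0))   # writable copy
    for k in range(n):
        u2 = A[k+1:, k]                                 # u_2, below the diagonal
        omega1 = (y[k] + np.conj(u2) @ y[k+1:]) / t[k]  # (psi_1 + u_2^H y_2)/tau_1
        y[k] -= omega1                                  # psi_1 := psi_1 - omega_1
        y[k+1:] -= omega1 * u2                          # axpy: y_2 := y_2 - omega_1 u_2
    return y
\end{verbatim}
Each pass through the loop performs one dot product and one axpy with vectors of length \(m-k-1 \text{,}\) which is exactly the cost pattern analyzed in the homework below.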
Homework 3.3.6.1.
What is the approximate cost of the algorithm in Figure 3.3.6.1 if \(Q \) (stored as Householder vectors in \(A \)) is \(m \times n \text{?}\)
The cost of this algorithm can be analyzed as follows: When \(y_T \) is of length \(k \text{,}\) the bulk of the computation is in a dot product with vectors of length \(m-k-1 \) (to compute \(\omega_1 \)) and an axpy operation with vectors of length \(m-k-1 \) (to update \(y_2 \); updating \(\psi_1 \) requires only a scalar subtraction). Thus, the cost is approximately given by
\begin{equation*}
\sum_{k=0}^{n-1}
4 ( m-k-1 ) =
4 \sum_{k=0}^{n-1} m
- 4 \sum_{k=0}^{n-1} ( k+1 )
= 4 m n - 2 n ( n+1 )
\approx 4 m n - 2 n^2.
\end{equation*}
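As a quick sanity check of the approximation, the exact sum \(4 m n - 2 n ( n+1 ) \) can be compared against \(4 m n - 2 n^2 \) for representative (arbitrarily chosen) dimensions:
\begin{verbatim}
m, n = 1000, 200
exact  = sum(4 * (m - k - 1) for k in range(n))  # sum_{k=0}^{n-1} 4(m-k-1)
approx = 4 * m * n - 2 * n**2
print(exact, approx)  # 719600 720000: they differ only by the 2n term
\end{verbatim}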
Notice that this is much cheaper than explicitly forming \(Q \) and then computing \(\hat y = Q^H y \text{.}\)
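To see the pieces fit together, the sketch below pairs apply_QH (from earlier in this section) with a minimal real-arithmetic unblocked Householder QR, here called hqr. The routine is written in the spirit of \({\rm HQR\_unb\_var1} \) but is not the text's code; it uses the convention \(\tau = ( 1 + u_2^H u_2 ) / 2 \text{.}\) It then forms \(Q = H_0 \cdots H_{n-1} \) explicitly, at far greater cost, and confirms that both routes produce the same \(\hat y \text{:}\)
\begin{verbatim}
import numpy as np

def hqr(A):
    """Unblocked Householder QR (real-arithmetic sketch).  On return, R is
    above the diagonal, the vectors u_2 below it, and t[k] = tau_k."""
    A = A.astype(float)                                # work on a copy
    m, n = A.shape
    t = np.zeros(n)
    for k in range(n):
        u = A[k:, k].copy()
        alpha = -np.copysign(np.linalg.norm(u), u[0])  # rho = -sign(chi_1)||x||_2
        u[0] -= alpha
        u = u / u[0]                                   # scale so first element is 1
        t[k] = (1.0 + u[1:] @ u[1:]) / 2.0             # tau = u^T u / 2
        A[k:, k+1:] -= np.outer(u, u @ A[k:, k+1:]) / t[k]  # apply H_k to the rest
        A[k, k] = alpha
        A[k+1:, k] = u[1:]                             # store u_2 below the diagonal
    return A, t

rng = np.random.default_rng(0)
m, n = 6, 4
A0 = rng.standard_normal((m, n))
y = rng.standard_normal(m)
A, t = hqr(A0)

Q = np.eye(m)                                          # form Q = H_0 ... H_{n-1}
for k in reversed(range(n)):
    u = np.concatenate(([1.0], A[k+1:, k]))
    Q[k:, :] -= np.outer(u, u @ Q[k:, :]) / t[k]

print(np.allclose(Q.T @ y, apply_QH(A, t, y)))         # True
print(np.allclose(Q[:, :n] @ np.triu(A)[:n, :], A0))   # True: A0 = Q R
\end{verbatim}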