It is of interest to accumulate the roundoff error encountered during computation as a perturbation of input and/or output parameters:
\(\check \kappa = ( x + \deltax )^T y \text{;}\)
(\(\check \kappa\) is the exact output for a slightly perturbed \(x \))
\(\check \kappa = x^T ( y + \deltay ) \text{;}\)
(\(\check \kappa\) is the exact output for a slightly perturbed \(y \))
\(\check \kappa = x^T y + \delta\!\kappa \text{.}\)
(\(\check \kappa\) equals the exact result plus an error)
The first two are backward error results: the error is accumulated onto the input parameters, showing that the algorithm is numerically stable since it yields the exact output for a slightly perturbed input. The last is a forward error result: the error is accumulated onto the answer. We will see that analyses of operations that require a dot product may call for different of these error results in different situations.
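The distinction can be made concrete with a small numerical experiment. The following sketch (an illustration, not from the text) uses NumPy, with float32 playing the role of finite precision arithmetic and float64 standing in for exact arithmetic. It measures the forward error \(\delta\!\kappa\) and then constructs one valid backward perturbation \(\deltax\) that attributes the same error to the input \(x\):

```python
import numpy as np

# Illustration: float32 is the finite precision arithmetic being analyzed;
# float64 stands in for "exact" arithmetic.
rng = np.random.default_rng(0)
n = 1000
x = rng.standard_normal(n).astype(np.float32)
y = rng.standard_normal(n).astype(np.float32)
x64, y64 = x.astype(np.float64), y.astype(np.float64)

kappa = np.dot(x64, y64)                # (nearly) exact x^T y
kappa_check = np.float64(np.dot(x, y))  # computed result, fl(x^T y)

# Forward error view: kappa_check = x^T y + delta_kappa.
delta_kappa = kappa_check - kappa

# Backward error view: attribute delta_kappa to a perturbation of x.
# One valid choice spreads it over x in proportion to |chi_i| |psi_i|,
# so that delta_x^T y = delta_kappa exactly (in real arithmetic):
s = np.dot(np.abs(x64), np.abs(y64))    # |x|^T |y|
delta_x = delta_kappa * np.sign(y64) * np.abs(x64) / s

# (x + delta_x)^T y reproduces the computed result, up to float64 roundoff.
backward = np.dot(x64 + delta_x, y64)
print(abs(backward - kappa_check))
```

The printed discrepancy is on the order of float64 roundoff, far below the float32 error being modeled: the computed result is the exact dot product of a slightly perturbed \(x\) with \(y\).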
Let us focus on the second result. Ideally one would show that each of the entries of \(y \) is slightly perturbed relative to that entry:
\begin{equation*}
\deltay =
\left( \begin{array}{c}
\sigma_0 \psi_0 \\
\vdots \\
\sigma_{n-1} \psi_{n-1}
\end{array} \right)
= \Sigma y,
\end{equation*}
where each \(\sigma_i \) is "small" and \(\Sigma = \diag{ \sigma_0, \ldots, \sigma_{n-1} }\text{.}\) The following special structure of \(\Sigma \text{,}\) inspired by (6.3.3), will be used in the remainder of this note:
\begin{equation*}
\Sigma^{(k)} =
\left\{ \begin{array}{c l}
0 \times 0 \mbox{ matrix} & \mbox{if } k = 0 \\
\theta_1 & \mbox{if } k = 1 \\
\diag{ \theta_k, \theta_k, \theta_{k-1}, \ldots, \theta_2 } & \mbox{otherwise.}
\end{array} \right.
\end{equation*}
Recall that \(\theta_j \) is an order of magnitude variable with \(\vert \theta_j \vert \leq \gamma_j \text{.}\)
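Assuming the standard definition \(\gamma_j = j \meps / ( 1 - j \meps )\) (the usual convention for this notation, with \(\meps\) the machine epsilon), a short check, written as an illustration, confirms that the worst-case product of \(j\) factors \(( 1 + \epsilon_i )\) with \(\vert \epsilon_i \vert \leq \meps\) stays within the bound \(\vert \theta_j \vert \leq \gamma_j\):

```python
import numpy as np

def gamma(j, meps):
    # Standard bound gamma_j = j * meps / (1 - j * meps); needs j * meps < 1.
    assert j * meps < 1
    return j * meps / (1 - j * meps)

meps = np.finfo(np.float64).eps  # float64 machine epsilon

# (1 + theta_j) absorbs a product of j factors (1 + eps_i), |eps_i| <= meps,
# and |theta_j| <= gamma_j. Form the worst-case product eps_i = meps one
# factor at a time, so each multiplication rounds:
j = 100
prod = 1.0
for _ in range(j):
    prod *= 1.0 + meps
theta = prod - 1.0
print(theta <= gamma(j, meps))
```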
Homework 6.3.3.1.
Let \(k \geq 0 \) and assume that \(\vert \epsilon_1 \vert, \vert \epsilon_2 \vert \leq \meps \text{,}\) with \(\epsilon_1 = 0 \) if \(k = 0 \text{.}\) Show that
\begin{equation*}
\left( \begin{array}{c c}
( I + \Sigma^{(k)} ) & 0 \\
0 & ( 1 + \epsilon_1 )
\end{array} \right)
( 1 + \epsilon_2 )
= ( I + \Sigma^{(k+1)} ).
\end{equation*}
Proof by Mathematical Induction on \(n\text{,}\) the size of vectors \(x\) and \(y\text{.}\)
Base case.
\(m( x ) = m( y ) = 0 \text{.}\) Then \(\fl{ x^T y } = 0 = x^T ( I + \Sigma^{(0)} ) y \text{.}\) Trivial!
Inductive Step.
Inductive Hypothesis (I.H.): Assume that if \(x_T, y_T \in \R^k \text{,}\) \(k \geq 0\text{,}\) then
\begin{equation*}
\fl{ x_T^T y_T } = x_T^T (I + \Sigma_T ) y_T
, \mbox{ where }
\Sigma_T = \Sigma^{(k)}.
\end{equation*}
We will show that if \(x_T, y_T \in \R^{k+1}\text{,}\) then \(\fl{ x_T^T y_T } = x_T^T (I + \Sigma_T
) y_T\) with \(\Sigma_T = \Sigma^{(k+1)} \text{.}\) Partition \(x_T \rightarrow \FlaTwoByOneSingleLine{ x_0 }{ \chi_1 }\) and \(y_T \rightarrow \FlaTwoByOneSingleLine{ y_0 }{ \psi_1 }\text{,}\) where \(x_0, y_0 \in \R^{k} \text{.}\) Then
\begin{equation*}
\begin{array}{rcl}
\fl{ x_T^T y_T }
&=& \fl{ \fl{ x_0^T y_0 } + \fl{ \chi_1 \psi_1 } } \\
&=& \left( \fl{ x_0^T y_0 } + \chi_1 \psi_1 ( 1 + \epsilon_* ) \right) ( 1 + \epsilon_+ ) \\
&=& \left( x_0^T ( I + \Sigma^{(k)} ) y_0 + \chi_1 \psi_1 ( 1 + \epsilon_* ) \right) ( 1 + \epsilon_+ ) \\
&=& \FlaTwoByOneSingleLine{ x_0 }{ \chi_1 }^T
\left( \begin{array}{c c}
( I + \Sigma^{(k)} ) & 0 \\
0 & ( 1 + \epsilon_* )
\end{array} \right)
( 1 + \epsilon_+ )
\FlaTwoByOneSingleLine{ y_0 }{ \psi_1 } \\
&=& x_T^T ( I + \Sigma^{(k+1)} ) y_T,
\end{array}
\end{equation*}
where \(\vert \epsilon_* \vert, \vert \epsilon_+ \vert \leq \meps \text{,}\) the third step applies the I.H., and the last step applies Homework 6.3.3.1.
By the Principle of Mathematical Induction, the result holds.
A number of useful consequences of Theorem 6.3.3.1 follow. These will be used later as an inventory (library) of error results from which to draw when analyzing operations and algorithms that utilize a dot product.
Corollary 6.3.3.2.
Under the assumptions of Theorem 6.3.3.1 the following relations hold, where the absolute value of a vector and inequalities between vectors are taken elementwise:
R-1B
\(\check \kappa = ( x + \deltax )^T y \text{,}\) where \(\vert \deltax \vert \leq \gamma_n \vert x \vert \text{;}\)
R-2B
\(\check \kappa = x^T ( y + \deltay ) \text{,}\) where \(\vert \deltay \vert \leq \gamma_n \vert y \vert \text{;}\)
R-1F
\(\check \kappa = x^T y + \delta\!\kappa \text{,}\) where \(\vert \delta\!\kappa \vert \leq
\gamma_n \vert x \vert^T \vert y \vert \text{.}\)
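The forward error result R-1F can be checked numerically. The sketch below (an illustration, not part of the text) accumulates a dot product term by term in float32, so that every multiplication and addition rounds, and verifies that the resulting error stays below \(\gamma_n \vert x \vert^T \vert y \vert\text{.}\) Here \(\meps\) is taken as the float32 machine epsilon; if the text's convention is the (smaller) unit roundoff, this choice only loosens the bound being checked:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
x = rng.standard_normal(n).astype(np.float32)
y = rng.standard_normal(n).astype(np.float32)

# meps: float32 machine epsilon (2**-23); gamma_n = n*meps/(1 - n*meps).
meps = float(np.finfo(np.float32).eps)
gamma_n = n * meps / (1 - n * meps)

exact = np.dot(x.astype(np.float64), y.astype(np.float64))

# Sequential accumulation in float32 mirrors the algorithm analyzed above:
# each multiply and each add rounds once.
kappa_check = np.float32(0.0)
for chi, psi in zip(x, y):
    kappa_check = kappa_check + chi * psi

delta_kappa = float(kappa_check) - exact
bound = gamma_n * np.dot(np.abs(x.astype(np.float64)),
                         np.abs(y.astype(np.float64)))
print(abs(delta_kappa) <= bound)
```

In practice the observed error is far below the bound, which is a worst-case estimate; the point of the check is that the bound is never violated.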