Subsection 6.3.5 Matrix-matrix multiplication
The idea behind backward error analysis is that the computed result equals the exact result obtained when computing with slightly changed inputs. Let's consider matrix-matrix multiplication:
What we would like to be able to show is that there exist \(\Delta\!A \) and \(\Delta\!B \) such that the computed result, \(\check C \text{,}\) satisfies
\[
\check C = ( A + \Delta\!A ) ( B + \Delta\!B ) .
\]
Let's think about this...
Ponder This 6.3.5.1.
Can one find matrices \(\Delta\!A \) and \(\Delta\! B\) such that
\[
\check C = ( A + \Delta\!A ) ( B + \Delta\!B ) ?
\]
For matrix-matrix multiplication, it is possible to "throw" the error onto the result, as summarized by the following theorem:
Theorem 6.3.5.2. Forward error for matrix-matrix multiplication.
Let \(C \in \Rmxn \text{,}\) \(A \in \Rmxk \text{,}\) and \(B \in \Rkxn \) and consider the assignment \(C \becomes A B \) implemented via matrix-vector multiplication. Then there exists \(\Delta\!C \in \Rmxn \) such that
\(\check C = A B + \Delta\!C \text{,}\) where \(\vert \Delta\!C \vert \leq \gamma_{k} \vert A \vert \vert B \vert \text{.}\)
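As a quick numerical illustration (a minimal sketch of ours, not part of the text), one can check the elementwise bound of the theorem with NumPy: compute \(\check C \) in single precision, use a double precision product as a (near-)exact reference whose own error is negligible by comparison, and compare \(\vert \check C - A B \vert \) against \(\gamma_k \vert A \vert \vert B \vert \) with unit roundoff \(u = 2^{-24} \text{.}\) The helper `gamma`, the dimensions, and the assumption that NumPy's float32 matmul uses a conventional (non-Strassen) accumulation, for which the bound holds regardless of summation order, are ours.

```python
import numpy as np

u = 2.0 ** -24                      # unit roundoff, IEEE single precision

def gamma(k):
    """gamma_k = k u / (1 - k u); requires k u < 1."""
    assert k * u < 1
    return k * u / (1 - k * u)

rng = np.random.default_rng(0)
m, k, n = 60, 40, 30
A = rng.standard_normal((m, k)).astype(np.float32)
B = rng.standard_normal((k, n)).astype(np.float32)

C_check = A @ B                     # \check C, computed in single precision
# Double precision reference; its error is on the order of 2^-53, negligible here.
C_ref = A.astype(np.float64) @ B.astype(np.float64)

Delta_C = C_check.astype(np.float64) - C_ref
bound = gamma(k) * (np.abs(A).astype(np.float64) @ np.abs(B).astype(np.float64))

print(np.all(np.abs(Delta_C) <= bound))   # expected: True
```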
Homework 6.3.5.3.
Prove Theorem 6.3.5.2.
Partition
\[
C = \left( \begin{array}{c|c|c|c} c_0 & c_1 & \cdots & c_{n-1} \end{array} \right)
\quad \text{and} \quad
B = \left( \begin{array}{c|c|c|c} b_0 & b_1 & \cdots & b_{n-1} \end{array} \right) .
\]
Then
\[
\check C = \left( \begin{array}{c|c|c|c} \check c_0 & \check c_1 & \cdots & \check c_{n-1} \end{array} \right) ,
\]
where \(\check c_j \) denotes the computed result of the matrix-vector multiplication \(A b_j \text{.}\) From R-1F 6.3.4.1 regarding matrix-vector multiplication we know that
\[
\check c_j = A b_j + \delta\!c_j ,
\]
where \(\vert \delta\!c_j \vert \leq \gamma_k \vert A \vert \vert b_j \vert \text{,}\) \(j = 0, \ldots , n-1 \text{.}\) Hence \(\check C = A B + \Delta\!C \) with \(\Delta\!C = \left( \begin{array}{c|c|c|c} \delta\!c_0 & \delta\!c_1 & \cdots & \delta\!c_{n-1} \end{array} \right) \text{,}\) and \(\vert \Delta\!C \vert \leq \gamma_k \vert A \vert \vert B \vert \text{.}\)
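The structure of this proof translates directly into code. Here is a sketch (again our own illustration, under the same assumptions as the earlier one) that forms \(\check C \) one column at a time via \(\check c_j := A b_j \) in single precision and verifies the per-column bound \(\vert \delta\!c_j \vert \leq \gamma_k \vert A \vert \vert b_j \vert \text{:}\)

```python
import numpy as np

u = 2.0 ** -24                      # unit roundoff, IEEE single precision

def gamma(k):
    assert k * u < 1
    return k * u / (1 - k * u)

rng = np.random.default_rng(0)
m, k, n = 60, 40, 30
A = rng.standard_normal((m, k)).astype(np.float32)
B = rng.standard_normal((k, n)).astype(np.float32)

A64, B64 = A.astype(np.float64), B.astype(np.float64)
ok = True
for j in range(n):
    c_check = A @ B[:, j]           # \check c_j := A b_j, single precision
    delta_c = c_check.astype(np.float64) - A64 @ B64[:, j]
    col_bound = gamma(k) * (np.abs(A64) @ np.abs(B64[:, j]))
    ok = ok and bool(np.all(np.abs(delta_c) <= col_bound))
print(ok)                           # expected: True
```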
Remark 6.3.5.4.
In practice, matrix-matrix multiplication is often the parameterized operation \(C := \alpha A B + \beta C. \) A consequence of Theorem 6.3.5.2 is that for \(\beta \neq 0 \text{,}\) the error can be attributed to a change in the input parameter \(C \text{,}\) which means the error has been "thrown back" onto an input parameter.
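To see why the error can be attributed to \(C \) (a sketch of the reasoning; the constant in the bound is left unspecified since it depends on how the scalings by \(\alpha \) and \(\beta \) are implemented): if the computation yields \(\check C = \alpha A B + \beta C + \Delta\!E \) for some error matrix \(\Delta\!E \text{,}\) then
\[
\check C = \alpha A B + \beta \left( C + \Delta\!C \right)
\quad \text{with} \quad
\Delta\!C = \frac{1}{\beta} \Delta\!E ,
\]
so the accumulated error is absorbed into a perturbation of the input \(C \text{,}\) which is possible exactly because \(\beta \neq 0 \text{.}\)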