Subsection 6.3.5 Matrix-matrix multiplication
The idea behind backward error analysis is that the computed result is the exact result when computing with changed inputs. Let's consider matrix-matrix multiplication: \(C := A B\text{.}\)

Ponder This 6.3.5.1.
Can one find matrices \(\Delta\!A\) and \(\Delta\!B\) such that
\begin{equation*}
\check C = ( A + \Delta\!A ) ( B + \Delta\!B )\text{?}
\end{equation*}
Theorem 6.3.5.1. Forward error for matrix-matrix multiplication.
Let \(C \in \mathbb{R}^{m \times n}\text{,}\) \(A \in \mathbb{R}^{m \times k}\text{,}\) and \(B \in \mathbb{R}^{k \times n}\text{,}\) and consider the assignment \(C := A B\) implemented via matrix-vector multiplication. Then there exists \(\Delta\!C \in \mathbb{R}^{m \times n}\) such that
\begin{equation*}
\check C = A B + \Delta\!C, \quad \mbox{where } \vert \Delta\!C \vert \leq \gamma_k \vert A \vert \vert B \vert\text{.}
\end{equation*}
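To make the bound concrete, here is a minimal numerical check (a sketch, not part of the text; it assumes NumPy is available): \(C := A B\) is computed in single precision, the double-precision product serves as a stand-in for the exact result, and the entrywise bound \(\vert \Delta\!C \vert \leq \gamma_k \vert A \vert \vert B \vert\) is tested with \(u\) the unit roundoff for single precision and \(\gamma_k = k u / (1 - k u)\text{.}\)

```python
import numpy as np

# Sketch: check Theorem 6.3.5.1 numerically (sizes and data are arbitrary).
# Inputs are stored exactly in single precision so that all error comes
# from the multiplication itself.
rng = np.random.default_rng(0)
m, k, n = 50, 40, 30
A32 = rng.standard_normal((m, k)).astype(np.float32)
B32 = rng.standard_normal((k, n)).astype(np.float32)
A, B = A32.astype(np.float64), B32.astype(np.float64)

C_check = (A32 @ B32).astype(np.float64)  # computed result (single precision)
C_exact = A @ B   # double-precision reference; its own error is negligible here

u = np.finfo(np.float32).eps / 2          # unit roundoff for single precision
gamma_k = k * u / (1 - k * u)
bound = gamma_k * (np.abs(A) @ np.abs(B))

print("bound holds entrywise:", bool(np.all(np.abs(C_check - C_exact) <= bound)))
```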
Homework 6.3.5.2.
Prove Theorem 6.3.5.1.
Partition
\begin{equation*}
C = \left(\begin{array}{c|c|c|c} c_0 & c_1 & \cdots & c_{n-1} \end{array}\right)
\quad \mbox{and} \quad
B = \left(\begin{array}{c|c|c|c} b_0 & b_1 & \cdots & b_{n-1} \end{array}\right)\text{.}
\end{equation*}
Then computing \(C := A B\) via matrix-vector multiplication yields \(\check c_j = [ A b_j ]\text{,}\) the matrix-vector product \(A b_j\) computed in floating point, for \(j = 0, \ldots, n-1\text{.}\)
From R-1F 6.3.4.1 regarding matrix-vector multiplication we know that
\begin{equation*}
\check c_j = A b_j + \delta\!c_j\text{,}
\end{equation*}
where \(\vert \delta\!c_j \vert \leq \gamma_k \vert A \vert \vert b_j \vert\text{,}\) \(j = 0, \ldots , n-1 \text{,}\) and hence, with \(\Delta\!C = \left(\begin{array}{c|c|c} \delta\!c_0 & \cdots & \delta\!c_{n-1} \end{array}\right)\text{,}\) we conclude that \(\vert \Delta\!C \vert \leq \gamma_k \vert A \vert \vert B \vert \text{.}\)
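The proof mirrors the algorithm: each column of \(C\) is computed as a matrix-vector product. A sketch of that algorithm (assuming NumPy; illustrative code, not code from the text):

```python
import numpy as np

def gemm_by_columns(A, B):
    """Compute C := A B one column at a time, c_j := A b_j,
    mirroring the partitioning used in the proof."""
    m, k = A.shape
    kk, n = B.shape
    assert k == kk, "inner dimensions must match"
    C = np.empty((m, n), dtype=np.result_type(A, B))
    for j in range(n):
        C[:, j] = A @ B[:, j]   # one matrix-vector multiplication per column
    return C
```

Each call `A @ B[:, j]` contributes an error \(\delta\!c_j\) bounded as in R-1F 6.3.4.1, which is exactly how the columns of \(\Delta\!C\) arise.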
Remark 6.3.5.2.
In practice, matrix-matrix multiplication is often implemented as the parameterized operation \(C := \alpha A B + \beta C\text{.}\) A consequence of Theorem 6.3.5.1 is that, for \(\beta \neq 0\text{,}\) the error can be attributed to a change in the input parameter \(C\text{:}\) the error has been "thrown back" onto an input parameter.
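As an illustration (a hypothetical example assuming NumPy, with arbitrary \(\alpha\text{,}\) \(\beta\text{,}\) and sizes), one can solve \(\check C = \alpha A B + \beta ( C + \Delta\!C )\) for \(\Delta\!C\) and observe that it is tiny relative to \(C\text{:}\)

```python
import numpy as np

# Sketch for Remark 6.3.5.2: attribute the rounding error in
# C := alpha A B + beta C (computed in single precision) to a
# perturbation Delta C of the input C by solving
#     C_check = alpha A B + beta (C + Delta C)    for Delta C.
rng = np.random.default_rng(1)
m, k, n = 40, 30, 20
A = rng.standard_normal((m, k)).astype(np.float32)
B = rng.standard_normal((k, n)).astype(np.float32)
C = rng.standard_normal((m, n)).astype(np.float32)
alpha, beta = 2.0, 3.0

C_check = (alpha * (A @ B) + beta * C).astype(np.float64)  # single precision

A64, B64, C64 = A.astype(np.float64), B.astype(np.float64), C.astype(np.float64)
DeltaC = (C_check - alpha * (A64 @ B64) - beta * C64) / beta

print("max |Delta C|:", np.abs(DeltaC).max())   # on the order of gamma_k * |data|
print("max |C|      :", np.abs(C64).max())      # order one
```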