Subsection C.1.2 Vector-vector operations
These "vector-vector" operations play an important role.
-
Dot product: \(\alpha := x^T y \text{,}\) where \(x,y \in \R^m \text{.}\)
Notice that
\begin{equation*} x^T y = \chi_0 \psi_0 + \chi_1 \psi_1 + \cdots + \chi_{m-1} \psi_{m-1}. \end{equation*}It is easy to see that this requires \(m \) multiplications and \(m-1 \) additions, for a total of \(2m - 1 \) flops.
We usually say that the cost of this dot product is \(2 m \) flops. One reason is that very often we actually want to compute \(\alpha := x^T y + \alpha \text{,}\) which incurs one additional addition. Also, the hardware usually performs fused multiply-adds (FMAs) instead, so that even the one multiplication that is not matched up with an addition incurs one anyway. In other words, in practice \(\alpha := x^T y \) is computed as \(\alpha := x^T y + 0 \text{.}\)
-
Axpy: \(y := \alpha x + y \text{,}\) where \(x,y \in \R^m \text{.}\)
For each pair \(\chi_i \) and \(\psi_i \) we have to perform a multiplication and an addition:
\begin{equation*} \psi_i := \alpha \chi_i + \psi_i. \end{equation*}Hence, this operation requires \(2m \) flops.
-
Multiplying a scalar times a vector: \(x := \alpha x \text{,}\) where \(x \in \R^m \text{.}\)
This operation requires \(m \) multiplications.
-
Dividing a vector by a scalar: \(x := x / \alpha \text{,}\) where \(x \in \R^m \text{.}\)
What is the cost of this operation? We noted that a division is expensive. However, this operation can instead be implemented as
\begin{equation*} x := (1/\alpha) x, \end{equation*}which exposes the fact that we can form \(1/\alpha\) once and then perform \(m \) multiplications. If \(m \) is large enough, the cost of the division is inconsequential, and the cost is taken to equal approximately \(m \) flops.
We observe that such vector-vector operations with vectors of size \(m \) require \(O( m ) \) flops.
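The operations above can be sketched in code. The following is a minimal illustration, not a definitive implementation: the function names `dot`, `axpy`, and `scal` follow the usual BLAS naming conventions, while `invscal` is a hypothetical name chosen here for the scaled-division operation. Each comment restates the flop count derived above.

```python
def dot(x, y):
    """alpha := x^T y.  Costs m multiplications and m - 1 additions,
    counted as 2m flops (in practice computed as x^T y + 0 via FMAs)."""
    alpha = 0.0
    for chi, psi in zip(x, y):
        alpha = alpha + chi * psi   # one multiply-add per element: 2 flops
    return alpha

def axpy(alpha, x, y):
    """y := alpha x + y.  One multiplication and one addition
    per element pair (chi_i, psi_i): 2m flops in total."""
    return [alpha * chi + psi for chi, psi in zip(x, y)]

def scal(alpha, x):
    """x := alpha x.  m multiplications: m flops."""
    return [alpha * chi for chi in x]

def invscal(alpha, x):
    """x := x / alpha, implemented as x := (1/alpha) x: one division
    plus m multiplications, taken to be approximately m flops."""
    beta = 1.0 / alpha              # form the reciprocal once
    return scal(beta, x)
```

For example, `dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])` returns `32.0`, and `invscal(2.0, [2.0, 4.0])` returns `[1.0, 2.0]` at the cost of a single division regardless of the vector length.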