
Subsection C.1.1 Computation with scalars

Most computation with matrices and vectors in the end comes down to addition, subtraction, or multiplication of floating point numbers:

\[ \chi \mathbin{\text{op}} \psi \]

where \( \chi \) and \( \psi \) are scalars and op is one of \( + \), \( - \), \( \times \). Each of these is counted as one floating point operation (flop). However, not all such floating point operations are created equal: computation with complex-valued (double precision) numbers is four times more expensive than computation with real-valued (double precision) numbers. As mentioned, usually we just pretend we are dealing with real-valued numbers when counting the cost. We assume you know how to multiply by four.
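To see where the factor of four comes from, consider a complex multiply-accumulate expanded into real arithmetic. Below is a minimal sketch in C (the function name is ours, for illustration only): one complex update costs four real multiplies and four real adds, i.e., four real multiply-accumulates.

```c
#include <stdio.h>

/* Complex multiply-accumulate psi := alpha * chi + psi, expanded into
   real (double precision) arithmetic.  (ar, ai) is alpha, (cr, ci) is
   chi, and (pr, pi) is psi.  The expansion costs 4 real multiplies and
   4 real adds -- four real multiply-accumulates -- which is the source
   of the "four times more expensive" rule of thumb. */
static void complex_fma(double ar, double ai, double cr, double ci,
                        double *pr, double *pi)
{
    *pr += ar * cr - ai * ci;   /* real part: 2 multiplies, 2 adds */
    *pi += ar * ci + ai * cr;   /* imaginary part: 2 multiplies, 2 adds */
}

int main(void)
{
    double pr = 1.0, pi = 1.0;
    complex_fma(2.0, 3.0, 4.0, 5.0, &pr, &pi); /* (2+3i)(4+5i) + (1+i) */
    printf("%f + %fi\n", pr, pi);              /* -6.000000 + 23.000000i */
    return 0;
}
```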

Dividing two scalars is a lot more expensive. Frequently, instead of dividing by \( \alpha \) many times, we can first compute \( 1/\alpha \) and then reuse that result for many (cheaper) multiplications. Thus the number of divisions in an algorithm is usually a "lower order term" and hence we can ignore it.
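A minimal sketch of this idiom, assuming a length-\( n \) vector stored as a C array (the function name is ours, not from any library):

```c
#include <stddef.h>

/* Scale x by 1/alpha.  Rather than performing n expensive divisions,
   divide once up front and reuse the result in n cheap multiplications;
   the single division becomes a lower order term in the flop count. */
void scale_by_inverse(size_t n, double alpha, double *x)
{
    double alpha_inv = 1.0 / alpha;   /* one division */
    for (size_t i = 0; i < n; i++)
        x[i] *= alpha_inv;            /* n multiplications */
}
```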

Another observation is that almost all computation we encounter involves a "Fused Multiply-Accumulate" (FMA):

\[ \alpha \chi + \psi, \]

which requires two flops: a multiply and an add.
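For instance, a dot product is nothing but a chain of such fused multiply-accumulates. The sketch below uses C99's fma() from <math.h>, which computes \( \alpha \chi + \psi \) with a single rounding on hardware that supports it; each iteration costs two flops, for \( 2n \) flops in total.

```c
#include <stddef.h>
#include <math.h>

/* Dot product of x and y as n fused multiply-accumulates.
   Each iteration computes psi := x[i] * y[i] + psi: one multiply and
   one add, i.e., two flops, so 2n flops in total. */
double dot(size_t n, const double *x, const double *y)
{
    double psi = 0.0;
    for (size_t i = 0; i < n; i++)
        psi = fma(x[i], y[i], psi);
    return psi;
}
```

(On most Unix toolchains, link with -lm to pull in the math library.)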