While the last unit introduced the notion of registers, modern CPUs accelerate computation further by computing with small vectors of (double precision) numbers simultaneously.
As the reader should have noticed by now, in matrix-matrix multiplication every floating point multiplication is paired with a corresponding floating point addition that accumulates the result: \(\gamma_{i,j} := \alpha_{i,p} \beta_{p,j} + \gamma_{i,j} \text{.}\)
For this reason, such floating point computations are usually cast in terms of fused multiply-add (FMA) operations, performed by a floating point unit (FPU) of the core.
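As a point of reference, here is a minimal scalar sketch of \(C := A B + C \) (illustrative only, not a routine from these notes), assuming column-major storage with leading dimensions ldA, ldB, and ldC. Each iteration of the innermost loop performs exactly one multiplication and one addition, i.e., one FMA:

```c
/* Scalar C := A B + C, column-major storage.
   Each inner-loop iteration is one fused multiply-add:
   gamma(i,j) := alpha(i,p) * beta(p,j) + gamma(i,j). */
void gemm_scalar(int m, int n, int k,
                 const double *A, int ldA,
                 const double *B, int ldB,
                 double *C, int ldC)
{
    for (int j = 0; j < n; j++)
        for (int p = 0; p < k; p++)
            for (int i = 0; i < m; i++)
                C[i + j * ldC] += A[i + p * ldA] * B[p + j * ldB];
}
```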
What is faster than computing one FMA at a time? Computing multiple FMAs at a time! For this reason, modern cores compute with small vectors of data, performing the same FMA on corresponding elements in those vectors, which is referred to as "SIMD" computation: Single Instruction, Multiple Data. This exploits data-level parallelism.
If a vector register has length four, then it can store four (double precision) numbers. Let's load one such vector register with a column of the submatrix of \(C \text{,}\) a second vector register with the corresponding column of \(A \text{,}\) and a third with an element of \(B \) that has been duplicated (broadcast).
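In C, these loads and the subsequent vector FMA can be expressed with compiler intrinsics. The following is a minimal sketch, assuming an x86 core with AVX and FMA support, whose 256-bit vector registers hold four doubles each; the function name fma_column_update and its interface are illustrative, not taken from these notes:

```c
#include <immintrin.h>  /* AVX and FMA intrinsics */

/* Update four elements of a column of C with four elements of a
   column of A, scaled by a single (duplicated) element of B:
   c_0123 := a_0123 * beta_dup + c_0123, one vector FMA. */
void fma_column_update(double *c, const double *a, const double *beta)
{
    __m256d c_0123   = _mm256_loadu_pd(c);        /* column of the submatrix of C */
    __m256d a_0123   = _mm256_loadu_pd(a);        /* corresponding column of A    */
    __m256d beta_dup = _mm256_broadcast_sd(beta); /* element of B, duplicated     */

    c_0123 = _mm256_fmadd_pd(a_0123, beta_dup, c_0123);

    _mm256_storeu_pd(c, c_0123);                  /* store the updated column */
}
```

Compiled with, for example, gcc -O2 -mavx -mfma, the call to _mm256_fmadd_pd translates into a single vector FMA instruction that performs four fused multiply-adds at once.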