Local computation performed by the PLAPACK infrastructure relies heavily upon a few computational kernels generally known as the Basic Linear Algebra Subprograms (BLAS). This set of operations is widely used by linear algebra libraries and applications to perform basic operations such as the inner product, matrix-vector multiplication, and matrix-matrix multiplication. By building libraries from these kernels, and expecting computer vendors to provide highly optimized implementations of them, high performance can be attained in a portable fashion.
The subprograms are classified into three categories:
- Level-1 BLAS []: vector-vector operations. Examples include the ``axpy'', or scalar a times vector x plus vector y, and the dot (inner) product of two vectors. Notice that O(n) computation is performed on O(n) data, where n is the length of the vectors involved.
- Level-2 BLAS []: matrix-vector operations. Examples include matrix-vector multiplication and the solution of a triangular system of equations. Notice that now O(n^2) computation is performed on O(n^2) data, where n is the dimension of the matrix involved.
- Level-3 BLAS []: matrix-matrix operations. Examples include matrix-matrix multiplication and the solution of a triangular system with multiple right-hand sides. Notice that O(n^3) computation is performed on O(n^2) data, where n is the dimension of the matrices involved. It is this higher order of computation relative to data that is used to overcome the bottleneck of slow access to main memory relative to processor speed.
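The distinction between the three levels can be made concrete with a minimal sketch in C. The routines below are simplified stand-ins for the corresponding BLAS kernels (the reference BLAS names are daxpy, ddot, dgemv, and dgemm; those routines take additional stride and dimension parameters omitted here), written for square n-by-n matrices stored in row-major order:

```c
#include <stddef.h>

/* Level-1: y := alpha*x + y (``axpy'').
   O(n) work on O(n) data. */
void axpy(size_t n, double alpha, const double *x, double *y) {
    for (size_t i = 0; i < n; i++)
        y[i] += alpha * x[i];
}

/* Level-1: dot (inner) product of x and y. */
double dot(size_t n, const double *x, const double *y) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += x[i] * y[i];
    return sum;
}

/* Level-2: y := A*x for an n-by-n matrix A (row-major).
   O(n^2) work on O(n^2) data. */
void gemv(size_t n, const double *A, const double *x, double *y) {
    for (size_t i = 0; i < n; i++) {
        double sum = 0.0;
        for (size_t j = 0; j < n; j++)
            sum += A[i*n + j] * x[j];
        y[i] = sum;
    }
}

/* Level-3: C := C + A*B for n-by-n matrices (row-major).
   O(n^3) work on O(n^2) data: each matrix entry is reused n
   times, which is what allows blocked implementations to keep
   data in fast memory and approach peak processor speed. */
void gemm(size_t n, const double *A, const double *B, double *C) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < n; k++)
                sum += A[i*n + k] * B[k*n + j];
            C[i*n + j] += sum;
        }
}
```

These naive loops state the operations, not how a vendor-optimized BLAS implements them; a tuned dgemm reorganizes the three loops into blocks sized to the cache hierarchy precisely to exploit the O(n^3)-on-O(n^2) reuse noted above.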
In the chapters on vector-vector, matrix-vector, and matrix-matrix operations, we discuss the BLAS in detail.