We will show how all communication required to implement the copy and reduce operations can be viewed as a sequence of collective communications. Collective communications are communication operations that involve all or a subset (group) of the nodes. In particular, we will encounter the following:
- Broadcast: Given a vector of data owned by one node in the group (the root), the broadcast duplicates the data to all other nodes in the group. MPI provides the call MPI_Bcast.
- Reduce-to-one: Given that each node in the group owns a vector of data, the reduce operation combines corresponding elements of these vectors into a single result vector, which is owned by one designated node in the group. In our library, the most frequent reduce operation is summation of the vectors. MPI provides the call MPI_Reduce. (The broadcast and reduce-to-one are illustrated in the first sketch following this list.)
- Scatter: The scatter is much like the broadcast, except that each node receives only a (non-overlapping) subvector of the original vector of data. MPI provides the call MPI_Scatter.
- Gather: The gather is the inverse operation of the scatter. MPI provides the call MPI_Gather.
- Collect: The collect operation can be viewed as a simultaneous gather to all nodes in the group. Indeed, MPI provides the call MPI_Allgather. (Scatter, gather, and collect appear in the second sketch following this list.)
- Distributed Reduce: The distributed reduce is much like the reduce-to-one, except that each node receives a (non-overlapping) subvector of the result. MPI provides the call MPI_Reduce_scatter, the name of which indicates that the operation is equivalent to a reduce-to-one followed by a scattering of the result within the group.
- Reduce-to-all: The reduce-to-all is much like the reduce-to-one, except that all nodes in the group receive a copy of the result. MPI provides the call MPI_Allreduce. (The distributed reduce and reduce-to-all appear in the third sketch following this list.)
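As a concrete illustration, the following minimal sketch broadcasts a vector from a root node and then sums the nodes' vectors back to that root. It is not part of the library; the vector length N and the use of MPI_COMM_WORLD as the group are arbitrary choices made for the example.

```c
#include <mpi.h>
#include <stdio.h>

#define N 4  /* vector length; an arbitrary choice for this example */

int main(int argc, char *argv[]) {
    int rank, root = 0;
    double x[N], sum[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Broadcast: the root owns x; afterwards every node in the group
       (here, all of MPI_COMM_WORLD) holds a copy. */
    if (rank == root)
        for (int i = 0; i < N; i++) x[i] = (double)i;
    MPI_Bcast(x, N, MPI_DOUBLE, root, MPI_COMM_WORLD);

    /* Reduce-to-one: corresponding elements of the nodes' vectors are
       summed; only the root owns the result. */
    MPI_Reduce(x, sum, N, MPI_DOUBLE, MPI_SUM, root, MPI_COMM_WORLD);

    if (rank == root)
        printf("sum[0] = %f\n", sum[0]);

    MPI_Finalize();
    return 0;
}
```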
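A similar sketch covers scatter, gather, and collect: the root's vector is split into non-overlapping subvectors, reassembled at the root, and then collected by every node. The per-node chunk size CHUNK is an arbitrary assumption for the example.

```c
#include <mpi.h>
#include <stdlib.h>

#define CHUNK 2  /* subvector length per node; arbitrary for this example */

int main(int argc, char *argv[]) {
    int rank, nprocs, root = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double *whole = NULL;           /* full vector, allocated at the root only */
    double piece[CHUNK];            /* this node's subvector */
    double *collected = malloc(CHUNK * nprocs * sizeof(double));

    if (rank == root) {
        whole = malloc(CHUNK * nprocs * sizeof(double));
        for (int i = 0; i < CHUNK * nprocs; i++) whole[i] = (double)i;
    }

    /* Scatter: node k receives the (non-overlapping) subvector
       whole[k*CHUNK .. (k+1)*CHUNK-1]. */
    MPI_Scatter(whole, CHUNK, MPI_DOUBLE, piece, CHUNK, MPI_DOUBLE,
                root, MPI_COMM_WORLD);

    /* Gather: the inverse of the scatter; the root reassembles the vector. */
    MPI_Gather(piece, CHUNK, MPI_DOUBLE, whole, CHUNK, MPI_DOUBLE,
               root, MPI_COMM_WORLD);

    /* Collect: a simultaneous gather to all nodes; afterwards every node
       holds the full vector in collected. */
    MPI_Allgather(piece, CHUNK, MPI_DOUBLE, collected, CHUNK, MPI_DOUBLE,
                  MPI_COMM_WORLD);

    free(collected);
    if (rank == root) free(whole);
    MPI_Finalize();
    return 0;
}
```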
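Finally, a sketch of the distributed reduce and reduce-to-all. Again the per-node piece size is an arbitrary assumption; note that MPI_Reduce_scatter takes an array of receive counts, one entry per node in the group, which here are all equal.

```c
#include <mpi.h>
#include <stdlib.h>

#define CHUNK 2  /* size of each node's piece of the result; arbitrary */

int main(int argc, char *argv[]) {
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int n = CHUNK * nprocs;                    /* total vector length */
    double *x = malloc(n * sizeof(double));    /* this node's contribution */
    double *all = malloc(n * sizeof(double));  /* full result, on every node */
    double piece[CHUNK];                       /* this node's piece */
    int *recvcounts = malloc(nprocs * sizeof(int));

    for (int i = 0; i < n; i++) x[i] = (double)rank;
    for (int p = 0; p < nprocs; p++) recvcounts[p] = CHUNK;

    /* Distributed reduce: equivalent to a reduce-to-one of the vectors
       followed by a scatter, so node k ends up owning elements
       k*CHUNK .. (k+1)*CHUNK-1 of the summed vector. */
    MPI_Reduce_scatter(x, piece, recvcounts, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD);

    /* Reduce-to-all: every node receives a copy of the entire summed vector. */
    MPI_Allreduce(x, all, n, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    free(recvcounts); free(all); free(x);
    MPI_Finalize();
    return 0;
}
```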