In modern shared-memory multiprocessors, processes can be halted or delayed without warning by interrupts, pre-emption, or cache misses. In such environments, it is desirable to design synchronization protocols that are wait-free (each non-faulty process finishes the protocol in a bounded number of its own steps, regardless of delays or failures by other processes) or lock-free (some non-faulty process always makes progress).
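To make the distinction concrete, the following sketch (not from the talk, just an illustrative assumption in Java) shows a counter whose increment retries a compare-and-set until it succeeds. A retry fails only because some other increment succeeded, so the protocol is lock-free; a single slow process may retry forever, so it is not wait-free.

    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative sketch: a lock-free counter built from compare-and-set.
    // Some process always makes progress (a CAS fails only when another CAS
    // succeeded), but no fixed bound on any one process's steps is guaranteed,
    // so the retry loop is lock-free rather than wait-free.
    class LockFreeCounter {
        private final AtomicInteger value = new AtomicInteger(0);

        int increment() {
            while (true) {
                int current = value.get();
                if (value.compareAndSet(current, current + 1)) {
                    return current + 1;
                }
            }
        }
    }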
This talk surveys a decade's worth of research on two basic questions: (1) under what circumstances does a problem have a wait-free or lock-free solution, and (2) under what circumstances is such a solution practical? For the first question, we review the notion of consensus number and the wait-free hierarchy, as well as recent advances in applying techniques from algebraic topology to characterize the computational power of synchronization primitives. For the second question, we review different techniques for implementing highly concurrent data structures, ranging from methods based entirely in software to methods employing customized hardware.
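As a minimal illustration of the consensus-number idea (again an assumed Java sketch, not material from the talk), a single compare-and-set object yields wait-free consensus for any number of processes: the first successful CAS fixes the decision, and every process finishes in a constant number of its own steps. This is the standard argument that compare-and-set sits at the top of the wait-free hierarchy, whereas read/write registers alone cannot solve consensus for even two processes.

    import java.util.concurrent.atomic.AtomicReference;

    // Illustrative sketch: wait-free consensus from a single CAS object.
    class CasConsensus<T> {
        private final AtomicReference<T> decision = new AtomicReference<>(null);

        // Each process calls decide with its own proposal and returns the
        // agreed-upon value; only the first compare-and-set succeeds.
        T decide(T proposal) {
            decision.compareAndSet(null, proposal);
            return decision.get();
        }
    }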