David Zuckerman holds an Endowed Professorship in the Computer Science Department at the University of Texas at Austin. He received an A.B. in Mathematics from Harvard University in 1987, where he was a Putnam Fellow, and a Ph.D. in Computer Science from U.C. Berkeley in 1991. He was a postdoctoral fellow at MIT from 1991 to 1993 and at Hebrew University in the fall of 1993. He has been with the University of Texas since then, with sabbaticals at U.C. Berkeley, Harvard University, and the Institute for Advanced Study.
The primary goal of David Zuckerman's research is to understand whether access to random numbers can make computation more efficient. An important advance in computer science was the development of efficient randomized algorithms for computational problems not known to have efficient deterministic algorithms. Such problems include number-theoretic ones, such as factoring polynomials, as well as a variety of approximation problems.
In practice, however, computers do not have access to truly random numbers. Instead, they use the outputs of pseudorandom number generators or physical devices. It is therefore natural and important to ask whether we can still make use of efficient randomized algorithms when we have no source of randomness, or only a defective one. Zuckerman has shown how to convert randomized algorithms into robust randomized algorithms that work even if the source of randomness is defective. For certain natural classes of algorithms, he has been able to convert the randomized algorithms into deterministic ones.
The key tool for much of this work is the randomness extractor. An extractor is an efficient algorithm that extracts high-quality randomness from a low-quality random source, using a few auxiliary high-quality random bits. Building on his earlier work, Zuckerman and Nisan defined and constructed the first good extractor, and he subsequently constructed several others. Moreover, Zuckerman showed how extractors can be used to construct various types of pseudorandom generators, results that do not directly involve low-quality random sources. More surprisingly, he showed that extractors have important applications in seemingly unrelated areas, including constructions of non-blocking networks and expanders, coding theory, and the hardness of approximating certain problems.
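To make the notion concrete, here is the standard formal definition used in the field, given as a gloss rather than quoted from this summary; the parameters n, d, m, k, and ε are the usual ones and do not appear in the text above. A (k, ε)-extractor is a function

    Ext : {0,1}^n × {0,1}^d → {0,1}^m

such that for every source X over n-bit strings with min-entropy at least k, the output Ext(X, S) is within statistical distance ε of the uniform distribution on m-bit strings, where S is a uniformly random d-bit seed. The goal is to keep the seed length d very small, ideally O(log n), while extracting close to k nearly uniform bits.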
In 2016, Zuckerman and his then-Ph.D. student Eshan Chattopadhyay solved a longstanding open problem by introducing an algorithm that combines two independent low-quality random sources into one high-quality random source. Previous constructions required at least one of the two input sources to be of moderately high quality. The new algorithm, called a two-source extractor, also yields a major improvement on an important mathematical problem in Ramsey theory.
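In the same standard notation, and again as a gloss rather than a quotation from this page, a two-source extractor is a function

    2Ext : {0,1}^n × {0,1}^n → {0,1}^m

such that whenever X and Y are independent sources, each with min-entropy at least k, the output 2Ext(X, Y) is within statistical distance ε of uniform; unlike a seeded extractor, no auxiliary truly random bits are needed. Roughly speaking, a one-bit two-source extractor for min-entropy k gives a two-coloring of the complete bipartite graph on two sets of 2^n vertices with no monochromatic 2^k-by-2^k rectangle, which is how such constructions improve explicit bounds in Ramsey theory.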
Most of Zuckerman's work is joint with others; see Papers and Talks for details.