David Zuckerman, Professor


Curriculum Vitae

Prospective students and interns, please read the FAQ before emailing me.



Contact Information
Department of Computer Science
The University of Texas at Austin
2317 Speedway, D9500
Austin, TX 78712

Office: GDC 4.508
Phone: 512-471-9729
Fax: 512-471-8885
Email: diz at cs.utexas.edu

The National Academy of Sciences awarded the 2024 Michael and Sheila Held Prize to my former PhD student Eshan Chattopadhyay and me. For more about this work, listen to my 100-second radio talk about randomness or read our article targeting a general audience.

My PhD student Vinayak Kumar is a co-winner of the Danny Lewin Best Student Paper Award at STOC 2024 for this paper.

Seeking Postdoctoral Fellow for 2025-26.

100-second talk about randomness on The Academic Minute, produced by an NPR affiliate.

David Zuckerman holds an Endowed Professorship in the Computer Science Department at the University of Texas at Austin. He received an A.B. in Mathematics in 1987 from Harvard University, where he was a Putnam Fellow, and a Ph.D. in Computer Science from U.C. Berkeley in 1991. He was a postdoctoral fellow at MIT from 1991 to 1993 and at Hebrew University in the fall of 1993. He has been with the University of Texas since then, with sabbaticals at U.C. Berkeley, Harvard University, and the Institute for Advanced Study.

His research focuses primarily on pseudorandomness and the role of randomness in computing. He is best known for his work on randomness extractors and their applications. His other research interests include coding theory, distributed computing, cryptography, inapproximability, and other areas of computational complexity. His research awards and honors include the 2024 National Academy of Sciences Held Prize, a 30-Year Test of Time Award at FOCS 2021, a Simons Investigator Award, a Best Paper Award at STOC 2016, selection as an ACM Fellow, a Guggenheim Fellowship, a Packard Fellowship for Science and Engineering, a Sloan Research Fellowship, and an NSF Young Investigator Award.

For more details, see his C.V.

The primary goal of David Zuckerman's research is to understand whether access to random numbers can make computation more efficient. An important advance in computer science was the development of efficient randomized algorithms for computational problems not known to have efficient deterministic algorithms. Such problems include number-theoretic ones, like factoring polynomials, as well as a variety of approximation problems.
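To make this concrete, here is a minimal Python sketch of polynomial identity testing, a classic example (chosen here for illustration, not one of the problems named above) of a task with a fast randomized algorithm but no known efficient deterministic one. By the Schwartz-Zippel lemma, two distinct polynomials of degree at most d agree at a uniformly random point modulo a large prime with probability at most d divided by the prime, so a few random evaluations suffice.

    import random

    # Illustrative sketch: randomized polynomial identity testing.
    # If p and q are distinct polynomials of degree at most d, a random
    # evaluation point modulo a large prime reveals the difference with
    # probability at least 1 - d/PRIME (Schwartz-Zippel lemma).
    PRIME = 2**61 - 1

    def probably_equal(p, q, trials=20):
        for _ in range(trials):
            x = random.randrange(PRIME)
            if p(x) % PRIME != q(x) % PRIME:
                return False  # found a witness: definitely not equal
        return True  # equal with overwhelming probability

    # (x + 1)^2 and x^2 + 2x + 1 are the same polynomial in disguise.
    print(probably_equal(lambda x: (x + 1) ** 2,
                         lambda x: x * x + 2 * x + 1))  # prints True

No efficient deterministic algorithm is known for the general version of this problem, which is exactly the kind of gap between randomized and deterministic computation that this research probes.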

In practice, however, computers do not have access to truly random numbers. Instead, they use the outputs of pseudorandom number generators or physical devices. It is therefore natural and important to ask whether we can still use efficient randomized algorithms when we have no source of true randomness, or only a defective one. Zuckerman has shown how to convert randomized algorithms into robust randomized algorithms that work even if the source of randomness is defective, as sketched below. For certain natural classes of algorithms, he has been able to convert the randomized algorithms into deterministic ones.
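One classical route to such robustness, shown here as a minimal Python sketch under stated assumptions rather than as any specific construction from Zuckerman's papers, is to apply an extractor to the defective source once for every possible short seed, run the algorithm on each extracted string, and take a majority vote; when most seeds yield nearly uniform bits, the vote is correct with high probability. The toy_extractor below is a hypothetical stand-in for a real extractor and carries no provable guarantee.

    import hashlib
    from collections import Counter

    def toy_extractor(weak_bits: bytes, seed: int) -> bytes:
        # Hypothetical stand-in: a seeded hash illustrates only the
        # interface of an extractor, not its proven guarantees.
        return hashlib.sha256(seed.to_bytes(4, "big") + weak_bits).digest()

    def robust_run(randomized_alg, instance, weak_bits, num_seeds=101):
        # Run the algorithm once per seed on the extracted randomness
        # and return the majority answer across all seeds.
        votes = Counter(
            randomized_alg(instance, toy_extractor(weak_bits, s))
            for s in range(num_seeds)
        )
        return votes.most_common(1)[0][0]

The cost of this simulation grows with the number of seeds, which is one reason extractors with very short seeds are so valuable.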

The key tool for much of this work is the randomness extractor. An extractor is an efficient algorithm that extracts high-quality randomness from a low-quality random source, using a few auxiliary high-quality random bits. Building on Zuckerman's earlier work, he and Nisan defined and constructed the first good extractor, and he subsequently constructed several others. Moreover, Zuckerman showed how extractors can be used to construct various types of pseudorandom generators, results that do not directly involve low-quality random sources. More surprisingly, he showed that extractors have important applications in seemingly unrelated areas, including constructions of non-blocking networks and expanders, coding theory, and the difficulty of approximating certain problems.
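For reference, the standard definition (a textbook fact of the field, stated here for the mathematically inclined reader rather than quoted from this page) is as follows. A $(k, \varepsilon)$-extractor is a function
\[
\mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m
\]
such that for every random source $X$ on $\{0,1\}^n$ with min-entropy $H_\infty(X) \ge k$, the output $\mathrm{Ext}(X, U_d)$ is within statistical distance $\varepsilon$ of the uniform distribution, where $U_d$ denotes the $d$ uniformly random seed bits. The aim is to keep the seed length $d$ very small while making the output length $m$ close to the full entropy $k$.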

In 2016, Zuckerman and his then PhD student Eshan Chattopadhyay solved a longstanding open problem by introducing an algorithm that combines two independent low-quality random sources into one high-quality random source. Previous constructions required at least one of the two input sources to be of moderately high quality. The new algorithm, called a two-source extractor, also yields a major improvement on an important problem in Ramsey theory.
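In the same standard notation (again a textbook definition, given here for reference), a two-source extractor for min-entropy $k$ is a function
\[
\mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^n \to \{0,1\}^m
\]
such that $\mathrm{Ext}(X, Y)$ is within statistical distance $\varepsilon$ of uniform whenever $X$ and $Y$ are independent sources, each with min-entropy at least $k$. Even a two-source extractor outputting a single bit defines a two-coloring of a complete bipartite graph in which no large rectangle is monochromatic, which is the connection to Ramsey theory noted above.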

Most of Zuckerman's work is joint with others; see Papers and Talks for details.
