Faculty

Isil Dillig Named Texas 10 Award Winner

Photo Credit: Jeff Wilson

05/02/2023 - UT Computer Science Professor Isil Dillig has been named one of the Texas 10 award winners for 2023 by the Alcalde. The Texas 10 award is an annual recognition given to ten outstanding UT Austin faculty members who have made significant contributions to their respective fields. Dillig's selection is a well-deserved recognition of her contributions to the field of computer science and her dedication to teaching and mentoring.

Brain Activity Decoder Can Reveal Stories in People’s Minds

Alex Huth (left), Shailee Jain (center) and Jerry Tang (right) prepare to collect brain activity data in the Biomedical Imaging Center at The University of Texas at Austin. The researchers trained their semantic decoder on dozens of hours of brain activity data from participants, collected in an fMRI scanner. Photo Credit: Nolan Zunk/University of Texas at Austin.

05/01/2023 - The work relies in part on a transformer model, similar to the ones that power ChatGPT. A new artificial intelligence system called a semantic decoder can translate a person's brain activity — while listening to a story or silently imagining telling a story — into a continuous stream of text. The system, developed by researchers at The University of Texas at Austin, might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again.
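
To make the idea concrete, here is a conceptual sketch, not the UT Austin team's actual system, of how a transformer language model can drive such a decoder: the language model proposes likely continuations of the text so far, and a separate encoding model (a hypothetical placeholder here) predicts the brain response each candidate would evoke, so the candidate that best matches the measured fMRI signal can be kept.

    # Conceptual sketch only, NOT the researchers' implementation: a transformer
    # language model proposes candidate continuations, and the one whose *predicted*
    # brain response best matches the measured fMRI signal is kept.
    # `encoding_model` is a hypothetical stand-in for a learned text-to-brain mapping.
    import numpy as np
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def encoding_model(text: str) -> np.ndarray:
        """Hypothetical placeholder: predict an fMRI response vector for a text."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal(100)

    def propose_continuations(prefix: str, n: int = 5) -> list[str]:
        """Use the language model to propose n likely next tokens for the prefix."""
        ids = tokenizer(prefix, return_tensors="pt").input_ids
        with torch.no_grad():
            next_token_logits = lm(ids).logits[0, -1]
        top = torch.topk(next_token_logits, n).indices
        return [prefix + tokenizer.decode(int(t)) for t in top]

    def decode_step(prefix: str, measured_response: np.ndarray) -> str:
        """Keep the candidate whose predicted brain response correlates best with the data."""
        candidates = propose_continuations(prefix)
        scores = [np.corrcoef(encoding_model(c), measured_response)[0, 1] for c in candidates]
        return candidates[int(np.argmax(scores))]

    # One decoding step: extend the running transcript by the best-matching word.
    story = "I was walking down the street"
    story = decode_step(story, measured_response=np.random.standard_normal(100))
    print(story)

Repeating such a decoding step as new scans arrive would yield the continuous stream of text the article describes; here toy random data stands in for real fMRI recordings.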

Greg Durrett Awarded Sloan Fellowship

UT Computer Science Assistant Professor Greg Durrett

02/17/2023 - The Alfred P. Sloan Foundation announced today the early-career researchers across the U.S. and Canada who are recipients of the 2023 Sloan Research Fellowship, including UT Computer Science Assistant Professor Greg Durrett. Based on a "candidate's research accomplishments, creativity, and potential to become a leader in their field," independent panels composed of senior scholars select 126 recipients every year out of more than a thousand who are nominated by fellow scientists.

AAAI Selects UT Professor of Computer Science as a Fellow

UT Computer Science Professor Risto Miikkulainen

02/16/2023 - The Association for the Advancement of Artificial Intelligence (AAAI) has selected Risto Miikkulainen as one of 11 fellows for 2023. Established in 1990, AAAI's Fellows Program recognizes individuals who have contributed greatly to the field of AI. Miikkulainen was honored for "significant contributions to neuroevolution techniques and applications."

Pingali Receives Prestigious Parallel Computing Award

UT Computer Science Professor Keshav Pingali

02/06/2023 - The IEEE Computer Society has selected Keshav Pingali to receive the 2023 IEEE CS Charles Babbage Award for his "contributions to high-performance compilers and graph computing." At The University of Texas at Austin, Pingali holds the W.A. "Tex" Moncrief Chair of Grid and Distributed Computing and is a professor in the Department of Computer Science and a core faculty member in the Oden Institute for Computational Engineering and Sciences.

Scott Aaronson Elected AAAS Fellow

UT Computer Science Professor Scott Aaronson

02/01/2023 - UT Computer Science Professor Scott Aaronson is one of six faculty members at The University of Texas at Austin to be elected as fellows of the American Association for the Advancement of Science (AAAS), the world's largest general scientific society. His research interests center on the capabilities and limits of quantum computers and, more generally, computational complexity theory. He has won numerous awards throughout his career, most recently the 2020 ACM Prize in Computing for groundbreaking contributions to quantum computing.

Exploring Annotator Rationales for Active Learning with Transformers

Filtering data in transformers

12/14/2022 - For decades, natural language processing (NLP) has provided methods for computers to understand language in a way that mimics humans. Because they are built on transformers, complex neural network layers, these large language models have decision-making processes that are usually incomprehensible to humans, and they require large amounts of data to train properly. In the past, researchers have tried to remedy this by having models explain their decisions through rationales: short excerpts of the input that contributed most to the label.
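
As an illustration of what a rationale can look like in practice, the sketch below ranks the input tokens of a transformer classifier by a gradient-times-input saliency score and returns the top few as the model's rationale for its label. The pretrained model name and the top-k cutoff are assumptions made for the example, not details from the research described above.

    # Illustrative sketch: gradient-times-input saliency as a simple stand-in for
    # "rationales" -- the short excerpts of input that contributed most to a label.
    # The model name and top_k value are assumptions for this example.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    MODEL_NAME = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed example model
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()

    def extract_rationale(text: str, top_k: int = 5):
        """Return the predicted label and the top-k tokens that most influenced it."""
        enc = tokenizer(text, return_tensors="pt", truncation=True)
        embeddings = model.get_input_embeddings()(enc["input_ids"])
        embeddings.retain_grad()  # keep gradients for this non-leaf tensor
        logits = model(inputs_embeds=embeddings, attention_mask=enc["attention_mask"]).logits
        pred = int(logits.argmax(dim=-1))
        logits[0, pred].backward()  # backpropagate the predicted class score
        # Saliency: |gradient * embedding| summed over the embedding dimension.
        saliency = (embeddings.grad * embeddings).sum(dim=-1).abs().squeeze(0)
        tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"].squeeze(0))
        top = saliency.topk(min(top_k, len(tokens))).indices.tolist()
        return model.config.id2label[pred], [tokens[i] for i in top]

    label, rationale = extract_rationale("The plot was dull but the acting was superb.")
    print(label, rationale)

Human-annotated rationales serve the same purpose as the automatically highlighted tokens in this sketch: they point to the evidence behind a label.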