
Artificial Intelligence

UT Computer Science Professor Trains AI Through Game Theory

Computer scientists Ryan Farell and Chandrajit Bajaj standing side-by-side in front of the visualization wall in the POB Vis Lab.

08/30/2024 - Computer science professor Chandrajit Bajaj was recently awarded funding by the U.S. Army Futures Command’s University Technology Development Division (UTDD), in support of DEVCOM C5ISR, for game theory research to develop artificial intelligence systems. The project will use Dynamic Belief Games to train AI agents to serve as better planning and decision-support tools for next-generation communications systems.

Central Texas students start school year with new tool that could revolutionize education, experts say

Professor Greg Durrett teaches a course designed for educators that explains the ins and outs of large language models like ChatGPT.

08/22/2024 - AUSTIN (KXAN) — Students heading back to school this semester are entering the classroom with a new tool that experts say could soon be as common as a calculator. “It’s a very useful tool, and students are going to have to know how to use that tool, when they should use that tool, and when they shouldn’t use the tool,” said Greg Durrett, an associate professor in the Department of Computer Science at The University of Texas at Austin.

Texas RoboCup Team on KXAN

NAO Humanoid robots playing soccer in the Intelligent Robotics lab in the UT Computer Science Gates Dell Complex.

08/12/2024 - Are AI robots the future of sports? These UT students think so. AUSTIN (KXAN) — A team of UT students, led by Professor Peter Stone, recently triumphed at the RoboCup competition in the Netherlands, where their AI-powered robots autonomously played soccer. The students believe their research is paving the way for a future where robots can compete against humans in sports, revolutionizing the field of AI robotics.

UT computer science lab announces way to make short-form content more accessible

Amy Pavel standing outside on UT Austin campus in a black button down shirt smiling at the camera.

08/09/2024 - The UT computer science lab, with faculty member Amy Pavel and recent graduate Tess Van Daele at the forefront, has developed an AI system called ShortScribe to enhance accessibility for visually impaired users of short-form videos on platforms like TikTok and Instagram Reels. Pavel, an assistant computer science professor and co-author of the research paper, explained that the system uses AI technologies such as Optical Character Recognition, Automatic Speech Recognition, and GPT-4 to segment videos, transcribe speech, and create detailed audio descriptions.

Transforming Video Accessibility Through Artificial Intelligence

Smartphone positioned on a phone tripod in a neutral-tone, naturally lit room with a large window in the background.

06/27/2024 - Digital media is one of the best ways to engage with new communities, where each click takes you to new, engaging platforms like TikTok, Instagram Reels, and YouTube Shorts. The content becomes even more immersive with webcam visuals and on-screen overlays. Now, imagine this experience if you’re unable to see the video. For people with visual impairments, accessing this content comes with many challenges, and these platforms currently lack effective solutions to bridge the accessibility gap for the blind and low vision (BLV) community.

Artificial Intelligence Trained to Draw Inspiration From Images, Not Copy Them

Three rows of similarly themed illustrations—earnest dogs, scientist pandas and robot graffiti—differ in each of five iterations per row.

05/17/2024 - Researchers are using corrupted data to help generative AI models avoid misusing copyrighted images. Powerful new artificial intelligence models sometimes, quite famously, get things wrong — whether hallucinating false information or memorizing others’ work and offering it up as their own. To address the latter, researchers led by a team at The University of Texas at Austin have developed a framework to train AI models on images corrupted beyond recognition.

Transforming Human-Robot Interaction Through Mood-Inducing Music

Young person lounging back with a boombox under their right foot listening to music.

05/06/2024 - Music has always had the power to stir our emotions, from the exhilaration of a fast-paced rock anthem to the melancholy of a soulful ballad. But could the music we listen to also affect how we make decisions, especially in our interactions with robots? This intriguing question lies at the heart of a study conducted by UT Austin researchers Elad Liebman and Professor Peter Stone.