05/17/2024 - Researchers are using corrupted data to help generative AI models avoid the misuse of images under copyright. Powerful new artificial intelligence models sometimes, quite famously, get things wrong — whether hallucinating false information or memorizing others’ work and offering it up as their own. To address the latter, researchers led by a team at The University of Texas at Austin have developed a framework to train AI models on images corrupted beyond recognition. Read more
05/16/2024 - Guide-dog users and trainers can provide insight into features that make robotic helpers useful in the real world. Read more
03/20/2024 - From large language models to brain-machine interfaces, students work with faculty on cutting-edge research. Even before The University of Texas at Austin declared 2024 the Year of AI, artificial intelligence and machine learning had researchers across campus abuzz with activity. Undergraduates, under the mentorship of professors in computer science and departments across campus, are making contributions to this fast-growing field. Read more
03/14/2024 - Researchers at The University of Texas at Austin have developed innovative tools, including an AI system and glowing biosensors, to engineer microbes for the large-scale production of galantamine, a crucial medication for Alzheimer's and dementia. This breakthrough method promises a reliable, cost-effective supply unaffected by factors like weather or crop yields. Read more
08/11/2023 - The Department of Computer Science and the Good Systems program present a one semester-credit-hour course titled “Essentials of AI for Life and Society.” Experts in artificial intelligence predict that AI-powered technologies will continue to become more and more a part of our everyday lives. They will affect how we work, spend our leisure time, make policy decisions and make sense of the world around us. And yet, most of us don’t really understand how these technologies work or what their potential risks and benefits are. Read more
05/01/2023 - The work relies in part on a transformer model, similar to the ones that power ChatGPT. A new artificial intelligence system called a semantic decoder can translate a person’s brain activity — while listening to a story or silently imagining telling a story — into a continuous stream of text. The system, developed by researchers at The University of Texas at Austin, might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again. Read more
04/21/2023 - Research by UT Computer Science Ph.D. Garrett Bingham, conducted under Professor Risto Miikkulainen in automated machine learning, has taken significant steps toward more efficient neural network systems. Read more
11/17/2021 - Any fan of jazz music can attest to the beauty of musical improvisation. However, many famous improvised piano pieces were never recorded in sheet music. “There's a lot of music that exists in the world that doesn't have musical transcriptions because it was played improvisationally—virtuosos that never decided to write anything down,” explained Varun Rajaram. This is because transcribing the notes of a piece (especially a polyphonic piece, where multiple notes sound at once) is a difficult task even for skilled musicians. Read more
10/11/2021 - Bilingual aphasia is a language impairment in multilingual people, acquired through some sort of injury, usually a stroke. Patterns of language impairment in multilingual stroke patients are very diverse. Sometimes the impairment affects all languages the person speaks equally, while other times it affects one language more than another. The way a stroke affects a multilingual patient depends on many different variables, such as when each language was learned, how frequently each one is used, etc. Read more
07/22/2021 - Floorplans are used in many industries to help people visualize what the inside of a building looks like without actually seeing it. Traditionally, floorplans have been created by observing a 3D environment, either manually or with the aid of 3D sensors. But what happens when the luxury of observing the 3D environment isn’t available—for example, when a robot is introduced to a new environment? Could it quickly create floor maps without seeing the entire environment being mapped in detail? Read more
