Students

Current Students || Ph.D. Alumni || Post-Docs || Master's Alumni || Undergraduate Honors Thesis Alumni

Yuqian Jiang (entered Fall 2017)
B.S. in Computer Science and in Math, The University of Texas at Austin, 2017
Research Interests: robotics, reinforcement learning, AI planning
Website: [ https://yuqianjiang.us ]
[ First-authored publications from UT ]
Yuqian is interested in robotics and AI, with a current focus on using planning and learning techniques to improve systems of autonomous service robots. She is working on the BWI project, and participating in the RoboCup@Home team for UT Austin Villa. Outside of research, she enjoys watching basketball, traveling, and playing the piano.
Jin Soo Park (entered Fall 2017)
B.S. in Engineering in Electronic Engineering, Korea University, 2017
Research Interests: reinforcement learning, robotics, computer vision
Website: [ https://jinsoopark-17.github.io ]
[ First-authored publications from UT ]
Jin Soo is interested in reinforcement learning, continual learning, multiagent systems, and their applications in robotics. His current research is focused on collision avoidance in multi-robot systems. Outside of research, he loves to play strategic board games, listen to music, and watch anime.
Yu-Sian Jiang (entered Fall 2017)
B.S. in Electrical Engineering, National Taiwan University, 2006
M.S. in Communication Engineering, National Taiwan University, 2008
Research Interests: reinforcement learning, robotics, computer vision
[ First-authored publications from UT ]
Yu-Sian is interested in human-centric AI, particularly in techniques regarding how a robot can learn a human's intention and collaborate with the human smoothly. Her current research focuses on intention inference and shared autonomy applied to autonomous driving and intelligent wheelchairs. Outside of research, she enjoys reading, listening to and playing music, and playing with kittens.
Bo Liu (entered Fall 2019)
M.S. in Computer Science, Stanford University, 2019
B.Eng. in Computer Engineering, Johns Hopkins University, 2017
Research Interests: reinforcement learning, robotics
Website: [ https://cranial-xix.github.io ]
[ First-authored publications from UT ]
Bo is interested in reinforcement learning, continual learning, imitation learning and their applications in robotics. In his spare time, he loves playing music (piano and guitar), reading detective fictions, watching anime and playing tennis and badminton.
Yifeng Zhu (entered Fall 2019)
B.Eng. in Automation, Zhejiang University, 2018
Research Interests: robot learning, robot planning, robot control
Website: [ https://cs.utexas.edu/~yifengz ]
Yifeng's research interest lies at the intersection of robot learning, robot control, and robot planning. He is interested in general-purpose robot autonomy, especially its potential social impacts on our daily lives. In his spare time, he likes to play soccer and read books.
Jiaxun Cui (entered Fall 2019)
B.S. in Mechanical Engineering, Shanghai Jiao Tong University, 2019
Research Interests: reinforcement learning, game theory, robotics
Website: [ https://cuijiaxun.github.io/ ]
[ First-authored publications from UT ]
Jiaxun is interested in the intersection of multiagent reinforcement learning and game theory, and their applications in robotics. Outside of research, she enjoys playing tennis and rap music.
Caroline Wang (entered Fall 2020)
B.S. in Computer Science and Mathematics, Duke University, 2020
Research Interests: reinforcement learning, multiagent systems
Website: [ https://carolinewang01.github.io/ ]
Caroline is interested in reinforcement learning, imitation learning, and multiagent systems. In her spare time, she enjoys reading, playing tennis, watching anime, and playing the cello.
Zizhao Wang (entered Fall 2020)
M.S. in Computer Science, Columbia University, 2019
B.S. in Computer Engineering, University of Michigan, Ann Arbor, 2018
Research Interests: reinforcement learning, robot learning, Bayesian inference
Website: [ https://wangzizhao.github.io/ ]
[ First-authored publications from UT ]
Zizhao is interested in reinforcement learning, causal inference, Bayesian methods, and their applications in robotics. His research focuses on introducing causal understanding into robot learning for better generalization and sample efficiency. In his spare time, he loves photography, playing badminton, and building ship models.
Zifan Xu (entered Fall 2021)
M.S. in Physics, The University of Texas at Austin, 2021
B.S. in Physics, University of Science and Technology of China, 2018
Research Interests: reinforcement learning, curriculum learning, robotics
Website: [ https://daffan.github.io ]
[ First-authored publications from UT ]
Zifan's research focuses on reinforcement learning, curriculum learning, lifelong learning, and their applications in robotics, especially autonomous navigation systems. In his spare time, he enjoys watching movies and anime, hiking, and playing tennis. 
Jiaheng Hu (entered Fall 2022)
M.S. in Robotics, Carnegie Mellon University, 2022
B.S. in Computer Science, Columbia University, 2020
Research Interests: reinforcement learning, robotics
Website: [ https://jiahenghu.github.io ]
[ Selected first-authored publications from UT ]
Jiaheng's research focuses on causal, lifelong, and/or unsupervised reinforcement learning for robot systems, with an emphasis on mobile manipulation. His research goal is to enable complex robot systems to autonomously learn and improve over long periods of time, e.g., during deployment. In his spare time, he is an enthusiastic player of bridge and soccer.
Siddhant Agarwal (entered Fall 2022)
B.Tech (Hons.) + M.Tech. (Dual Degree) in Computer Science and Engineering, Indian Institute of Technology Kharagpur, 2022
Research Interests: goal-conditioned reinforcement learning, representation learning
Website: [ https://agarwalsiddhant10.github.io ]
[ First-authored publications from UT ]
Siddhant is interested in goal-conditioned reinforcement learning and representation learning. He is exploring the properties of state visitation distributions of agents in MDPs. He is a member of the UT Austin Villa RoboCup SPL Team, where he works on motion and localization. Outside of research, he enjoys cooking, hiking, and playing tennis.
Michael Munje (entered Fall 2023)
M.S. in Computer Science, Georgia Institute of Technology, 2022
B.S. in Computer Science, California State University Northridge, 2019
Research Interests: reinforcement learning, robotics
Website: [ https://michaelmunje.com/ ]
Michael is interested in reinforcement learning, large multimodal models, and their applications in robotics. His current research focuses on leveraging reasoning capabilities of large vision-language models for robot navigation. Outside of research, Michael enjoys running, playing guitar, and playing video games.
Zhihan Wang (entered Fall 2023)
B.S. in Computer Science and in Math, University of Southern California, 2023
Research Interests: reinforcement learning, multiagent systems, robotics
Zhihan is interested in helping AI coordinate with humans or other AI agents. He is a member of the UT RoboCup SPL team, where he works on using multiagent reinforcement learning for high-level decision making for robots. In his spare time, he loves cooking, running, tennis, photography, and playing the violin.
Haresh Karnan (May 2024)
"Aligning Robot Navigation Behaviors with Human Intentions and Preferences"
Website: [ https://hareshkarnan.github.io ]
[ First-authored publications from UT ]
Job after graduation: Research Scientist, Zoox
Currently: Same
Haresh is interested in reinforcement learning, imitation learning, computer vision, and their applications in robotics. Prior to his thesis research, he focused on leveraging simulators to learn robot skills for the real world. He also participated in RoboCup@Home as a member of the UT Austin Villa team. In his spare time, he loves playing the violin and listening to Carnatic music.
Representative publication from UT:
Ishan Durugkar (May 2023)
"Estimation and Control of Visitation Distributions for Reinforcement Learning"
Website: [ www.cs.utexas.edu/~ishand ]
[ First-authored publications from UT ]
Job after graduation: Research Scientist, Sony AI
Currently: Same
Ishan is interested in Reinforcement Learning and AI in general, with a focus on techniques involving Deep Learning. His dissertation research focused on intrinsic motivation, meaning behavior that is motivated by the agent itself rather than as a result of a reward signal that is given to the agent externally. In his spare time, Ishan enjoys photography, reading, gaming, and cooking.
Representative publication from UT:
Faraz Torabi (August 2021)
"Imitation Learning from Observation"
Website: [ http://users.ices.utexas.edu/~faraz/ ]
[ First-authored publications from UT ]
Job after graduation: Research Scientist, Snap
Currently: Same
Faraz is interested in reinforcement learning and imitation learning, particularly in how to use existing resources to train agents. His dissertation focuses on learning to imitate skills from raw video observation. In his free time, Faraz enjoys playing volleyball and ping-pong, watching movies, and traveling.
Representative publications from UT:
Sanmit Narvekar (May 2021)
"Curriculum Learning in Reinforcement Learning"
Website: [ www.cs.utexas.edu/~sanmit ]
[ First-authored publications from UT ]
Job after graduation: Research Scientist, Waymo
Currently: Same
Sanmit is interested in reinforcement learning, and in machine learning in general. His dissertation focuses on curriculum learning -- the automated design of a sequence of tasks that enable autonomous agents to learn faster or better. Sanmit was also a member of the UT Austin Villa Standard Platform League team, where he worked primarily on the vision system. In his free time, Sanmit enjoys playing soccer, running, and reading.
Representative publications from UT:
Josiah Hanna (August 2019)
"Data Efficient Reinforcement Learning with Off-policy and Simulated Data"
Website: [ http://homepages.inf.ed.ac.uk/jhanna2 ]
[ First-authored publications from UT ]
Job after graduation: Postdoctoral Fellow at University of Edinburgh
Currently: Assistant Professor at University of Wisconsin, Madison
Josiah's dissertation was on increasing the data efficiency of off-policy reinforcement learning and using simulated data for reinforcement learning on robots. He also worked on developing new traffic systems for autonomous vehicles and was a part of both the Standard Platform League and 3D simulation teams for UT Austin Villa, working on robot motion and skill optimization. Before joining UT, Josiah obtained his B.S. in Computer Science and Mathematics from the University of Kentucky. Outside of his research, Josiah enjoys just about any sport, running, hiking, traveling, and reading.
Representative publications from UT:
Patrick MacAlpine (August 2017)
"Multilayered Skill Learning and Movement Coordination for Autonomous Robotic Agents"
[ Thesis defense slides; video of the presentation ]
Winner of UT Austin Computer Science Bert Kay Outstanding Dissertation Award.
Website: [ www.cs.utexas.edu/~patmac ]
[ First-authored publications from UT ]
Job after graduation: Postdoctoral fellow, Microsoft Research
Currently: Research Scientist, Sony AI
Patrick's dissertation was on autonomous multiagent systems and machine learning. His research was motivated by using reinforcement learning to develop locomotion skills and strategy for the UT Austin Villa RoboCup 3D Simulation League team. Before coming to UT, Patrick worked as a software engineer at Green Hills Software and Acelot, Inc. in Santa Barbara, California. Outside of his research, Patrick enjoys playing soccer, traveling, and following college football.
Representative publications from UT:
Katie Genter (August 2017)
"Fly with Me: Algorithms and Methods for Influencing a Flock"
[ Thesis defense presentation and slides ]
Website: [ www.cs.utexas.edu/~katie ]
[ First-authored publications from UT ]
Job after graduation: Writer, Red Ventures
Currently: Same
Katie's dissertation examined the problem of influencing flocks of birds using robot birds that are seen by the flock as one of their own. In particular, Katie considered how these robot birds should behave, where they should be placed within the flock, and how to minimize disruption while joining and leaving the flock. Katie contributed to the UT Austin Villa robot soccer team throughout her time at UT, including a first place finish in the Standard Platform League (SPL) at RoboCup 2012 and a second place finish in the SPL at RoboCup 2016. She served as the SPL organizing committee chair for RoboCup 2013, the SPL technical committee chair for RoboCup 2014-2017, and an SPL executive committee member for RoboCup 2018-2020. Before joining UT, she obtained her B.S. in Computer Science from the Georgia Institute of Technology.
Representative publications from UT:
Piyush Khandelwal (May 2017)
"On-Demand Coordination of Multiple Service Robots"
[ Thesis slides ]
Website: [ piyushk.net ]
[ First-authored publications from UT ]
Job after graduation: Research Scientist, Cogitai, Inc.
Currently: Research Scientist, Sony AI
Piyush's research focuses on advancing mobile robot systems in unstructured indoor environments. As part of his dissertation research, he developed multiple indoor Segway-based robots. Using these robots, he demonstrated how to use centralized probabilistic planning techniques to efficiently aid humans in the environment. During his time as a Ph.D. student, he has also worked on autonomous vehicles and the RoboCup Standard Platform League (SPL). In his spare time, he likes to cook.
Representative publications from UT:
Matthew Hausknecht (December 2016)
"Cooperation and Communication in Multiagent Deep Reinforcement Learning"
[ Thesis slides and videos from the defense. ]
Website: [ www.cs.utexas.edu/~mhauskn ]
[ First-authored publications from UT ]
Job after graduation: Researcher, Microsoft Research
Currently: Same
Matthew's research focuses on the intersection of Deep Neural Networks and Reinforcement Learning with the goal of developing autonomous agents capable of adapting and learning in complex environments. In his spare time he enjoys rock climbing and freediving.
Representative publications from UT:
Daniel Urieli (December 2015)
"Autonomous Trading in Modern Electricity Markets"
[ Thesis defense slides and related agent binaries. ]
Website: [ www.cs.utexas.edu/~urieli ]
[ First-authored publications from UT ]
Job after graduation: Research Scientist, core machine learning team for autonomous driving at General Motors
Currently: Staff Researcher at General Motors
Daniel, an NSF IGERT Graduate Research Fellow, is researching how to design learning agents that solve sustainable energy problems. Such agents need to make robust decisions under uncertainty, while learning, predicting, planning, and adapting to changing environments. Daniel's research included designing a learning agent for controlling a smart thermostat, and designing the champion power-trading agent that won the finals of the 2013 Power Trading Agent Competition. Previously, Daniel was part of the champion RoboCup 3D simulation team, UT Austin Villa. Outside work, Daniel enjoys literature, theatre, hiking, and biking.
Representative publication from UT:
Samuel Barrett (December 2014)
"Making Friends on the Fly: Advances in Ad Hoc Teamwork"
[ Thesis defense presentation. ]
Website: [ www.cs.utexas.edu/~sbarrett ]
[ First-authored publications from UT ]
Job after graduation: Research Scientist, Kiva Systems
Currently: Senior Research Scientist, Sony AI
Sam's dissertation examined the problem of ad hoc teamwork, cooperating with unknown teammates. His work explored how robots and other agents could reuse knowledge learned about previous teammates in order to quickly adapt to new teammates. While at UT, he also helped the UT Austin Villa team win the 2012 international RoboCup competition in the standard platform league (SPL). Before joining UT, he obtained his B.S. in Computer Science from Stevens Institute of Technology.
Representative publication from UT:
Todd Hester (December 2012)
"TEXPLORE: Temporal Difference Reinforcement Learning for Robots and Time-Constrained Domains"
[ Thesis code repository and annotated slides available from AI lab thesis page. ]
Website: [ www.cs.utexas.edu/~todd ]
[ First-authored publications from UT ]
Job after graduation: Research Educator, Freshman Research Initiative, UT Austin
Currently: Applied Scientist Manager, Amazon
Todd's dissertation focused on reinforcement learning and robotics, specifically looking at the exploration versus exploitation problem in reinforcement learning and working to apply it to large domains. Before coming to UT, Todd worked in the Motion Analysis Laboratory at Spaulding Rehabilitation Hospital, Motorola, Sun Microsystems, and the Air Force Research Laboratory. Outside of his research, Todd enjoys ultimate frisbee and foosball and is a dedicated New England Patriots fan.
Representative publication from UT:
Brad Knox (August 2012)
"Learning from Human-Generated Reward"
[ Thesis defense presentation and slides ]
Winner of UT Austin Computer Science Bert Kay Outstanding Dissertation Award.
Website: [ www.cs.utexas.edu/~bradknox ]
[ First-authored publications from UT ]
Job after graduation: Postdoctoral Fellow, MIT
Currently: Founding Data Scientist at Perfect Price
Brad, an NSF Graduate Research Fellow, is researching how to design agents that can be taught interactively by human reward—somewhat like animal training. The TAMER framework is the result of his efforts. After giving a lot of demos of a trainable Tetris agent, he keeps getting called "The Tetris Guy." Brad spent the summer of 2011 collaborating at the MIT Media Lab with Cynthia Breazeal, where he implemented TAMER on the social robot Nexi, and began a postdoc there in late 2012. In his free time, Brad runs in "barefoot" sandals, eats tasty trailer food, and tries out his robot training techniques on his dog.
Representative publication from UT:
Doran Chakraborty (August 2012)
"Sample Efficient Multiagent Learning in the Presence of Markovian Agents"
Website: [ www.cs.utexas.edu/users/ai-lab/?ChakrabortyDoran ]
[ First-authored publications from UT ]
Job after graduation: Research Engineer, Microsoft
Currently: Same
Doran's research is on agent modeling in multiagent systems. His main interest lies in modeling opponents in multiagent settings such as repeated and sequential games. He also has a parallel interest in model-based reinforcement learning and its overlap with multiagent modeling. He was a member of the team that won the first Trading Agent Ad Auction competition, held at IJCAI 2009. Between his years in school, he worked for a couple of years as a software architect for Sybase and Johnson & Johnson. He is one of the developers of the open-source project perf4j, a statistical logging framework. Outside work, he is a soccer geek and loves following all kinds of soccer leagues around Europe.
Representative publications from UT:
Shivaram Kalyanakrishnan (December 2011)
"Learning Methods for Sequential Decision Making with Imperfect Representations"
Website: [ http://www.cse.iitb.ac.in/~shivaram/ ]
[ First-authored publications from UT ]
Job after graduation: Scientist, Yahoo! Labs Bangalore; then INSPIRE Faculty Fellow, Indian Institute of Science, Bangalore
Currently: Associate Professor, Indian Institute of Technology Bombay
Shivaram is fascinated by the question of how intelligence can be programmed, and in particular how systems can learn from experience. His dissertation examines the relationship between representations and learning methods in the context of sequential decision making. With an emphasis on practical applications, Shivaram has extensively used robot soccer as a test domain in his research. His other interests include multi-agent systems, humanoid robotics, and bandit problems. Shivaram's return to his home country, India, upon graduation is motivated by the desire to apply his expertise as a computer scientist to problems there of social relevance.
Representative publications from UT:
Juhyun Lee (December 2011)
"Robust Color-based Vision for Mobile Robots"
Website: [ www.cs.utexas.edu/users/ai-lab/?LeeJuhyun ]
[ First-authored publications from UT ]
Job after graduation: Software Engineer, Google
Currently: Same
Juhyun's dissertation was on robust color-based vision for autonomous mobile robots. Specifically, he employed concepts and techniques from 3D computer graphics and exploited the robot's motion and observation capabilities to achieve that goal. As a member of the UT Austin Villa team, he contributed to the RoboCup@Home league in 2007 and the RoboCup Standard Platform League in 2010, which finished in 2nd and 3rd places, respectively. Before joining UTCS, he obtained his B.S. in Computer Science from Seoul National University and worked at eBay Korea. In his free time, Juhyun enjoys playing the electric guitar, snowboarding, and videogaming.
Representative publication from UT:
David Pardoe (May 2011)
"Adaptive Trading Agent Strategies Using Market Experience"
Website: [ www.cs.utexas.edu/~TacTex/dpardoe ]
[ First-authored publications from UT ]
Job after graduation: Scientist, Yahoo! Labs
Currently: Software Engineer/Scientist, LinkedIn
David's research focuses on applications of machine learning in e-commerce settings. This research was motivated by his participation in the Trading Agent Competition, where he designed winning agents in supply chain management and ad auction scenarios. His dissertation explored methods by which agents in such settings can adapt to the behavior of other agents, with a particular focus on the use of transfer learning to learn quickly from limited interaction with these agents.
Representative publications from UT:
Nicholas K. Jong (December 2010)
"Structured Exploration for Reinforcement Learning"
[ Thesis code repository and annotated slides ]
Website: [ www.cs.utexas.edu/users/ai-lab/?JongNicholas ]
[ First-authored publications from UT ]
Job after graduation: Software Engineer, Apple
Currently: Engineering Manager, Apple
Nick's dissertation examined the interplay between exploration and generalization in reinforcement learning, in particular the effects of structural assumptions and knowledge. To this end, his research integrated ideas in function approximation, hierarchical decomposition, and model-based learning. At Apple, he was responsible for the typing model and search algorithms that underlie keyboard autocorrection on iPhones and iPads. Now at Google, he continues to help bring machine learning to mobile devices.
Representative publication from UT:
Gregory Kuhlmann (August 2010)
"Automated Domain Analysis for General Game Playing"
Website: [ www.cs.utexas.edu/users/ai-lab/?KuhlmannGregory ]
[ First-authored publications from UT ]
Job after graduation: Senior Research Scientist, 21st Century Technologies
Currently: Co-founder at sumatra.ai
Greg's dissertation explored the benefits of domain analysis and transfer learning to the general game playing problem. Earlier in his graduate career, he was also a contributing member of the UT Austin Villa robot soccer team in both the standard platform and simulated coach leagues. At 21st Century Technologies, Greg applied machine learning and intelligent agent techniques to unmanned systems and data mining problems.
Representative publication from UT:
Kurt Dresner (December 2009)
"Autonomous Intersection Management"
Winner of UT Austin Computer Science Bert Kay Outstanding Dissertation Award and
UT Austin Outstanding Dissertation Award.
Website: [ www.cs.utexas.edu/users/ai-lab/?DresnerKurt ]
[ First-authored publications from UT ]
Job after graduation: Software Engineer, Google
Currently: Same
Kurt's dissertation was on Autonomous Intersection Management, the project he helped start in his second year. He is currently employed at Google, where he employs mind-boggling amounts of data to make the World Wide Web a better place. Outside of his academic interests, Kurt enjoys playing the guitar, listening to music, playing board games, and photography.
Representative publication from UT:
Matthew E. Taylor (August 2008)
"Autonomous Inter-Task Transfer in Reinforcement Learning Domains"
Website: [ eecs.wsu.edu/~taylorm/ ]
[ First-authored publications from UT ]
Job after graduation: Postdoctoral Research Associate with Milind Tambe at The University of Southern California
Currently: Associate Professor, Department of Computer Science, University of Alberta; and Fellow in Residence, Alberta Machine Intelligence Institute
Matt's Ph.D. dissertation focused on transfer learning, a novel method for speeding up reinforcement learning through knowledge reuse. His dissertation received an honorable mention in the competition for the IFAAMAS-08 Victor Lesser Distinguished Dissertation Award. After UT, Matt moved to The University of Southern California to work with Milind Tambe as a post-doc, pursuing his interests in multi-agent systems.
Representative publications from UT:
Daniel Stronger (August 2008)
"Autonomous Sensor and Action Model Learning for Mobile Robots"
[ First-authored publications from UT ]
Job after graduation: Credit Derivatives Strategist at Morgan Stanley
Currently: Optimization and Market Impact at Two Sigma
Dan's dissertation presented algorithms enabling an autonomous mobile robot to learn about the effects of its actions and the meanings of its sensations. These action and sensor models are learned without the robot starting with an accurate estimate of either model. These algorithms were implemented and tested on two robotic platforms: a Sony AIBO ERS-7 and an autonomous car. After graduating, Dan joined Morgan Stanley as a trading strategist to model the behavior of illiquid credit derivatives.
Representative publication from UT:
Shimon Whiteson (May 2007)
"Adaptive Representations for Reinforcement Learning"
Website: [ staff.science.uva.nl/~whiteson ]
[ First-authored publications from UT ]
Job after graduation: Assistant Professor in the Informatics Institute at the University of Amsterdam
Currently: Professor in the Department of Computer Science at the University of Oxford
Shimon's research is primarily focused on single- and multi-agent decision-theoretic planning and learning, especially reinforcement learning, though he is also interested in stochastic optimization methods such as neuroevolution. Current research efforts include comparing disparate approaches to reinforcement learning, developing more rigorous frameworks for empirical evaluations, improving the scalability of multiagent planning, and applying learning methods to traffic management, helicopter control, and data filtering in high energy physics.
Representative publications from UT:
Mohan Sridharan (August 2007)
"Robust Structure-Based Autonomous Color Learning on a Mobile Robot"
Website: [ www.cs.bham.ac.uk/~sridharm/ ]
[ First-authored publications from UT ]
Job after graduation: Research Fellow at University of Birmingham (UK) on the EU-funded Cognitive Systems (CoSy) project. August 2007 -- October 2008.
Then: Assistant Professor in the Department of Computer Science at Texas Tech University. 2008 -- 2014.
Currently: Senior Lecturer in the Department of Computer Science at University of Birmingham (UK).
Mohan's Ph.D. dissertation focused on enabling a mobile robot to autonomously plan its actions to learn models of color distributions in its world, and to use the learned models to detect and adapt to illumination changes. His post-doctoral work on using hierarchical POMDPs for visual processing management won a distinguished paper award at ICAPS-08. His current research interests include knowledge representation, machine learning, computer vision and cognitive science as applied to autonomous robots and intelligent agents.
Representative publications from UT:
Chen Tang (2023 -- Current)
Ph.D. in Mechanical Engineering, UC Berkeley, 2022.
Research Interests: trustworthy interactive autonomy, autonomous driving, robot navigation
Website: [ https://chentangmark.github.io/ ]
Chen's research aims to develop trustworthy and safe autonomous agents that interact with humans. In particular, he is interested in improving the transparency and robustness of learning-based autonomous systems, leveraging the strengths of deep learning, reinforcement learning, imitation learning, explainable AI, and control. Chen's current work is leveraging large-scale data and human feedback to develop interactive autonomous systems, including autonomous vehicles and social navigation robots.
Alexander Levine (2023 -- Current)
Ph.D. in Computer Science, University of Maryland, 2023.
Research Interests: reinforcement learning, robustness, planning.
Website: [ https://sites.google.com/umd.edu/alexander-levine ]
Alex's research interests include robust reinforcement learning, goal-conditioned reinforcement learning in high-dimensional environments, and techniques which combine reinforcement learning and search. During his Ph.D., he also worked on adversarial robustness in supervised deep learning, proposing several novel techniques for classification with stability guarantees under adversarial input perturbations.
Arrasy Rahman (2022 -- Current)
Ph.D. in Informatics, The University of Edinburgh, 2023.
Research Interests: ad hoc teamwork, human-AI interaction, multiagent RL, game theory.
Website: [ https://raharrasy.github.io ]
Arrasy's research focuses on designing adaptive agents capable of robustly collaborating with a wide range of previously unseen teammates. This research goal requires agents that can approximate the best-response policy to any team configuration during collaboration. Currently, Arrasy is exploring the possibility of generating diverse teammates requiring different best-response policies to be used for training robust ad hoc teamwork agents.
Rohan Chandra (2022 -- Current)
Ph.D. in Computer Science, University of Maryland, College Park, 2022.
Research Interests: multi-robot systems, autonomous driving, control theory
Website: [ http://rohanchandra30.github.io ]
Rohan's research focuses on developing algorithms and systems for deploying intelligent robots in complex, unstructured, and dynamic environments. He has developed advanced machine learning, game-theoretic, and computer vision techniques for accurately tracking and predicting the movements of pedestrians and vehicles as well as estimating the risk tolerances of drivers, facilitating conflict resolution and multi-agent coordination in traffic scenarios. Rohan's overarching research goal is to instill human-like mobility in robots, enabling them to navigate densely populated areas safely, smoothly, and efficiently, even taking calculated risks when necessary.
Yoonchang Sung (2021 -- Current)
Ph.D. in Electrical and Computer Engineering, Virginia Tech, 2019.
Research Interests: task and motion planning, multi-robot systems.
Website: [ https://yoonchangsung.com/ ]
Yoon's research focuses on building intelligent robots that can plan efficiently, learn from past experience, and reason about their decisions and other agents. In particular, he designs task and motion planning algorithms leveraging data-driven machine learning and metareasoning. He is currently working on developing a general-purpose task and motion planning framework for multi-robot systems.
Shahaf Shperberg (2021-2022)
Ph.D. in Computer Science, Ben-Gurion University of the Negev, 2021.
Research Interests: safety in RL, curriculum learning, metareasoning, search and learning.
Website: [ https://shperb.github.io/ ]
Job after UT: Assistant professor at Ben-Gurion University in Israel
Currently: same
Shahaf's dissertation focused on developing, applying, and analyzing techniques for metareasoning, i.e. methods for deliberation about the reasoning process of agents in order to improve their decisions. His algorithms were applied to a variety of search, planning and scheduling algorithms in challenging settings. Shahaf is currently working on reinforcement learning when operating under safety (and other) constraints, as well as automating curriculum learning.
Yulin Zhang (2021-2022)
Ph.D. in Computer Science, Texas A&M University, 2021.
Research Interests: planning and estimation, automated robot design, privacy-preserving applications.
Website: [ https://www.cs.utexas.edu/~yulin/ ]
Job after UT: Applied Scientist II at Amazon Robotics
Currently: same
Yulin's dissertation focused on automating robot design in the context of planning and estimation. He studied impossibility results for privacy-preserving tracking, abstractions, algorithms to search for plans and sensors jointly, and counterexamples and hardness results for filter minimization problems. Yulin is currently working on automating curriculum learning and multiagent systems in traffic management.
Xuesu Xiao (2019-2022)
Ph.D. in Computer Science, Texas A&M University, 2019.
Research Interests: locomotion, motion planning, risk-awareness.
Website: [ https://www.cs.utexas.edu/~xiao/ ]
Job after UT: Assistant professor at George Mason University
Currently: same
Xuesu's dissertation focused on risk-aware planning for robots locomoting in unstructured or confined environments. He proposed a formal risk reasoning framework which allows human and robot agents to quantify and compare safety of robot motion. He also developed a low-level motion suite, including perception, planning, and actuation, for robots operating under navigational constraints, such as a tether. Xuesu's current work is leveraging learning to enable safe, robust, and trustworthy robot motion.
Reuth Mirsky (2019-2022)
Ph.D. in Software and Information Systems Engineering, Ben-Gurion University, 2019.
Research Interests: plan recognition, diagnosis, human-robot interaction.
Website: [ https://sites.google.com/site/dekelreuth ]
Job after UT: Assistant professor at Bar-Ilan University in Israel
Currently: same
Reuth's dissertation focused on plan recognition challenges, such as compact problem representation, efficient domain design, and hypothesis disambiguation. Her algorithms have been applied in tasks for education, clinical treatment, and finance. Her long-term goal is to make human-aware machines that can perform tasks in collaboration with human team members.
Harel Yedidsion (2017-2021)
Ph.D. in Industrial Engineering and Management, Ben-Gurion University, 2015.
Research Interests: distributed coordination of multi-robot systems, human-robot interaction.
Website: [ https://sites.google.com/site/harelyedidsion/ ]
Job after UT: Research and Development Scientist at Applied Materials
Currently: same
Harel's thesis focused on developing a framework and distributed algorithms to represent and solve distributed mobile multiagent problems. At UT, Harel was part of the BWI project, and his work lies at the intersection of robot perception, grasping and navigation, human-robot interaction, natural language processing, and reinforcement learning. In his free time, Harel enjoys basketball, soccer, swimming, and outdoor activities.
Shani Alkoby (2017-2019)
Ph.D. in Computer Science, Bar-Ilan University, 2017.
Research Interests: artificial intelligence, information disclosure, value of information, information brokers in multiagent systems, game theory, auction theory, multiagent economic search/exploration, human-computer interaction, mechanism design.
Website: [ https://www.cs.utexas.edu/~shani/ ]
Job after UT: Assistant Professor of Industrial Engineering at Ariel University
Currently: same
Shani's thesis research focused on the role of providing information to agents in multi-agent systems, concentrating on three highly applicable settings: auctions, economic search, and interaction with people. Her research is based on both theoretical analysis and online empirical experiments: the theoretical analysis was carried out using concepts from game theory, auction theory, and search theory, while the online experiments were conducted on Amazon Mechanical Turk, a well-known crowdsourcing platform. Shani is currently working on projects involving robots, ad hoc teamwork, and real-time traffic management of autonomous vehicles.
Guni Sharon (2015-2018)
Ph.D. in Information Systems Engineering, Ben-Gurion University, 2015.
Research Interests: artificial intelligence, heuristic search, real-time search, multiagent pathfinding.
Website: [ http://faculty.cse.tamu.edu/guni/ ]
Job after UT: Assistant Professor of Computer Science at Texas A&M University
Currently: same
Guni's thesis research focused on two complicated variants of the path-finding problem on an explicitly given graph: (1) multiagent path-finding, where non-overlapping paths must be assigned to multiple agents, each with unique start and goal vertices; and (2) real-time agent-centered search, where a single agent must physically find its way from start to goal while using only a bounded amount of time and memory before each physical step. These days Guni is working towards integrating his past research into real-time traffic management of autonomous vehicles.
Justin Hart (2016-2017)
Ph.D. in Computer Science, Yale University, 2014.
Research Interests: robotics, human-robot interaction.
Website: [ http://justinhart.net ]
Justin's dissertation focused on robots learning about their bodies and senses through experience, culminating in a robot inferring the visual perspective of a mirror. His current work focuses on autonomous human-robot interaction, in particular the use of natural language and language grounding. His long-term goal is to enable robots to pass the mirror test, and to use self-reflexive reasoning and perspective-taking in human-robot dialog.
Stefano Albrecht (2016-2017)
Ph.D. in Artificial Intelligence, The University of Edinburgh, 2015.
Research Interests: multiagent interaction, ad hoc coordination, game theory.
Website: [ http://www.cs.utexas.edu/~svalb ]
Job after UT: Lecturer (Assistant Professor) in Artificial Intelligence, University of Edinburgh; and Royal Society Industry Fellow at FiveAI
Currently: same
Stefano's research focuses on ad hoc coordination in multiagent systems. Therein, the goal is to develop autonomous agents that can achieve flexible and efficient interaction with other agents whose behaviours are a priori unknown. This involves a number of challenging problems, such as efficient learning and adaptation in the presence of uncertainty as well as robustness with respect to violations of prior beliefs. The long-term goal of this research is to enable innovative applications such as adaptive user interfaces, automated trading agents, and robotic elderly care.
Jivko Sinapov (2014-2017)
Ph.D. in CS and Human-Computer Interaction, Iowa State University, 2013.
Research Interests: developmental robotics, computational perception, manipulation
Website: [ http://www.cs.utexas.edu/~jsinapov/ ]
Job after UT: Assistant Professor of CS, Tufts University
Currently: Associate Professor of CS, Tufts University
Jivko's research focuses on aspects of developmental robotics dealing with multi-modal object perception and behavioral object exploration. His long term goal is to enable robots to autonomously extract knowledge about objects in their environment through active interaction with them. He is currently working on projects involving Transfer Learning in a Reinforcement Learning setting and Building-Wide Intelligence (BWI) using autonomous robots.
Michael Albert (2015-2016)
Ph.D. in Financial Economics, Duke University, 2013.
Research Interests: algorithmic mechanism design, game theory, multiagent systems
Website: [ www.michaelalbert.co ]
Job after UT: Postdoctoral Fellow in Economics and Computer Science at Duke University
Currently: Assistant Professor of Business Administration, University of Virginia
Michael's research focuses on robust autonomous mechanism design. His work looks at the allocation of scarce resources to self-interested agents in an optimal fashion. This requires making assumptions about the value of those resources to the agents. His research explores the sensitivity of the mechanism design process to those assumptions and how to design algorithms that generate mechanisms that are robust to mis-specification of the assumptions. His long term goal is to integrate this line of research with machine learning techniques to estimate and incorporate the valuations of the agents in a repeated mechanism design setting.
Shiqi Zhang (2014-2016)
Ph.D. in Computer Science, Texas Tech University, 2013
Research Interests: knowledge representation and reasoning for robots
Website: [ http://www.cs.binghamton.edu/~szhang/ ]
Job after UT: Assistant professor of EECS at Cleveland State University
Currently: Assistant professor of Computer Science at SUNY Binghamton.
Shiqi's research is on knowledge representation and reasoning in robotics. His research goal is to enable robots to represent, reason with and learn from qualitative and quantitative descriptions of uncertainty and knowledge. He is currently working on projects including hierarchical planning using ASP and Building-Wide Intelligence (BWI).
Matteo Leonetti (2013-2015)
Ph.D. in Computer Engineering, Sapienza University of Rome, Italy, 2011
Research Interests: reinforcement learning, knowledge representation, robotics.
Website: [ www.cs.utexas.edu/~matteo ]
Job after UT: Lecturer (Assistant professor) at The University of Leeds in the UK
Currently: same
Matteo's research concerns the application of AI to robotics, in particular decision making. His work focuses on using reinforcement learning to mitigate the uncertainty in autonomous robots' knowledge representations, and the brittle behavior that results from it. More generally, he is interested in how the rationality of automated reasoning can come to terms with the wild attitude towards exploration of most machine learning.
Noa Agmon (2010-2012)
Ph.D. in Computer Science, Bar-Ilan University, Israel, 2009
Research Interests: multi-robot systems
Website: [ http://u.cs.biu.ac.il/~agmon/ ]
Job after UT: Assistant professor at Bar-Ilan University in Israel
Currently: Associate Professor at Bar-Ilan University in Israel
Noa's research is focused on various aspects of multi-robot systems, including multi-robot patrolling, robot navigation and multi-agent planning in adversarial environments. Her research uses tools from theoretical computer science for analyzing problems in practical multi-robot systems.
Tsz-Chiu Au (2008-2012)
Ph.D. in Computer Science, University of Maryland, College Park, 2008
Research Interests: multiagent systems and AI planning
Website: [ www.cs.utexas.edu/~chiu ]
Job after UT: Assistant professor at Ulsan National Institute of Science and Technology in Korea
Currently: Associate professor at Ulsan National Institute of Science and Technology in Korea
Chiu worked mainly on the Autonomous Intersection Management project while at UT Austin as a post-doc. Before entering UT Austin, he worked on several research projects in Artificial Intelligence, including automated planning and error detection in multiagent systems. In his spare time, he enjoys hiking and watching movies.
Michael Quinlan (2007-2011)
Ph.D. in Computer Science, University of Newcastle, Australia, 2006
Research Interests: legged robotics, autonomous vehicles
Website: [ www.cs.utexas.edu/~ai-lab/people-view.php?PID=271 ]
Job after UT: Engineer at Clover in Mountain View, California
Currently: Software Engineer at X (now within Everyday Robots) in Mountain View, California
Michael researches various aspects of robotic systems, including motion, vision and localization. He currently teaches a class on Autonomous Vehicles and competes as part of the Austin Villa team at RoboCup using the Nao humanoid robots. In his spare time he enjoys basketball, cycling, running and soccer.
Tobias Jung (2008-2010)
Ph.D. in Computer Science, University of Mainz, 2007
Research Interests: reinforcement learning, machine learning
Website: [ www.cs.utexas.edu/~tjung ]
Job after UT: Postdoc at University of Liege in Belgium
Currently: Quantitative Modeler, Uniper Global Commodities
Tobias is interested in optimization and optimal decision-making: in particular how to act optimally when individual decisions have uncertain outcomes and far-reaching consequences. Knowing that his own abilities in this area are rather limited, he focuses his research on how these problems can be solved automatically and computationally. His thesis (nominated for the GI National Dissertation Prize) describes a novel and highly sample efficient online algorithm for reinforcement learning that is specifically aimed at learning in high-dimensional continuous state spaces. Some of his other work includes multivariate time series prediction, sensor evolution and curiosity-driven learning.
Patrick Beeson (2008-2009)
Ph.D. in Computer Science from UT Austin, 2008
Research Interests: developmental and cognitive robotics, autonomous vehicles, human robot interaction
Website: [ daneel.traclabs.com/~pbeeson/ ]
Job after UT: Senior Scientist at TRACLabs Inc.
Currently: same
Patrick is interested in the intersection of AI and robotics. This includes human-robot interaction, cognitive models for robotic navigation, and developmental robotics. He is currently working on a sensor-to-symbol cognitive architecture, which will enable modular robotic platforms to be used in a variety of domains with minimal software changes.
Ian Fasel (2007-2008)
Ph.D. in Cognitive Science, University of California, San Diego, 2006
Research Interests: developmental robotics, human-robot interaction
Website: [ www.cs.utexas.edu/users/ai-lab/?FaselIan ]
Job after UT: Assistant Research Professor, CS, University of Arizona
Currently: Head of development, Machine Perception Technologies
Ian's research is on developmental robotics and human-robot interaction. The first topic seeks to answer: how can a (human or robot) baby discover basic low-level perceptual and motor concepts through prolonged autonomous interactions with the world? The second topic broadens this to include social concepts, such as emotions, gaze-following, and turn-taking, and incorporates both care-giving and explicit teaching into the developmental process as well. This work mostly involves application and development of machine learning methods to robots which must interact with objects and people in the world in real-time.
Bikramjit Banerjee (2006)
Ph.D. in Computer Science, Tulane University, 2006
Research Interests: multiagent systems and machine learning
Website: [ www.cs.usm.edu/~banerjee/ ]
Job after UT: Assistant Professor, DigiPen Institute of Technology
Currently: Professor in The University of Southern Mississippi
As a post-doc, Bikram worked on transfer learning methodology for challenging knowledge transfer tasks, such as general game playing. Currently, he is working on multiple projects funded by NASA and DHS that exploit multi-agent systems technology to solve problems relating to rocket engine tests and large-scale crowd simulation.
Yaxin Liu (2004-2007)
Ph.D. in Computer Science, Georgia Tech, 2005
Research Interests: planning and transfer learning
Job after UT: Fair Isaac Corporation
Currently: Google
Yaxin's Ph.D. research at Georgia Tech was on risk-sensitive planning. As a post-doc at UT Austin, Yaxin was key personnel on the Transfer Learning project. His research focused on transferring value functions among related sequential decision-making problems in order to speed up learning.
Abayomi Adekanmbi (Spring 2024)
"Deep Reinforcement Learning in RoboCup Keepaway"
Website: [ https://aabayomi.com/ ]
Job after graduation: Argonne National Laboratory
Currently: Same

Abayomi is interested in robotics, game theory, and multi-agent reinforcement learning, particularly in exploring how players form coalitions and cooperate to achieve better outcomes in strategic settings. These interests inspired his thesis on learning cooperative behavior in Robot Soccer. He was also part of the UT Austin Villa RoboCup SPL team.
Sai Kiran Narayanaswami (Summer 2023)
"Decision-Making Problems in Computationally Constrained Robot Perception"
Job after graduation: Researcher at the Centre for Responsible AI (CeRAI), IIT Madras
Currently: Same
Sai Kiran is interested in a variety of topics in reinforcement learning, including model-based RL, sim-to-real transfer, and multiagent systems, as well as how program synthesis can play a role in developing neurosymbolic RL agents. He was part of the UT Austin Villa RoboCup SPL team, which motivated his Master's thesis on efficient robot vision.
William Macke (Spring 2023)
"Optimizing and planning with queries for Communication in Ad Hoc Teamwork"
Website: [ https://williammacke.github.io ]
Job after graduation: Intermediate Artificial Intelligence Engineer at Mitre
Currently: Same

William's Master's thesis focuses on communication in ad hoc teamwork. In particular, it examines a specific form of communication, queries, and introduces the Sequential Oneshot MultiAgent Limited Inquiry Communication in Ad Hoc Teamwork (SOMALI CAT) scenario. It provides a theoretical analysis of when to ask queries in SOMALI CAT problems, followed by an empirical evaluation showing that asking at these times gives the best performance in practice.
Qiping Zhang (August 2021)
"Interactive Learning from Implicit Human Feedback: The EMPATHIC Framework"
Website: [ https://cpsc.yale.edu/people/qiping-zhang ]
Job after graduation: Ph.D. student in Computer Science at Yale University
Currently: Same

Qiping's Master's thesis defined and studied the general problem of learning from implicit human feedback, and introduced a data-driven framework named EMPATHIC as a solution, which first maps implicit human feedback to corresponding task statistics, and then learns a task with the constructed mapping. His work demonstrated the ability of the EMPATHIC framework to (1) infer reward ranking of events from offline human reaction data in the training task; (2) improve the online agent policy with live human reactions as they observe the training task; and (3) generalize to a novel domain in which robot manipulation trajectories are evaluated by the learned reaction mappings. He is currently a Ph.D. student at Yale, continuing his research on interactive machine learning and robotics.
Brahma Pavse (May 2020)
"Reducing Sampling Error in Batch Temporal Difference Learning"
Website: [ https://brahmasp.github.io ]
Job after graduation: Software Engineer at Salesforce
Currently: Ph.D. student at the University of Wisconsin-Madison

Brahma's thesis showed that in the offline RL setting, a fundamental value function (VF) learning algorithm, TD(0), computes the VF for the wrong policy: batch TD(0) computes the VF for the maximum-likelihood policy according to the batch of data instead of for the desired evaluation policy. His work proposed a new estimator, policy sampling error corrected TD(0) (PSEC-TD(0)), which uses a well-established importance sampling approach, regression importance sampling, to compute the VF for the desired evaluation policy. His work includes a proof specifying the new fixed point that PSEC-TD(0) converges to and an empirical analysis in both discrete and continuous settings.
Prabhat Nagarajan (August 2018)
"Nondeterminism as a Reproducibility Challenge for Deep Reinforcement Learning"
Website: [ http://prabhatnagarajan.com ]
Job after graduation: Engineer at Preferred Networks
Currently: Ph.D. student at The University of Alberta, RLAI lab

Prabhat's Master's thesis studied the impact of nondeterminism on reproducibility in deep reinforcement learning. His work demonstrated that individual sources of nondeterminism in algorithm implementations can substantially impact the reproducibility of an agent's performance. Furthermore, standard practices in deep reinforcement learning may be inadequate at detecting differences in performance between algorithms. His thesis argues for deterministic implementations as a solution to both of these issues, showing how they eliminate nondeterminism and how statistical tests can be formulated to take advantage of determinism. Prabhat has previously interned at Yahoo!, Microsoft, and Facebook, working on ads targeting, team services, and messaging.
Rolando Fernandez Jr. (May 2018)
"Light-Based Nonverbal Signaling with Passive Demonstrations for Mobile Service Robots"
Job after graduation: R&D Graduate Student Intern at Sandia National Laboratories
Currently: Computer Scientist at Army Research Laboratory

During his time as a Master's student at UT, Rolando worked on the Building-Wide Intelligence (BWI) Project. His Master's thesis was on light-based nonverbal robot signaling methods with passive demonstrations: the robots leverage passive demonstrations so that users can understand the meaning behind nonverbal signals without being explicitly taught. Rolando previously interned at the NASA Jet Propulsion Laboratory, working on intelligent spectral artifact recognition to inform weighted averaging for pulling faint signals out of galactic hyperspectral imagery.
Priyanka Khante (May 2017)
"Learning Attributes of Real-world Objects by Clustering Multimodal Sensory Data"
Job after graduation: Ph.D. student in ECE at UT Austin
Currently: Same

Priyanka's Master's thesis focused on a framework for learning attributes of real-world objects via a clustering-based approach that reduces the amount of human labeling effort required for object categorization. She proposed a hierarchical clustering-based model that learns the attributes of objects without any prior knowledge about them: it clusters multi-modal sensory data obtained by exploring real-world objects in an unsupervised fashion, obtains labels for these clusters with the help of a human, and uses this information to predict attributes of novel objects. She is currently a Ph.D. student at UT Austin, continuing her research in machine learning and robotics.
Yuchen He (December 2013)
"Localization using Natural Landmarks Off-Field for Robot Soccer"
Job after graduation: Ph.D. student at UIUC, Language Acquisition and Robotics group
Currently: Same

During her time as a Master's student at UT, Yuchen worked on the SPL robot soccer team. Her Master's thesis was on localization using natural landmarks from the off-field surroundings, rather than the artificial landmarks pre-defined by domain knowledge: robots determine their orientation by actively extracting and identifying visual features from raw images. She interned on the Apple Siri team to improve natural language understanding, and is now doing robotics research on the iCub at the University of Illinois at Urbana-Champaign.
Alon Farchy (May 2012)
"Learning in Simulation for Real Robots"
Job after graduation: Microsoft - Windows Phone Kernel Team
Currently: Same

For his Master's thesis, Alon studied several approaches that use learning in simulation to improve the walk speeds of real Aldebaran Nao robots. Though difficult, by constraining the simulation and iteratively guiding its learning routine, the task proved feasible, and an improvement of about 30% in walking speed was achieved. Alon previously completed two internships at Nvidia, working to bring up the Android operating system on the company's Tegra processors.
Neda Shahidi (August 2010)
"A Delayed Response Policy for Autonomous Intersection Management"
Website: [ www.cs.utexas.edu/~neda ]
Job after graduation: Neuroscience Research Assistant, The University of Texas at Austin
Currently: Same

Neda did her Master's thesis on a delayed response policy for Autonomous Intersection Management (AIM), as well as a physical visualization of AIM using Eco-be robots. Before that, she worked on obstacle avoidance for 4-legged robots. Since 2010, she has been conducting research in neuroscience at the Center for Perceptual Systems at UT Austin.
Guru Hariharan (May 2004)
"News Mining Agent for Automated Stock Trading"
Job after graduation: Amazon.com
Currently: CEO, CommerceIQ
For his Master's thesis, Guru worked with Prof. Peter Stone and Prof. Maytal Tsechansky to study the correlation between stock market movement and internet news stories using text mining techniques. After graduation, Guru joined Amazon.com. He founded Boomerang Commerce, a dynamic pricing software company, and sold it to Lowe's. He then founded CommerceIQ, an AI platform that helps consumer brands win in retail e-commerce; CommerceIQ has raised more than $200M, with the latest round led by SoftBank at a unicorn valuation. Reach him at gurushyam@gmail.com.
Harish Subramanian (August 2004)
"Evolutionary Algorithms in Optimization of Technical Rules for Automated Stock Trading"
Job after UT: Murex, North America Inc.
Currently: MBA Candidate at Kellogg School of Management
Harish worked with Prof. Peter Stone and Prof. Ben Kuipers to study automated stock trading strategies using intelligent combinations of simple, intuitive "technical" trading rules. Since graduating from UT, Harish has worked in the financial software industry, most recently at Murex, which develops derivatives trading platforms and risk management software. He is currently pursuing an MBA at the Kellogg School of Management, Northwestern University, where his areas of focus are Entrepreneurship & Innovation and Finance. His current interests are in product commercialization and new venture financing. He can be reached at harish.subramanian@gmail.com.
William Yue (December 2024)
"Towards General Purpose Robots at Scale: Lifelong Learning and Learning to Use Memory"
Job after graduation: TBA
Currently: Same

Website: [ https://williamyue37.github.io ]
William's undergraduate thesis focused on developing memory mechanisms and lifelong learning capabilities needed for deploying general-purpose robots at scale in everyday environments.
Stephane Hatgis-Kessell (December 2023)
"Models of Human Preference for Learning Reward Functions; A New Approach, Perspective, and Direction for Research"
Job after graduation: Ph.D. student in Computer Science at Stanford University
Currently: Same

Website: [ https://stephanehk.github.io ]
Stephane's undergraduate thesis introduced a new model of human preferences used for learning more aligned reward functions from human preferences. His thesis additionally reframed influential prior work in light of this finding, and posed a novel direction for improving existing methods for learning from human preferences.
Brahma S. Pavse (May 2019)
"RIDM: Reinforced Inverse Dynamics Modeling for Learning from a Single Observed Demonstration"
Job after graduation: Masters student at UT Austin
Currently: Ph.D. student at the University of Wisconsin-Madison

Website: [ https://brahmasp.github.io ]
Brahma's undergraduate thesis introduced a method that combines reinforcement learning and imitation from observation to learn an inverse dynamics model to imitate (and improve upon) an expert's behavior given a single expert demonstration, with no access to the expert's actions, and with no task-specific domain knowledge in the state space. Additionally, his work showed that we can use a PID controller as an inverse dynamics model. The method outperformed other techniques on various robot control tasks.
Harsh Goyal (May 2019)
"Holistic Action Transform"
Job after graduation: Software Engineer at Google
Currently: same
Harsh's undergraduate thesis shows that, in a sim2real setting, customizing the policy optimization method to suit the simulation optimization method can improve the performance of the learned policy in the real environment. The thesis proposes a simulation optimization method for the setting where CMA-ES is used for policy optimization.
Sean Geiger (May 2019)
"Sample-efficient Imitation from Observation on a Robotic Arm"
Job after graduation: Software engineer at Apple
Currently: same
Sean's undergraduate thesis explored a technique for achieving sample-efficiency in generative adversarial imitation from observation.
John Fang (May 2019)
"Black-Box Optimization of Parameterized Link-Dependent Road Tolling"
Job after graduation: Master's student at Carnegie Mellon University
Currently: same
John's undergraduate thesis explored the application of several different black-box optimization techniques in the micro-tolling setting. This work showed that particular optimization techniques can greatly improve upon previous work in terms of both the system performance and the data efficiency of the optimization. 
Avilash Rath (May 2019)
"Learning Social Behavior from Human Feedback in Ad Hoc Teamwork"
Avilash's undergraduate thesis explores a method to use positive and negative feedback from a human to accelerate agent learning of effective and social behavior. This work is one of the first to put forth a fast and reliable method for ad hoc agent teams to learn both effective and social behavior.
Virin Tamprateep (May 2017)
"Of Mice and Mazes: Simulating Mice Behavior with Reinforcement Learning"
Job after graduation: Software Engineer at Microsoft
Currently: same
Virin's undergraduate honors thesis explored the extent to which instantiations of standard model-free reinforcement learning algorithms can approximate the behavior of mice learning in a maze environment.
Yuqian Jiang (December 2016)
"Efficient Symbolic Task Planning for Multiple Mobile Robots"
Job after graduation: Ph.D. student at UT Austin
Currently: same
Yuqian's undergraduate honors thesis first compares the performance of two task planners of different formalisms in a robot navigation domain, and then presents an algorithm to efficiently plan for a team of mobile robots while minimizing total expected costs.
Pato Lankenau (August 2016)
"Virtour: Telepresence System for Remotely Operated Building Tours"
Job after graduation: Software engineer, Apple
Currently: Software engineer, Apple
Pato's undergraduate honors thesis introduced a virtual telepresence system that enables public users to remotely operate and spectate building tours using the BWI robot platform. Pato interned at Google and later interned at Apple before joining them as part of a distributed systems team. In his free time Pato enjoys dancing West Coast Swing.
Mike Depinet (May 2014)
"Keyframe Sampling, Optimization, and Behavior Integration: A New Longest Kick in the RoboCup 3D Simulation League"
Job after graduation: Software engineer, Google
Currently: same
Mike's undergraduate honors thesis demonstrates a procedure for mimicking, improving, and incorporating existing behaviors of an observed robot. The work detailed in the thesis was instrumental in UT Austin Villa's victory at RoboCup 2014, and a paper based on the thesis was published. Mike is now a software engineer at Google, working on the Newsstand app in the Play Store.
Dustin Carlino (December 2013)
"Approximately Orchestrated Routing and Transportation Analyzer: City-scale traffic simulation and control schemes"
Job after graduation: Software engineer, Google
Currently: Speculative cartographer at A/B Street
Dustin's undergraduate honors thesis introduces a new agent-based traffic simulator for studying traffic control schemes for autonomous vehicles. It led to two first-authored publications. He continues development on the simulator at http://www.aorta-traffic.org. Dustin has interned at Facebook and Google, working on distributed systems.
Chris Donahue (December 2013)
"Applications of Genetic Programming to Digital Audio Synthesis"
Website: [ https://chrisdonahue.com ]
Job after graduation: PhD student at UCSD in computer music
Currently: Assistant Professor, Computer Science Department, Carnegie Mellon University
Chris's undergraduate honors thesis employed genetic programming and CMA-ES to optimize sound synthesis algorithms to mimic acoustic and electronic instruments. Additionally, he authored a VST audio plugin for interactive genetic programming of synthesized timbres. Chris's subsequent research focuses on using machine learning to build music technology that enables a broader set of users to engage with music on a deeper level.
Adrian Lopez-Mobilia (May 2012)
"Inverse Kinematics Kicking in the Humanoid RoboCup Simulation League"
Job after graduation: Programmer, White Whale Games
Currently: same
Adrian's undergraduate honors thesis focused on the kicking system used in UT Austin Villa's winning entry to the 2011 and 2012 RoboCup 3D Simulation League competitions. He is currently the programmer for a small video game startup company, White Whale Games, working on a mobile game called God of Blades. Adrian completed two REU's at Trinity University and has interned at Starmount Systems and Microsoft.
Nick Collins (May 2012)
"Transformation of Robot Model to Facilitate Optimization of Locomotion"
Job after graduation: Software Engineer, Facebook
Currently: Working on open source projects
Nick's undergraduate honors thesis studies how to enable a simulated humanoid robot to stand up from a fallen position, and how to generalize such a skill to robots with different physical characteristics. Nick has interned at HP, Qualcomm, and Facebook.
Chau Nguyen (December 2009)
"Constructing Drivability Maps Using Laser Range Finders for Autonomous Vehicles"
Job after graduation: Masters student at UT Austin
Currently: Ph.D. student at Cornell
Chau's undergraduate honors thesis aims at using data from a 3D laser range sensor on an autonomous vehicle to improve the vehicle's capability to recognize drivable road segments. Chau interned at IBM, Cisco, and Facebook.
Adam Setapen (May 2009)
"Exploiting Human Motor Skills for Training Bipedal Robots"
Website: [ www.adamsetapen.com ]
Job after graduation: Masters student at UT Austin
Currently: Ph.D. student at MIT Media Lab
Adam is a Ph.D. student researching human-robotic interaction, specifically investigating ways to exploit the ability of a human to quickly train robots. Adam has interned at TRACLabs, Amazon.com, and the University of Virginia Medical Center. In his free time, he enjoys biking, racquetball, and playing cello and guitar.
Tarun Nimmagadda (May 2008)
"Building an Autonomous Ground Traffic System"
Website: [ www.mutualmobile.com ]
Job after graduation: Co-founder of SparkPhone in Austin, TX
Currently: Chief Operating Officer of Mutual Mobile in Austin, TX
Tarun's undergraduate honors thesis detailed his contributions to obstacle tracking in UT's entry to the DARPA Urban Grand Challenge and the Autonomous Intersection Management project. He is now working on launching SparkPhone, an application that provides users with international calls 80-90% cheaper than dialing directly through their carrier. Additionally, unlike VoIP services, SparkPhone does not require a WiFi or data connection when making phone calls, because all SparkPhone calls go through the cell network. Over the past year Tarun has built several iPhone apps - one of which (HangTime) was just named by PC World as the dumbest iPhone app of all time.
Ryan Madigan (May 2007)
"Creating a High-Level Control Module for an Autonomous Mobile Robot Operating in an Urban Environment"
Job after graduation: Software/Systems Engineer at USAA in San Antonio, TX
Currently: US Air Force officer
Ryan's undergraduate honors thesis detailed his contribution to UT's entry in the 2007 DARPA Urban Challenge competition. His work centered on the high-level control and decision-making aspects of an autonomous vehicle as it navigates through city streets. After graduating, Ryan took a position as a software engineer at USAA, where he designed and implemented significant enhancements to the authentication and content capture components of usaa.com. Even so, his passion for robotics continues today. Ryan is currently pursuing an MBA in Finance from UTSA.
Jan Ulrich (May 2006)
"An Analysis of the 2005 TAC SCM Finals"
MSc in Computer Science, University of British Columbia, 2008
Website: [ optemo.com/team ]
Job after graduation: Graduate student at University of British Columbia
Currently: Co-founder of start-up company "Optemo"
Jan's thesis analyzed the 2005 TAC SCM Finals. This was part of the preparatory work that led the TacTex team to win the 2006 TAC SCM Championship. At UBC Jan became interested in natural language processing and wrote a dissertation titled "Supervised Machine Learning for Email Thread Summarization". Subsequently Jan co-founded a start-up company called Optemo that uses artificial intelligence to assist online shoppers by providing an example-centric product navigation solution.
Irvin Hwang (May 2005)
"Discovering Conditions for Intermediate Reinforcement with Causal Models"
Irvin's undergraduate honors thesis dealt with accelerating reinforcement learning by automatically applying intermediate reward using causal models. His thesis received the Sun Microsystems Award for Excellence in Computer Sciences/Computer Engineering Research at the University of Texas undergraduate research forum. Irvin is currently studying reinforcement learning as a Ph.D. student in the computational cognitive neuroscience group at Princeton.
Ellie Lin (December 2003)
"Creation of a Fine Controlled Action for a Robot"
Job after UT: Graduate student at Carnegie Mellon University
Currently: Teacher in the Pittsburgh Public Schools (CAPA)
Ellie's undergraduate honors thesis focused on enabling robots to perform action sequences that have low tolerance for error. After graduating, Ellie completed her M.S. in robotics and M.A.T. in secondary mathematics. She currently teaches math at the Pittsburgh performing arts school (CAPA) and hopes to introduce the joy of robotics to her students.