Course Project
This course has concluded. You can view the list of student projects from our past offerings here.
Example project topics include:
- Learning vision-based robot manipulation with deep reinforcement learning;
- Self-supervised representation learning of visual and tactile data;
- Model-based object pose estimation for 6-DoF grasping from RGB-D images.
- Improve an existing approach. You can select a paper you are interested in, reimplement it, and improve it with what you learned in the course.
- Apply an algorithm to a new problem. You will need to understand the strengths and weaknesses of an existing algorithm from research work, reimplement it, and apply it to a new problem.
- Stress test existing approaches. This kind of project involves a thorough comparison of several existing approaches to a robot learning problem.
- Design your own approach. In these kinds of projects, you come up with an entirely new approach to a specific problem. Even the problem may be something that has not been considered before.
- Mix-and-match approaches. For these projects, you typically combine approaches that were developed separately in order to address a larger, more complex problem.
- Join a research project. You can join an existing Robot Learning project with UT faculty and researchers. You are expected to articulate your own contributions in your project reports (more detail below).
You may work individually or pair up with one teammate; grades will be calibrated by team size, and projects of larger scope are expected from teams of two. Your project may overlap with a project for another class as long as the instructors of both classes consent; however, you must clearly indicate in the project proposal, milestone, and final reports exactly which portion of the work is being counted for this course. In that case, you must prepare separate reports for each course and also submit the final report you wrote for the other course.
Grading Policy
The course project is worth 40% of the total grade. The breakdown is as follows:
- Project Proposal (5%). Due Thu Sept 17.
- Project Milestone (5%). Due Thu Oct 15.
- Final Report (25%). Due Fri Dec 11.
- Spotlight Talk (5%). Week 15.
Project Inspirations and Resources
To inspire ideas, you might look at recent robotics publications from top-tier conferences, as well as the other resources listed below.
- RSS: Robotics: Science and Systems
- ICRA: IEEE International Conference on Robotics and Automation
- IROS: IEEE/RSJ International Conference on Intelligent Robots and Systems
- CoRL: Conference on Robot Learning
- ICLR: International Conference on Learning Representations
- NeurIPS: Neural Information Processing Systems
- ICML: International Conference on Machine Learning
- Publications from the UT Robot Perception and Learning Lab
You may also draw on the popular simulated environments and robotics datasets listed below.
Simulated Environments
- robosuite: MuJoCo-based toolkit and benchmark of learning algorithms for robot manipulation
- RoboVat: Tabletop manipulation environments in Bullet Physics
- OpenAI Gym: MuJoCo-based environments for continuous control and robotics
- AI2-THOR: open-source interactive environments for embodied AI
- RLBench: robot learning benchmark and learning environment built around V-REP
- CARLA: self-driving car simulator in Unreal Engine 4
- AirSim: simulator for autonomous vehicles built on Unreal Engine / Unity
- Interactive Gibson: interactive environment for learning robot manipulation and navigation
- AI Habitat: simulation platform for research in embodied artificial intelligence
Robotics Datasets
- Dex-Net: 3D synthetic object model dataset for object grasping
- RoboTurk: crowdsourced human demonstrations collected in simulation and the real world
- RoboNet: video dataset for large-scale multi-robot learning
- YCB-Video: RGB-D video dataset for model-based 6D pose estimation and tracking
- nuScenes: large-scale multimodal dataset for autonomous driving