Research

Our research focuses on machine learning, multiagent systems and robotics.

Humanoid Walk Optimization in Simulation and the Real World
Learning on real robots is time-consuming and costly, yet behaviors learned in simulation often do not transfer directly to the real world. This research introduces Grounded Simulation Learning to address this problem by iteratively alternating between learning in simulation and testing on real robots. The small number of real-robot trials serves to direct and constrain the optimization in simulation toward parameters that remain useful on the physical robots.
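The alternation described above can be sketched as a simple loop. The function names, toy objective, and bias-shrinking grounding rule below are illustrative assumptions, not the actual Grounded Simulation Learning implementation:

```python
# Minimal sketch of a sim-to-real learning loop: optimize cheaply in a
# (biased) simulator, evaluate briefly on the "real robot", and ground
# the simulator so its optimum drifts toward reality.

def gsl(optimize_in_sim, evaluate_on_robot, ground_simulator,
        params, iterations=3):
    best_params, best_score = params, float("-inf")
    for _ in range(iterations):
        params = optimize_in_sim(params)       # many cheap sim rollouts
        score = evaluate_on_robot(params)      # a few costly real trials
        if score > best_score:
            best_params, best_score = params, score
        ground_simulator(params, score)        # reduce sim-to-real mismatch
    return best_params, best_score


# Toy instantiation: the real objective peaks at x = 2.0, but the
# simulator's optimum is offset by a bias that grounding shrinks.
state = {"bias": 1.0}

def optimize_in_sim(x):
    return 2.0 + state["bias"]                 # the simulator's biased optimum

def evaluate_on_robot(x):
    return -(x - 2.0) ** 2                     # stand-in for a real-world score

def ground_simulator(x, score):
    state["bias"] *= 0.5                       # grounding halves the bias

best_x, best_score = gsl(optimize_in_sim, evaluate_on_robot,
                         ground_simulator, 0.0, iterations=5)
```

As the simulator is grounded, the parameters it proposes approach the real-world optimum even though each iteration used only one real evaluation.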
Dynamic Role Assignment and Formation Positioning
We have created a decentralized system for coordinating the movement of a group of autonomous agents into specified formational positions that prevents collisions while minimizing the makespan of the formation. This work is employed in the partially observable, non-deterministic, noisy, dynamic, and limited communication setting of the RoboCup 3D simulation domain.
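To make the makespan objective concrete, here is a brute-force, centralized sketch (illustrative only; the actual system is decentralized and scalable): among all agent-to-position assignments, choose the one whose sorted distance vector is lexicographically smallest, which minimizes the longest trip first.

```python
from itertools import permutations
from math import hypot

def assign_roles(agents, positions):
    """agents, positions: lists of (x, y) points. Returns a tuple perm
    such that agent i is assigned positions[perm[i]]."""
    def cost(perm):
        # Distances sorted in descending order: comparing these tuples
        # lexicographically minimizes the makespan (longest distance)
        # first, then breaks ties on the next-longest distance, etc.
        return tuple(sorted(
            (hypot(a[0] - positions[j][0], a[1] - positions[j][1])
             for a, j in zip(agents, perm)),
            reverse=True))
    return min(permutations(range(len(positions))), key=cost)

agents = [(0, 0), (4, 0)]
targets = [(4, 1), (0, 1)]
perm = assign_roles(agents, targets)   # each agent takes the near target
```

Breaking ties on successively shorter distances, rather than only the longest, also tends to avoid crossing straight-line paths, which is what prevents collisions in practice.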
Omnidirectional Humanoid Walk Optimization
Some of our research has focused on optimizing parameters for an omnidirectional humanoid walk engine. This was the crucial component in our 3D simulation team winning the 2011 RoboCup competition.
Ground truth detection system for the Nao
We have used the Microsoft Kinect sensor to create a ground truth detection system for use in the SPL. This system is low-cost, portable, easy to set up and does not require markers on the robots.
Calibrating and processing camera images on the Nao
Differences between the cameras of individual Naos can be substantial enough to warrant different color lookup tables. In this work, we present a technique that allows multiple cameras to share the same color table by learning the differences in parameters such as saturation and brightness.
Learning to score penalty kicks
We have used a novel model-based reinforcement learning algorithm to learn to score penalty kicks on an Aldebaran Nao humanoid robot.
Learning powerful kicks
Machine learning was applied to the kicking motion in order to optimize the power of kicks on the Aibo ERS-7. The resulting learned kick is shown to be more powerful than UT Austin Villa's most powerful hand-coded kick.
Exploiting human motor skills to train robots
We have implemented a novel direct interface from a human in a motion capture suit to various robots, including the Aldebaran Nao humanoid and Sony AIBO quadruped.
Learning to Walk
Machine learning has proven to be an effective tool for gait optimization for the Aibos. During the fall of 2003, we used a variant of policy gradient reinforcement learning to create the fastest recorded walk on the Aibo to date.
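The flavor of this approach can be sketched as a finite-difference policy-gradient loop. Everything below is a toy stand-in: the quadratic objective replaces timed walking trials on a real Aibo, and the parameter names and hyperparameters are assumptions for illustration:

```python
import random

def policy_gradient(evaluate, params, epsilon=0.1, step=0.05,
                    n_perturbations=20, iterations=30, seed=0):
    """Estimate the gradient of walk speed by randomly perturbing the
    gait parameters, then take a small step uphill."""
    rng = random.Random(seed)
    params = list(params)
    for _ in range(iterations):
        base = evaluate(params)
        grad = [0.0] * len(params)
        for _ in range(n_perturbations):
            # Random +/- epsilon perturbation of every gait parameter.
            delta = [rng.choice((-epsilon, epsilon)) for _ in params]
            score = evaluate([p + d for p, d in zip(params, delta)])
            for i, d in enumerate(delta):
                grad[i] += (score - base) * (d / epsilon)
        # Ascend the estimated gradient of walk speed.
        params = [p + step * g for p, g in zip(params, grad)]
    return params

# Stand-in objective: the "walk" is fastest at parameters (1.0, -2.0);
# on a real robot this would be a timed traversal of the field.
def walk_speed(p):
    return -((p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2)

best = policy_gradient(walk_speed, [0.0, 0.0])
```

On the real robot, each `evaluate` call is expensive, so keeping the number of perturbations per iteration small is what makes the method practical.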
Illumination-Invariant Planned Color Learning
To act autonomously in the real world over an extended period of time, an agent must be able to learn to cope with unexpected environmental conditions. We consider the problem of enabling a robot to learn colors and adapt to illumination changes autonomously.
Illumination-Invariant Vision
Color constancy (or illumination invariance) is the ability of a visual system to recognize an object's true color across a range of variations in factors extrinsic to the object (such as lighting conditions). In this research, we consider the problem of color constancy on mobile robots.
Autonomous Sensor and Actuator Model Induction
Robots rely on models of their actions and sensors in order to successfully interact with their environment. We programmed our Aibos to learn these models simultaneously, autonomously, and without receiving any external feedback.
Learning to Acquire the Ball
Because the new ERS-7 robots are built differently than the ERS-210A robots we used earlier, adapting our approach (taking possession of the ball) to the new robots has been particularly difficult. Fortunately, we were able to extend our previous work with learning to walk to help us solve this problem.
Robust Localization
Mobile robot localization, the ability of a robot to determine its position and orientation in a global frame of reference, continues to be a major research focus in robotics. We propose a series of practical enhancements to a baseline algorithm, including the use of negative information and line observations. We empirically demonstrate that these enhancements improve accuracy and robustness.
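The negative-information idea can be illustrated with a toy particle-filter update (a sketch under simplified 1-D assumptions, not the paper's implementation): particles that predict a landmark should be visible are penalized when the landmark is not actually observed.

```python
def mcl_negative_update(particles, weights, landmark_x, sensor_range,
                        observed, miss_penalty=0.2):
    """particles: 1-D positions; weights: their current probabilities.
    Down-weight particles whose visibility prediction for the landmark
    disagrees with what the robot actually saw, then renormalize."""
    new_w = []
    for x, w in zip(particles, weights):
        expects_to_see = abs(x - landmark_x) <= sensor_range
        if not observed and expects_to_see:
            w *= miss_penalty   # negative information: expected but missing
        elif observed and not expects_to_see:
            w *= miss_penalty   # observed but not expected from here
        new_w.append(w)
    total = sum(new_w)
    return [w / total for w in new_w]

particles = [0.0, 1.0, 5.0, 9.0]
weights = [0.25] * 4
# A landmark at x = 1 with visibility range 2 was NOT seen, so the
# particles at 0 and 1 become much less likely than those at 5 and 9.
posterior = mcl_negative_update(particles, weights, 1.0, 2.0, observed=False)
```

The key point is that a *missing* observation still carries information: it rules out poses from which the landmark should have been in view.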
Autonomous Color Learning
Color segmentation is a challenging subtask in computer vision. Most popular approaches are computationally expensive, involve an extensive off-line training phase and/or rely on a stationary camera. This paper presents an approach for color learning on-board a legged robot with limited computational and memory resources. A key defining feature of the approach is that it works without any labeled training data.
Robust Vision
Computer vision is a broad and significant ongoing research challenge, even when performed on an individual image or on streaming video from a high-quality stationary camera with abundant computational resources. When faced with streaming video from a lower-quality, rapidly and jerkily moving camera and limited computational resources, the challenge only increases. We present our implementation of a real-time vision system on a mobile robot platform.
Learning to Play Keepaway
Some of our research in simulated soccer has focused on applying machine learning techniques to a subtask of soccer called Keepaway. In Keepaway, one team tries to maintain possession of the ball in a fixed playing region while the opposing team tries to steal the ball.
Giving Advice Based On Opponent Modeling
One part of our research in simulated soccer explores the problem of designing an agent to give strategic opponent-specific advice to soccer players in a standardized language. Our approaches to this problem involve both online and offline learning.
Continual Area Sweeping
A continuous area sweeping task is one in which a robot (or group of robots) must repeatedly visit all points in a fixed area, possibly with non-uniform frequency, as specified by a task-dependent performance criterion. Examples of problems that require continuous area sweeping are trash removal in a large building and routine surveillance. We present a formulation of this problem and explore algorithms to address it.
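The non-uniform-frequency aspect can be illustrated with a simple greedy policy (a sketch under assumed dynamics, not the paper's algorithm): each cell accrues "idleness" at a task-dependent rate, and the robot always services the cell with the most accumulated idleness.

```python
def sweep(rates, steps):
    """rates[i]: how quickly cell i accrues importance (encodes the
    non-uniform visit frequency). Returns per-cell visit counts after
    `steps` greedy visits."""
    idleness = [0.0] * len(rates)
    visits = [0] * len(rates)
    for _ in range(steps):
        # Every cell accrues idleness in proportion to its rate.
        idleness = [idl + r for idl, r in zip(idleness, rates)]
        # Visit (and reset) the currently most important cell.
        target = max(range(len(rates)), key=lambda i: idleness[i])
        idleness[target] = 0.0
        visits[target] += 1
    return visits

# A cell with twice the accrual rate ends up visited roughly twice as
# often as the others.
visits = sweep([2.0, 1.0, 1.0], steps=400)
```

This toy version ignores travel cost between cells; the interesting algorithmic questions arise when movement time and partial observability are added back in.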
Model-Based Vision
By using a model of its environment, a robot can perform its vision and localization processing more efficiently and accurately. In one model-based approach, the robot's visual processing is dramatically sped up by the use of selective visual attention. In the second, we compare two methods for vision and localization on a legged robot, one based on the robot's expectations, and the other based on object detection and Monte Carlo localization.
Person Tracking on Mobile Robots
To make our assistant robot track a particular person among many people, we developed a real-time face recognition method that is invariant to pose variations and illumination changes. A shirt color tracker backs up the face recognizer and interacts with it to improve the performance of the overall system.