The goal of this assignment is to help you gain an understanding of the issues involved in estimating the robot's position from its observations in a noisy world. At the same time, you will learn about a common approach to this problem called particle filtering. This assignment will not require the use of the physical robots as everything will be done in simulation.
The assignment is divided into three parts. In Part I, you will become familiar with the simulator and run some tests with a completed player binary. In Part II, you will implement a naive solution to the localization problem that will motivate the need for a more sophisticated approach. Finally, in Part III, you will complete the missing part of an existing particle filtering implementation yielding a better solution than the naive approach. Some parts of the assignment may take more time than others. Please at least browse the entire assignment before beginning and pace yourself accordingly.
This assignment makes use of a simulator that was built by members of the UT Austin Villa team to test the robots' localization algorithm more easily than would be possible on the physical robots. The simulator is a Java server with a visual display similar to the one used in UT Assist. A client consists of some wrapper code around the robot's localization code to communicate with the server via UDP messages.
The simulator keeps track of the robot's state and sends the appropriate high-level vision observations to the client. The client is responsible for deciding the robot's behavior and must communicate its actions to the server. The simulator adds Gaussian noise to observations and action effects.
Take the following steps to get started with the simulator.
tar xzvf localization.tar.gz
cd localization/Simulator
javac *.java
java Server &
cd ..
./simclient-solution
The robot should appear on the simulator's field display as a dark blue pair of triangles. This shape is the robot's actual pose (position and orientation) in the simulated world. The light blue shape is the robot's internal estimate of its pose. The small white dots on the field show the positions of the particles in the agent's particle filtering algorithm. (The orientation of the particles is not displayed.)
The player will attempt to make a figure 8 around the field. It is possible to move the player around the field by clicking on the black dot in the center of the robot's body with your left mouse button, dragging it to a different area, and dropping it. To rotate the robot, click on the black dot with the right mouse button and move the mouse up and down.
For each step of the simulation, the server prints messages to stdout that look something like the following:
Inst: D: 29.4, A: 0.4 || Avg: D: 204.6, A: 3.8

This line displays the instantaneous and average error in the robot's pose estimate. D is the distance error in mm. A is the angle error in degrees. The average will start to be computed as soon as the client connects. To reset the average, click on the Reset button below the field display in the simulator GUI.
The client supplied in the assignment package has all of the functionality that you will eventually have to implement yourself in parts II and III. Before you do any coding yourself, it will be helpful to see what you can expect from a working solution. In this section you will be asked to run some tests with the solution client and answer a few questions.
./simclient-solution -n

This client will use standard particle filtering without using any triangulation values during localization. Again, let the robot complete a figure 8, then drag the player varying distances away from its location.
./simclient-solution -s

This client does not use particle filtering at all. Instead, it keeps a single estimate of its pose that is updated using triangulation values and motion updates. This is the naive algorithm that you will be implementing in Part II. Again, let the player do a figure 8, then disrupt it with movements of varying distance.
In this section you will implement a strawman localization algorithm that maintains a single position estimate. This estimate is updated by the robot's motion and by triangulating its position using the distances and angles to landmarks. Here is the basic pseudocode for the algorithm that you will implement:
Inputs: robot_motion, observation_history, pose
Output: pose

for each step
    pose = pose + robot_motion
    estimates = {}
    for each combination of 2 or 3 beacons in observation_history
        est = calculate_position_from_landmarks(beacons)
        if quality(est) > threshold
            estimates = estimates U est
    avg_estimate = average(estimates)
    pose = (1 - alpha) * pose + alpha * avg_estimate
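The last line of the pseudocode blends the current pose with the averaged triangulation estimate. Below is a minimal C++ sketch of that blend, assuming a simple (x, y, theta) pose in millimeters and radians; the struct and function names are illustrative, not the ones in the provided code. Note that the heading must be blended along the normalized angular difference rather than interpolated directly.

#include <cmath>

// Hypothetical pose type used only for this sketch; the real code has its own.
struct Pose { double x, y, theta; };   // position in mm, heading in radians

// Keep an angle in (-pi, pi].
double normalizeAngle(double a) {
  while (a > M_PI)   a -= 2.0 * M_PI;
  while (a <= -M_PI) a += 2.0 * M_PI;
  return a;
}

// pose = (1 - alpha) * pose + alpha * estimate, blending the heading along the
// shortest angular difference so it behaves correctly near +/-180 degrees.
Pose blendPose(const Pose& pose, const Pose& estimate, double alpha) {
  Pose out;
  out.x     = (1.0 - alpha) * pose.x + alpha * estimate.x;
  out.y     = (1.0 - alpha) * pose.y + alpha * estimate.y;
  out.theta = normalizeAngle(pose.theta +
                             alpha * normalizeAngle(estimate.theta - pose.theta));
  return out;
}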
Every time step, the localization algorithm uses the translational and rotational velocities of the robot to calculate the robot's change in pose. These displacements are given relative to the robot's frame of reference: positive X is to the robot's right, positive Y is to the robot's front. The displacement should be transformed into the global frame of reference given the robot's pose and then added to the current estimate.
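As a concrete illustration of this motion update, here is a short sketch under the assumption that the heading theta is the global angle of the robot's forward (+Y) axis; the actual code may use a different convention, and the types below are hypothetical.

#include <cmath>

// Assumed conventions for this sketch only: theta is the global angle (radians)
// of the robot's forward (+Y) axis, and the displacement is given in the
// robot's frame with +X to the robot's right and +Y to its front.
struct Pose { double x, y, theta; };

// Rotate a robot-relative displacement into the global frame and add it.
Pose applyMotion(const Pose& pose, double dxRight, double dyForward, double dTheta) {
  Pose next = pose;
  next.x += dxRight * std::sin(pose.theta) + dyForward * std::cos(pose.theta);
  next.y += -dxRight * std::cos(pose.theta) + dyForward * std::sin(pose.theta);
  next.theta = pose.theta + dTheta;
  // Keep the heading in (-pi, pi] so later angle arithmetic stays well behaved.
  while (next.theta > M_PI)   next.theta -= 2.0 * M_PI;
  while (next.theta <= -M_PI) next.theta += 2.0 * M_PI;
  return next;
}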
Next, the algorithm uses the distances and angles to landmarks observed recently to calculate triangulation estimates. You may choose to do 2 beacon triangulation using distance and angle information, 3 beacon triangulation using just angle information, or both. The solution binary uses both. An estimate is calculated for each pair or triple of landmarks. If an estimate is good enough according to a quality metric that you construct, it will be stored and used subsequently in the calculation of an average estimate. Keep in mind that while positions can be averaged in the obvious way, angles cannot typically be averaged using a standard arithmetic mean.
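One standard way to average angles, sketched below, is to sum their unit vectors and take the angle of the result; this sidesteps the wrap-around problem of an arithmetic mean.

#include <cmath>
#include <vector>

// Average a set of angles (radians) by summing their unit vectors and taking
// the angle of the resulting vector. This avoids the wrap-around problem of an
// arithmetic mean: the mean of +179 and -179 degrees should be 180, not 0.
double averageAngles(const std::vector<double>& angles) {
  double sumSin = 0.0, sumCos = 0.0;
  for (double a : angles) {
    sumSin += std::sin(a);
    sumCos += std::cos(a);
  }
  return std::atan2(sumSin, sumCos);
}

If you decide to weight each estimate by its quality, the same idea works with weighted sums of the sines and cosines.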
Alpha was chosen to be 0.02 in the solution client, but this value was chosen without experimentation; you may find a value of alpha that works better for you. A good value for the threshold will depend on how you choose to evaluate the quality of a triangulation estimate.

Hint: see Section 7.4.2 of the 2004 tech report for an example of how to evaluate the quality of a triangulation estimate.
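The tech report's metric is not reproduced here, but as one simple, hypothetical alternative, a triangulation estimate can be scored by re-predicting the observed beacon distances from the candidate position and rejecting the estimate when any residual is large. Everything in this sketch (names, array layout, and the 300 mm tolerance) is an assumption used to illustrate the idea.

#include <cmath>

// Hypothetical quality check, not the metric from the tech report: re-predict
// each observed beacon distance from the candidate position and reject the
// estimate if any residual exceeds a tolerance. The 300 mm tolerance is an
// assumed starting point, not a tuned value.
bool estimateIsGoodEnough(double estX, double estY,
                          const double* beaconX, const double* beaconY,
                          const double* observedDist, int numBeacons,
                          double toleranceMm = 300.0) {
  for (int i = 0; i < numBeacons; ++i) {
    double dx = beaconX[i] - estX;
    double dy = beaconY[i] - estY;
    double predicted = std::sqrt(dx * dx + dy * dy);
    if (std::fabs(predicted - observedDist[i]) > toleranceMm)
      return false;   // the estimate does not explain this observation
  }
  return true;
}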
The majority of the above algorithm should be implemented in the function UpdateWorldStateSingleEstimate in the file player/Brain/Localization/PFLocalization.cc. This function makes a call to getReseedEstimates, which currently does nothing. It must be changed to calculate and store the triangulation estimates, given the observations. Again, you may do 2-beacon or 3-beacon triangulation, or both. When you are ready to test:
cd localization/player/SimClient
make
./simclient -s
You should make an effort to do as well as (if not better than) the solution binary when run in single-estimate mode.
Q: What is the average error after completing 3 figure 8s?
Q: How quickly does the player recover from large unmodeled movements?
Q: What value of alpha gave you the best tradeoff?
In this section, the task is to take an almost complete particle filtering implementation and fill in a few missing pieces. In addition to the triangulation routines implemented in the previous section, you will need to implement the observation update step of the particle filtering algorithm.
In the observation update, each particle's probability is updated according to the likelihood of the observations made in the current frame, given that the particle represents the robot's true pose. You will have to devise a similarity metric that uses the expected and observed distances and angles to all landmarks in the current frame to calculate a new probability for that particle. A value close to 0 means that the observations are unlikely from that pose; a value close to 1 means that the observations are likely given the particle's pose.
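One common choice, sketched below under assumed names and noise scales (this is not necessarily the form the provided code expects), is to treat each landmark's distance and angle error as Gaussian and multiply the resulting per-landmark similarities.

#include <cmath>

// Sketch of one possible per-particle similarity metric; the variable names,
// array layout, and sigma values are assumptions, not the provided code's API.
// Each landmark's distance and angle error is pushed through a Gaussian-shaped
// kernel, and the per-landmark similarities are multiplied, giving a value in
// (0, 1]: near 0 when the observations are unlikely from the particle's pose,
// near 1 when they are likely.
double particleLikelihood(const double* expectedDist, const double* observedDist,
                          const double* expectedAng,  const double* observedAng,
                          int numLandmarks) {
  const double distSigma = 200.0;   // mm; assumed distance-noise scale, to be tuned
  const double angSigma  = 0.20;    // rad; assumed angle-noise scale, to be tuned
  double newProb = 1.0;
  for (int i = 0; i < numLandmarks; ++i) {
    double dErr = (observedDist[i] - expectedDist[i]) / distSigma;
    double aErr = observedAng[i] - expectedAng[i];
    // Wrap the angle error into (-pi, pi] before scaling it.
    while (aErr > M_PI)   aErr -= 2.0 * M_PI;
    while (aErr <= -M_PI) aErr += 2.0 * M_PI;
    aErr /= angSigma;
    newProb *= std::exp(-0.5 * (dErr * dErr + aErr * aErr));
  }
  return newProb;
}

The sigma values control how sharply a particle is punished for disagreeing with the observations and are worth tuning.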
The observation update should be implemented in the function updateProbabilities in the file player/Brain/Localization/Particles.cc. You are given the expected and observed distances and angles for every landmark/particle pair. For each particle, you must calculate a newProb. Do not worry about filtering this value to keep the probability change small. This step is done for you by the call to adjustProbability. When you are ready to test:
./simclient -n

Q: What is the average error after completing 3 figure 8s?
./simclient

Q: By how much is the recovery time improved?
Please submit your completed source code and compiled binary along with a README text file containing your answers to all of the above questions in a tarball attached to an email to Dan and Peter (stronger@cs and pstone@cs). Please also send the README as plain text.