Peter Stone's Selected Publications

Reinforcement Learning for RoboCup-Soccer Keepaway

Reinforcement Learning for RoboCup-Soccer Keepaway.
Peter Stone, Richard S. Sutton, and Gregory Kuhlmann.
Adaptive Behavior, 13(3):165–188, 2005.
Contains material that was previously published in an ICML-2001 paper and a RoboCup 2003 Symposium paper.
Some simulations of keepaway referenced in the paper, as well as the keepaway software, are available online.

Download

[PDF] (1.2MB)  [postscript] (2.0MB)

Abstract

RoboCup simulated soccer presents many challenges to reinforcement learning methods, including a large state space, hidden and uncertain state, multiple independent agents learning simultaneously, and long and variable delays in the effects of actions. We describe our application of episodic SMDP Sarsa(lambda) with linear tile-coding function approximation and variable lambda to learning higher-level decisions in a keepaway subtask of RoboCup soccer. In keepaway, one team, "the keepers," tries to keep control of the ball for as long as possible despite the efforts of "the takers." The keepers learn individually when to hold the ball and when to pass to a teammate. Our agents learned policies that significantly outperform a range of benchmark policies. We demonstrate the generality of our approach by applying it to a number of task variations including different field sizes and different numbers of players on each team.
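
The abstract names episodic SMDP Sarsa(lambda) with linear tile-coding function approximation. As a rough illustration of that family of methods, the following is a minimal Python sketch of linear Sarsa(lambda) with tile coding, replacing eligibility traces, and an epsilon-greedy policy. The feature scaling, number of tilings, tile width, action set, and hyperparameters are illustrative assumptions, not the paper's actual keepaway setup, which runs inside the RoboCup soccer simulator on top of higher-level skills.

import random
from collections import defaultdict

NUM_TILINGS = 8        # assumed number of overlapping tilings
TILE_WIDTH = 0.25      # assumed tile width, with state features scaled to [0, 1]
NUM_ACTIONS = 3        # e.g. hold ball, pass to teammate 1, pass to teammate 2

def active_tiles(state, action):
    """Return the tiles activated by a state-action pair (one per tiling)."""
    tiles = []
    for t in range(NUM_TILINGS):
        offset = t * TILE_WIDTH / NUM_TILINGS  # each tiling is shifted slightly
        coords = tuple(int((s + offset) // TILE_WIDTH) for s in state)
        tiles.append((t, coords, action))
    return tiles

class SarsaLambdaAgent:
    def __init__(self, alpha=0.1, gamma=1.0, lam=0.9, epsilon=0.01):
        self.w = defaultdict(float)        # linear weights, one per tile
        self.z = defaultdict(float)        # eligibility traces
        self.alpha = alpha / NUM_TILINGS   # step size shared across the tilings
        self.gamma, self.lam, self.epsilon = gamma, lam, epsilon

    def q(self, state, action):
        return sum(self.w[t] for t in active_tiles(state, action))

    def choose(self, state):
        if random.random() < self.epsilon:
            return random.randrange(NUM_ACTIONS)
        return max(range(NUM_ACTIONS), key=lambda a: self.q(state, a))

    def start_episode(self):
        self.z.clear()

    def update(self, state, action, reward, next_state, next_action, done):
        # In the keepaway SMDP, a natural reward is the time elapsed between
        # decisions, so that longer possession yields higher return.
        delta = reward - self.q(state, action)
        if not done:
            delta += self.gamma * self.q(next_state, next_action)
        for t in active_tiles(state, action):  # replacing traces on active tiles
            self.z[t] = 1.0
        for t, z in list(self.z.items()):
            self.w[t] += self.alpha * delta * z
            self.z[t] = self.gamma * self.lam * z

In an episodic loop, one would call start_episode() when the keepers gain possession, then alternate choose() and update() at each decision point until the takers take the ball or it goes out of bounds.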

BibTeX Entry

@Article{AB05,
  author   = "Peter Stone and Richard S. Sutton and Gregory Kuhlmann",
  title    = "Reinforcement Learning for {R}obo{C}up-Soccer Keepaway",
  journal  = "Adaptive Behavior",
  volume   = "13",
  number   = "3",
  pages    = "165--188",
  year     = "2005",
  abstract = {
    RoboCup simulated soccer presents many challenges to reinforcement
    learning methods, including a large state space, hidden and uncertain
    state, multiple independent agents learning simultaneously, and long
    and variable delays in the effects of actions.  We describe our
    application of episodic SMDP Sarsa(lambda) with linear tile-coding
    function approximation and variable lambda to learning higher-level
    decisions in a keepaway subtask of RoboCup soccer.  In keepaway, one
    team, ``the keepers,'' tries to keep control of the ball for as long
    as possible despite the efforts of ``the takers.''  The keepers learn
    individually when to hold the ball and when to pass to a teammate.
    Our agents learned policies that significantly outperform a range of
    benchmark policies.  We demonstrate the generality of our approach by
    applying it to a number of task variations including different field
    sizes and different numbers of players on each team.
  },
  wwwnote  = {Contains material that was previously published in an <a href="http://www.cs.utexas.edu/~pstone/Papers/2001ml/keepaway.pdf">ICML-2001 paper</a> and a <a href="http://www.cs.utexas.edu/~pstone/Papers/2003robocup/keepaway-progress.pdf">RoboCup 2003 Symposium paper</a>.<br>
    Some <a href="http://www.cs.utexas.edu/users/AustinVilla/sim/keepaway/">simulations of keepaway</a> referenced in the paper and keepaway software.},
}

Generated by bib2html.pl (written by Patrick Riley) on Tue Nov 19, 2024 10:24:39