A Task Specification Language for Bootstrap Learning.
Ian Fasel, Michael Quinlan, and Peter Stone.
In AAAI Spring 2009 Symposium on Agents that Learn from Human Teachers, March 2009.
AAAI Spring 2009 Symposium: Agents that Learn from Human Teachers
[PDF] (407.2kB)  [postscript] (1.9MB)
Reinforcement learning (RL) is an effective framework for online learning by autonomous agents. Most RL research focuses on domain-independent learning algorithms, requiring an expert human to define the environment (state and action representation) and task to be performed (e.g. start state and reward function) on a case-by-case basis. In this paper, we describe a general language for a teacher to specify sequential decision making tasks to RL agents. The teacher may communicate properties such as start states, reward functions, termination conditions, successful execution traces, task decompositions, and other advice. The learner may then practice and learn the task on its own using any RL algorithm. We demonstrate our language in a simple BlocksWorld example and on the RoboCup soccer keepaway benchmark problem. The language forms the basis of a larger "Bootstrap Learning" model for machine learning, a paradigm for incremental development of complete systems through integration of multiple machine learning techniques.
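To give a concrete sense of the kinds of properties the abstract says a teacher can communicate, the following Python sketch is a purely hypothetical illustration and does not reproduce the paper's actual language or syntax. It describes a BlocksWorld-style task by a start state, a reward function, a termination condition, and free-form advice; every name in it (TaskSpec, goal_reached, stack_task) is invented for illustration.

# Hypothetical sketch of a task specification for a BlocksWorld-style task.
# None of these names come from the paper; they only illustrate the kinds of
# properties a teacher might specify: start state, reward, termination, advice.

from dataclasses import dataclass, field
from typing import Callable, FrozenSet, Tuple, List

# A state is a set of (block, support) facts, e.g. ("B", "A") means B is on A.
State = FrozenSet[Tuple[str, str]]

@dataclass
class TaskSpec:
    start_state: State
    reward: Callable[[State], float]
    is_terminal: Callable[[State], bool]
    advice: List[str] = field(default_factory=list)  # free-form hints for the learner

def goal_reached(state: State) -> bool:
    # Goal (invented for illustration): block C ends up stacked on block B.
    return ("C", "B") in state

stack_task = TaskSpec(
    start_state=frozenset({("A", "table"), ("B", "table"), ("C", "A")}),
    reward=lambda s: 1.0 if goal_reached(s) else 0.0,  # sparse goal reward
    is_terminal=goal_reached,
    advice=["move C off A before stacking it on B"],
)

if __name__ == "__main__":
    print("Terminal at start?", stack_task.is_terminal(stack_task.start_state))
    print("Reward at start:", stack_task.reward(stack_task.start_state))

In such a setup the learner would be free to practice the task with any RL algorithm, since the specification constrains only what the task is, not how it is learned.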
@InProceedings{AAAIsymp09-fasel,
  author    = "Ian Fasel and Michael Quinlan and Peter Stone",
  title     = "A Task Specification Language for Bootstrap Learning",
  booktitle = "AAAI Spring 2009 Symposium on Agents that Learn from Human Teachers",
  month     = "March",
  year      = "2009",
  abstract  = {Reinforcement learning (RL) is an effective framework for online
    learning by autonomous agents. Most RL research focuses on domain-independent
    learning \emph{algorithms}, requiring an expert human to define the
    \emph{environment} (state and action representation) and \emph{task} to be
    performed (e.g.\ start state and reward function) on a case-by-case basis.
    In this paper, we describe a general language for a teacher to specify
    sequential decision making tasks to RL agents. The teacher may communicate
    properties such as start states, reward functions, termination conditions,
    successful execution traces, task decompositions, and other advice. The
    learner may then practice and learn the task on its own using any RL
    algorithm. We demonstrate our language in a simple BlocksWorld example and
    on the RoboCup soccer keepaway benchmark problem. The language forms the
    basis of a larger ``Bootstrap Learning'' model for machine learning, a
    paradigm for incremental development of complete systems through integration
    of multiple machine learning techniques.},
  wwwnote   = {<a href="http://www.aaai.org/Symposia/Spring/sss09.php">AAAI Spring 2009 Symposium: Agents that Learn from Human Teachers</a>},
}