Peter Stone's Selected Publications



Value Functions for RL-Based Behavior Transfer: A Comparative Study

Value Functions for RL-Based Behavior Transfer: A Comparative Study.
Matthew E. Taylor, Peter Stone, and Yaxin Liu.
In Proceedings of the Twentieth National Conference on Artificial Intelligence, July 2005.
AAAI 2005

Download

[PDF] 147.3 kB  [postscript] 449.9 kB

Abstract

Temporal difference (TD) learning methods have become popular reinforcement learning techniques in recent years. TD methods, relying on function approximators to generalize learning to novel situations, have had some experimental successes and have been shown to exhibit some desirable properties in theory, but have often been found slow in practice. This paper presents methods for further generalizing across tasks, thereby speeding up learning, via a novel form of behavior transfer. We compare learning on a complex task with three function approximators: a CMAC, a neural network, and a radial basis function (RBF) network. We demonstrate that behavior transfer works well with all three: using behavior transfer, agents are able to learn one task and then markedly reduce the time it takes to learn a more complex task. Our algorithms are fully implemented and tested in the RoboCup soccer keepaway domain.
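
Illustrative Sketch

The recipe the abstract describes is simple to sketch: learn a value function on a source task, then use it, through an inter-task mapping, to initialize learning on a larger target task. The Python sketch below is illustrative only, not the paper's implementation; it substitutes tabular Sarsa on a toy chain MDP for the paper's Sarsa with CMAC, neural-network, and RBF approximators in keepaway, and every name in it (ChainMDP, sarsa, the index-scaling state mapping) is hypothetical.

# A minimal sketch of value-function behavior transfer on a toy task.
# Nothing here is from the paper; the chain MDP and the index-scaling
# inter-task mapping are stand-ins chosen to keep the example self-contained.
import random
from collections import defaultdict

class ChainMDP:
    """Toy episodic chain: move left/right; reward on reaching the far end."""
    def __init__(self, n_states):
        self.n = n_states
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):  # a in {0: left, 1: right}
        self.s = max(0, self.s - 1) if a == 0 else self.s + 1
        done = self.s == self.n - 1
        return self.s, (1.0 if done else -0.01), done

def sarsa(env, q, episodes, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Sarsa; q maps (state, action) -> value. Returns steps/episode."""
    def pick(s):
        if random.random() < eps:
            return random.choice([0, 1])
        return max([0, 1], key=lambda a: q[(s, a)])
    steps = []
    for _ in range(episodes):
        s, done, t = env.reset(), False, 0
        a = pick(s)
        while not done and t < 10_000:
            s2, r, done = env.step(a)
            a2 = pick(s2)
            target = r + (0 if done else gamma * q[(s2, a2)])
            q[(s, a)] += alpha * (target - q[(s, a)])
            s, a, t = s2, a2, t + 1
        steps.append(t)
    return steps

random.seed(0)

# 1) Learn the source task (short chain).
q_src = defaultdict(float)
sarsa(ChainMDP(5), q_src, episodes=200)

# 2) Transfer: initialize the target task's value function from the source.
#    A real inter-task mapping is task-specific; scaling state indices from
#    the 15-state chain onto the 5-state chain is a stand-in here.
q_tgt = defaultdict(float)
for s in range(15):
    for a in (0, 1):
        q_tgt[(s, a)] = q_src[(min(s * 5 // 15, 4), a)]

# 3) Learn the target task (long chain) from the transferred values, and
#    compare against learning from scratch to see the transfer speedup.
with_transfer = sarsa(ChainMDP(15), q_tgt, episodes=200)
from_scratch = sarsa(ChainMDP(15), defaultdict(float), episodes=200)
print("mean steps over first 20 episodes:",
      "transfer =", sum(with_transfer[:20]) / 20,
      " scratch =", sum(from_scratch[:20]) / 20)

Run as-is, the transferred agent should typically reach the goal in fewer steps over its early episodes than the scratch learner; that early-learning speedup is the effect the paper measures in keepaway.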

BibTeX Entry

@InProceedings{AAAI05-transfer,
        author="Matthew E.\ Taylor and Peter Stone and Yaxin Liu",
        title="Value Functions for {RL}-Based Behavior Transfer: A Comparative Study",
        booktitle="Proceedings of the Twentieth National Conference on Artificial Intelligence",
        month="July",
        year="2005",
        abstract={
                  Temporal difference (TD) learning methods have
                  become popular reinforcement learning techniques in
                  recent years. TD methods, relying on function
                  approximators to generalize learning to novel
                  situations, have had some experimental successes and
                  have been shown to exhibit some desirable properties
                  in theory, but have often been found slow in
                  practice. This paper presents methods for further
                  generalizing \emph{across tasks}, thereby speeding
                  up learning, via a novel form of \emph{behavior
                  transfer}. We compare learning on a complex task
                  with three function approximators: a CMAC, a neural
                  network, and a radial basis function (RBF) network.
                  We demonstrate that behavior transfer works well
                  with all three: using behavior transfer, agents are
                  able to learn one task and then markedly reduce the
                  time it takes to learn a more complex task. Our
                  algorithms are fully implemented and tested in the
                  RoboCup soccer keepaway domain.
                 },
        wwwnote={<a href="http://www.aaai.org/Conferences/National/2005/aaai05.html">AAAI 2005</a>},
}

Generated by bib2html.pl (written by Patrick Riley) on Tue Nov 19, 2024 10:24:46