Peter Stone's Selected Publications



Transferring Instances for Model-Based Reinforcement Learning

Transferring Instances for Model-Based Reinforcement Learning.
Matthew E. Taylor, Nicholas K. Jong, and Peter Stone.
In Machine Learning and Knowledge Discovery in Databases (ECML 2008), Lecture Notes in Artificial Intelligence, vol. 5212, pp. 488–505, September 2008.
Official version from Publisher's Webpage. © Springer-Verlag

Download

[PDF] 304.9kB  [postscript] 860.1kB

Abstract

Reinforcement learning agents typically require a significant amount of data before performing well on complex tasks. Transfer learning methods have made progress reducing sample complexity, but they have primarily been applied to model-free learning methods, not more data-efficient model-based learning methods. This paper introduces TIMBREL, a novel method capable of transferring information effectively into a model-based reinforcement learning algorithm. We demonstrate that TIMBREL can significantly improve the sample efficiency and asymptotic performance of a model-based algorithm when learning in a continuous state space. Additionally, we conduct experiments to test the limits of TIMBREL's effectiveness.
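
The sketch below illustrates, in Python, the general idea of instance transfer for model-based reinforcement learning as described in the abstract. It is not the authors' TIMBREL or Fitted R-MAX implementation; the inter-task mappings (map_state, map_action), the InstanceModel class, and the nearest-neighbour prediction rule are all simplified assumptions for illustration.

import numpy as np

def transfer_instances(source_instances, map_state, map_action):
    """Translate recorded source-task (s, a, r, s') tuples into target-task coordinates."""
    return [(map_state(s), map_action(a), r, map_state(s2))
            for (s, a, r, s2) in source_instances]

class InstanceModel:
    """Instance-based transition model: predicts by averaging the outcomes of the
    k nearest stored transitions that used the same action."""
    def __init__(self, min_instances=5):
        self.instances = []           # (s, a, r, s') tuples in target-task coordinates
        self.min_instances = min_instances

    def add(self, instance):
        self.instances.append(instance)

    def predict(self, state, action, k=5):
        same_action = [(s, r, s2) for (s, a, r, s2) in self.instances if a == action]
        if len(same_action) < self.min_instances:
            return None               # region still "unknown": a cue to pull in transferred instances
        dists = [np.linalg.norm(np.asarray(s) - np.asarray(state)) for (s, _, _) in same_action]
        nearest = np.argsort(dists)[:k]
        reward = float(np.mean([same_action[i][1] for i in nearest]))
        next_state = np.mean([np.asarray(same_action[i][2]) for i in nearest], axis=0)
        return reward, next_state

In this sketch, whenever the target-task model reports an unknown region (predict returns None), mapped source-task instances can be added to the model instead of collecting more target-task samples; that substitution is where the sample-efficiency gains described in the abstract would come from.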

BibTeX Entry

@inproceedings(ECML08-taylor,
  author="Matthew E.\ Taylor and Nicholas K.\ Jong and Peter Stone",
  title="Transferring Instances for Model-Based Reinforcement Learning",
  booktitle="Machine Learning and Knowledge Discovery in Databases",
  month="September",
  year= "2008",
  series="Lecture Notes in Artificial Intelligence",
  volume="5212",
  pages="488--505",
  wwwnote={<a href="http://www.ecmlpkdd2008.org/">ECML-2008</a>. Official version from <a href="http://dx.doi.org/10.1007/978-3-540-87481-2_32">Publisher's Webpage</a> &copy; Springer-Verlag},
  abstract={Reinforcement learning agents typically require a significant
    amount of data before performing well on complex tasks.  Transfer
    learning methods have made progress reducing sample complexity,
    but they have primarily been applied to model-free learning
    methods, not more data-efficient model-based learning
    methods. This paper introduces TIMBREL, a novel method capable of
    transferring information effectively into a model-based
    reinforcement learning algorithm. We demonstrate that TIMBREL can
    significantly improve the sample efficiency and asymptotic
    performance of a model-based algorithm when learning in a
    continuous state space. Additionally, we conduct experiments to
    test the limits of TIMBREL's effectiveness.},
)

Generated by bib2html.pl (written by Patrick Riley) on Sun Nov 24, 2024 20:24:56