Peter Stone's Selected Publications



On Learning with Imperfect Representations

On Learning with Imperfect Representations.
Shivaram Kalyanakrishnan and Peter Stone.
In Proceedings of the 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, pp. 17–24, IEEE, April 2011.

Download

[PDF] (163.8 kB)  [postscript] (196.0 kB)

Abstract

In this paper we present a perspective on the relationship between learning and representation in sequential decision making tasks. We undertake a brief survey of existing real-world applications, which demonstrates that the classical "tabular" representation seldom applies in practice. Specifically, several practical tasks suffer from state aliasing, and most demand some form of generalization and function approximation. Coping with these representational aspects thus becomes an important direction for furthering the advent of reinforcement learning in practice. The central thesis we present in this position paper is that in practice, learning methods specifically developed to work with imperfect representations are likely to perform better than those developed for perfect representations and then applied in imperfect-representation settings. We specify an evaluation criterion for learning methods in practice, and propose a framework for their synthesis. In particular, we highlight the degrees of "representational bias" prevalent in different learning methods. We reference a variety of relevant literature as a background for this introspective essay.
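
To ground the contrast the abstract draws between the "tabular" representation and function approximation, the following minimal Python sketch (not from the paper; the state, action, and feature sizes are hypothetical placeholders) sets an exact Q-learning table alongside a linear approximator whose features can alias distinct states.

    # Illustrative sketch only: contrasts a tabular Q-learning backup with a
    # semi-gradient update under linear function approximation. All constants
    # and feature definitions are hypothetical, not taken from the paper.
    import numpy as np

    ALPHA, GAMMA = 0.1, 0.99                  # step size and discount factor
    N_STATES, N_ACTIONS, N_FEATURES = 100, 4, 8

    # Perfect (tabular) representation: one cell per (state, action) pair.
    Q = np.zeros((N_STATES, N_ACTIONS))

    def tabular_update(s, a, r, s_next):
        """Standard Q-learning backup on an exact table."""
        target = r + GAMMA * Q[s_next].max()
        Q[s, a] += ALPHA * (target - Q[s, a])

    # Imperfect representation: linear approximation over features. Distinct
    # states mapping to similar feature vectors are aliased, so learned values
    # generalize (and can interfere) across them.
    w = np.zeros((N_ACTIONS, N_FEATURES))

    def q_hat(phi, a):
        """Approximate action value for feature vector phi and action a."""
        return w[a] @ phi

    def approx_update(phi, a, r, phi_next):
        """Semi-gradient Q-learning update on the weight vector."""
        target = r + GAMMA * max(q_hat(phi_next, b) for b in range(N_ACTIONS))
        w[a] += ALPHA * (target - q_hat(phi, a)) * phi

The tabular update is exact but only feasible when every state can be enumerated; the approximate update is what most practical tasks surveyed in the paper require, and its behavior depends heavily on the chosen features.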

BibTeX Entry

@InProceedings{ADPRL11-shivaram,
  author    = "Shivaram Kalyanakrishnan and Peter Stone",
  title     = "On Learning with Imperfect Representations",
  booktitle = "Proceedings of the 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning",
  year      = "2011",
  month     = "April",
  location  = "Paris, France",
  publisher = "IEEE",
  pages     = "17--24",
  ISBN      = "978-1-4244-9886-4",
  abstract  = "In this paper we present a perspective on the relationship between
    learning and representation in sequential decision making tasks. We
    undertake a brief survey of existing real-world applications, which
    demonstrates that the classical ``tabular'' representation seldom
    applies in practice. Specifically, several practical tasks suffer
    from state aliasing, and most demand some form of generalization and
    function approximation. Coping with these representational aspects
    thus becomes an important direction for furthering the advent of
    reinforcement learning in practice. The central thesis we present
    in this position paper is that in practice, learning methods
    specifically developed to work with imperfect representations are
    likely to perform better than those developed for perfect
    representations and then applied in imperfect-representation
    settings. We specify an evaluation criterion for learning methods in
    practice, and propose a framework for their synthesis. In
    particular, we highlight the degrees of ``representational bias''
    prevalent in different learning methods. We reference a variety of
    relevant literature as a background for this introspective essay.",
}

Generated by bib2html.pl (written by Patrick Riley) on Tue Nov 19, 2024 10:24:47