Peter Stone's Selected Publications



Evolutionary Function Approximation for Reinforcement Learning

Evolutionary Function Approximation for Reinforcement Learning.
Shimon Whiteson and Peter Stone.
Journal of Machine Learning Research, 7:877–917, May 2006.
Available from the journal's web page: http://jmlr.csail.mit.edu/papers/v7/whiteson06a.html

Download

[PDF] (1.7MB)  [postscript] (6.1MB)

Abstract

Temporal difference (TD) methods are theoretically grounded and empirically effective for addressing reinforcement learning problems. In most real-world reinforcement learning tasks, TD methods require a function approximator to represent the value function. However, using function approximators requires manually making crucial representational decisions. This paper investigates evolutionary function approximation, a novel approach to automatically selecting function approximator representations that enable efficient individual learning. This method evolves individuals that are better able to learn. We present a fully implemented instantiation of evolutionary function approximation that combines NEAT, a neuroevolutionary optimization technique, with Q-learning, a popular TD method. The resulting NEAT+Q algorithm automatically discovers effective representations for neural network function approximators. This paper also presents on-line evolutionary computation, which improves the on-line performance of evolutionary computation by borrowing the selection mechanisms that TD methods use to choose individual actions and applying them in evolutionary computation to select policies for evaluation. We evaluate these contributions with extended empirical studies in two domains: 1) the mountain car task, a standard reinforcement learning benchmark on which neural network function approximators have previously performed poorly, and 2) server job scheduling, a large probabilistic domain drawn from the field of autonomic computing. The results demonstrate that evolutionary function approximation can significantly improve the performance of TD methods and that on-line evolutionary computation can significantly improve evolutionary methods. This paper also presents additional tests that offer insight into what factors can make neural network function approximation difficult in practice.
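The abstract describes two interlocking mechanisms, and a short sketch may make them concrete. The Python below is not from the paper or its code release; it is a minimal illustration under assumed interfaces: a hypothetical Individual object exposing q(), epsilon_greedy_action(), backprop(), average_fitness(), and record_fitness(); an environment with reset(), step(), and actions; and a reproduce() stand-in for NEAT's speciation, crossover, and structural mutation.

Illustrative sketch (Python):

import random

EPSILON = 0.1   # exploration rate, reused here at the population level
ALPHA = 0.01    # learning rate for the TD (Q-learning) updates
GAMMA = 0.99    # discount factor

def select_individual(population):
    # On-line evolutionary computation: pick which policy to evaluate
    # next with the same epsilon-greedy rule TD methods use for actions.
    if random.random() < EPSILON:
        return random.choice(population)
    return max(population, key=lambda ind: ind.average_fitness())

def evaluate(individual, env, episodes=5):
    # Fitness is the reward accumulated while the individual *learns*:
    # Q-learning adjusts the evolved network's weights during evaluation.
    total = 0.0
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = individual.epsilon_greedy_action(state)
            next_state, reward, done = env.step(action)
            # Standard Q-learning target: r + gamma * max_a' Q(s', a')
            target = reward
            if not done:
                target += GAMMA * max(individual.q(next_state, a)
                                      for a in env.actions)
            individual.backprop(state, action, target, ALPHA)
            total += reward
            state = next_state
    individual.record_fitness(total / episodes)

def neat_q(population, env, generations, evals_per_gen):
    # Outer loop: evolution searches over network *representations*;
    # Q-learning tunes each representation's weights within a lifetime.
    for _ in range(generations):
        for _ in range(evals_per_gen):
            evaluate(select_individual(population), env)
        population = reproduce(population)  # hypothetical NEAT step
    return max(population, key=lambda ind: ind.average_fitness())

One design choice the sketch leaves open is whether the weights learned during an evaluation are written back into the genome (Lamarckian evolution) or discarded so only the innate representation is inherited (Darwinian evolution); here that decision is hidden inside reproduce().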

BibTeX Entry

@Article{JMLR06,
	Author="Shimon Whiteson and Peter Stone",
	title="Evolutionary Function Approximation for Reinforcement Learning",
	journal="Journal of Machine Learning Research",
	year="2006",
	pages="877--917",
	volume="7",month="May",
	abstract={
                  Temporal difference methods are theoretically
                  grounded and empirically effective methods for
                  addressing reinforcement learning problems.  In most
                  real-world reinforcement learning tasks, TD methods
                  require a function approximator to represent the
                  value function.  However, using function
                  approximators requires manually making crucial
                  representational decisions.  This paper investigates
                  \emph{evolutionary function approximation}, a novel
                  approach to automatically selecting function
                  approximator representations that enable efficient
                  individual learning.  This method \emph{evolves}
                  individuals that are better able to \emph{learn}.
                  We present a fully implemented instantiation of
                  evolutionary function approximation which combines
                  NEAT, a neuroevolutionary optimization technique,
                  with Q-learning, a popular TD method.  The resulting
                  NEAT+Q algorithm automatically discovers effective
                  representations for neural network function
                  approximators.  This paper also presents
                  \emph{on-line evolutionary computation}, which
                  improves the on-line performance of evolutionary
                  computation by borrowing selection mechanisms used
                  in TD methods to choose individual actions and using
                  them in evolutionary computation to select policies
                  for evaluation.  We evaluate these contributions
                  with extended empirical studies in two domains: 1)
                  the mountain car task, a standard reinforcement
                  learning benchmark on which neural network function
                  approximators have previously performed poorly and
                  2) server job scheduling, a large probabilistic
                  domain drawn from the field of autonomic computing.
                  The results demonstrate that evolutionary function
                  approximation can significantly improve the
                  performance of TD methods and on-line evolutionary
                  computation can significantly improve evolutionary
                  methods.  This paper also presents additional tests
                  that offer insight into what factors can make neural
                  network function approximation difficult in
                  practice.},
    wwwnote = {Available from <a href="http://jmlr.csail.mit.edu/papers/v7/whiteson06a.html">journal's web page</a>.},
}	

Generated by bib2html.pl (written by Patrick Riley) on Tue Nov 19, 2024 10:24:39