Peter Stone's Selected Publications



Policy Evaluation in Continuous MDPs with Efficient Kernelized Gradient Temporal Difference

Policy Evaluation in Continuous MDPs with Efficient Kernelized Gradient Temporal Difference.
Alec Koppel, Garrett Warnell, Ethan Stump, Peter Stone, and Alejandro Ribeiro.
IEEE Transactions on Automatic Control, 66(4):1856–1863, April 2021.
official online version

Download

[PDF] (648.8 kB)

Abstract

We consider policy evaluation in infinite-horizon discounted Markov decision problems (MDPs) with continuous compact state and action spaces. We reformulate this task as a compositional stochastic program with a function-valued decision variable that belongs to a reproducing kernel Hilbert space (RKHS). We approach this problem via a new functional generalization of stochastic quasi-gradient methods operating in tandem with stochastic sparse subspace projections. The result is an extension of gradient temporal difference learning that yields nonlinearly parameterized value function estimates of the solution to the Bellman evaluation equation. We call this method Parsimonious Kernel Gradient Temporal Difference (PKGTD) Learning. Our main contribution is a memory-efficient non-parametric stochastic method guaranteed to converge exactly to the Bellman fixed point with probability 1 with attenuating step-sizes under the hypothesis that it belongs to the RKHS. Further, with constant step-sizes and compression budget, we establish mean convergence to a neighborhood and that the value function estimates have finite complexity. In the Mountain Car domain, we observe faster convergence to lower Bellman error solutions than existing approaches with a fraction of the required memory.
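
For readers unfamiliar with kernel-based value function approximation, the following is a minimal, heavily simplified Python sketch (assuming NumPy) of the general idea the abstract describes: a value estimate represented as a weighted sum of kernels over a budgeted dictionary of visited states, updated by temporal-difference steps, with a novelty test standing in for sparsification. This is not the authors' PKGTD algorithm; the paper uses compositional stochastic quasi-gradient updates and kernel orthogonal matching pursuit for compression, and all names and parameters below are illustrative assumptions.

# Simplified sketch of kernel TD policy evaluation with a budgeted
# dictionary, in the spirit of (but not identical to) PKGTD.
import numpy as np


def rbf(x, y, bandwidth=0.5):
    """Gaussian (RBF) kernel between two state vectors."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return np.exp(-d.dot(d) / (2.0 * bandwidth ** 2))


class SparseKernelTD:
    """Value estimate V(s) = sum_i w_i k(d_i, s) over a growing dictionary
    of kernel centers d_i; the novelty test below is a crude stand-in for
    the paper's stochastic sparse subspace projection."""

    def __init__(self, gamma=0.99, step=0.1, novelty_eps=1e-2, bandwidth=0.5):
        self.gamma, self.step, self.eps, self.bw = gamma, step, novelty_eps, bandwidth
        self.dictionary = []   # kernel centers (selected visited states)
        self.weights = []      # one coefficient per center

    def value(self, s):
        return sum(w * rbf(d, s, self.bw) for d, w in zip(self.dictionary, self.weights))

    def update(self, s, r, s_next):
        # TD(0) temporal-difference error for the observed transition.
        delta = r + self.gamma * self.value(s_next) - self.value(s)
        # The functional TD step adds step * delta * k(s, .) to V.  If s is
        # poorly represented by the current dictionary it becomes a new
        # center; otherwise the increment is folded onto the closest center.
        sims = [rbf(d, s, self.bw) for d in self.dictionary]
        if not sims or 1.0 - max(sims) > self.eps:
            self.dictionary.append(np.asarray(s, float))
            self.weights.append(self.step * delta)
        else:
            self.weights[int(np.argmax(sims))] += self.step * delta
        return delta


if __name__ == "__main__":
    # Toy 1-D random walk with reward at the right boundary, just to show
    # the API (terminal-state bootstrapping is ignored for brevity).
    rng = np.random.default_rng(0)
    est = SparseKernelTD()
    s = np.array([0.0])
    for _ in range(2000):
        s_next = s + rng.choice([-0.1, 0.1])
        r = 1.0 if s_next[0] >= 1.0 else 0.0
        est.update(s, r, s_next)
        s = np.zeros(1) if abs(s_next[0]) >= 1.0 else s_next
    print(f"dictionary size: {len(est.dictionary)}, V(0.9) ~ {est.value([0.9]):.3f}")

The point of the dictionary/novelty mechanism is the memory control emphasized in the abstract: the value estimate keeps a bounded number of kernel centers rather than one per observed transition, which is what the paper's compression budget and finite-complexity guarantees formalize.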

BibTeX Entry

@article{IEEETAC2020-koppel,
  author={Alec Koppel and Garrett Warnell and Ethan Stump and Peter Stone and Alejandro Ribeiro},
  journal={{IEEE} Transactions on Automatic Control},
  title={Policy Evaluation in Continuous MDPs with Efficient Kernelized Gradient Temporal Difference},
  year={2021},
  month={April},
  volume={66},
  number={4},
  pages={1856--1863},
  doi={10.1109/TAC.2020.3029315},
  wwwnote={<a href="https://ieeexplore.ieee.org/document/9216519">official online version</a>},
  abstract={We consider policy evaluation in infinite-horizon discounted Markov decision
problems (MDPs) with continuous compact state and action spaces. We reformulate
this task as a compositional stochastic program with a function-valued decision
variable that belongs to a reproducing kernel Hilbert space (RKHS). We approach
this problem via a new functional generalization of stochastic quasi-gradient
methods operating in tandem with stochastic sparse subspace projections. The
result is an extension of gradient temporal difference learning that yields
nonlinearly parameterized value function estimates of the solution to the
Bellman evaluation equation.  We call this method Parsimonious Kernel Gradient
Temporal Difference (PKGTD) Learning. Our main contribution is a
memory-efficient non-parametric stochastic method guaranteed to converge
exactly to the Bellman fixed point with probability $1$ with attenuating
step-sizes under the hypothesis that it belongs to the RKHS. Further, with
constant step-sizes and compression budget, we establish mean convergence to a
neighborhood and that the value function estimates have finite complexity. In
the Mountain Car domain, we observe faster convergence to lower Bellman error
solutions than existing approaches with a fraction of the required memory.}
}

Generated by bib2html.pl (written by Patrick Riley) on Tue Nov 19, 2024 10:24:38