Peter Stone's Selected Publications



Feature Selection for Value Function Approximation Using Bayesian Model Selection

Feature Selection for Value Function Approximation Using Bayesian Model Selection.
Tobias Jung and Peter Stone.
In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, September 2009.

Download

[PDF] 746.9kB   [postscript] 2.3MB   [slides.pdf] 957.5kB

Abstract

Feature selection in reinforcement learning (RL), i.e., choosing basis functions such that useful approximations of the unknown value function can be obtained, is one of the main challenges in scaling RL to real-world applications. Here we consider GPTD, a Gaussian-process-based framework for approximate policy evaluation, and propose feature selection through marginal likelihood optimization of the associated hyperparameters. Our approach has two appealing benefits: (1) given just sample transitions, we can solve the policy evaluation problem fully automatically (without looking at the learning task, and, in theory, independently of the dimensionality of the state space), and (2) model selection allows us to consider more sophisticated kernels, which in turn enable us to identify relevant subspaces and eliminate irrelevant state variables, so that we achieve substantial computational savings and improved prediction performance.
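
The sketch below illustrates, in isolation, the mechanism the abstract refers to: a kernel with one length-scale per state variable (automatic relevance determination), with the hyperparameters tuned by maximizing the marginal likelihood, so that irrelevant state variables drift toward large length-scales and are effectively dropped. This is a hedged toy example, not the authors' code: it uses plain GP regression on noisy synthetic targets via scikit-learn as a stand-in for the GPTD policy-evaluation setting, and the data and variable names are assumptions made for illustration.

# Minimal sketch: ARD-style feature selection by marginal-likelihood maximization.
# Assumption: plain GP regression on synthetic targets stands in for GPTD-based
# policy evaluation; only the model-selection mechanism is shown.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy data: 200 "states" with 5 features, but the target depends only on the
# first two dimensions; the other three are irrelevant distractors.
X = rng.uniform(-1.0, 1.0, size=(200, 5))
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)

# One length-scale per state variable (ARD kernel) plus observation noise.
kernel = RBF(length_scale=np.ones(5), length_scale_bounds=(1e-2, 1e3)) \
         + WhiteKernel(noise_level=1e-2)

# Fitting maximizes the log marginal likelihood with respect to the
# hyperparameters -- the model-selection step used here for feature selection.
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5).fit(X, y)

# Large learned length-scales mark state variables the fit effectively ignores;
# small ones mark the relevant subspace worth keeping.
learned_length_scales = gp.kernel_.k1.length_scale
for i, ell in enumerate(learned_length_scales):
    print(f"state variable {i}: length-scale {ell:.2f}")

Running this, the length-scales for the three distractor dimensions grow much larger than those for the first two, which is the signal the paper's method uses to discard irrelevant state variables.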

BibTeX Entry

@InProceedings{ECML09-jung,
	author="Tobias Jung and Peter Stone",
	title="Feature Selection for Value Function Approximation Using Bayesian Model Selection",
	booktitle="The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases",
	month="September",
	year="2009",
	abstract={Feature selection in reinforcement learning (RL), i.e. choosing
		basis functions such that useful approximations of the unknown value
		function can be obtained, is one of the main challenges in scaling RL to
		real-world applications. Here we consider the Gaussian process based
		framework GPTD for approximate policy evaluation, and propose feature
		selection through marginal likelihood optimization of the associated
		hyperparameters. Our approach has two appealing benefits: (1) given just
		sample transitions, we can solve the policy evaluation problem fully
		automatically (without looking at the learning task, and, in theory,
		independent of the dimensionality of the state space), and (2) model
		selection allows us to consider more sophisticated kernels, which in turn
		enable us to identify relevant subspaces and eliminate irrelevant state
		variables such that we can achieve substantial computational savings and
		improved prediction performance.
	},
}
