Peter Stone's Selected Publications



A Neuroevolution Approach to General Atari Game Playing

A Neuroevolution Approach to General Atari Game Playing.
Matthew Hausknecht, Joel Lehman, Risto Miikkulainen, and Peter Stone.
IEEE Transactions on Computational Intelligence and AI in Games, 2014.

Download

[PDF] (1.3MB)  [postscript] (3.1MB)

Abstract

This article addresses the challenge of learning to play many different video games with little domain-specific knowledge. Specifically, it introduces a neuro-evolution approach to general Atari 2600 game playing. Four neuro-evolution algorithms were paired with three different state representations and evaluated on a set of 61 Atari games. The neuro-evolution agents represent different points along the spectrum of algorithmic sophistication - including weight evolution on topologically fixed neural networks (Conventional Neuro-evolution), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), evolution of network topology and weights (NEAT), and indirect network encoding (HyperNEAT). State representations include an object representation of the game screen, the raw pixels of the game screen, and seeded noise (a comparative baseline). Results indicate that direct-encoding methods work best on compact state representations while indirect-encoding methods (i.e. HyperNEAT) allow scaling to higher-dimensional representations (i.e. the raw game screen). Previous approaches based on temporal-difference learning had trouble dealing with the large state spaces and sparse reward gradients often found in Atari games. Neuro-evolution ameliorates these problems and evolved policies achieve state-of-the-art results, even surpassing human high scores on three games. These results suggest that neuro-evolution is a promising approach to general video game playing.
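To make the experimental setup concrete, below is a minimal, illustrative sketch of the simplest of the four methods compared in the paper: conventional neuroevolution, i.e. weight evolution on a topologically fixed network. This is not the authors' code. The evaluate function is a hypothetical placeholder standing in for running an Atari episode in the emulator and returning the game score, and all names and hyperparameters (pop_size, mutation_std, the 18-action output size) are illustrative assumptions rather than values from the paper.

import numpy as np

rng = np.random.default_rng(0)


def init_genome(n_in, n_hidden, n_out):
    """Flat weight vector for a fixed-topology two-layer network."""
    return rng.normal(0.0, 1.0, size=n_in * n_hidden + n_hidden * n_out)


def policy(genome, obs, n_in, n_hidden, n_out):
    """Map a state observation to action scores; the argmax would be the joystick action."""
    w1 = genome[: n_in * n_hidden].reshape(n_in, n_hidden)
    w2 = genome[n_in * n_hidden:].reshape(n_hidden, n_out)
    return np.tanh(obs @ w1) @ w2


def evaluate(genome, n_in, n_hidden, n_out):
    """Placeholder fitness: in the paper's setting this would be the game score
    obtained by playing one or more Atari episodes with this genome's policy."""
    obs = rng.normal(size=n_in)  # stand-in for an object / pixel / noise state representation
    return float(policy(genome, obs, n_in, n_hidden, n_out).max())


def evolve(n_in=8, n_hidden=16, n_out=18, pop_size=50, generations=20,
           mutation_std=0.1, elite_frac=0.2):
    """Simple elitist (mu + lambda)-style loop: keep the fittest genomes,
    refill the population with mutated copies of them."""
    population = [init_genome(n_in, n_hidden, n_out) for _ in range(pop_size)]
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(generations):
        ranked = sorted(population,
                        key=lambda g: evaluate(g, n_in, n_hidden, n_out),
                        reverse=True)
        elites = ranked[:n_elite]
        children = [elites[rng.integers(n_elite)]
                    + rng.normal(0.0, mutation_std, size=elites[0].shape)
                    for _ in range(pop_size - n_elite)]
        population = elites + children
    return max(population, key=lambda g: evaluate(g, n_in, n_hidden, n_out))


if __name__ == "__main__":
    best = evolve()
    print("best genome has", best.size, "weights")

The other methods in the comparison vary exactly this loop along the axis of algorithmic sophistication described in the abstract: CMA-ES replaces the mutation-plus-selection step with covariance-adapted sampling, NEAT evolves the network topology along with the weights, and HyperNEAT uses an indirect encoding that lets the evolved networks scale to the raw game screen.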

BibTeX Entry

@article{TCIAIG13-mhauskn,
  author = {Matthew Hausknecht and Joel Lehman and Risto Miikkulainen and Peter Stone},
  title = {A Neuroevolution Approach to General Atari Game Playing},
  journal = {IEEE Transactions on Computational Intelligence and AI in Games},
  year = {2014},
  abstract = {This article addresses the challenge of learning to play many different video games with little domain-specific knowledge. Specifically, it introduces a neuro-evolution approach to general Atari 2600 game playing. Four neuro-evolution algorithms were paired with three different state representations and evaluated on a set of 61 Atari games. The neuro-evolution agents represent different points along the spectrum of algorithmic sophistication - including weight evolution on topologically fixed neural networks (Conventional Neuro-evolution), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), evolution of network topology and weights (NEAT), and indirect network encoding (HyperNEAT). State representations include an object representation of the game screen, the raw pixels of the game screen, and seeded noise (a comparative baseline). Results indicate that direct-encoding methods work best on compact state representations while indirect-encoding methods (i.e.\ HyperNEAT) allow scaling to higher-dimensional representations (i.e.\ the raw game screen). Previous approaches based on temporal-difference learning had trouble dealing with the large state spaces and sparse reward gradients often found in Atari games. Neuro-evolution ameliorates these problems and evolved policies achieve state-of-the-art results, even surpassing human high scores on three games. These results suggest that neuro-evolution is a promising approach to general video game playing.},
}

Generated by bib2html.pl (written by Patrick Riley) on Tue Nov 19, 2024 10:24:39