Peter Stone's Selected Publications



Lucid Dreaming for Experience Replay: Refreshing Past States with the Current Policy

Lucid Dreaming for Experience Replay: Refreshing Past States with the Current Policy.
Yunshu Du, Garrett Warnell, Assefaw Gebremedhin, Peter Stone, and Matthew E. Taylor.
Neural Computing and Applications, May 2021.

Download

[PDF] (2.2 MB)

Abstract

Experience replay (ER) improves the data efficiency of off-policy reinforcement learning (RL) algorithms by allowing an agent to store and reuse its past experiences in a replay buffer. While many techniques have been proposed to enhance ER by biasing how experiences are sampled from the buffer, thus far they have not considered strategies for refreshing experiences inside the buffer. In this work, we introduce Lucid Dreaming for Experience Replay (LiDER), a conceptually new framework that allows replay experiences to be refreshed by leveraging the agent's current policy. LiDER consists of three steps: First, LiDER moves an agent back to a past state. Second, from that state, LiDER lets the agent execute a sequence of actions by following its current policy, as if the agent were "dreaming" about the past and could try out different behaviors to encounter new experiences in the dream. Third, LiDER stores and reuses the new experience if it turns out better than what the agent previously experienced, i.e., to refresh its memories. LiDER is designed to be easily incorporated into off-policy, multi-worker RL algorithms that use ER; in this work we present a case study of applying LiDER to an actor-critic-based algorithm. Results show LiDER consistently improves performance over the baseline in six Atari 2600 games. Our open-source implementation of LiDER and the data used to generate all plots in this work are available at https://github.com/duyunshu/lucid-dreaming-for-exp-replay.
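
The three-step refresh loop described in the abstract can be illustrated with a minimal sketch. The names below (env.restore_state, policy.act, ReplayBuffer, and the transition format) are assumptions made for illustration only and do not reflect the authors' actual implementation; see the linked repository for the real code.

    # Hypothetical sketch of a LiDER-style refresh, under the assumptions stated above.
    import random
    from collections import deque


    class ReplayBuffer:
        """Toy FIFO replay buffer storing transition tuples."""

        def __init__(self, capacity=100_000):
            self.storage = deque(maxlen=capacity)

        def add(self, transition):
            self.storage.append(transition)

        def sample(self, batch_size):
            return random.sample(self.storage, min(batch_size, len(self.storage)))


    def lider_refresh(env, policy, buffer, past_state, past_return, max_steps=100):
        """One refresh:
        1) move the agent back to a past state,
        2) roll out the *current* policy from that state ("dreaming"),
        3) store the new experience only if it beats the old one.
        """
        # Step 1: reset the simulator to the stored past state
        # (assumes the environment exposes such a restore operation).
        state = env.restore_state(past_state)

        trajectory, episode_return, done = [], 0.0, False
        for _ in range(max_steps):
            if done:
                break
            # Step 2: act with the agent's current policy.
            action = policy.act(state)
            next_state, reward, done, _ = env.step(action)
            trajectory.append((state, action, reward, next_state, done))
            episode_return += reward
            state = next_state

        # Step 3: refresh memory only if the new rollout turned out better.
        if episode_return > past_return:
            for transition in trajectory:
                buffer.add(transition)
        return episode_return

In a multi-worker setting, one could imagine dedicating some workers to this refresh routine while the remaining workers collect experience normally; again, this is only a reading of the abstract, not a description of the published system.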

BibTeX Entry

@article{NCAA21-Du,
author={Yunshu Du and Garrett Warnell and Assefaw Gebremedhin and Peter Stone and Matthew E. Taylor},
title={Lucid Dreaming for Experience Replay: Refreshing Past States with the Current Policy},
journal={Neural Computing and Applications},
year={2021},
month={May},
day={25},
abstract={
          Experience replay (ER) improves the data efficiency of
          off-policy reinforcement learning (RL) algorithms by
          allowing an agent to store and reuse its past experiences in
          a replay buffer. While many techniques have been proposed to
          enhance ER by biasing how experiences are sampled from the
          buffer, thus far they have not considered strategies for
          refreshing experiences inside the buffer. In this work, we
          introduce Lucid Dreaming for Experience Replay (LiDER), a
          conceptually new framework that allows replay experiences to
          be refreshed by leveraging the agent's current policy. LiDER
          consists of three steps: First, LiDER moves an agent back to
          a past state. Second, from that state, LiDER then lets the
          agent execute a sequence of actions by following its current
          policy---as if the agent were ``dreaming'' about the past
          and can try out different behaviors to encounter new
          experiences in the dream. Third, LiDER stores and reuses the
          new experience if it turned out better than what the agent
          previously experienced, i.e., to refresh its memories. LiDER
          is designed to be easily incorporated into off-policy,
          multi-worker RL algorithms that use ER; we present in this
          work a case study of applying LiDER to an
          actor--critic-based algorithm. Results show LiDER
          consistently improves performance over the baseline in six
          Atari 2600 games. Our open-source implementation of LiDER
          and the data used to generate all plots in this work are
          available at
          https://github.com/duyunshu/lucid-dreaming-for-exp-replay.},
issn={1433-3058},
doi={10.1007/s00521-021-06104-5},
url={https://doi.org/10.1007/s00521-021-06104-5},
}
