Peter Stone's Selected Publications



RIDM: Reinforced Inverse Dynamics Modeling for Learning from a Single Observed Demonstration

RIDM: Reinforced Inverse Dynamics Modeling for Learning from a Single Observed Demonstration.
Brahma Pavse, Faraz Torabi, Josiah Hanna, Garrett Warnell, and Peter Stone.
IEEE Robotics and Automation Letters (RA-L), 5(4):6262–6269, October 2020.
Video of the experiments: https://sites.google.com/view/ridm-reinforced-inverse-dynami; 13-minute video presentation: https://www.youtube.com/watch?v=QDReBWEoDtE.

Download

[PDF] 405.1kB  [slides.pptx] 115.4MB

Abstract

Augmenting reinforcement learning with imitation learning is often hailed as a method by which to improve upon learning from scratch. However, most existing methods for integrating these two techniques are subject to several strong assumptions---chief among them that information about demonstrator actions is available. In this paper, we investigate the extent to which this assumption is necessary by introducing and evaluating reinforced inverse dynamics modeling (RIDM), a novel paradigm for combining imitation from observation (IfO) and reinforcement learning with no dependence on demonstrator action information. Moreover, RIDM requires only a single demonstration trajectory and is able to operate directly on raw (unaugmented) state features. We find experimentally that RIDM performs favorably compared to a baseline approach for several tasks in simulation as well as for tasks on a real UR5 robot arm. Experiment videos can be found at https://sites.google.com/view/ridm-reinforced-inverse-dynami.
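
To make the paradigm concrete, below is a minimal, self-contained Python sketch of the general recipe the abstract describes: fit an inverse dynamics model (IDM) from self-supervised exploration, use it to infer the actions missing from a single state-only demonstration, then refine with reinforcement learning. Everything here (the ToyEnv point-mass environment, the least-squares IDM, the random-search refinement, all function names) is an illustrative stand-in under simplified assumptions, not the paper's actual algorithm or code.

# Hedged sketch of the IfO + RL recipe from the abstract. All names
# (ToyEnv, fit_idm, rollout_return) are hypothetical, not from the paper.
import numpy as np

class ToyEnv:
    """1-D point mass: state = (position, velocity), action = force."""
    def reset(self):
        self.s = np.zeros(2)
        return self.s.copy()

    def step(self, a):
        pos, vel = self.s
        vel = vel + 0.1 * float(a)
        pos = pos + 0.1 * vel
        self.s = np.array([pos, vel])
        reward = -abs(pos - 1.0)  # task reward: reach position 1.0
        return self.s.copy(), reward

def fit_idm(env, n_samples=2000):
    """Fit a least-squares IDM a = W @ (s, s') from random exploration."""
    X, y = [], []
    s = env.reset()
    for _ in range(n_samples):
        a = np.random.uniform(-1.0, 1.0)
        s2, _ = env.step(a)
        X.append(np.concatenate([s, s2]))
        y.append(a)
        s = s2
    W, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return W

def rollout_return(env, actions):
    """Total task reward when executing an open-loop action sequence."""
    env.reset()
    return sum(env.step(a)[1] for a in actions)

# A single state-only demonstration (no actions recorded), as in IfO.
demo_states = [np.array([0.1 * t, 1.0]) for t in range(11)]

env = ToyEnv()
W = fit_idm(env)

# Infer the demonstrator's actions from consecutive demo states.
inferred = [np.concatenate([s, s2]) @ W
            for s, s2 in zip(demo_states, demo_states[1:])]

# Crude RL refinement: perturb the inferred action sequence and keep
# perturbations that raise the task reward (random search).
best, best_ret = list(inferred), rollout_return(env, inferred)
for _ in range(200):
    cand = [a + 0.05 * np.random.randn() for a in best]
    ret = rollout_return(env, cand)
    if ret > best_ret:
        best, best_ret = cand, ret

print(f"return before refinement: {rollout_return(env, inferred):.3f}")
print(f"return after refinement:  {best_ret:.3f}")

The sketch deliberately uses the simplest possible components (a linear IDM, open-loop actions, random-search refinement) to keep the three-stage structure visible; the paper itself evaluates on simulated tasks and a real UR5 robot arm, where each stage is substantially more involved.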

BibTeX Entry

@article{RAL20-pavse,
  author = {Brahma Pavse and Faraz Torabi and Josiah Hanna and Garrett Warnell and Peter Stone},
  title = {RIDM: Reinforced Inverse Dynamics Modeling for Learning from a Single Observed Demonstration},
  journal = {{IEEE} Robotics and Automation Letters (RA-L)},
  year = {2020},
  month = {October},
  volume = {5},
  number = {4},
  pages = {6262--6269},
  issn = {2377-3766},
  doi = {10.1109/LRA.2020.3010750},
  wwwnote = {Presented at the International Conference on Intelligent Robots and Systems ({IROS}).\\
   A preliminary version was presented at the <i>Imitation, Intent, and Interaction</i> (I3) Workshop at ICML 2019.\\
   <a href="https://sites.google.com/view/ridm-reinforced-inverse-dynami">Video of the experiments</a>; <a href="https://www.youtube.com/watch?v=QDReBWEoDtE">13-minute video presentation</a>.},
  abstract = {
Augmenting reinforcement learning with imitation learning is often hailed as a 
method by which to improve upon learning from scratch. However, most existing 
methods for integrating these two techniques are subject to several strong 
assumptions---chief among them that information about demonstrator actions is 
available. In this paper, we investigate the extent to which this assumption 
is necessary by introducing and evaluating reinforced inverse dynamics 
modeling (RIDM), a novel paradigm for combining imitation from observation 
(IfO) and reinforcement learning with no dependence on demonstrator action 
information. Moreover, RIDM requires only a single demonstration trajectory 
and is able to operate directly on raw (unaugmented) state features. We find 
experimentally that RIDM performs favorably compared to a baseline approach 
for several tasks in simulation as well as for tasks on a real UR5 robot arm. 
Experiment videos can be found at 
https://sites.google.com/view/ridm-reinforced-inverse-dynami.
  },
}

Generated by bib2html.pl (written by Patrick Riley) on Tue Nov 19, 2024 10:24:38