Peter Stone's Selected Publications



Reinforced Grounded Action Transformation for Sim-to-Real Transfer

Reinforced Grounded Action Transformation for Sim-to-Real Transfer.
Haresh Karnan, Siddharth Desai, Josiah P. Hanna, Garrett Warnell, and Peter Stone.
In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020), October 2020.
14-minute video presentation.

Download

[PDF] 506.6kB

Abstract

Robots can learn to do complex tasks in simulation, but often, learned behaviors fail to transfer well to the real world due to simulator imperfections (the “reality gap”). Some existing solutions to this sim-to-real problem, such as Grounded Action Transformation (GAT), use a small amount of real-world experience to minimize the reality gap by “grounding” the simulator. While very effective in certain scenarios, GAT is not robust on problems that use complex function approximation techniques to model a policy. In this paper, we introduce Reinforced Grounded Action Transformation (RGAT), a new sim-to-real technique that uses Reinforcement Learning (RL) not only to update the target policy in simulation, but also to perform the grounding step itself. This novel formulation allows for end-to-end training during the grounding step, which, compared to GAT, produces a better grounded simulator. Moreover, we show experimentally in several MuJoCo domains that our approach leads to successful transfer for policies modeled using neural networks.
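The grounding idea in the abstract can be illustrated with a toy sketch. This is an assumption-laden miniature, not the authors' code: a 1-D "real" system, a mismatched simulator, and an action transformation g(a) = w·a whose parameter is tuned to reward agreement between transformed simulator steps and real steps. A crude stochastic search stands in for the RL optimizer RGAT would use; all dynamics and names here are illustrative.

```python
import numpy as np

# Toy grounding sketch (illustrative only, not the paper's implementation).
def real_step(s, a):
    return s + 0.8 * a          # "real" dynamics, unknown to the simulator

def sim_step(s, a):
    return s + 1.0 * a          # imperfect simulator dynamics

def grounding_reward(w, states, actions):
    # Reward the transformer for making simulated next states match real ones.
    gap = sum(abs(sim_step(s, w * a) - real_step(s, a))
              for s, a in zip(states, actions))
    return -gap

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=32)    # a small batch of "real" transitions
actions = rng.uniform(-1, 1, size=32)

w = 1.0                                  # start from the identity transformation
for _ in range(200):                     # hill climbing standing in for RL
    cand = w + rng.normal(scale=0.05)
    if grounding_reward(cand, states, actions) > grounding_reward(w, states, actions):
        w = cand

print(w)  # moves from 1.0 toward the true coefficient 0.8
```

In RGAT proper, the transformation is a neural network trained end-to-end with an RL algorithm rather than a scalar tuned by random search, but the objective has the same flavor: make the grounded simulator's transitions indistinguishable from real ones.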

BibTeX Entry

@InProceedings{IROS20-Karnan,
  author    = {Haresh Karnan and Siddharth Desai and Josiah P. Hanna and Garrett Warnell and Peter Stone},
  title     = {Reinforced Grounded Action Transformation for Sim-to-Real Transfer},
  abstract  = {Robots can learn to do complex tasks in simulation, but often, learned behaviors fail to transfer well to the real world due to simulator imperfections (the ``reality gap''). Some existing solutions to this sim-to-real problem, such as Grounded Action Transformation (GAT), use a small amount of real-world experience to minimize the reality gap by ``grounding'' the simulator. While very effective in certain scenarios, GAT is not robust on problems that use complex function approximation techniques to model a policy. In this paper, we introduce Reinforced Grounded Action Transformation (RGAT), a new sim-to-real technique that uses Reinforcement Learning (RL) not only to update the target policy in simulation, but also to perform the grounding step itself. This novel formulation allows for end-to-end training during the grounding step, which, compared to GAT, produces a better grounded simulator. Moreover, we show experimentally in several MuJoCo domains that our approach leads to successful transfer for policies modeled using neural networks.},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020)},
  month     = {October},
  year      = {2020},
  address   = {Las Vegas, NV, USA},
  wwwnote   = {<a href="https://youtu.be/mInoJkzBP9M">14-minute video presentation</a>.},
}

Generated by bib2html.pl (written by Patrick Riley) on Sun Nov 24, 2024 20:24:54