Behavioral Cloning from Observation.
Faraz Torabi, Garrett Warnell, and Peter Stone.
In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI), July 2018.
Also available from arXiv
[PDF] (2.3MB)  [slides.pptx] (26.0MB)
Humans often learn how to perform tasks via imitation: they observe others perform a task and then very quickly infer the appropriate actions to take based on their observations. While extending this paradigm to autonomous agents is a well-studied problem in general, two particular aspects have largely been overlooked: (1) that the learning is done from observation only (i.e., without explicit action information), and (2) that the learning is typically done very quickly. In this work, we propose a two-phase, autonomous imitation learning technique called behavioral cloning from observation (BCO) that aims to provide improved performance with respect to both of these aspects. First, we allow the agent to acquire experience in a self-supervised fashion. This experience is used to develop a model, which is then utilized to learn a particular task by observing an expert perform that task, without knowledge of the specific actions taken. We experimentally compare BCO to imitation learning methods, including the state-of-the-art generative adversarial imitation learning (GAIL) technique, and we show comparable task performance in several different simulation domains while exhibiting increased learning speed after expert trajectories become available.
@InProceedings{IJCAI2018-torabi,
  author    = {Faraz Torabi and Garrett Warnell and Peter Stone},
  title     = {Behavioral Cloning from Observation},
  booktitle = {Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI)},
  location  = {Stockholm, Sweden},
  month     = {July},
  year      = {2018},
  abstract  = {Humans often learn how to perform tasks via imitation: they observe others perform a task, and then very quickly infer the appropriate actions to take based on their observations. While extending this paradigm to autonomous agents is a well-studied problem in general, there are two particular aspects that have largely been overlooked: (1) that the learning is done from observation only (i.e., without explicit action information), and (2) that the learning is typically done very quickly. In this work, we propose a two-phase, autonomous imitation learning technique called behavioral cloning from observation (BCO), that aims to provide improved performance with respect to both of these aspects. First, we allow the agent to acquire experience in a self-supervised fashion. This experience is used to develop a model which is then utilized to learn a particular task by observing an expert perform that task without the knowledge of the specific actions taken. We experimentally compare BCO to imitation learning methods, including the state-of-the-art, generative adversarial imitation learning (GAIL) technique, and we show comparable task performance in several different simulation domains while exhibiting increased learning speed after expert trajectories become available.},
  wwwnote   = {Also available from <a href="https://arxiv.org/abs/1805.01954">arXiv</a>},
}
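The abstract describes BCO's two phases: first learning an inverse dynamics model from the agent's own self-supervised experience, then inferring the expert's actions from state-only demonstrations and performing behavioral cloning on those inferred labels. The sketch below is not the authors' implementation; it is a minimal illustration assuming a Gymnasium-style environment with continuous actions, expert demonstrations given only as a NumPy array of consecutive states, and scikit-learn MLPs as stand-in function approximators. All names (env, expert_state_trajectory, layer sizes, step counts) are placeholders.

```python
# Minimal BCO-style sketch (illustrative only, not the paper's code).
import numpy as np
from sklearn.neural_network import MLPRegressor

def collect_self_supervised(env, n_steps):
    """Phase 1a: explore with random actions, recording (s, a, s') transitions."""
    S, A, S_next = [], [], []
    s, _ = env.reset()
    for _ in range(n_steps):
        a = env.action_space.sample()                      # self-supervised exploration
        s_next, _, terminated, truncated, _ = env.step(a)
        S.append(s); A.append(a); S_next.append(s_next)
        s = env.reset()[0] if (terminated or truncated) else s_next
    return np.array(S), np.array(A), np.array(S_next)

def train_inverse_dynamics(S, A, S_next):
    """Phase 1b: fit an inverse dynamics model M(s, s') -> a on the agent's experience."""
    X = np.hstack([S, S_next])
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    model.fit(X, A)
    return model

def infer_expert_actions(inv_model, expert_states):
    """Phase 2a: label consecutive expert state pairs with inferred actions."""
    S, S_next = expert_states[:-1], expert_states[1:]
    return S, inv_model.predict(np.hstack([S, S_next]))

def behavioral_cloning(S, A_inferred):
    """Phase 2b: fit a policy s -> a on the inferred action labels."""
    policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    policy.fit(S, A_inferred)
    return policy

# Example wiring (hypothetical environment and expert data):
# S, A, S_next = collect_self_supervised(env, n_steps=50_000)
# inv_model    = train_inverse_dynamics(S, A, S_next)
# S_e, A_hat   = infer_expert_actions(inv_model, expert_state_trajectory)
# policy       = behavioral_cloning(S_e, A_hat)
```

Because only the inverse dynamics model requires environment interaction, the behavioral cloning step runs entirely offline once expert state trajectories are available, which is the source of the learning-speed advantage the abstract mentions.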