Haresh Karnan, Garrett Warnell, Xuesu Xiao, and Peter Stone. VOILA: Visual-Observation-Only Imitation Learning for Autonomous Navigation. In International Conference on Robotics and Automation (ICRA), May 2022.
Poster, Video
While imitation learning for vision-based autonomous mobile robot navigation has recently received a great deal of attention in the research community, existing approaches typically require state-action demonstrations that were gathered using the deployment platform. However, what if one cannot easily outfit their platform to record these demonstration signals, or, worse yet, the demonstrator does not have access to the platform at all? Is imitation learning for vision-based autonomous navigation even possible in such scenarios? In this work, we hypothesize that the answer is yes, and that recent ideas from the Imitation from Observation (IfO) literature can be brought to bear such that a robot can learn to navigate using only egocentric video collected by a demonstrator, even in the presence of viewpoint mismatch. To this end, we introduce a new algorithm, Visual-Observation-Only Imitation Learning for Autonomous Navigation (VOILA), that can successfully learn navigation policies from a single video demonstration collected from a physically different agent. We evaluate VOILA in the photorealistic AirSim simulator and show that VOILA not only successfully imitates the expert, but also learns navigation policies that generalize to novel environments. Further, we demonstrate the effectiveness of VOILA in a real-world setting by showing that it enables a wheeled Jackal robot to successfully imitate a human walking in an environment, using a video recorded with a mobile phone camera.
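The abstract describes the imitation-from-observation idea only at a high level. As a loose illustration of one plausible IfO-style reward (not VOILA's actual formulation, which should be taken from the paper), the sketch below scores the robot's current camera image by how well its local visual features match the current frame of the demonstration video, advancing through the demonstration as frames are "reached". The DemoRewarder class, the use of ORB features from OpenCV, the descriptor-distance cutoff, and the frame-advance threshold are all illustrative assumptions.

    # Hypothetical sketch of an imitation-from-observation reward for
    # navigation. Assumptions: grayscale uint8 frames, ORB features,
    # and hand-picked thresholds -- none of these are claimed to match
    # VOILA's published reward.
    import cv2
    import numpy as np

    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def match_score(agent_img: np.ndarray, demo_img: np.ndarray) -> float:
        """Fraction of demonstration keypoints matched in the agent's view."""
        _, des_a = orb.detectAndCompute(agent_img, None)
        kps_d, des_d = orb.detectAndCompute(demo_img, None)
        if des_a is None or des_d is None or len(kps_d) == 0:
            return 0.0
        matches = matcher.match(des_d, des_a)
        # Keep only reasonably close descriptor matches (threshold is a guess).
        good = [m for m in matches if m.distance < 40]
        return len(good) / len(kps_d)

    class DemoRewarder:
        """Dense reward from a single egocentric demonstration video."""
        def __init__(self, demo_frames, advance_threshold=0.3):
            self.frames = demo_frames      # list of grayscale frames
            self.idx = 0                   # current target demo frame
            self.tau = advance_threshold

        def reward(self, agent_img: np.ndarray) -> float:
            score = match_score(agent_img, self.frames[self.idx])
            if score > self.tau and self.idx < len(self.frames) - 1:
                self.idx += 1              # move on to the next demo frame
            return score

Under this sketch, a reinforcement learning agent would be trained to maximize the reward, effectively chasing the demonstration frame by frame from its own viewpoint; this is one way viewpoint mismatch can be tolerated, since feature matching does not require the agent's camera pose to equal the demonstrator's.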
@InProceedings{ICRA22-karnanA,
  author    = {Haresh Karnan and Garrett Warnell and Xuesu Xiao and Peter Stone},
  title     = {VOILA: Visual-Observation-Only Imitation Learning for Autonomous Navigation},
  booktitle = {International Conference on Robotics and Automation (ICRA)},
  location  = {Online},
  month     = {May},
  year      = {2022},
  abstract  = {While imitation learning for vision-based autonomous mobile robot navigation has recently received a great deal of attention in the research community, existing approaches typically require state-action demonstrations that were gathered using the deployment platform. However, what if one cannot easily outfit their platform to record these demonstration signals, or, worse yet, the demonstrator does not have access to the platform at all? Is imitation learning for vision-based autonomous navigation even possible in such scenarios? In this work, we hypothesize that the answer is yes, and that recent ideas from the Imitation from Observation (IfO) literature can be brought to bear such that a robot can learn to navigate using only egocentric video collected by a demonstrator, even in the presence of viewpoint mismatch. To this end, we introduce a new algorithm, Visual-Observation-Only Imitation Learning for Autonomous Navigation (VOILA), that can successfully learn navigation policies from a single video demonstration collected from a physically different agent. We evaluate VOILA in the photorealistic AirSim simulator and show that VOILA not only successfully imitates the expert, but also learns navigation policies that generalize to novel environments. Further, we demonstrate the effectiveness of VOILA in a real-world setting by showing that it enables a wheeled Jackal robot to successfully imitate a human walking in an environment, using a video recorded with a mobile phone camera.},
  wwwnote   = {<a href="https://drive.google.com/file/d/1N2xoqUl-ZjGIVJY9Judg7ftsRMcR9HxD/view?usp=sharing">Poster</a>, <a href="https://www.youtube.com/watch?v=aFspSnjnw-k&ab_channel=Hareshkarnan">Video</a>}
}