Skeletal Feature Compensation for Imitation Learning with Embodiment Mismatch.
Eddy Hudson, Garrett Warnell, Faraz Torabi, and Peter Stone.
In International Conference on Robotics and Automation (ICRA), May 2022.
Presentation Video
Learning from demonstrations in the wild (e.g. YouTube videos) is a tantalizing goal in imitation learning. However, for this goal to be achieved, imitation learning algorithms must deal with the fact that the demonstrators and learners may have bodies that differ from one another. This condition — "embodiment mismatch" — is ignored by many recent imitation learning algorithms. Our proposed imitation learning technique, SILEM (Skeletal feature compensation for Imitation Learning with Embodiment Mismatch), addresses a particular type of embodiment mismatch by introducing a learned affine transform to compensate for differences in the skeletal features obtained from the learner and expert. We create toy domains based on PyBullet’s HalfCheetah and Ant to assess SILEM’s benefits for this type of embodiment mismatch. We also provide qualitative and quantitative results on more realistic problems — teaching simulated humanoid agents, including Atlas from Boston Dynamics, to walk by observing human demonstrations.
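To make the core idea in the abstract concrete, here is a minimal, hypothetical sketch (not the authors' released code) of a learned affine transform that maps a learner's skeletal features toward the expert's feature space; the class name SkeletalAffineCompensator and the PyTorch-style setup are assumptions for illustration only.

    # Hypothetical sketch, assuming a PyTorch setup; not the SILEM implementation.
    import torch
    import torch.nn as nn

    class SkeletalAffineCompensator(nn.Module):
        """Per-dimension affine map y = scale * x + shift over skeletal features."""
        def __init__(self, feature_dim: int):
            super().__init__()
            # Initialized to the identity transform (scale 1, shift 0),
            # then trained to compensate for embodiment mismatch.
            self.scale = nn.Parameter(torch.ones(feature_dim))
            self.shift = nn.Parameter(torch.zeros(feature_dim))

        def forward(self, learner_features: torch.Tensor) -> torch.Tensor:
            return learner_features * self.scale + self.shift

    # Usage: compensate learner features before comparing them against expert
    # demonstrations, e.g. inside an imitation-learning discriminator.
    compensator = SkeletalAffineCompensator(feature_dim=12)
    learner_feats = torch.randn(32, 12)        # batch of learner skeletal features
    expert_like = compensator(learner_feats)   # compensated features

In this sketch the transform is diagonal (per-feature scale and shift); a full affine map over the feature vector would be a drop-in replacement using nn.Linear.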
@InProceedings{icra22-hudson,
  author    = {Eddy Hudson and Garrett Warnell and Faraz Torabi and Peter Stone},
  booktitle = {International Conference on Robotics and Automation (ICRA)},
  title     = {Skeletal Feature Compensation for Imitation Learning with Embodiment Mismatch},
  month     = {May},
  year      = {2022},
  location  = {Philadelphia, USA},
  abstract  = {Learning from demonstrations in the wild (e.g. YouTube videos) is a tantalizing goal in imitation learning. However, for this goal to be achieved, imitation learning algorithms must deal with the fact that the demonstrators and learners may have bodies that differ from one another. This condition --- "embodiment mismatch" --- is ignored by many recent imitation learning algorithms. Our proposed imitation learning technique, SILEM (Skeletal feature compensation for Imitation Learning with Embodiment Mismatch), addresses a particular type of embodiment mismatch by introducing a learned affine transform to compensate for differences in the skeletal features obtained from the learner and expert. We create toy domains based on PyBullet's HalfCheetah and Ant to assess SILEM's benefits for this type of embodiment mismatch. We also provide qualitative and quantitative results on more realistic problems --- teaching simulated humanoid agents, including Atlas from Boston Dynamics, to walk by observing human demonstrations.},
  wwwnote   = {<a href="https://www.youtube.com/watch?v=Git3ccvCIGA">Presentation Video</a>},
}