Wait, That Feels Familiar: Learning to Extrapolate Human Preferences for Preference-Aligned Path Planning.
Haresh Karnan, Elvin Yang, Garrett Warnell, Joydeep Biswas, and Peter Stone.
In
International Conference on Robotics and Automation, May 2024.
Autonomous mobility tasks such as last-mile delivery require reasoning about operator-indicated preferences over terrains on which the robot should navigate to ensure both robot safety and mission success. However, coping with out-of-distribution data from novel terrains or appearance changes due to lighting variations remains a fundamental problem in visual terrain-adaptive navigation. Existing solutions either require labor-intensive manual data re-collection and labeling or use hand-coded reward functions that may not align with operator preferences. In this work, we posit that operator preferences for visually novel terrains, which the robot should adhere to, can often be extrapolated from established terrain preferences within the inertial-proprioceptive-tactile domain. Leveraging this insight, we introduce Preference extrApolation for Terrain-awarE Robot Navigation (PATERN), a novel framework for extrapolating operator terrain preferences for visual navigation. PATERN learns to map inertial-proprioceptive-tactile measurements from the robot's observations to a representation space and performs nearest-neighbor search in this space to estimate operator preferences over novel terrains. Through physical robot experiments in outdoor environments, we assess PATERN's capability to extrapolate preferences and generalize to novel terrains and challenging lighting conditions. Compared to baseline approaches, our findings indicate that PATERN robustly generalizes to diverse terrains and varied lighting conditions, while navigating in a preference-aligned manner.
@InProceedings{karnanicra2024,
  author    = {Haresh Karnan and Elvin Yang and Garrett Warnell and Joydeep Biswas and Peter Stone},
  title     = {Wait, That Feels Familiar: Learning to Extrapolate Human Preferences for Preference-Aligned Path Planning},
  booktitle = {International Conference on Robotics and Automation},
  year      = {2024},
  month     = {May},
  location  = {Yokohama, Japan},
  abstract  = {Autonomous mobility tasks such as last-mile delivery require reasoning about operator-indicated preferences over terrains on which the robot should navigate to ensure both robot safety and mission success. However, coping with out-of-distribution data from novel terrains or appearance changes due to lighting variations remains a fundamental problem in visual terrain-adaptive navigation. Existing solutions either require labor-intensive manual data re-collection and labeling or use hand-coded reward functions that may not align with operator preferences. In this work, we posit that operator preferences for visually novel terrains, which the robot should adhere to, can often be extrapolated from established terrain preferences within the inertial-proprioceptive-tactile domain. Leveraging this insight, we introduce Preference extrApolation for Terrain-awarE Robot Navigation (PATERN), a novel framework for extrapolating operator terrain preferences for visual navigation. PATERN learns to map inertial-proprioceptive-tactile measurements from the robot's observations to a representation space and performs nearest-neighbor search in this space to estimate operator preferences over novel terrains. Through physical robot experiments in outdoor environments, we assess PATERN's capability to extrapolate preferences and generalize to novel terrains and challenging lighting conditions. Compared to baseline approaches, our findings indicate that PATERN robustly generalizes to diverse terrains and varied lighting conditions, while navigating in a preference-aligned manner.},
}
Generated by bib2html.pl (written by Patrick Riley) on Sun Nov 24, 2024 20:24:53