Peter Stone's Selected Publications



Using Human-Inspired Signals to Disambiguate Navigational Intentions

Using Human-Inspired Signals to Disambiguate Navigational Intentions.
Justin Hart, Reuth Mirsky, Xuesu Xiao, Stone Tejeda, Bonny Mahajan, Jamin Goo, Kathryn Baldauf, Sydney Owen, and Peter Stone.
In Proceedings of the 12th International Conference on Social Robotics (ICSR), November 2020.
Video presentation: https://www.youtube.com/watch?v=rQziUQro9BU

Download

[PDF] (3.2 MB)

Abstract

People are proficient at communicating their intentions in order to avoid conflicts when navigating in narrow, crowded environments. Mobile robots, on the other hand, often lack both the ability to interpret human intentions and the ability to clearly communicate their own intentions to people sharing their space. This work addresses the second of these points, leveraging insights about how people implicitly communicate with each other through gaze to enable mobile robots to more clearly signal their navigational intention. We present a human study measuring the importance of gaze in coordinating people's navigation. This study is followed by the development of a virtual agent head which is added to a mobile robot platform. Comparing the performance of a robot with a virtual agent head against one with an LED turn signal demonstrates its ability to impact people's navigational choices, and that people more easily interpret the gaze cue than the LED turn signal.

BibTeX Entry

@inproceedings{ICSR2020-HART,
  title={Using Human-Inspired Signals to Disambiguate Navigational Intentions},
  author={Justin Hart and Reuth Mirsky and Xuesu Xiao and Stone Tejeda and Bonny Mahajan and Jamin Goo and Kathryn Baldauf and Sydney Owen and Peter Stone},
  booktitle={Proceedings of the 12th International Conference on Social Robotics (ICSR)},
  abstract={People are proficient at communicating their intentions in order to avoid conflicts when navigating in narrow, crowded environments. Mobile robots, on the other hand, often lack both the ability to interpret human intentions and the ability to clearly communicate their own intentions to people sharing their space. This work addresses the second of these points, leveraging insights about how people implicitly communicate with each other through gaze to enable mobile robots to more clearly signal their navigational intention. We present a human study measuring the importance of gaze in coordinating people's navigation. This study is followed by the development of a virtual agent head which is added to a mobile robot platform. Comparing the performance of a robot with a virtual agent head against one with an LED turn signal demonstrates its ability to impact people's navigational choices, and that people more easily interpret the gaze cue than the LED turn signal.},
  wwwnote={<a href="https://www.youtube.com/watch?v=rQziUQro9BU">Video presentation</a>},
  month={November},
  address={Golden, Colorado},
  year={2020}
}

Generated by bib2html.pl (written by Patrick Riley) on Tue Nov 19, 2024 10:24:42