Publications: Language and Robotics
Embodied robots have the potential to better understand and communicate with humans in natural language because they can sense their environment through multiple modalities, such as vision, audio, haptics, and proprioception. They can also move and influence the world through their actions, enabling more active exploration of their environment. Our research explores how multimodal perceptual information can be used to better understand language, and how motor skills can be used to actively engage with humans and learn natural language through interaction, particularly through dialog and games such as "I Spy."
- Natural Language Can Help Bridge the Sim2Real Gap
[Details] [PDF] [Slides (PDF)] [Poster] [Video]
Albert Yu, Adeline Foote, Raymond Mooney, and Roberto Martín-Martín
In Robotics: Science and Systems (RSS), July 2024.
- A Survey of Robotic Language Grounding: Tradeoffs Between Symbols and Embeddings
[Details] [PDF] [Slides (PDF)] [Poster]
Vanya Cohen, Jason Xinyu Liu, Raymond Mooney, Stefanie Tellex, and David Watkins
In International Joint Conference on Artificial Intelligence (IJCAI), August 2024.
- CAPE: Corrective Actions from Precondition Errors using Large Language Models
[Details] [PDF]
Shreyas Sundara Raman, Vanya Cohen, Ifrah Idrees, Eric Rosen, Raymond Mooney, Stefanie Tellex, and David Paulius
In International Conference on Robotics and Automation (ICRA), May 2024.
- Using Both Demonstrations and Language Instructions to Efficiently Learn Robotic Tasks
[Details] [PDF] [Video]
Albert Yu and Raymond J. Mooney
In International Conference on Learning Representations (ICLR), May 2023.
- End-to-End Learning to Follow Language Instructions with Compositional Policies
[Details] [PDF] [Poster]
Vanya Cohen, Geraud Nangue Tasse, Nakul Gopalan, Steven James, Ray Mooney, and Benjamin Rosman
In Workshop on Language and Robot Learning at CoRL 2022, December 2022.
- Planning with Large Language Models via Corrective Re-prompting
[Details] [PDF]
Shreyas Sundara Raman, Vanya Cohen, Eric Rosen, Ifrah Idrees, David Paulius, and Stefanie Tellex
In Foundation Models for Decision Making Workshop at NeurIPS 2022, December 2022.
- Zero-shot Task Adaptation using Natural Language
[Details] [PDF]
Prasoon Goyal, Raymond J. Mooney, and Scott Niekum
arXiv preprint, June 2021.
- Using Natural Language to Aid Task Specification in Sequential Decision Making Problems
[Details] [PDF] [Slides (PDF)] [Video]
Prasoon Goyal
October 2021. Ph.D. Proposal.
- Supervised Attention from Natural Language Feedback for Reinforcement Learning
[Details] [PDF]
Clara Cecilia Cannon
Master's Thesis, Department of Computer Science, The University of Texas at Austin, May 2021.
- Dialog as a Vehicle for Lifelong Learning of Grounded Language Understanding Systems
[Details] [PDF] [Slides (PDF)]
Aishwarya Padmakumar
PhD Thesis, Department of Computer Science, The University of Texas at Austin, August 2020.
- PixL2R: Guiding Reinforcement Learning using Natural Language by Mapping Pixels to Rewards
[Details] [PDF]
Prasoon Goyal, Scott Niekum, and Raymond J. Mooney
In 4th Conference on Robot Learning (CoRL), November 2020. Also presented at the 1st Language in Reinforcement Learning (LaReL) Workshop at ICML, July 2020 (Best Paper Award), and the 6th Deep Reinforcement Learning Workshop at Neural Information Processing Systems (NeurIPS), December 2020.
- Evaluating the Robustness of Natural Language Reward Shaping Models to Spatial Relations
[Details] [PDF] [Slides (PPT)] [Slides (PDF)]
Antony Yun
May 2020. Undergraduate Honors Thesis, Computer Science Department, University of Texas at Austin.
- Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog
[Details] [PDF]
Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin Hart, Peter Stone, and Raymond J. Mooney
The Journal of Artificial Intelligence Research (JAIR), 67:327-374, February 2020.
- Optimal Use Of Verbal Instructions For Multi-Robot Human Navigation Guidance
[Details] [PDF] [Slides (PDF)] [Video]
Harel Yedidsion, Jacqueline Deans, Connor Sheehan, Mahathi Chillara, Justin Hart, Peter Stone, and Raymond J. Mooney
In Proceedings of the Eleventh International Conference on Social Robotics, 133-143, 2019. Springer.
- Using Natural Language for Reward Shaping in Reinforcement Learning
[Details] [PDF] [Slides (PDF)] [Poster]
Prasoon Goyal, Scott Niekum, and Raymond J. Mooney
In Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China, August 2019.
- Improving Grounded Natural Language Understanding through Human-Robot Dialog
[Details] [PDF]
Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin Hart, Peter Stone, and Raymond J. Mooney
In IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 2019.
- Improved Models and Queries for Grounded Human-Robot Dialog
[Details] [PDF]
Aishwarya Padmakumar
October 2018. PhD Proposal, Department of Computer Science, The University of Texas at Austin.
- Interaction and Autonomy in RoboCup@Home and Building-Wide Intelligence
[Details] [PDF]
Justin Hart, Harel Yedidsion, Yuqian Jiang, Nick Walker, Rishi Shah, Jesse Thomason, Aishwarya Padmakumar, Rolando Fernandez, Jivko Sinapov, Raymond Mooney, and Peter Stone
In Artificial Intelligence (AI) for Human-Robot Interaction (HRI) symposium, AAAI Fall Symposium Series, Arlington, Virginia, October 2018.
- Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog
[Details] [PDF]
Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin Hart, Peter Stone, and Raymond J. Mooney
In Late-breaking Track at the SIGDIAL Special Session on Physically Situated Dialogue (RoboDIAL-18), Melbourne, Australia, July 2018.
- Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog
[Details] [PDF]
Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin Hart, Peter Stone, and Raymond J. Mooney
In RSS Workshop on Models and Representations for Natural Human-Robot Communication (MRHRC-18). Robotics: Science and Systems (RSS), June 2018.
- Continually Improving Grounded Natural Language Understanding through Human-Robot Dialog
[Details] [PDF]
Jesse Thomason
PhD Thesis, Department of Computer Science, The University of Texas at Austin, April 2018.
- Guiding Exploratory Behaviors for Multi-Modal Grounding of Linguistic Descriptions
[Details] [PDF]
Jesse Thomason, Jivko Sinapov, Raymond Mooney, Peter Stone
In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), February 2018.
- Opportunistic Active Learning for Grounding Natural Language Descriptions
[Details] [PDF]
Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Justin Hart, Peter Stone, and Raymond J. Mooney
In Sergey Levine, Vincent Vanhoucke, and Ken Goldberg, editors, Proceedings of the 1st Annual Conference on Robot Learning (CoRL-17), 67-76, Mountain View, California, November 2017. PMLR.
- Guiding Interaction Behaviors for Multi-modal Grounded Language Learning
[Details] [PDF]
Jesse Thomason, Jivko Sinapov, and Raymond J. Mooney
In Proceedings of the Workshop on Language Grounding for Robotics at ACL 2017 (RoboNLP-17), Vancouver, Canada, August 2017.
- Integrated Learning of Dialog Strategies and Semantic Parsing
[Details] [PDF]
Aishwarya Padmakumar, Jesse Thomason, and Raymond J. Mooney
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), 547-557, Valencia, Spain, April 2017.
- BWIBots: A platform for bridging the gap between AI and human-robot interaction research
[Details] [PDF]
Piyush Khandelwal, Shiqi Zhang, Jivko Sinapov, Matteo Leonetti, Jesse Thomason, Fangkai Yang, Ilaria Gori, Maxwell Svetlik, Priyanka Khante, Vladimir Lifschitz, J. K. Aggarwal, Raymond Mooney, and Peter Stone
The International Journal of Robotics Research, 2017.
- Continuously Improving Natural Language Understanding for Robotic Systems through Semantic Parsing, Dialog, and Multi-modal Perception
[Details] [PDF]
Jesse Thomason
November 2016. PhD proposal, Department of Computer Science, The University of Texas at Austin.
- Learning Multi-Modal Grounded Linguistic Semantics by Playing "I Spy"
[Details] [PDF]
Jesse Thomason, Jivko Sinapov, Maxwell Svetlik, Peter Stone, and Raymond J. Mooney
In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI-16), 3477-3483, New York City, 2016.
- Learning to Interpret Natural Language Commands through Human-Robot Dialog
[Details] [PDF]
Jesse Thomason, Shiqi Zhang, Raymond Mooney, and Peter Stone
In Proceedings of the 2015 International Joint Conference on Artificial Intelligence (IJCAI), 1923-1929, Buenos Aires, Argentina, July 2015.
- Grounded Language Learning Models for Ambiguous Supervision
[Details] [PDF] [Slides (PPT)]
Joo Hyun Kim
PhD Thesis, Department of Computer Science, University of Texas at Austin, December 2013.
- Adapting Discriminative Reranking to Grounded Language Learning
[Details] [PDF] [Slides (PPT)]
Joohyun Kim and Raymond J. Mooney
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL-2013), 218-227, Sofia, Bulgaria, August 2013.
- Generative Models of Grounded Language Learning with Ambiguous Supervision
[Details] [PDF] [Slides (PPT)]
Joohyun Kim
Technical Report, PhD proposal, Department of Computer Science, The University of Texas at Austin, June 2012.
- Unsupervised PCFG Induction for Grounded Language Learning with Highly Ambiguous Supervision
[Details] [PDF]
Joohyun Kim and Raymond J. Mooney
In Proceedings of the Conference on Empirical Methods in Natural Language Processing and Natural Language Learning (EMNLP-CoNLL '12), 433-444, Jeju Island, Korea, July 2012.
- Fast Online Lexicon Learning for Grounded Language Acquisition
[Details] [PDF] [Slides (PPT)]
David L. Chen
In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL-2012), 430-439, July 2012.
- Learning Language from Ambiguous Perceptual Context
[Details] [PDF] [Slides (PPT)]
David L. Chen
PhD Thesis, Department of Computer Science, University of Texas at Austin, May 2012.
- Learning to Interpret Natural Language Navigation Instructions from Observations
[Details] [PDF] [Slides (PPT)]
David L. Chen and Raymond J. Mooney
In Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI-2011), 859-865, August 2011.
- Panning for Gold: Finding Relevant Semantic Content for Grounded Language Learning
[Details] [PDF] [Slides (PDF)]
David L. Chen and Raymond J. Mooney
In Proceedings of Symposium on Machine Learning in Speech and Language Processing (MLSLP 2011), June 2011.
- Generative Alignment and Semantic Parsing for Learning from Ambiguous Supervision
[Details] [PDF]
Joohyun Kim and Raymond J. Mooney
In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), 543-551, Beijing, China, August 2010.
- Training a Multilingual Sportscaster: Using Perceptual Context to Learn Language
[Details] [PDF]
David L. Chen, Joohyun Kim, and Raymond J. Mooney
Journal of Artificial Intelligence Research, 37:397-435, 2010.
- Learning Language from Perceptual Context
[Details] [PDF] [Slides (PPT)]
David L. Chen
December 2009. Ph.D. proposal, Department of Computer Sciences, University of Texas at Austin.
- Learning to Sportscast: A Test of Grounded Language Acquisition
[Details] [PDF] [Slides (PPT)] [Video]
David L. Chen and Raymond J. Mooney
In Proceedings of the 25th International Conference on Machine Learning (ICML), Helsinki, Finland, July 2008.
- Guiding a Reinforcement Learner with Natural Language Advice: Initial Results in RoboCup Soccer
[Details] [PDF]
Gregory Kuhlmann, Peter Stone, Raymond J. Mooney, and Jude W. Shavlik
In The AAAI-2004 Workshop on Supervisory Control of Learning and Adaptive Systems, July 2004.