CS 395T:
Grounded Natural Language Processing

How to read research articles (background papers recommended by Prof. Matt Lease)

  1. S. Keshav. How to Read a Paper. U. Waterloo, February 17, 2016.
  2. Alan Smith. The Task of the Referee. 1990.

Research Papers

Papers to be read and presented by students. The presentation date is given in brackets at the beginning of each entry.
  1. [1/29] Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. Experience Grounds Language. EMNLP 2020.
  2. [1/29] Md. Zakir Hossain, Ferdous Sohel, Mohd Fairuz Shiratuddin, Hamid Laga, A Comprehensive Survey of Deep Learning for Image Captioning, ACM Computing Surveys (October 2018).
  3. [2/3] Zheng Yang, Bing Han, Xinbo Gao, Zhi-Hui Zhan, Eye-movement-prompted large image captioning model, Pattern Recognition, Volume 159, March 2025.
  4. [2/3] Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond J. Mooney, Trevor Darrell, and Kate Saenko, Sequence to Sequence -- Video to Text, Proceedings of the 2015 International Conference on Computer Vision (ICCV-15), Santiago, Chile, December 2015.
  5. [2/5] Xin Wang, Jiawei Wu, Junkun Chen, Lei Li, Yuan-Fang Wang, and William Yang Wang, VATEX: A Large-Scale, High-Quality Multilingual Dataset for Video-and-Language Research, Proceedings of the 17th IEEE/CVF International Conference on Computer Vision (ICCV 2019), Seoul, Korea.
  6. [2/5] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, International Conference on Computer Vision (ICCV), 2015.
  7. [2/10] Danna Gurari, Qing Li, Abigale J. Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P. Bigham. VizWiz Grand Challenge: Answering Visual Questions from Blind People, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  8. [2/10] Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee. ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks, NeurIPS, 2019.
  9. [2/12] Hao Tan, Mohit Bansal, LXMERT: Learning Cross-Modality Encoder Representations from Transformers, EMNLP 2019.
  10. [2/12] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever, Learning Transferable Visual Models From Natural Language Supervision, 2021.
  11. [2/17] Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, Yejin Choi. MERLOT: Multimodal Neural Script Knowledge Models, NeurIPS 2021.
  12. [2/17] Xi Chen et al., PaLI: A Jointly-Scaled Multilingual Language-Image Model, ICLR 2023.
  13. [2/19] Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, Candace Ross, Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality, CVPR 2022.
  14. [2/19] Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, Saining Xie, Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs, CVPR 2024.
  15. [2/24] Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sunderhauf, Ian Reid, Stephen Gould, Anton van den Hengel. Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 3674-3683.
  16. [2/24] Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox, ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks, CVPR 2020.
  17. [2/26] Michael Ahn et al., Do As I Can, Not As I Say: Grounding Language in Robotic Affordances, 2022.
  18. [2/26] Anthony Brohan et al., RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control, 2023.
  19. [3/3] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, Quan Vuong, Thomas Kollar, Benjamin Burchfiel, Russ Tedrake, Dorsa Sadigh, Sergey Levine, Percy Liang, Chelsea Finn, OpenVLA: An Open-Source Vision-Language-Action Model, Conference on Robot Learning (CoRL) 2024.
  20. [3/3] Suneel Belkhale, Tianli Ding, Ted Xiao, Pierre Sermanet, Quan Vuong, Jonathan Tompson, Yevgen Chebotar, Debidatta Dwibedi, Dorsa Sadigh, RT-H: Action Hierarchies Using Language, Robotics: Science and Systems (RSS), 2024.
  21. [3/5] David Harwath, Adria Recasens, Didac Suris, Galen Chuang, Antonio Torralba, and James Glass, Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input, Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  22. [3/5] Zhifei Xie, Changqiao Wu, Mini-Omni2: Towards Open-source GPT-4o with Vision, Speech and Duplex Capabilities, arXiv, 2024.
  23. [3/10] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, Mohammad Norouzi, Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding, arXiv, May 2022.
  24. [3/10] Weijia Shi, Xiaochuang Han, Chunting Zhou, Weixin Liang, Xi Victoria Lin, Luke Zettlemoyer, Lili Yu, LMFusion: Adapting Pretrained Language Models for Multimodal Generation, arXiv, 2024.
  25. [3/12] Weijie Kong et al., HunyuanVideo: A Systematic Framework For Large Video Generative Models, arXiv, 2025.
  26. [3/12] Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, Tao Chen, MotionGPT: Human Motion as a Foreign Language, 2023.
  27. [3/24] Yue Yang, Fan-Yun Sun, Luca Weihs, Eli VanderBilt, Alvaro Herrasti, Winson Han, Jiajun Wu, Nick Haber, Ranjay Krishna, Lingjie Liu, Chris Callison-Burch, Mark Yatskar, Aniruddha Kembhavi, Christopher Clark, Holodeck: Language Guided Generation of 3D Embodied AI Environments, CVPR 2024.

Guest Lectures

Class Project Presentations

  1. April 14
  2. April 16
  3. April 21
  4. April 23
  5. April 28