UTCS Artificial Intelligence
Generating Question Relevant Captions to Aid Visual Question Answering (2019)
Jialin Wu, Zeyuan Hu, Raymond J. Mooney
Visual question answering (VQA) and image captioning require a shared body of general knowledge connecting language and vision. We present a novel approach that exploits this connection to improve VQA performance by jointly generating captions targeted to help answer a specific visual question. The model is trained using an existing caption dataset by automatically determining question-relevant captions with an online gradient-based method. Experimental results on the VQA v2 challenge demonstrate that our approach obtains state-of-the-art VQA performance (e.g., 68.4% on the Test-standard set using a single model) by simultaneously generating question-relevant captions.
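The online gradient-based selection of question-relevant captions can be sketched as a ranking problem: captions whose training signal points in the same direction as the question-answering signal are kept. The toy code below is an illustrative assumption, not the paper's exact procedure; it assumes precomputed gradient vectors (with respect to some shared visual representation) and a cosine-alignment criterion, and the function names are hypothetical.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two gradient vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def select_relevant_captions(vqa_grad, caption_grads, k=2):
    """Rank candidate captions by how well the gradient of each caption's
    generation loss aligns with the gradient of the VQA answer loss
    (both taken w.r.t. a shared visual representation), and keep the
    indices of the top-k best-aligned captions.

    This is a simplified sketch: in practice the gradients would be
    recomputed online during joint training.
    """
    scores = [cosine(vqa_grad, g) for g in caption_grads]
    order = np.argsort(scores)[::-1]          # highest alignment first
    return [int(i) for i in order[:k]], scores

# Toy example: three candidate captions, one well-aligned with the answer signal.
vqa_grad = np.array([1.0, 0.0, 0.0])
caption_grads = [
    np.array([0.0, 1.0, 0.0]),    # orthogonal: irrelevant caption
    np.array([1.0, 0.1, 0.0]),    # well-aligned: question-relevant caption
    np.array([-1.0, 0.0, 0.0]),   # opposed: misleading caption
]
top, scores = select_relevant_captions(vqa_grad, caption_grads, k=1)
```

Under this criterion the second caption would be selected, since its generation gradient most closely matches the direction that reduces the VQA loss.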
View:
PDF
Citation:
In
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL)
, Florence, Italy, August 2019.
Bibtex:
@inproceedings{wu:acl19,
  title={Generating Question Relevant Captions to Aid Visual Question Answering},
  author={Jialin Wu and Zeyuan Hu and Raymond J. Mooney},
  booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL)},
  month={August},
  address={Florence, Italy},
  url={http://www.cs.utexas.edu/users/ai-labpub-view.php?PubID=127759},
  year={2019}
}
Presentation:
Slides (PPT)
People
Raymond J. Mooney
Faculty
mooney [at] cs utexas edu
Jialin Wu
Ph.D. Alumni
jialinwu [at] utexas edu
Areas of Interest
Language and Vision
Labs
Machine Learning