UT Austin Villa Publications


Learning a Policy for Opportunistic Active Learning

Aishwarya Padmakumar, Peter Stone, and Raymond J. Mooney. Learning a Policy for Opportunistic Active Learning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-18), Brussels, Belgium, November 2018.

Download

[PDF] 394.3 kB

Abstract

Active learning identifies data points to label that are expected to be the most useful in improving a supervised model. Opportunistic active learning incorporates active learning into interactive tasks that constrain possible queries during interactions. Prior work has shown that opportunistic active learning can be used to improve grounding of natural language descriptions in an interactive object retrieval task. In this work, we use reinforcement learning for such an object retrieval task, to learn a policy that effectively trades off task completion with model improvement that would benefit future tasks.
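To make the core trade-off concrete, here is a minimal, hypothetical sketch of the kind of decision an opportunistic active learner faces at each turn: commit to completing the task, or spend a query to improve the model. The function name, margin-based uncertainty heuristic, threshold, and budget parameter are all illustrative assumptions; they are not the learned policy from the paper, which is trained with reinforcement learning.

```python
def choose_action(candidate_scores, query_budget, uncertainty_threshold=0.2):
    """Toy hand-coded policy: return 'guess' (complete the task) or
    'query' (ask for a label to improve the model).

    candidate_scores: model confidence for each candidate object, in [0, 1].
    query_budget: number of label queries still allowed in this interaction.
    """
    best = max(candidate_scores)
    runner_up = sorted(candidate_scores)[-2] if len(candidate_scores) > 1 else 0.0
    margin = best - runner_up  # small margin => model is uncertain
    if query_budget > 0 and margin < uncertainty_threshold:
        return "query"  # opportunistically ask for a label
    return "guess"      # commit to task completion
```

For example, `choose_action([0.9, 0.1], 3)` returns `"guess"` (the model is confident), while `choose_action([0.5, 0.45], 3)` returns `"query"` (the top candidates are nearly tied). A learned policy replaces the fixed threshold with a value estimate of how much future tasks would benefit from the query.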

BibTeX

@inproceedings{EMNLP18-padmakumar,
title={Learning a Policy for Opportunistic Active Learning},
author={Aishwarya Padmakumar and Peter Stone and Raymond J. Mooney},
booktitle={Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-18)},
month={November},
address={Brussels, Belgium},
url={http://www.cs.utexas.edu/users/ml/papers/padmakumar.emnlp18.pdf},
year={2018},
abstract={
      Active learning identifies data points to label that are expected to be the
      most useful in improving a supervised model. Opportunistic active learning
      incorporates active learning into interactive tasks that constrain possible
      queries during interactions. Prior work has shown that opportunistic active
      learning can be used to improve grounding of natural language descriptions
      in an interactive object retrieval task. In this work, we use reinforcement
      learning for such an object retrieval task, to learn a policy that
      effectively trades off task completion with model improvement that would
      benefit future tasks. 
},
location={Brussels, Belgium}
}

Generated by bib2html.pl (written by Patrick Riley) on Sun Nov 24, 2024 20:30:02