Semi-Supervised Learning for Semantic Parsing using Support Vector Machines (2007)
We present a method for utilizing unannotated sentences to improve a semantic parser that maps natural language (NL) sentences into their formal meaning representations (MRs). Given NL sentences annotated with their MRs, the initial supervised semantic parser learns the mapping by training Support Vector Machine (SVM) classifiers for every production in the MR grammar. Our new method applies the learned semantic parser to the unannotated sentences, collects the resulting unlabeled examples, and uses them to retrain the classifiers with a variant of transductive SVMs. Experimental results show improvements over the purely supervised parser, particularly when the annotated training set is small.
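The overall pipeline (supervised training of one classifier per MR-grammar production, followed by retraining with examples collected from unannotated sentences) can be sketched roughly as below. This is a minimal, hypothetical illustration in Python with scikit-learn: for simplicity it substitutes confidence-thresholded self-labeling for the transductive SVM variant actually used in the paper, and all data structures (productions, labeled_examples, unlabeled_features) are assumptions for the sake of the example, not the paper's code.

```python
# Hedged sketch of the semi-supervised retraining loop.
# Not the paper's implementation: the TSVM variant is replaced here by a
# simple confidence-thresholded self-labeling step for illustration only.
import numpy as np
from sklearn.svm import LinearSVC

def train_initial_classifiers(labeled_examples, productions):
    """Supervised step: one binary SVM per production in the MR grammar.
    labeled_examples[prod] is a hypothetical list of (feature_vector, label) pairs."""
    classifiers = {}
    for prod in productions:
        X = np.array([feats for feats, _ in labeled_examples[prod]])
        y = np.array([label for _, label in labeled_examples[prod]])
        clf = LinearSVC()
        clf.fit(X, y)
        classifiers[prod] = clf
    return classifiers

def retrain_with_unlabeled(classifiers, labeled_examples, unlabeled_features,
                           margin_threshold=1.0):
    """Semi-supervised step: take unlabeled examples collected by parsing
    unannotated sentences and retrain each classifier together with the
    confidently self-labeled ones (stand-in for the transductive SVM)."""
    retrained = {}
    for prod, clf in classifiers.items():
        X_l = np.array([feats for feats, _ in labeled_examples[prod]])
        y_l = np.array([label for _, label in labeled_examples[prod]])
        X_u = np.array(unlabeled_features.get(prod, []))
        if len(X_u) == 0:
            retrained[prod] = clf
            continue
        # Keep only unlabeled examples the current classifier scores confidently.
        scores = clf.decision_function(X_u)
        keep = np.abs(scores) >= margin_threshold
        X = np.vstack([X_l, X_u[keep]])
        y = np.concatenate([y_l, (scores[keep] > 0).astype(int)])
        new_clf = LinearSVC()
        new_clf.fit(X, y)
        retrained[prod] = new_clf
    return retrained
```

In the paper, the second step is instead handled by a transductive SVM objective that treats the collected examples as genuinely unlabeled during optimization; the sketch above only conveys where the unannotated data enters the training loop.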
View:
PDF, PS
Citation:
In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, Short Papers (NAACL/HLT-2007), pp. 81-84, Rochester, NY, April 2007.
Presentation:
Slides (PPT)
Rohit Kate (Postdoctoral Alumni), katerj [at] uwm edu
Raymond J. Mooney (Faculty), mooney [at] cs utexas edu