Publications: 2013
- University of Texas at Austin KBP 2013 Slot Filling System: Bayesian Logic Programs for Textual Inference
[Details] [PDF]
Yinon Bentor and Amelia Harrison and Shruti Bhosale and Raymond Mooney
In Proceedings of the Sixth Text Analysis Conference (TAC 2013), 2013.
This document describes the University of Texas at Austin 2013 system for the Knowledge Base Population (KBP) English Slot Filling (SF) task. The UT Austin system builds upon the output of an existing relation extractor by augmenting relations that are explicitly stated in the text with ones that are inferred from the stated relations using probabilistic rules that encode commonsense world knowledge. Such rules are learned from linked open data and are encoded in the form of Bayesian Logic Programs (BLPs), a statistical relational learning framework based on directed graphical models. In this document, we describe our methods for learning these rules, estimating their associated weights, and performing probabilistic and logical inference to infer unseen relations. In the KBP SF task, our system was able to infer several unextracted relations, but its performance was limited by the base-level extractor.
ML ID: 299
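To make the rule-based inference concrete, here is a minimal Python sketch of deriving an unstated relation from extracted ones via weighted rules, with multiple derivations of the same fact combined by noisy-or. The relations, the single rule, and its weight are invented for illustration; this is a sketch of the general idea, not the system's actual BLP implementation.

```python
# Extracted facts: (relation, subject, object).
facts = {
    ("per:employee_of", "Alice", "AcmeCorp"),
    ("org:city_of_headquarters", "AcmeCorp", "Austin"),
}

# Learned rules: body relations -> head relation, with a confidence weight.
# Body and head arguments are treated as variables in this sketch.
rules = [
    # If X works for O and O is headquartered in C, X plausibly lives in C.
    ((("per:employee_of", "X", "O"), ("org:city_of_headquarters", "O", "C")),
     ("per:cities_of_residence", "X", "C"), 0.6),
]

def substitutions(body, facts):
    """Enumerate variable bindings that satisfy every body literal."""
    def extend(literals, binding):
        if not literals:
            yield binding
            return
        rel, a1, a2 = literals[0]
        for frel, f1, f2 in facts:
            if frel != rel:
                continue
            b = dict(binding)
            ok = True
            for var, val in ((a1, f1), (a2, f2)):
                if var in b and b[var] != val:
                    ok = False
                    break
                b[var] = val
            if ok:
                yield from extend(literals[1:], b)
    yield from extend(list(body), {})

# Combine all rule firings that derive the same ground atom with noisy-or.
inferred = {}
for body, (hrel, h1, h2), w in rules:
    for b in substitutions(body, facts):
        atom = (hrel, b[h1], b[h2])
        inferred[atom] = 1 - (1 - inferred.get(atom, 0.0)) * (1 - w)

for atom, p in inferred.items():
    print(atom, round(p, 3))  # ('per:cities_of_residence', 'Alice', 'Austin') 0.6
```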
- YouTube2Text: Recognizing and Describing Arbitrary Activities Using Semantic Hierarchies and Zero-shot Recognition
[Details] [PDF] [Poster]
Sergio Guadarrama, Niveda Krishnamoorthy, Girish Malkarnenkar, Subhashini Venugopalan, Raymond Mooney, Trevor Darrell, Kate Saenko
In Proceedings of the 14th International Conference on Computer Vision (ICCV-2013), 2712--2719, Sydney, Australia, December 2013.
Despite a recent push towards large-scale object recognition, activity recognition remains limited to narrow domains and small vocabularies of actions. In this paper, we tackle the challenge of recognizing and describing activities "in-the-wild". We present a solution that takes a short video clip and outputs a brief sentence summing up the main activity in the video, including the actor, the action, and its object. Unlike previous work, our approach works on out-of-domain actions: it does not require training videos of the exact activity. If it cannot find an accurate prediction using a pre-trained model, it falls back to a less specific answer that is still plausible from a pragmatic standpoint. We use semantic hierarchies learned from the data to help choose an appropriate level of generalization, and priors learned from web-scale natural language corpora to penalize unlikely combinations of actors/actions/objects; we also use a web-scale language model to "fill in" novel verbs, i.e. when the verb does not appear in the training set. We evaluate our method on a large YouTube corpus and demonstrate that it generates short sentence descriptions of video clips better than baseline approaches.
ML ID: 295
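A minimal sketch of the back-off idea from this paper: when no specific action detector is confident, accumulate evidence at more general actions in a semantic hierarchy, weighted by a corpus prior. The hierarchy, detector scores, and priors below are toy stand-ins for the learned hierarchies and web-scale statistics the paper actually uses.

```python
# A tiny is-a hierarchy over actions (child -> parent).
hierarchy = {"slice": "cut", "dice": "cut", "cut": "interact", "ride": "move"}

# Per-video classifier confidences for specific actions.
detector_scores = {"slice": 0.35, "dice": 0.30, "ride": 0.05}

# Prior plausibility of verbs estimated from a text corpus.
corpus_prior = {"slice": 0.02, "dice": 0.01, "cut": 0.10, "interact": 0.05}

CONFIDENCE_THRESHOLD = 0.5

def describe(scores):
    """Pick the best specific action, backing off to an ancestor in the
    hierarchy when no specific prediction is confident enough."""
    best = max(scores, key=scores.get)
    if scores[best] >= CONFIDENCE_THRESHOLD:
        return best
    # Back off: propagate each detector's score to its ancestors,
    # weighting by the ancestor's corpus prior.
    general = {}
    for verb, s in scores.items():
        node = hierarchy.get(verb)
        while node:
            general[node] = general.get(node, 0.0) + s * corpus_prior.get(node, 0.0)
            node = hierarchy.get(node)
    return max(general, key=general.get) if general else best

print(describe(detector_scores))  # backs off from 'slice'/'dice' to 'cut'
```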
- A Multimodal LDA Model Integrating Textual, Cognitive and Visual Modalities
[Details] [PDF]
Stephen Roller and Sabine Schulte im Walde
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013), 1146--1157, Seattle, WA, October 2013.
Recent investigations into grounded models of language have shown that holistic views of language and perception can provide higher performance than independent views. In this work, we improve a two-dimensional multimodal version of Latent Dirichlet Allocation (Andrews et al., 2009) in various ways. (1) We outperform text-only models in two different evaluations, and demonstrate that low-level visual features are directly compatible with the existing model. (2) We present a novel way to integrate visual features into the LDA model using unsupervised clusters of images. The clusters are directly interpretable and improve on our evaluation tasks. (3) We provide two novel ways to extend the bimodal models to support three or more modalities. We find that the three-, four-, and five-dimensional models significantly outperform models using only one or two modalities, and that nontextual modalities each provide separate, disjoint knowledge that cannot be forced into a shared, latent structure.
ML ID: 294
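A minimal sketch of one idea from this paper: cluster images by their low-level features and mix the resulting cluster IDs into each concept's textual document as pseudo-words before topic modeling. The features and corpus are toy stand-ins, and vanilla sklearn LDA substitutes for the authors' multimodal extension of LDA; this shows the pseudo-word trick, not their model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Toy low-level visual features for the images associated with each concept.
image_features = {
    "dog":   rng.normal(0.0, 1.0, size=(5, 8)),
    "cat":   rng.normal(0.2, 1.0, size=(5, 8)),
    "banjo": rng.normal(5.0, 1.0, size=(5, 8)),
}

# Cluster all images; each cluster ID becomes a pseudo-word like "img_2".
all_feats = np.vstack(list(image_features.values()))
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(all_feats)

docs = {
    "dog":   "furry pet barks fetch",
    "cat":   "furry pet purrs whiskers",
    "banjo": "stringed instrument bluegrass music",
}

# Append each concept's visual-cluster pseudo-words to its text document.
augmented = []
offset = 0
for concept, feats in image_features.items():
    labels = kmeans.labels_[offset:offset + len(feats)]
    offset += len(feats)
    visual_words = " ".join(f"img_{c}" for c in labels)
    augmented.append(docs[concept] + " " + visual_words)

X = CountVectorizer().fit_transform(augmented)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X))  # topic mixtures informed by both text and vision
```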
- Identifying Phrasal Verbs Using Many Bilingual Corpora
[Details] [PDF] [Poster]
Karl Pichotta and John DeNero
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013), 636--646, Seattle, WA, October 2013.
We address the problem of identifying multiword expressions in a language, focusing on English phrasal verbs. Our polyglot ranking approach integrates frequency statistics from translated corpora in 50 different languages. Our experimental evaluation demonstrates that combining statistical evidence from many parallel corpora using a novel ranking-oriented boosting algorithm produces a comprehensive set of English phrasal verbs, achieving performance comparable to a human-curated set.
ML ID: 293
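A minimal sketch of the polyglot-ranking idea: score each candidate phrase per language (e.g., by how often it aligns to a single foreign word in that language's parallel corpus) and rank by a weighted combination. The scores and uniform weights below are invented; the paper learns the combination with a ranking-oriented boosting algorithm over 50 languages.

```python
# scores[lang][candidate]: per-language evidence that the phrase is a
# true phrasal verb (values here are made up for illustration).
scores = {
    "de": {"give up": 0.9, "give book": 0.1, "carry out": 0.8},
    "fr": {"give up": 0.8, "give book": 0.2, "carry out": 0.7},
    "ja": {"give up": 0.7, "give book": 0.1, "carry out": 0.9},
}

# Per-language weights (uniform here; learned via boosting in the paper).
weights = {lang: 1.0 / len(scores) for lang in scores}

candidates = {c for per_lang in scores.values() for c in per_lang}

def combined(candidate):
    return sum(w * scores[lang].get(candidate, 0.0)
               for lang, w in weights.items())

for cand in sorted(candidates, key=combined, reverse=True):
    print(f"{combined(cand):.2f}  {cand}")
# Compositional phrases like "give book" rank below true phrasal verbs.
```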
- Detecting Promotional Content in Wikipedia
[Details] [PDF] [Slides (PPT)]
Shruti Bhosale and Heath Vinicombe and Raymond J. Mooney
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013), 1851--1857, Seattle, WA, October 2013.
This paper presents an approach for detecting promotional content in Wikipedia. By incorporating stylometric features, including features based on n-gram and PCFG language models, we demonstrate improved accuracy at identifying promotional articles, compared to using only lexical information and meta-features.
ML ID: 292
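A minimal sketch of an n-gram stylometric classifier for promotional text, loosely in the spirit of this paper. The four-sentence "corpus" and the feature choices are illustrative only; the paper's system additionally uses PCFG language-model features and meta-features, which are omitted here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Acme is the premier, award-winning leader in innovative solutions.",
    "Acme is a manufacturing company founded in 1947 in Ohio.",
    "Our world-class team delivers unmatched value to valued customers.",
    "The company reported revenue of $2.1 million in 2012.",
]
train_labels = [1, 0, 1, 0]  # 1 = promotional, 0 = neutral

# Word unigrams and bigrams as a crude stand-in for stylometric features.
model = make_pipeline(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(train_texts, train_labels)

print(model.predict(["A leading provider of best-in-class services."]))
```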
- Grounded Language Learning Models for Ambiguous Supervision
[Details] [PDF] [Slides (PPT)]
Joo Hyun Kim
PhD Thesis, Department of Computer Science, University of Texas at Austin, December 2013.
Communicating with natural language interfaces is a long-standing goal for artificial intelligence (AI) agents. One core issue toward this goal is "grounded" language learning, the process of learning the semantics of natural language with respect to relevant perceptual inputs. To ground the meanings of language in a real-world situation, computational systems are trained with data in the form of natural language sentences paired with relevant but ambiguous perceptual contexts. With such ambiguous supervision, the system must resolve the ambiguity between a natural language (NL) sentence and a corresponding set of possible logical meaning representations (MRs).
In this thesis, we focus on devising effective models for simultaneously disambiguating such supervision and learning the underlying semantics of language to map NL sentences into proper logical MRs.
We present probabilistic generative models for learning such correspondences along with a reranking model to improve the performance further.
First, we present a probabilistic generative model that learns the mappings from NL sentences into logical forms where the true meaning of each NL sentence is one of a handful of candidate logical MRs. It simultaneously disambiguates the meaning of each sentence in the training data and learns to probabilistically map an NL sentence to its corresponding MR form depicted in a single tree structure.
We perform evaluations on the RoboCup sportscasting corpus, demonstrating that our model is more effective than those proposed by previous researchers.
Next, we describe two PCFG induction models for grounded language learning that extend the previous grounded language learning model of Borschinger, Jones, and Johnson (2011). Borschinger et al.'s approach works well in situations of limited ambiguity, such as in the sportscasting task. However, it does not scale well to highly ambiguous situations when there are large sets of potential meaning possibilities for each sentence, such as in the navigation instruction following task first studied by Chen and Mooney (2011). The two models we present overcome such limitations by employing a learned semantic lexicon as a basic correspondence unit between NL and MR for PCFG rule generation.
Finally, we present a method of adapting discriminative reranking to grounded language learning in order to improve the performance of our proposed generative models. Although such generative models are easy to implement and intuitive, they do not always perform best, since they maximize the joint probability of the data and the model rather than directly maximizing conditional probability. Because we do not have gold-standard references for training a secondary conditional reranker, we incorporate weak supervision in the form of evaluations against the perceptual world while improving model performance.
All these approaches are evaluated on two publicly available domains that have been widely used in other grounded language learning studies. Our methods demonstrate consistently improved performance over previous studies in these domains with different languages, indicating that they are language-independent and can be applied generally to other grounded learning problems. Further possible applications of the presented approaches include summarized machine translation tasks and learning from real perceptual data assisted by computer vision and robotics.
ML ID: 291
- Generating Natural-Language Video Descriptions Using Text-Mined Knowledge
[Details] [PDF] [Slides (PPT)]
Niveda Krishnamoorthy, Girish Malkarnenkar, Raymond J. Mooney, Kate Saenko, Sergio Guadarrama
In Proceedings of the NAACL HLT Workshop on Vision and Language (WVL '13), 10--19, Atlanta, Georgia, July 2013.
We present a holistic data-driven technique that generates natural-language descriptions for videos. We combine the output of state-of-the-art object and activity detectors with "real-world" knowledge to select the most probable subject-verb-object triplet for describing a video. We show that this knowledge, automatically mined from web-scale text corpora, enhances the triplet selection algorithm by providing it contextual information, and leads to a four-fold increase in activity identification. Unlike previous methods, our approach can annotate arbitrary videos without requiring the expensive collection and annotation of a similar training video corpus. We evaluate our technique against a baseline that does not use text-mined knowledge and show that humans prefer our descriptions 61% of the time.
ML ID: 290
- Using Both Latent and Supervised Shared Topics for Multitask Learning
[Details] [PDF] [Slides (PDF)]
Ayan Acharya, Aditya Rawal, Raymond J. Mooney, Eduardo R. Hruschka
In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD), 369--384, Prague, Czech Republic, September 2013.
This paper introduces two new frameworks, Doubly Supervised Latent Dirichlet Allocation (DSLDA) and its non-parametric variation (NP-DSLDA), that integrate two different types of supervision: topic labels and category labels. This approach is particularly useful for multitask learning, in which both latent and supervised topics are shared between multiple categories. Experimental results on both document and image classification show that both types of supervision improve the performance of both DSLDA and NP-DSLDA and that sharing both latent and supervised topics allows for better multitask learning.
ML ID: 289
- Real-World Semi-Supervised Learning of POS-Taggers for Low-Resource Languages
[Details] [PDF]
Dan Garrette and Jason Mielens and Jason Baldridge
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL-2013), 583--592, Sofia, Bulgaria, August 2013.
Developing natural language processing tools for low-resource languages often requires creating resources from scratch. While a variety of semi-supervised methods exist for training from incomplete data, there are open questions regarding what types of training data should be used and how much is necessary. We discuss a series of experiments designed to shed light on such questions in the context of part-of-speech tagging. We obtain timed annotations from linguists for the low-resource languages Kinyarwanda and Malagasy (as well as English) and evaluate how the amounts of various kinds of data affect the performance of a trained POS-tagger. Our results show that annotation of word types is the most important, provided a sufficiently capable semi-supervised learning infrastructure is in place to project type information onto a raw corpus. We also show that finite-state morphological analyzers are effective sources of type information when few labeled examples are available.
ML ID: 288
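A minimal sketch of how type supervision constrains tagging: a tag dictionary limits the tags each known word may take, and a Viterbi search over transition scores resolves the rest. The dictionary, transition probabilities, and sentence are toy examples; the paper's full pipeline also generalizes annotations to a raw corpus and trains an HMM with EM.

```python
from math import log

# Type supervision: for each known word, the tags a linguist says it allows.
tag_dict = {
    "the": {"DET"}, "dog": {"NOUN"}, "runs": {"VERB", "NOUN"},
    "fast": {"ADV", "ADJ"},
}
ALL_TAGS = {"DET", "NOUN", "VERB", "ADV", "ADJ"}

# Toy transition probabilities P(tag | previous tag); absent pairs get 1e-6.
trans = {
    ("<s>", "DET"): 0.8, ("<s>", "NOUN"): 0.2,
    ("DET", "NOUN"): 0.9, ("NOUN", "VERB"): 0.7, ("NOUN", "NOUN"): 0.2,
    ("VERB", "ADV"): 0.6, ("VERB", "ADJ"): 0.1,
}

def viterbi(words):
    """Best tag sequence: unknown words may take any tag, known words
    only the tags their dictionary entry allows."""
    beams = [("<s>", 0.0, [])]  # (last tag, log prob, tags so far)
    for w in words:
        allowed = tag_dict.get(w, ALL_TAGS)
        new_beams = {}
        for prev, lp, tags in beams:
            for t in allowed:
                p = trans.get((prev, t), 1e-6)
                cand = (lp + log(p), tags + [t])
                if t not in new_beams or cand[0] > new_beams[t][0]:
                    new_beams[t] = cand
        beams = [(t, lp, tags) for t, (lp, tags) in new_beams.items()]
    return max(beams, key=lambda b: b[1])[2]

print(viterbi("the dog runs fast".split()))  # ['DET', 'NOUN', 'VERB', 'ADV']
```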
- Online Inference-Rule Learning from Natural-Language Extractions
[Details] [PDF] [Poster]
Sindhu Raghavan and Raymond J. Mooney
In Proceedings of the 3rd Statistical Relational AI (StaRAI-13) workshop at AAAI '13, July 2013.
In this paper, we consider the problem of learning commonsense knowledge in the form of first-order rules from incomplete and noisy natural-language extractions produced by an off-the-shelf information extraction (IE) system. Much of the information conveyed in text must be inferred from what is explicitly stated, since easily inferable facts are rarely mentioned. The proposed rule learner accounts for this phenomenon by learning rules in which the body of the rule contains relations that are usually explicitly stated, while the head employs a less-frequently mentioned relation that is easily inferred. The rule learner processes training examples in an online manner to allow it to scale to large text corpora. Furthermore, we propose a novel approach to weighting rules using a curated lexical ontology like WordNet. The learned rules along with their parameters are then used to infer implicit information using a Bayesian Logic Program. Experimental evaluation on a machine reading testbed demonstrates the efficacy of the proposed methods.
ML ID: 287
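A minimal sketch of weighting an inference rule by the WordNet similarity of the relations' key words, in the spirit of the lexical-ontology weighting proposed here. The rule format and the use of Wu-Palmer similarity as the weight are illustrative assumptions, not the paper's exact scheme. Requires NLTK with the WordNet corpus downloaded (nltk.download("wordnet")).

```python
from nltk.corpus import wordnet as wn

def similarity(word1, word2):
    """Max Wu-Palmer similarity over the words' noun synsets."""
    best = 0.0
    for s1 in wn.synsets(word1, pos=wn.NOUN):
        for s2 in wn.synsets(word2, pos=wn.NOUN):
            sim = s1.wup_similarity(s2)
            if sim is not None and sim > best:
                best = sim
    return best

# A learned rule: body relation implies head relation; weight it by how
# semantically close their key words are in WordNet.
body_word, head_word = "employer", "company"
weight = similarity(body_word, head_word)
print(f"employedBy(X, Y) => worksForCompany(X, Y)  [w = {weight:.2f}]")
```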
- Adapting Discriminative Reranking to Grounded Language Learning
[Details] [PDF] [Slides (PPT)]
Joohyun Kim and Raymond J. Mooney
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL-2013), 218--227, Sofia, Bulgaria, August 2013.
We adapt discriminative reranking to improve the performance of grounded language acquisition, specifically the task of learning to follow navigation instructions from observation. Unlike conventional reranking used in syntactic and semantic parsing, gold-standard reference trees are not naturally available in a grounded setting. Instead, we show how the weak supervision of response feedback (e.g. successful task completion) can be used as an alternative, experimentally demonstrating that its performance is comparable to training on gold-standard parse trees.
ML ID: 286
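A minimal sketch of reranking with response feedback instead of gold parses: update toward the candidate whose execution succeeded, away from the model's current top choice. The candidates, features, and binary reward are toy stand-ins; the paper reranks parses of navigation instructions using whether executing them reaches the intended destination.

```python
from collections import defaultdict

weights = defaultdict(float)

def score(features):
    return sum(weights[f] * v for f, v in features.items())

def perceptron_update(candidates, rewards, lr=1.0):
    """Perceptron-style update: move toward the candidate with the best
    task feedback, away from the model's current top-scoring choice."""
    model_best = max(candidates, key=lambda c: score(c["features"]))
    feedback_best = max(candidates, key=lambda c: rewards[c["id"]])
    if model_best["id"] == feedback_best["id"]:
        return
    for f, v in feedback_best["features"].items():
        weights[f] += lr * v
    for f, v in model_best["features"].items():
        weights[f] -= lr * v

# Two candidate parses of one instruction; reward = task success (0/1).
candidates = [
    {"id": "p1", "features": {"turn_left": 1.0, "len": 3.0}},
    {"id": "p2", "features": {"turn_right": 1.0, "len": 2.0}},
]
rewards = {"p1": 0.0, "p2": 1.0}  # executing p2 reached the goal

perceptron_update(candidates, rewards)
print(dict(weights))  # weights now favor the successful parse's features
```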
- Montague Meets Markov: Deep Semantics with Probabilistic Logical Form
[Details] [PDF] [Slides (PPT)]
I. Beltagy, Cuong Chau, Gemma Boleda, Dan Garrette, Katrin Erk, Raymond Mooney
In Proceedings of the Second Joint Conference on Lexical and Computational Semantics (*SEM-2013), 11--21, Atlanta, GA, June 2013.
We combine logical and distributional representations of natural language meaning by transforming distributional similarity judgments into weighted inference rules using Markov Logic Networks (MLNs). We show that this framework supports both judging sentence similarity and recognizing textual entailment by appropriately adapting the MLN implementation of logical connectives. We also show that distributional phrase similarity, used as textual inference rules created on the fly, further improves performance.
ML ID: 285
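A minimal sketch of the paper's core move: turn a distributional similarity judgment into a weighted inference rule. The toy vectors and the log-odds transform of cosine similarity are illustrative assumptions, not necessarily the authors' choices; in the full system such rules would be fed to an MLN engine (e.g., Alchemy) alongside the logical forms.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy distributional vectors (e.g., context counts) for two predicates.
vec = {
    "slay": [3.0, 1.0, 0.2],
    "kill": [2.5, 1.2, 0.3],
}

sim = cosine(vec["slay"], vec["kill"])
# Map similarity in (0, 1) to a real-valued MLN rule weight.
weight = math.log(sim / (1.0 - sim))

# Emit an MLN-style weighted rule linking the two predicates.
print(f"{weight:.3f}  slay(x, y) => kill(x, y)")
```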
- A Formal Approach to Linking Logical Form and Vector-Space Lexical Semantics
[Details] [PDF]
Dan Garrette, Katrin Erk, Raymond J. Mooney
In Harry Bunt, Johan Bos, and Stephen Pulman, editors, Computing Meaning, 27--48, Berlin, 2013. Springer.
First-order logic provides a powerful and flexible mechanism for representing natural language semantics. However, it is an open question how best to integrate it with uncertain, weighted knowledge, for example regarding word meaning. This paper describes a mapping between predicates of logical form and points in a vector space. This mapping is then used to project distributional inferences to inference rules in logical form. We then describe first steps of an approach that uses this mapping to recast first-order semantics into the probabilistic models that are part of Statistical Relational AI. Specifically, we show how Discourse Representation Structures can be combined with distributional models for word meaning inside a Markov Logic Network and used to successfully perform inferences that take advantage of logical concepts such as negation and factivity as well as weighted information on word meaning in context.
ML ID: 284
- Learning a Part-of-Speech Tagger from Two Hours of Annotation
[Details] [PDF] [Slides (PDF)] [Video]
Dan Garrette, Jason Baldridge
In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT-13), 138--147, Atlanta, GA, June 2013.
Most work on weakly-supervised learning for part-of-speech taggers has been based on unrealistic assumptions about the amount and quality of training data. For this paper, we attempt to create true low-resource scenarios by allowing a linguist just two hours to annotate data and evaluating on the languages Kinyarwanda and Malagasy. Given these severely limited amounts of either type supervision (tag dictionaries) or token supervision (labeled sentences), we are able to dramatically improve the learning of a hidden Markov model through our method of automatically generalizing the annotations, reducing noise, and inducing word-tag frequency information.
ML ID: 283
- Generating Natural-Language Video Descriptions Using Text-Mined Knowledge
[Details] [PDF] [Slides (PPT)]
Niveda Krishnamoorthy, Girish Malkarnenkar, Raymond J. Mooney, Kate Saenko, Sergio Guadarrama
In Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI-2013), 541--547, July 2013.
We present a holistic data-driven technique that generates natural-language descriptions for videos. We combine the output of state-of-the-art object and activity detectors with "real-world" knowledge to select the most probable subject-verb-object triplet for describing a video. We show that this knowledge, automatically mined from web-scale text corpora, enhances the triplet selection algorithm by providing it contextual information, and leads to a four-fold increase in activity identification. Unlike previous methods, our approach can annotate arbitrary videos without requiring the expensive collection and annotation of a similar training video corpus. We evaluate our technique against a baseline that does not use text-mined knowledge and show that humans prefer our descriptions 61% of the time.
ML ID: 282
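A minimal sketch of the triplet-selection step described above: combine detector confidences with text-mined subject-verb-object plausibility and keep the highest-scoring triplet. All scores and the interpolation constant below are invented; the paper mines its SVO statistics from web-scale corpora and then expands the winning triplet into a sentence.

```python
from itertools import product

subjects = {"person": 0.9, "dog": 0.4}   # object-detector confidences
verbs    = {"ride": 0.5, "pet": 0.3}     # activity-detector confidences
objects  = {"horse": 0.8, "couch": 0.2}

# Text-mined plausibility of (subject, verb, object) co-occurrences.
svo_stats = {
    ("person", "ride", "horse"): 0.7,
    ("person", "pet", "horse"): 0.2,
    ("dog", "ride", "horse"): 0.01,
}

ALPHA = 0.5  # interpolation between vision and language evidence

def score(s, v, o):
    vision = subjects[s] * verbs[v] * objects[o]
    language = svo_stats.get((s, v, o), 1e-4)  # smooth unseen triplets
    return ALPHA * vision + (1 - ALPHA) * language

best = max(product(subjects, verbs, objects), key=lambda t: score(*t))
print(best, round(score(*best), 3))  # ('person', 'ride', 'horse') ...
```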