Better natural language semantic representations allow computers to understand
natural text more accurately and thereby support a wider range of applications.
However, no single semantic representation currently
fulfills all the requirements of a satisfactory representation.
Logic-based representations such as first-order logic capture many linguistic phenomena
using logical constructs, and they come with standardized inference mechanisms,
but standard first-order logic fails to capture the ``graded'' aspect of meaning in language.
Distributional models use
contextual similarity to predict the ``graded'' semantic similarity of words and
phrases, but they do not adequately capture logical structure.
In addition, there have been a few recent attempts to combine the two representations, either
on the logic side (which still does not yield a graded representation) or on the distributional
side (which does not provide full logical structure).
We propose using probabilistic logic to represent natural language semantics,
combining the expressivity and automated inference of logic
with the gradedness of distributional representations.
We evaluate this semantic representation on two tasks, Recognizing Textual Entailment (RTE) and Semantic Textual Similarity (STS); improved performance on these tasks indicates better semantic understanding.
Our system has three main components:
1. Parsing and Task Representation,
2. Knowledge Base Construction, and
3. Inference.
The input natural-language sentences of the RTE/STS task are first mapped to logical form using Boxer,
a rule-based system built on top of a CCG parser,
and these logical forms are then used to formulate the RTE/STS problem in probabilistic logic.
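For instance, assuming Boxer's usual neo-Davidsonian style of analysis, an illustrative
sentence such as ``A man is driving a car'' would be mapped to a logical form roughly like
\[
\exists x\, y\, e.\; \mathit{man}(x) \wedge \mathit{car}(y) \wedge \mathit{drive}(e)
\wedge \mathit{agent}(e, x) \wedge \mathit{patient}(e, y),
\]
and an RTE pair with text $T$ and hypothesis $H$ can then be cast, informally, as the
probabilistic query $P(H \mid T, \mathit{KB})$ over a knowledge base $\mathit{KB}$.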
Next, a knowledge base is constructed as a set of weighted inference rules
collected from different sources, such as WordNet
and on-the-fly lexical rules derived from distributional semantics.
An advantage of using probabilistic logic is that rules from additional resources can be
added easily by mapping them to logical rules and weighting them appropriately.
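To illustrate (with purely illustrative weights $w_1$ and $w_2$, not values from our
experiments), a WordNet hypernymy relation and a distributional similarity score can each
be mapped to a weighted rule of the form
\[
\forall x.\; \mathit{car}(x) \rightarrow \mathit{vehicle}(x) \mid w_1, \qquad
\forall x.\; \mathit{guy}(x) \rightarrow \mathit{man}(x) \mid w_2,
\]
where, for the distributional rule, the weight can be derived from the similarity of the
two words' context vectors.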
The last component is inference, where we solve the probabilistic logic inference
problem using an appropriate probabilistic logic tool
such as Markov Logic Networks (MLNs) or Probabilistic Soft Logic (PSL).
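For reference, an MLN defines a probability distribution over possible worlds $x$ as
\[
P(X = x) = \frac{1}{Z} \exp\Big(\sum_i w_i\, n_i(x)\Big),
\]
where $w_i$ is the weight of formula $i$, $n_i(x)$ is the number of its true groundings
in $x$, and $Z$ is the normalization constant.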
We show how to solve the inference problems in MLNs efficiently for RTE using
a modified closed-world assumption and a new inference algorithm,
and how to adapt MLNs and PSL for STS by relaxing conjunctions.
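As a rough sketch of what relaxing conjunctions means, PSL's Lukasiewicz conjunction of
two ground atoms with truth values $I(a)$ and $I(b)$,
\[
I(a \wedge b) = \max\big(0,\; I(a) + I(b) - 1\big),
\]
can be replaced by a (possibly weighted) average such as $\tfrac{1}{2}\big(I(a) + I(b)\big)$,
so that a partially matching hypothesis receives partial credit rather than a truth value of zero.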
Experiments show that our semantic representation can handle RTE and STS reasonably well.
For future work, our short-term goals are:
1. better RTE task representation and finite domain handling,
2. adding more inference rules, precompiled and on-the-fly,
3. generalizing the modified closed-world assumption,
4. enhancing our inference algorithm for MLNs, and
5. adding a weight learning step to better adapt the weights.
In the longer term, we would like to apply our semantic
representation to the question answering task,
support generalized quantifiers,
contextualize the WordNet rules we use,
apply our semantic representation to languages other than English,
and implement a probabilistic logic Inference Inspector that can
visualize the proof structure.
PhD proposal, Department of Computer Science, The University of Texas at Austin.