Monday, November 29th, 12:00pm
Coffee at 11:45am
ACES 2.302 Avaya Auditorium
|
Modeling the Mirror System: From Hand Movements to Language
Prof. Michael A. Arbib [homepage]
Departments of Computer Science and Neuroscience
University of Southern California
The mirror system in the macaque monkey is a set of neurons each of which
is active both when the monkey performs certain actions and when it
observes others (human or monkey) perform similar actions. Brain imaging
suggests that humans have such a mirror system as well, with a key part in
or near Broca's area, a key player in the brain's mechanisms for
language. This grounds the mirror system hypothesis, which traces an
evolutionary path from a mirror system for grasping via imitation,
pantomime, protosign and protospeech to language. The talk will provide an
exposition and critique of these ideas, informed by analysis of
computational models that probe the development and function of the mirror
system in the macaque. The debate between Arbib and Davis & MacNeilage on
whether protospeech needed the scaffolding of protosign will be briefly
reviewed.
About the speaker:
Michael A. Arbib is the Fletcher Jones Professor of Computer Science,
as well as a Professor of Biological Sciences, Biomedical Engineering,
Electrical Engineering, Neuroscience and Psychology at the University
of Southern California (USC). He has also been named as one of a small
group of "University Professors" at USC in recognition of his
contributions across many disciplines. He received his Ph.D. in
Mathematics from MIT in 1963. He is the author or editor of more than
30 books, including "Brains, Machines and Mathematics" (McGraw-Hill,
1964), "Neural Organization: Structure, Function, and Dynamics" (with
Peter Erdi and John Szentagothai, MIT Press, 1998), and the edited
volume "The Handbook of Brain Theory and Neural Networks" (MIT Press,
Second Edition, 2003).
Jointly sponsored by the Departments of Computer Sciences and
Communication Sciences and Disorders.
|
Friday, November 19th, 3:00pm
Coffee at 2:45pm
ACES 2.302 Avaya Auditorium
|
Three Challenges for Machine Learning Research
Prof. Thomas G. Dietterich [homepage]
School of Electrical Engineering and Computer Science
Oregon State University
Over the past 25 years, machine learning research has made huge
progress on the problem of supervised learning. This talk will argue
that now is the time to consider three new directions.
The first direction, which is already being pursued by many groups, is
Structural Supervised Learning in which the input instances are no
longer independent but instead are related by some kind of sequential,
spatial, or graphical structure. A variety of methods are being
developed, including hidden Markov support vector machines,
conditional random fields, and sliding window techniques.
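Of these, sliding-window techniques are the simplest: each position in a
sequence becomes an independent training example whose features come from a
fixed window of surrounding context. A minimal sketch of that reduction (the
toy tokens and labels below are invented for illustration):

```python
def window_features(tokens, i, k=1, pad="<pad>"):
    """Features for position i: the tokens in a window of radius k."""
    padded = [pad] * k + list(tokens) + [pad] * k
    return tuple(padded[i : i + 2 * k + 1])

def windowed_dataset(tokens, labels, k=1):
    """Reduce sequence labeling to independent per-position examples,
    which any ordinary supervised learner can then be trained on."""
    return [(window_features(tokens, i, k), labels[i])
            for i in range(len(tokens))]

# Toy part-of-speech-style sequence; each example carries its context:
examples = windowed_dataset(["the", "cat", "sat"], ["DET", "NOUN", "VERB"])
# e.g. (("<pad>", "the", "cat"), "DET"), (("the", "cat", "sat"), "NOUN"), ...
```

The window loses the dependency between adjacent output labels, which is
precisely the limitation that conditional random fields and hidden Markov
support vector machines address.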
The second new direction is Transfer Learning in which something is
learned on one task that can help with a second, separate task. This
includes transfer of learned facts, learned features, and learned
ontologies.
The third new direction is Deployable Learning Systems. Today's
learning systems are primarily operated offline by machine learning
experts. They provide an excellent way of constructing certain kinds
of AI systems (e.g., speech recognizers, handwriting recognizers, data
mining systems, etc.). But it is rare to see learning systems that
can be deployed in real applications in which learning takes place
on-line and without expert intervention. Deployed learning systems
must deal with such problems as changes in the number, quality, and
semantics of input features, changes in the output classes, and
changes in the underlying probability distribution of instances.
There are also difficult software engineering issues that must be
addressed in order to make learning systems maintainable after they
are deployed.
About the speaker:
Dr. Dietterich (AB Oberlin College 1977; MS University of Illinois
1979; PhD Stanford University 1984) joined the Oregon State University
faculty in January 1985. In 1987, he was named a Presidential Young
Investigator by the NSF. In 1990, he published, with Dr. Jude
Shavlik, the book entitled Readings in Machine Learning, and he also
served as the Technical Program Co-Chair of the National Conference on
Artificial Intelligence (AAAI-90). From 1992-1998 he held the
position of Executive Editor of the journal Machine Learning. The
American Association for Artificial Intelligence named him a Fellow in
1994, and the Association for Computing Machinery did the same in
2003. In 2000, he co-founded a new, free electronic journal: The
Journal of Machine Learning Research. He served as Technical Program
Chair of the Neural Information Processing Systems (NIPS) conference
in 2000 and General Chair in 2001. He is currently President of the
International Machine Learning Society, a member of the DARPA
Information Science and Technology Study Group, and a member of the
Board of Trustees of the NIPS Foundation.
|
Thursday, November 18th, 3:30pm
Coffee at 3:15pm
Taylor 3.128
|
A Probabilistic Approach to Accelerating Path-Finding in Large Semantic Graphs
Dr. Tina Eliassi-Rad [homepage]
Center for Applied Scientific Computing
Lawrence Livermore National Laboratory
The majority of real-world graphs contain semantics.
That is, they encode meaningful entities and relationships in their
vertices and edges, respectively. Moreover, such graphs have semantic
types associated with their vertices and edges. These types provide
an ontology (or a schema) graph (i.e., they encode the types of the
vertices that may be connected via a given edge type). In this talk,
we use ontological information, probability theory, and artificial
intelligence (AI) search techniques to reduce and prioritize the
search space between a source vertex and a destination vertex for
path-finding tasks in large semantic graphs. Specifically, we
introduce two probabilistic heuristics that utilize a semantic graph's
ontological information. We embed our heuristics into A* and compare
their performance to that of breadth-first search and the simple
non-probabilistic A* search. We test our heuristics on large
synthetic and real ontologies and semantic graphs with real-world
properties (such as graphs with "scale-free" or "small-world"
topologies). Our experimental results illustrate the merits of our
approach.
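As a rough illustration of the general idea (not the speaker's actual
heuristics), an A* search can favor paths whose edges are probable under the
ontology by charging each edge a cost of -log p; likely paths then have low
total cost. The graph and probabilities below are invented:

```python
import heapq
import math

def astar(graph, source, goal, heuristic):
    """A* over a typed graph.  graph[u] -> list of (v, p), where p is a
    (hypothetical) probability for the edge type linking u and v.
    Edge cost is -log p, so probable paths are explored first."""
    frontier = [(heuristic(source), 0.0, source, [source])]
    best = {source: 0.0}
    while frontier:
        f, g, u, path = heapq.heappop(frontier)
        if u == goal:
            return path, g
        for v, p in graph.get(u, []):
            g2 = g - math.log(p)
            if g2 < best.get(v, float("inf")):
                best[v] = g2
                heapq.heappush(frontier, (g2 + heuristic(v), g2, v, path + [v]))
    return None, float("inf")

# Toy semantic graph: the indirect path is more probable than the direct edge.
graph = {"alice": [("acme", 0.9), ("bob", 0.1)], "acme": [("bob", 0.8)]}
path, cost = astar(graph, "alice", "bob", heuristic=lambda v: 0.0)
# path == ["alice", "acme", "bob"]
```

With a zero heuristic this degenerates to uniform-cost search; an informative,
admissible heuristic (a lower bound on remaining path cost, derived here from
the ontology) is what prunes and prioritizes the search space.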
About the speaker:
Tina Eliassi-Rad joined the Center for Applied Scientific Computing at
Lawrence Livermore National Laboratory as a computer scientist in
September 2001. Her research interests include machine learning,
knowledge discovery and data mining, artificial intelligence, text and
web mining, information retrieval and extraction, intelligent software
agents, bioinformatics, intrusion detection, and E-commerce.
She earned a Ph.D. in Computer Sciences (with a minor in Mathematical
Statistics) at the University of Wisconsin-Madison in 2001. She
completed her M.S. in Computer Science at the University of Illinois
at Urbana-Champaign in 1995, and her B.S. in Computer Sciences at the
University of Wisconsin-Madison in 1993.
|
Friday, November 12th, 3:00pm
Coffee at 2:45pm
ACES 2.402
|
Answer Set Programming and Design of Deliberative Agents
Prof. Michael Gelfond [homepage]
Department of Computer Science
Texas Tech University
Answer set programming (ASP) is a new declarative programming paradigm
suitable for solving a large range of problems related to knowledge
representation and search. ASP begins by encoding relevant domain
knowledge as a logic program, P, whose connectives are understood in
accordance with the answer set (stable model) semantics of logic
programming. In the second stage of the ASP programming process, a
programming task is reduced to finding the answer sets of a logic program
P + R where R is normally a simple program corresponding to this task.
The answer sets are found with the help of answer set solvers -
programming systems implementing various answer set finding algorithms.
During the last few years the answer set programming paradigm seems to
have crossed the boundaries of AI and has started to attract people in
various areas of computer science. In this talk I will discuss the use of
ASP for the design and implementation of software components of
deliberative agents capable of reasoning, planning and acting in a
changing environment. The basic idea will be illustrated by discussing
the use of ASP for the development of a decision support system for the
Space Shuttle.
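The answer set (stable model) semantics itself can be made concrete with a
tiny brute-force checker, a sketch for intuition only (real answer set
solvers use far more sophisticated algorithms): a candidate set of atoms is
an answer set exactly when it equals the least model of the
Gelfond-Lifschitz reduct of the program with respect to that set.

```python
from itertools import chain, combinations

def least_model(definite_rules):
    """Least model of a definite (negation-free) program, by fixpoint."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in definite_rules:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def answer_sets(rules, atoms):
    """Enumerate stable models by brute force.  Each rule is
    (head, pos_body, neg_body), read as: head :- pos_body, not neg_body."""
    found = []
    candidates = chain.from_iterable(
        combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    for cand in map(set, candidates):
        # Reduct w.r.t. cand: drop rules whose negative body meets cand,
        # then delete the remaining negative literals.
        reduct = [(h, pos) for h, pos, neg in rules if not (neg & cand)]
        if least_model(reduct) == cand:
            found.append(cand)
    return found

# p :- not q.   q :- not p.   (the classic program with two answer sets)
rules = [("p", set(), {"q"}), ("q", set(), {"p"})]
models = answer_sets(rules, {"p", "q"})  # [{'p'}, {'q'}]
```

In the ASP methodology described above, the program P encodes the domain and
R the task, and a solver returns exactly these stable models as solutions.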
About the speaker:
Michael Gelfond received his PhD from the Institute of Mathematics of the
Academy of Sciences, St. Petersburg, in 1974. He is currently a professor
in the Computer Science Department at Texas Tech University. Michael
Gelfond is a fellow of AAAI, Area Editor for the International Journal of
Logic Programming, and the Executive Editor of the Journal of Logic and
Computation.
|
Thursday, November 11th, 4:00pm
Coffee at 3:45pm
CBA 3.202
|
Classification and Learning with Networked Data: some observations
and results
Prof. Foster Provost [homepage]
Dept. of Information, Operations and Management Sciences
Leonard N. Stern School of Business
New York University
As information systems record and provide access to increasing amounts
of data, connections between entities become available for analysis.
Customer accounts are linked by communications and other transactions.
Organizations are linked by joint activities. Text documents are
hyperlinked. Such networked data create opportunities for improving
classification. For example, for detecting fraud a common and
successful strategy is to use transactions to link a questionable
account to previous fraudulent activity. Document classification can
be improved by considering hyperlink structure. Marketing can change
dramatically when customer communication is taken into account. In
this talk I will focus on two unique characteristics of classification
with networked data. (1) Knowing the classifications of some entities
in the network can improve the classification of others.
(2) Very-high-cardinality categorical attributes (e.g., identifiers)
can be used effectively in learned models. I will discuss methods for
taking advantage of these characteristics, and will demonstrate them
on various real and synthetic data sets.
(Joint work with Claudia Perlich and Sofus Macskassy)
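A minimal sketch of characteristic (1), in the spirit of relational-neighbor
methods: the estimated fraud score of an unlabeled account is the iterated
average of its neighbors' scores, with known labels held fixed. The graph
and labels below are invented for illustration:

```python
def relational_neighbor(adj, known, iters=50):
    """Estimate P(label = 1) for unlabeled nodes as the iterated mean of
    their neighbors' estimates; nodes in `known` keep their labels."""
    scores = {v: known.get(v, 0.5) for v in adj}
    for _ in range(iters):
        scores = {
            v: known[v] if v in known
            else sum(scores[u] for u in adj[v]) / len(adj[v])
            for v in adj
        }
    return scores

# Tiny transaction graph: 'a' is known fraudulent, 'b' known legitimate;
# 'c' transacts with both, 'd' transacts only with the fraudulent account.
adj = {"a": ["c", "d"], "b": ["c"], "c": ["a", "b"], "d": ["a"]}
known = {"a": 1.0, "b": 0.0}   # 1.0 = fraud, 0.0 = legitimate
scores = relational_neighbor(adj, known)
# 'd' inherits its sole neighbor's fraud score; 'c' sits in between.
```

Even this crude propagation shows how knowing some labels in the network
sharpens the classification of the rest, which is the first characteristic
the talk highlights.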
About the speaker:
Foster Provost is Associate Professor of Information Systems and NEC
Faculty Fellow at New York University's Stern School of Business. He
is Editor-in-Chief of the journal Machine Learning, and a founding
board member of the International Machine Learning Society. Professor
Provost's recent research focuses on mining networked data, economic
machine learning, and applications of machine learning and data
mining. Previously, at NYNEX/Bell Atlantic Science and Technology, he
studied a variety of applications of machine learning to
telecommunications problems including fraud detection, network
diagnosis and monitoring, and customer contact management.
|
Thursday, October 28th, 11:00am
Coffee at 10:45am
ACES 2.402
|
Location Estimation for Activity Recognition
Prof. Dieter Fox [homepage]
Department of Computer Science and Engineering
University of Washington
Knowledge of a person's location provides important context
information for many pervasive computing applications. Beyond this,
location information is extremely helpful for estimating a person's
high-level activities. In this talk we show how Bayesian filtering can
be applied to estimate the location of a person using sensors such as
GPS, infrared, or WiFi. The techniques track a person on graph
structures that represent a street map or a skeleton of the free space
in a building. In the context of GPS, we show how such a graph
representation can be embedded into a hierarchical activity model that
learns and infers a user's daily movements through the community. The
model uses multiple levels of abstraction in order to bridge the gap
between raw GPS measurements and high level information such as a
user's mode of transportation or her goal.
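At its core, tracking on such a graph is a discrete Bayes filter over the
graph's nodes: a prediction step spreads probability mass along edges, and an
update step reweights by the sensor likelihood. A minimal sketch, with an
invented three-node map and WiFi sensor model:

```python
def bayes_filter(belief, graph, likelihood):
    """One predict/update step of a discrete Bayes filter on a graph.
    Predict: mass spreads uniformly over each node's neighbors (a crude
    motion model).  Update: reweight by each node's sensor likelihood."""
    predicted = {v: 0.0 for v in graph}
    for v, p in belief.items():
        nbrs = graph[v]
        for u in nbrs:
            predicted[u] += p / len(nbrs)
    posterior = {v: predicted[v] * likelihood[v] for v in graph}
    z = sum(posterior.values())          # normalizer
    return {v: p / z for v, p in posterior.items()}

# Corridor skeleton A - B - C; self-loops let the person stay in place.
graph = {"A": ["A", "B"], "B": ["A", "B", "C"], "C": ["B", "C"]}
belief = {"A": 1/3, "B": 1/3, "C": 1/3}
# Hypothetical WiFi reading most consistent with node C:
likelihood = {"A": 0.1, "B": 0.2, "C": 0.7}
belief = bayes_filter(belief, graph, likelihood)
```

Repeating this step as measurements arrive concentrates the belief on the
person's true location; the hierarchical models in the talk layer
higher-level activity variables on top of exactly this kind of filter.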
About the speaker:
Dieter Fox is an Assistant Professor of Computer Science & Engineering
at the University of Washington, Seattle. He obtained his Ph.D. from
the University of Bonn, Germany. Before joining UW, he spent two
years as a postdoctoral researcher at the CMU Robot Learning Lab. His
research focuses on probabilistic state estimation in robotics and
activity recognition. He received various awards, including an NSF
CAREER award and best paper awards at major robotics and artificial
intelligence conferences.
|
Friday, October 22nd, 3:00pm
Coffee at 2:45pm
ACES 2.402
|
Learning From Knowledge - Getting Cyc to
Build Itself
A recording of this talk is available; please contact one of the
organizers if you would like to borrow it.
Dr. Michael Witbrock [homepage]
Cycorp, Inc.
For the past twenty years, human beings have been painstakingly adding
formally represented knowledge to Cyc. While this knowledge base has
been usefully applied to several real-world and research problems, it
is insufficient to approach the eventual goal of a fully functioning
Artificial Intelligence. One of the original premises of the Cyc
project was that one could only acquire knowledge from a base of
knowledge; you can't learn anything unless you know something. We're
now in a position to put that premise to the test. In this talk, I'll
describe the Cyc system, and how we are applying its current knowledge
base, NL capability, and inference power to the problem of automated
knowledge acquisition.
About the speaker:
Dr. Michael Witbrock has a PhD in Computer Science from
Carnegie Mellon University and is currently Vice President for
Research at Cycorp. Before joining Cycorp in 2001 to direct its
knowledge formation and dialogue processing efforts, he was
Principal Scientist at Terra Lycos, working on integrating statistical
and knowledge based approaches to understanding web user behavior, a
research scientist at Just Systems Pittsburgh Research Center, working
on statistical summarization, and a systems scientist at Carnegie
Mellon on the Informedia spoken document information retrieval
project. He also performed dissertation work in the area of speaker
modeling. He is author of numerous publications in areas ranging
across neural networks, parallel computer architecture, multimedia
information retrieval, web browser design, genetic design,
computational linguistics and speech recognition.
|
Friday, October 15th, 3:00pm
Coffee at 2:45pm
ACES 2.302 Avaya Auditorium
|
Putting Meaning into Your Trees
Prof. Martha Palmer [homepage]
Computer and Information Sciences Department
University of Pennsylvania
The current success of applications of machine learning techniques to tasks
such as part-of-speech tagging and parsing has kindled the hope that these
same techniques might have equal or greater success in other areas such as
lexical semantics. Advances in automated and semi-automated methods of
acquiring lexical semantics would release the field from its dependence on
well-defined sub-domains and enable broad-coverage natural language
processing. However, supervised machine learning requires large amounts of
publicly available training data, and a prerequisite for this training data
is general agreement on which elements should be tagged and with what
tags. With respect to lexical semantics, this type of general agreement
has been strikingly elusive.
A recent consensus on a task-oriented level of semantic representation to
be layered on top of the existing Penn Treebank syntactic structures has
been achieved. This level, known as the Proposition Bank, or PropBank,
consists of argument labels for the semantic roles of individual verbs and
similar predicating expressions such as participial modifiers and
nominalizations. This talk will describe the PropBank verb semantic role
annotation being done at Penn for both English and Chinese. The annotation
process will be discussed as well as the use of existing lexical resources
such as WordNet, Levin classes and VerbNet. Similar projects include the
FrameNet Project at Berkeley and the Prague Tectogrammatics
project. PropBank annotation is shallower than the Prague Tectogrammatics
annotation and broader in coverage than FrameNet, in that every verb
instance in the corpus must be annotated.
The talk will also briefly describe progress in developing automatic
semantic role labelers based on this training data and investigations into
the role of sense distinctions in improving performance.
About the speaker:
Martha Palmer is an Associate Professor in the Computer and
Information Sciences Department of the University of Pennsylvania. She
has been a member of the Advisory Committee for the DARPA TIDES
program, the Chair of SIGLEX, the Chair of SIGHAN, and is now
Vice-President of the Association for Computational Linguistics. Her
early work on lexically based semantic interpretation formed the basis
of the successful DARPA-funded message processing system, Pundit, and
fostered a continuing interest in Information Extraction (ACE) and
Machine Translation (TIDES). Her interest in lexical semantics and
verb classes also led to her involvement in SENSEVAL and the
development of English VerbNet and the English, Chinese and Korean
Proposition Banks.
|
Friday, October 1st, 11:00am
Coffee at 10:30am
ACES 2.302 Avaya Auditorium
|
Building a New Kind of Body Monitoring
Company around Machine Learning
A recording of this talk is available; please contact one of the
organizers if you would like to borrow it.
Dr. Astro Teller [homepage]
BodyMedia, Inc.
One trillion dollars of US healthcare costs per year are directly
attributable to people's lifestyle choices and our country spends less
than 5% of that addressing this issue. What if there were an
unobtrusive, accurate way to gather the physical and mental states of
people in their natural environments, in real time and over long
periods of time? If such information could be obtained, we could
start to address the fundamental issue in health and wellness:
behavior modification. This talk is a tour through five years of
challenges and discoveries building a wearable body monitoring
business using machine learning techniques. The talk will cover
challenges gathering data, building body state models, validating the
models with the medical community, and will place AI within the larger
context of the company, BodyMedia, and the healthcare, wellness, and
fitness industries.
About the speaker:
A respected scientist, seasoned entrepreneur, and award-winning
novelist, Dr. Astro Teller pursues endeavors that all grow out of a
passion for the transformative nature of intelligent technologies.
Dr. Teller is currently the CEO of BodyMedia, Inc., the leading company in
unobtrusive wearable body monitoring. Past work has taken him
through a previous CEO position, teaching and researching at Stanford
University, numerous patents, a Hertz fellowship, a range of technical
and non-technical articles and books, and $22M in raised capital.
Dr. Teller holds a BS in computer science and an MS in symbolic and
heuristic computation, both from Stanford University. Dr. Teller
completed his Ph.D. in computer science at Carnegie Mellon University.
|
Thursday, September 30th, 2:00pm
Coffee at 1:30pm
ACES 2.302 Avaya Auditorium
|
Machine Learning for Personalized Wireless Portals
Dr. Michael Pazzani [homepage]
Information and Intelligent Systems Division
National Science Foundation
People have access to vast stores of information on the World Wide Web
ranging from online publications to electronic commerce. All this
information, however, was accessible only while users were tethered to
a computer at home or in an office. Wireless data and voice access to this
vast store allows unprecedented access to information from any location at
any time. The presentation of this information must be tailored to the
constraints of mobile devices. Although browsing and searching are the
acceptable methods of locating information on the wired web, those
operations soon become cumbersome and inefficient in the wireless setting
and impossible in voice interfaces. Small screens, slower connections, high
latency, limited input capabilities, and the serial nature of voice
interfaces present new challenges. This talk focuses on personalization
techniques that are essential for the usability of handheld wireless devices.
About the speaker:
Michael J. Pazzani is the Director of the Information and Intelligent
Systems Division of the National Science Foundation. He received his Ph.D.
in Computer Science from UCLA and is on leave from a full professorship at the
University of California, Irvine where he also served as department chair of
Information and Computer Science at UCI for five years. Dr. Pazzani serves
on the Board of Regents of the National Library of Medicine. He is a fellow
of the American Association of Artificial Intelligence and has published
numerous papers in machine learning, personalization, information retrieval,
and cognitive science.
|
Friday, September 3rd, 3:00pm
Coffee at 2:30pm
ACES 2.302 Avaya Auditorium
|
Prof. Jeffrey Mark Siskind [homepage]
School of Electrical and Computer Engineering
Purdue University
Probabilistic Context-Free Grammars (PCFGs) induce distributions over
strings. Strings can be viewed as observations that are maps from
indices to terminals. The domains of such maps are totally ordered
and the terminals are discrete. We extend PCFGs to induce densities
over observations with unordered domains and continuous-valued
terminals. We call our extension Spatial Random Tree Grammars
(SRTGs). While SRTGs are context-sensitive, the inside-outside
algorithm can be extended to support exact likelihood calculation, MAP
estimates, and ML estimation updates in polynomial time on SRTGs. We
call this extension the center-surround algorithm. SRTGs extend
mixture models by adding hierarchical structure that can vary across
observations. The center-surround algorithm can recover the structure
of observations, learn structure from observations, and classify
observations based on their structure. We have used SRTGs and the
center-surround algorithm to process both static images and dynamic
video. In static images, SRTGs have been trained to distinguish
houses from cars. In dynamic video, SRTGs have been trained to
distinguish entering from exiting. We demonstrate how the structural
priors provided by SRTGs support these tasks.
Joint work with Charles Bouman, Shawn Brownfield, Bingrui Foo, Mary
Harper, Ilya Pollak, and James Sherman.
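For intuition about what the talk generalizes, the ordinary inside algorithm
for a PCFG in Chomsky normal form can be sketched as follows (the toy
grammar and probabilities are invented; the center-surround algorithm
extends this computation to unordered domains and continuous terminals):

```python
from collections import defaultdict

def inside(words, lexicon, rules, start="S"):
    """Inside probabilities for a CNF PCFG.
    lexicon[(A, w)] = P(A -> w); rules[(A, B, C)] = P(A -> B C).
    Returns the total probability that `start` derives the string."""
    n = len(words)
    beta = defaultdict(float)            # beta[(i, j, A)] = P(A => words[i:j])
    for i, w in enumerate(words):
        for (A, word), p in lexicon.items():
            if word == w:
                beta[(i, i + 1, A)] += p
    for span in range(2, n + 1):         # build up over longer spans
        for i in range(n - span + 1):
            j = i + span
            for (A, B, C), p in rules.items():
                for k in range(i + 1, j):        # split point
                    beta[(i, j, A)] += p * beta[(i, k, B)] * beta[(k, j, C)]
    return beta[(0, n, start)]

# Toy grammar: S -> NP V, NP -> Det N, with trivial word probabilities.
lexicon = {("Det", "the"): 1.0, ("N", "dog"): 1.0, ("V", "barks"): 1.0}
rules = {("NP", "Det", "N"): 1.0, ("S", "NP", "V"): 1.0}
p = inside(["the", "dog", "barks"], lexicon, rules)  # 1.0
```

Because strings are totally ordered, the split point k ranges over adjacent
spans; dropping that ordering and replacing the discrete lexicon with
densities is, roughly, the move from PCFGs to SRTGs.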
About the speaker:
Jeffrey Mark Siskind received the B.A. degree in computer science from
the Technion, Israel Institute of Technology in 1979, the S.M. degree
in computer science from MIT in 1989, and the Ph.D. degree in computer
science from MIT in 1992. He did a postdoctoral fellowship at the
University of Pennsylvania Institute for Research in Cognitive Science
from 1992 to 1993. He was an assistant professor at the University of
Toronto Department of Computer Science from 1993 to 1995, a senior
lecturer at the Technion Department of Electrical Engineering in 1996,
a visiting assistant professor at the University of Vermont Department
of Computer Science and Electrical Engineering from 1996 to 1997, and
a research scientist at NEC Research Institute, Inc. from 1997 to
2001. He joined the Purdue University School of Electrical and
Computer Engineering in 2002 where he is currently an associate
professor. His research interests include machine vision, artificial
intelligence, cognitive science, computational linguistics, child
language acquisition, and programming languages and compilers.
|