Bootstrap Learning of Foundational Representations
Common sense, and hence most other human knowledge, is built on
knowledge of a few foundational domains, such as space, time, action,
objects, causality, and so on. We are investigating how this
knowledge can be learned from unsupervised sensorimotor experience.
We assume that an agent, human or robot, starts with a low-level
ontology for describing its sensorimotor interaction with the world.
We call this the "pixel level". William James called it the "blooming
buzzing confusion". The learning task is to create useful
higher-level representations for space, time, actions, objects, etc,
to support effective planning and action in the world.
- It is tempting to try to escape this problem by assuming that the
foundational representations are innate, present at birth, and so
need not be learned. But this only pushes the learning problem
onto the species, which must learn this knowledge over evolutionary
time. We believe that in many ways developmental learning and
evolutionary learning are similar, except that search is depth-first
in the individual and breadth-first in the species. So we pretend that all
learning is done by the individual, and postpone the decision of where
to place the evolutionary/developmental boundary.
- We also assume that the learning agent has access to some
collection of domain-independent statistical learning methods. Our
research strategy is to use methods that appear necessary to
accomplish the learning task; later, we will attempt to minimize the
set of pre-existing methods required to support the learning
process.
The basic idea behind bootstrap learning is to compose multiple
machine learning methods, using weak but general unsupervised or
delayed-reinforcement learning methods to create the prerequisites for
applying stronger but more specific learning methods such as abductive
inference or supervised learning.
An important common theme of all this work is the learning of
a higher-level ontology of places, objects, and their relationships,
based on the low-level "pixel ontology" of direct experience.
These learning methods create new symbols and categories,
solving the symbol grounding problem for these symbols, and
defining the symbols in terms of the agent's own experience,
not the experience of an external teacher or programmer.
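As a concrete illustration of the bootstrapping pattern described above
(a minimal sketch, not the algorithms from the publications below), the
Python fragment that follows composes a weak unsupervised method with a
stronger supervised one: k-means clustering over raw sensor vectors
invents a small set of "place" symbols grounded entirely in the agent's
own experience, and those new symbols then make it possible to learn a
supervised model of action effects. The synthetic data, the number of
clusters, and the scikit-learn components are illustrative assumptions.

    # Illustrative sketch only: a weak unsupervised learner (k-means)
    # invents discrete "place" symbols from raw sensor vectors; the new
    # symbols then serve as input and target for a stronger supervised
    # learner (a decision tree predicting the place reached after an
    # action). Library choices and parameters are assumptions, not the
    # methods used in the papers listed below.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # "Pixel level": raw sensor vectors (e.g., range readings), with no
    # predefined symbols attached to them.
    sensor_log = rng.random((500, 16))        # 500 experiences, 16 sensors
    actions = rng.integers(0, 4, size=500)    # 4 primitive motor actions

    # Step 1 (weak, unsupervised): cluster the raw experience into a few
    # distinctive states. The cluster indices are new symbols, defined
    # entirely by the agent's own sensorimotor history.
    kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(sensor_log)
    place_symbol = kmeans.labels_             # grounded "place" symbols

    # Step 2 (stronger, supervised): with symbols available, learn a
    # model of action effects: which place follows action a in place p?
    X = np.column_stack([place_symbol[:-1], actions[:-1]])
    y = place_symbol[1:]
    effect_model = DecisionTreeClassifier(max_depth=5).fit(X, y)

    # The learned model supports prediction, and hence planning, at the
    # new symbolic level rather than at the pixel level.
    print(effect_model.predict([[place_symbol[0], 2]]))

In the systems described in the publications below, the unsupervised
step is far richer (distinctive states, topological places, tracked
objects), but the composition of weak-then-strong learners is the same.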
Selected Publications
- Jonathan Mugan and Benjamin Kuipers. 2007.
Learning distinctions and rules in a continuous world
through active exploration.
7th International Conference on Epigenetic Robotics (Epirob-07).
- Jefferson Provost and Benjamin Kuipers. 2007.
Self-organizing distinctive state abstraction using options.
7th International Conference on Epigenetic Robotics (Epirob-07).
- Joseph Modayil and Benjamin Kuipers. 2007.
Autonomous Development of a Grounded Object Ontology by a Learning Robot.
Proceedings of the Twenty-Second National Conference on
Artificial Intelligence (AAAI-07).
- Jefferson Provost, Benjamin J. Kuipers and Risto Miikkulainen. 2006.
Developing navigation behavior through self-organizing
distinctive state abstraction.
Connection Science, 18(2), 2006.
- Benjamin Kuipers, Patrick Beeson, Joseph Modayil, and Jefferson Provost. 2006.
Bootstrap learning of foundational representations.
Connection Science, 18(2), 2006.
- Patrick Beeson, Nicholas K. Jong, and Benjamin Kuipers. 2005.
Towards autonomous topological place detection using the Extended Voronoi
Graph.
IEEE International Conference on Robotics and
Automation (ICRA-05).
- Benjamin Kuipers. 2005.
Consciousness: drinking from the firehose of experience.
National Conference on Artificial Intelligence (AAAI-05).
- Joseph Modayil and Benjamin Kuipers. 2004.
Bootstrap learning for object discovery.
IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS-04).
- Benjamin Kuipers and Patrick Beeson. 2002.
Bootstrap learning for place recognition.
Proceedings of the Eighteenth National Conference on
Artificial Intelligence (AAAI-02).
- David Pierce and Benjamin Kuipers. 1997.
Map learning with uninterpreted sensors and effectors.
Artificial Intelligence 92: 169-229, 1997.
The full set of papers on
bootstrap learning is available.
Work described here has taken place in the Intelligent Robotics Lab at
the Artificial Intelligence Laboratory, The University of Texas at
Austin. Research of the Intelligent Robotics Lab is supported in part
by grants from the Texas Advanced Research Program (3658-0170-2007),
the National Science Foundation (IIS-0413257, IIS-0713150, and
IIS-0750011), and the National Institutes of Health (EY016089).