Structured Exploration for Reinforcement Learning (2010)
Reinforcement Learning (RL) offers a promising approach towards achieving the dream of autonomous agents that can behave intelligently in the real world. Instead of requiring humans to determine the correct behaviors or sufficient knowledge in advance, RL algorithms allow an agent to acquire the necessary knowledge through direct experience with its environment. Early algorithms guaranteed convergence to optimal behaviors in limited domains, giving hope that simple, universal mechanisms would allow learning agents to succeed at solving a wide variety of complex problems. In practice, the field of RL has struggled to apply these techniques successfully to the full breadth and depth of real-world domains.

This thesis extends the reach of RL techniques by demonstrating the synergies among certain key developments in the literature. The first of these developments is model-based exploration, which facilitates theoretical convergence guarantees in finite problems by explicitly reasoning about an agent’s certainty in its understanding of its environment. A second branch of research studies function approximation, which generalizes RL to infinite problems by artificially limiting the degrees of freedom in an agent’s representation of its environment. The final major advance that this thesis incorporates is hierarchical decomposition, which seeks to improve the efficiency of learning by endowing an agent’s knowledge and behavior with the gross structure of its environment.

Each of these ideas has intuitive appeal and sustains substantial independent research efforts, but this thesis defines the first RL agent that combines all their benefits in the general case. In showing how to combine these techniques effectively, this thesis investigates the twin issues of generalization and exploration, which lie at the heart of efficient learning. This thesis thus lays the groundwork for the next generation of RL algorithms, which will allow scientific agents to know when it suffices to estimate a plan from current data and when to accept the potential cost of running an experiment to gather new data.
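As a concrete illustration of the first development, model-based exploration, the sketch below follows the spirit of R-max (Brafman and Tennenholtz): state-action pairs with too little experience are treated as maximally rewarding, so any planner built on the learned model is drawn toward the unknown. This is a hedged illustration of the general idea, not the algorithm from the thesis; the visit threshold `m` and the reward bound `r_max` are assumed parameters.

```python
# A minimal sketch of model-based exploration in the spirit of R-max.
# Not the thesis's algorithm: the visit threshold `m` and the optimistic
# reward bound `r_max` are illustrative assumptions.
from collections import defaultdict

class RMaxModel:
    def __init__(self, r_max=1.0, m=5):
        self.r_max = r_max                    # optimistic reward for unknown pairs
        self.m = m                            # visits before a pair counts as "known"
        self.counts = defaultdict(int)        # (s, a) -> visit count
        self.reward_sum = defaultdict(float)  # (s, a) -> summed observed reward
        self.next_counts = defaultdict(lambda: defaultdict(int))

    def update(self, s, a, r, s_next):
        # Record one transition of direct experience.
        self.counts[(s, a)] += 1
        self.reward_sum[(s, a)] += r
        self.next_counts[(s, a)][s_next] += 1

    def known(self, s, a):
        # The model trusts a state-action pair only after enough visits;
        # this is the explicit reasoning about the agent's certainty.
        return self.counts[(s, a)] >= self.m

    def reward(self, s, a):
        # Unknown pairs look maximally rewarding, so planning on this
        # model drives the agent to explore them.
        if not self.known(s, a):
            return self.r_max
        return self.reward_sum[(s, a)] / self.counts[(s, a)]
```

A planner (for example, value iteration over this learned model) then treats the optimistic rewards and the empirical transition frequencies uniformly, which is the mechanism behind the finite-problem convergence guarantees the abstract mentions.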
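The second development, function approximation, can be sketched as a linear value function over a small, fixed feature basis: the agent’s representation has only as many degrees of freedom as there are features, so updates at one state generalize to nearby states. The radial-basis features, learning rate, and discount below are illustrative assumptions, not the representation studied in the thesis.

```python
# A sketch of value-function approximation with a fixed linear basis.
import numpy as np

def rbf_features(state, centers, width=0.5):
    # Radial-basis features: `centers` has shape (k, d) and `state` is a
    # length-d vector, so the representation has only k degrees of freedom.
    return np.exp(-np.sum((centers - state) ** 2, axis=1) / (2 * width ** 2))

class LinearQ:
    def __init__(self, centers, n_actions, alpha=0.1, gamma=0.99):
        self.centers, self.alpha, self.gamma = centers, alpha, gamma
        self.w = np.zeros((n_actions, len(centers)))  # one weight vector per action

    def value(self, state, action):
        return float(self.w[action] @ rbf_features(state, self.centers))

    def td_update(self, s, a, r, s_next):
        # One gradient step on the temporal-difference error; nearby
        # states share features, so each update generalizes.
        best_next = max(self.value(s_next, b) for b in range(len(self.w)))
        error = r + self.gamma * best_next - self.value(s, a)
        self.w[a] += self.alpha * error * rbf_features(s, self.centers)
```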
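The third development, hierarchical decomposition, is commonly formalized through temporally extended actions; one standard formalism is the "option" of Sutton, Precup, and Singh, sketched below as a generic stand-in for the thesis’s hierarchical machinery rather than a reproduction of it. The `env.step(action) -> (state, reward)` interface is an assumption for illustration.

```python
# A sketch of hierarchical structure via temporally extended actions
# ("options"), used here as a generic stand-in for the hierarchy in the
# thesis. `env.step(action) -> (state, reward)` is an assumed interface.
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Option:
    initiation: Set[int]                # states where the option may begin
    policy: Callable[[int], int]        # subtask policy: state -> primitive action
    termination: Callable[[int], bool]  # True once the subtask is complete

def run_option(env, state, option):
    # Execute the subtask to completion, so the level above can treat
    # the whole behavior as a single structured action.
    assert state in option.initiation
    total_reward = 0.0
    while not option.termination(state):
        state, reward = env.step(option.policy(state))
        total_reward += reward
    return state, total_reward
```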
Nicholas Jong (Ph.D. Alumni), nickjong [at] me com