Autonomous Transfer for Reinforcement Learning.
Matthew E. Taylor, Gregory Kuhlmann, and Peter Stone.
In The Seventh International Joint Conference on Autonomous Agents and Multiagent Systems, May 2008.
AAMAS-2008
[PDF] (233.3kB) [postscript] (391.7kB)
Recent work in transfer learning has succeeded in making reinforcement learning algorithms more efficient by incorporating knowledge from previous tasks. However, such methods typically must be provided either a full model of the tasks or an explicit relation mapping one task into the other. An autonomous agent may not have access to such high-level information, but would be able to analyze its experience to find similarities between tasks. In this paper we introduce Modeling Approximate State Transitions by Exploiting Regression (MASTER), a method for automatically learning a mapping from one task to another through an agent's experience. We empirically demonstrate that such learned relationships can significantly improve the speed of a reinforcement learning algorithm in a series of Mountain Car tasks. Additionally, we demonstrate that our method may also assist with the difficult problem of task selection for transfer.
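The abstract describes MASTER's core idea: regress an approximate transition model in the source task, then use a small amount of target-task experience to find the inter-task mapping that the source model explains best. The paper's actual algorithm is not reproduced here; the following is a minimal illustrative sketch of that idea under simplifying assumptions (linear dynamics, state-variable permutations as candidate mappings, and made-up toy data).

```python
# Hedged sketch of the mapping-learning idea from the abstract, NOT the
# authors' implementation: fit a source-task transition model by regression,
# then score candidate state-variable mappings by how well that model
# predicts a small batch of target-task transitions.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy source task: the next state is a fixed linear function of the state.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
S = rng.normal(size=(200, 2))          # source-task states
S_next = S @ A.T                       # observed source-task next states

# Approximate source transition model, fit by least-squares regression.
W, *_ = np.linalg.lstsq(S, S_next, rcond=None)

# Toy target task: identical dynamics, but the state variables are swapped.
swap = [1, 0]
T = rng.normal(size=(50, 2))           # small batch of target-task states
T_next = (T[:, swap] @ A.T)[:, swap]   # target-task next states

def mapping_error(perm):
    """Mean squared prediction error of the source model under `perm`."""
    perm = list(perm)
    pred = T[:, perm] @ W              # predict next state in source coords
    return float(np.mean((pred - T_next[:, perm]) ** 2))

# Keep the candidate mapping that the source model explains best.
best = min(itertools.permutations(range(2)), key=mapping_error)
```

Here `best` recovers the variable swap, because only the correct mapping makes the target transitions consistent with the learned source model; the same error score could, in principle, also rank candidate source tasks for the task-selection problem mentioned at the end of the abstract.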
@InProceedings{AAMAS08-taylor,
  author    = "Matthew E.\ Taylor and Gregory Kuhlmann and Peter Stone",
  title     = "Autonomous Transfer for Reinforcement Learning",
  booktitle = "The Seventh International Joint Conference on Autonomous Agents and Multiagent Systems",
  month     = "May",
  year      = "2008",
  abstract  = {Recent work in transfer learning has succeeded in making reinforcement learning algorithms more efficient by incorporating knowledge from previous tasks. However, such methods typically must be provided either a full model of the tasks or an explicit relation mapping one task into the other. An autonomous agent may not have access to such high-level information, but would be able to analyze its experience to find similarities between tasks. In this paper we introduce Modeling Approximate State Transitions by Exploiting Regression (MASTER), a method for automatically learning a mapping from one task to another through an agent's experience. We empirically demonstrate that such learned relationships can significantly improve the speed of a reinforcement learning algorithm in a series of Mountain Car tasks. Additionally, we demonstrate that our method may also assist with the difficult problem of task selection for transfer.},
  wwwnote   = {<a href="http://gaips.inesc-id.pt/aamas2008/">AAMAS-2008</a>},
}
Generated by bib2html.pl (written by Patrick Riley) on Tue Nov 19, 2024 10:24:45