Model-Based Meta Automatic Curriculum Learning.
Zifan Xu, Yulin Zhang, Shahaf S. Shperberg, Reuth Mirsky, Yuqian Jiang, Bo Liu, and Peter Stone.
In The Second Conference on Lifelong Learning Agents (CoLLAs), August 2023.
Video presentation
Curriculum learning (CL) has been widely explored to facilitate the learning of hard-exploration tasks in reinforcement learning (RL) by training on a sequence of easier tasks, often called a curriculum. While most curricula are built either manually or automatically based on heuristics, e.g., choosing a training task that is barely beyond the current abilities of the learner, the fact that similar tasks might benefit from similar curricula motivates us to explore meta-learning as a technique for curriculum generation, or teaching, for a distribution of similar tasks. This paper formulates the meta CL problem, which requires a meta-teacher to generate a curriculum that assists the student in training toward any given target task from a task distribution, based on the similarity of these tasks to one another. We propose a model-based meta automatic curriculum learning algorithm (MM-ACL) that learns to predict the performance improvement on one task when the student is trained on another, given the current status of the student. This predictor can then be used to generate curricula for different target tasks. Our empirical results demonstrate that MM-ACL outperforms state-of-the-art CL algorithms in a grid-world domain and a more complex visual-based navigation domain in terms of sample efficiency.
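The core idea in the abstract, a learned predictor of cross-task performance improvement that drives curriculum selection, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the function names, the greedy selection rule, and the `predict_gain` signature are all assumptions for exposition.

```python
def greedy_curriculum(predict_gain, student_state, tasks, target_task, steps):
    """Build a curriculum by repeatedly picking the training task whose
    predicted performance improvement on the target task is largest.

    predict_gain(student_state, train_task, eval_task) stands in for the
    learned model described in the abstract: it estimates the improvement
    on eval_task after the student trains on train_task.
    """
    curriculum = []
    for _ in range(steps):
        # Greedily choose the task with the highest predicted gain on the target.
        best_task = max(
            tasks,
            key=lambda t: predict_gain(student_state, t, target_task),
        )
        curriculum.append(best_task)
        # In the real algorithm the student would now be trained on best_task
        # and its status updated; here we only record the choice.
        student_state = student_state + (best_task,)
    return curriculum
```

For example, with a toy gain model that favors tasks close to the target, the loop simply keeps selecting the target-adjacent task, which shows the selection mechanics without any RL machinery.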
@InProceedings{CoLLAs2023-ZIFAN,
  author    = {Zifan Xu and Yulin Zhang and Shahaf S. Shperberg and Reuth Mirsky and Yuqian Jiang and Bo Liu and Peter Stone},
  title     = {Model-Based Meta Automatic Curriculum Learning},
  booktitle = {The Second Conference on Lifelong Learning Agents (CoLLAs)},
  location  = {Montreal, Canada},
  month     = {August},
  year      = {2023},
  abstract  = {Curriculum learning (CL) has been widely explored to facilitate the learning of hard-exploration tasks in reinforcement learning (RL) by training on a sequence of easier tasks, often called a curriculum. While most curricula are built either manually or automatically based on heuristics, e.g., choosing a training task that is barely beyond the current abilities of the learner, the fact that similar tasks might benefit from similar curricula motivates us to explore meta-learning as a technique for curriculum generation, or teaching, for a distribution of similar tasks. This paper formulates the meta CL problem, which requires a meta-teacher to generate a curriculum that assists the student in training toward any given target task from a task distribution, based on the similarity of these tasks to one another. We propose a model-based meta automatic curriculum learning algorithm (MM-ACL) that learns to predict the performance improvement on one task when the student is trained on another, given the current status of the student. This predictor can then be used to generate curricula for different target tasks. Our empirical results demonstrate that MM-ACL outperforms state-of-the-art CL algorithms in a grid-world domain and a more complex visual-based navigation domain in terms of sample efficiency.},
  wwwnote   = {<a href="https://utexas.box.com/s/nrb7a5ky4q0ww8aumd42xfpylxgn98or">Video presentation</a>},
}
Generated by bib2html.pl (written by Patrick Riley) on Tue Nov 19, 2024 10:24:41