Evolving Keepaway Soccer Players through Task Decomposition.
Shimon Whiteson, Nate Kohl, Risto Miikkulainen, and Peter Stone.
Machine Learning, 59(1):5–30, May 2005.
Some videos of the agents before and after learning referenced in the paper.
The publisher's official version.
An earlier version appeared in the proceedings of The Genetic and Evolutionary Computation Conference 2003 (GECCO-2003).
[PDF]278.8kB [postscript]566.8kB
Complex control tasks can often be solved by decomposing them into hierarchies of manageable subtasks. Such decompositions require designers to decide how much human knowledge should be used to help learn the resulting components. On one hand, encoding human knowledge requires manual effort and may incorrectly constrain the learner's hypothesis space or guide it away from the best solutions. On the other hand, it may make learning easier and enable the learner to tackle more complex tasks. This article examines the impact of this trade-off in tasks of varying difficulty. A space laid out by two dimensions is explored: 1) how much human assistance is given and 2) how difficult the task is. In particular, the neuroevolution learning algorithm is enhanced with three different methods for learning the components that result from a task decomposition. The first method, coevolution, is mostly unassisted by human knowledge. The second method, layered learning, is highly assisted. The third method, concurrent layered learning, is a novel combination of the first two that attempts to exploit human knowledge while retaining some of coevolution's flexibility. Detailed empirical results are presented comparing and contrasting these three approaches on two versions of a complex task, namely robot soccer keepaway, that differ in difficulty of learning. These results confirm that, given a suitable task decomposition, neuroevolution can master difficult tasks. Furthermore, they demonstrate that the appropriate level of human assistance depends critically on the difficulty of the problem.
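As a rough illustration of the two training regimes contrasted in the abstract, the following minimal Python sketch (not code from the paper; the subtask, fitness functions, and network representation are hypothetical placeholders) shows layered learning, where a lower-layer network is evolved first and then frozen before the upper layer is trained on top of it. Concurrent layered learning would instead keep both populations evolving together.

import random

def random_net(n_weights=8):
    # A "network" here is just a flat weight vector, standing in for an evolved policy.
    return [random.uniform(-1, 1) for _ in range(n_weights)]

def mutate(net, sigma=0.1):
    return [w + random.gauss(0, sigma) for w in net]

def evolve(population, fitness, generations=20):
    # Simple truncation-selection evolutionary loop: keep the top half, refill with mutants.
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: len(population) // 2]
        population = parents + [mutate(p) for p in parents]
    return max(population, key=fitness)

# Placeholder fitness functions for an assumed subtask and the full task.
def pass_fitness(net):
    # Stand-in for evaluating a low-level subtask (e.g. passing) in simulation.
    return -sum(w * w for w in net)

def keepaway_fitness(pass_net):
    # Stand-in for evaluating the full keepaway task, conditioned on a fixed lower layer.
    def fitness(decision_net):
        return -sum((a - b) ** 2 for a, b in zip(decision_net, pass_net))
    return fitness

# Layered learning: evolve the subtask network first, then freeze it and
# evolve the higher-level decision network against it.
pass_net = evolve([random_net() for _ in range(10)], pass_fitness)
decision_net = evolve([random_net() for _ in range(10)], keepaway_fitness(pass_net))

# Concurrent layered learning would keep re-evolving the lower-layer population
# alongside the upper one, re-evaluating it against the current decision networks.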
@Article{MLJ05,
  author   = "Shimon Whiteson and Nate Kohl and Risto Miikkulainen and Peter Stone",
  title    = "Evolving Keepaway Soccer Players through Task Decomposition",
  journal  = "Machine Learning",
  year     = "2005",
  month    = "May",
  volume   = "59",
  number   = "1",
  pages    = "5--30",
  abstract = "Complex control tasks can often be solved by decomposing them into hierarchies of manageable subtasks. Such decompositions require designers to decide how much human knowledge should be used to help learn the resulting components. On one hand, encoding human knowledge requires manual effort and may incorrectly constrain the learner's hypothesis space or guide it away from the best solutions. On the other hand, it may make learning easier and enable the learner to tackle more complex tasks. This article examines the impact of this trade-off in tasks of varying difficulty. A space laid out by two dimensions is explored: 1) how much human assistance is given and 2) how difficult the task is. In particular, the neuroevolution learning algorithm is enhanced with three different methods for learning the components that result from a task decomposition. The first method, coevolution, is mostly unassisted by human knowledge. The second method, layered learning, is highly assisted. The third method, concurrent layered learning, is a novel combination of the first two that attempts to exploit human knowledge while retaining some of coevolution's flexibility. Detailed empirical results are presented comparing and contrasting these three approaches on two versions of a complex task, namely robot soccer keepaway, that differ in difficulty of learning. These results confirm that, given a suitable task decomposition, neuroevolution can master difficult tasks. Furthermore, they demonstrate that the appropriate level of human assistance depends critically on the difficulty of the problem.",
  wwwnote  = {Some <a href="http://nn.cs.utexas.edu/pages/research/keepaway-movies/keepaway.html">videos of the agents before and after learning</a> referenced in the paper.<br> The <a href="http://dx.doi.org/10.1007/s10994-005-0460-9">publisher's official version</a><br>An earlier version appeared in the proceedings of <a href="http://gal4.ge.uiuc.edu:8080/GECCO-2003/">The Genetic and Evolutionary Computation Conference 2003</a> (GECCO-2003)},
}