Mazda Ahmadi and Peter Stone. Instance-Based Action Models for Fast Action Planning. In Ubbo Visser, Fernando Ribeiro, Takeshi Ohashi, and Frank Dellaert, editors, RoboCup-2007: Robot Soccer World Cup XI, volume 5001 of Lecture Notes in Artificial Intelligence, pp. 1–16, Springer Verlag, Berlin, 2008.
BEST PAPER AWARD WINNER at RoboCup International Symposium.
Official version from Publisher's Webpage. © Springer-Verlag
[PDF] 466.3kB  [postscript] 3.0MB
Two main challenges of robot action planning in real domains are uncertain action effects and dynamic environments. In this paper, an instance-based action model is learned empirically by robots trying actions in the environment. Modeling the action planning problem as a Markov decision process, the action model is used to build the transition function. In static environments, standard value iteration techniques are used to compute the optimal policy. In dynamic environments, an algorithm is proposed for fast replanning, which updates a subset of the state-action values computed for the static environment. The goal-scoring task in the RoboCup four-legged league is used as a test-bed, and the algorithms are validated on the problem of planning kicks for scoring goals in the presence of opponent robots. Experimental results, both in simulation and on real robots, show that the instance-based action model outperforms the parametric models used previously, and that incremental replanning significantly improves on the original off-line planning.
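The following is a minimal sketch, not the authors' implementation, of the two ideas described in the abstract: an instance-based transition model built from stored action outcomes, and value iteration over the induced MDP. The discretized state space, the two kick actions, the neighbor-pooling radius, and the reward values are all illustrative assumptions.

import random
from collections import defaultdict

STATES = list(range(10))           # hypothetical discretized field positions
ACTIONS = ["kick_left", "kick_right"]
GOAL, GAMMA, RADIUS = 9, 0.9, 1    # goal state, discount factor, neighbor radius

# Instance store: (state, action) -> observed next states, gathered by
# "robots trying actions in the environment" (simulated here).
instances = defaultdict(list)
for _ in range(500):
    s, a = random.choice(STATES), random.choice(ACTIONS)
    drift = random.choice([1, 2, -1]) if a == "kick_right" else random.choice([-1, -2, 1])
    instances[(s, a)].append(max(0, min(GOAL, s + drift)))

def transition(s, a):
    """Empirical next-state distribution from instances near (s, a)."""
    outcomes = []
    for s2 in STATES:
        if abs(s2 - s) <= RADIUS:          # pool instances from nearby states
            outcomes.extend(instances[(s2, a)])
    if not outcomes:                       # no data: assume the robot stays put
        return {s: 1.0}
    return {n: outcomes.count(n) / len(outcomes) for n in set(outcomes)}

def reward(n, blocked):
    return -5.0 if n in blocked else (10.0 if n == GOAL else -1.0)

def value_iteration(blocked=frozenset(), sweeps=50):
    """Standard value iteration; 'blocked' marks opponent-occupied states."""
    V = {s: 0.0 for s in STATES}
    for _ in range(sweeps):
        for s in STATES:
            if s == GOAL:                  # terminal state
                continue
            V[s] = max(
                sum(p * (reward(n, blocked) + GAMMA * V[n])
                    for n, p in transition(s, a).items())
                for a in ACTIONS)
    return V

V_static = value_iteration()                          # static environment
V_dynamic = value_iteration(blocked=frozenset({5}))   # opponent observed at state 5
print({s: round(V_static[s], 2) for s in STATES})

The paper's incremental replanning would then re-sweep only the subset of state-action values affected when opponents appear (the 'blocked' set here), rather than recomputing the full table as this sketch's value_iteration does.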
@incollection{LNAI2007-ahmadi,
  author    = "Mazda Ahmadi and Peter Stone",
  title     = "Instance-Based Action Models for Fast Action Planning",
  booktitle = "{R}obo{C}up-2007: Robot Soccer World Cup {XI}",
  editor    = "Ubbo Visser and Fernando Ribeiro and Takeshi Ohashi and Frank Dellaert",
  publisher = "Springer Verlag",
  address   = "Berlin",
  year      = "2008",
  series    = "Lecture Notes in Artificial Intelligence",
  volume    = "5001",
  pages     = "1--16",
  abstract  = {Two main challenges of robot action planning in real domains are uncertain action effects and dynamic environments. In this paper, an instance-based action model is learned empirically by robots trying actions in the environment. Modeling the action planning problem as a Markov decision process, the action model is used to build the transition function. In static environments, standard value iteration techniques are used to compute the optimal policy. In dynamic environments, an algorithm is proposed for fast replanning, which updates a subset of the state-action values computed for the static environment. The goal-scoring task in the RoboCup four-legged league is used as a test-bed, and the algorithms are validated on the problem of planning kicks for scoring goals in the presence of opponent robots. Experimental results, both in simulation and on real robots, show that the instance-based action model outperforms the parametric models used previously, and that incremental replanning significantly improves on the original off-line planning.},
  wwwnote   = {<b>BEST PAPER AWARD WINNER</b> at RoboCup International Symposium.<br>Official version from <a href="http://dx.doi.org/10.1007/978-3-540-68847-1_1">Publisher's Webpage</a>. © Springer-Verlag},
}