To Teach or not to Teach? Decision Making Under Uncertainty in Ad Hoc Teams.
Peter Stone and Sarit Kraus.
In The Ninth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), International Foundation for Autonomous Agents and Multiagent Systems, May 2010.
Supplemental material cited in the paper, including a proof and an algorithm.
AAMAS 2010
[PDF] 180.6 kB  [postscript] 285.3 kB
In typical multiagent teamwork settings, the teammates are either programmed together, or are otherwise provided with standard communication languages and coordination protocols. In contrast, this paper presents an ad hoc team setting in which the teammates are not pre-coordinated, yet still must work together in order to achieve their common goal(s). We represent a specific instance of this scenario, in which a teammate has limited action capabilities and a fixed and known behavior, as a finite-horizon, cooperative $k$-armed bandit. In addition to motivating and studying this novel ad hoc teamwork scenario, the paper contributes to the $k$-armed bandits literature by characterizing the conditions under which certain actions are potentially optimal, and by presenting a polynomial dynamic programming algorithm that solves for the optimal action when the arm payoffs come from a discrete distribution.
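To give a concrete feel for the decision structure the abstract describes, the sketch below models a toy "teach or exploit" finite-horizon bandit in Python. It is an illustration only, not the paper's algorithm: the specific arms, discrete payoff supports, turn order, and the learner's greedy-on-empirical-mean rule are assumptions made for this example, and the solver is a brute-force memoized expectimax rather than the polynomial dynamic program the paper contributes.

"""
Illustrative sketch only: a toy finite-horizon "teach vs. exploit" bandit,
loosely modeled on the setting in the abstract. All concrete details
(two learner arms, payoff supports, turn order, the learner's greedy rule)
are assumptions for this example, not the paper's exact formulation.
"""
from functools import lru_cache

# Discrete payoff distributions: arm -> list of (value, probability).
ARMS = {
    "A": [(0, 0.5), (10, 0.5)],   # high-mean arm, only the teacher can pull it
    "B": [(2, 0.5), (4, 0.5)],    # arm both agents can pull
    "C": [(1, 0.5), (3, 0.5)],    # arm both agents can pull
}
TEACHER_ARMS = ("A", "B", "C")
LEARNER_ARMS = ("B", "C")          # the learner's limited action capabilities

def learner_choice(obs):
    """Fixed, known learner behavior (assumed): pull the observed arm with the
    highest empirical mean; if nothing has been observed yet, default to the
    first arm it can pull."""
    best, best_mean = None, float("-inf")
    for arm in LEARNER_ARMS:
        total, count = obs[arm]
        if count == 0:
            continue
        m = total / count
        if m > best_mean or (m == best_mean and arm < best):
            best, best_mean = arm, m
    return best if best is not None else LEARNER_ARMS[0]

@lru_cache(maxsize=None)
def value(rounds_left, obs):
    """Expected total team payoff with `rounds_left` rounds to go, where `obs`
    maps each learner arm to (sum, count) of payoffs the learner has seen.
    Each round: the teacher acts, then the learner acts greedily."""
    if rounds_left == 0:
        return 0.0
    obs_dict = dict(obs)
    best = float("-inf")
    for action in TEACHER_ARMS:              # teacher's choice: exploit or teach
        expected = 0.0
        for v, p in ARMS[action]:
            new_obs = dict(obs_dict)
            if action in LEARNER_ARMS:       # "teaching": the learner observes this pull
                s, c = new_obs[action]
                new_obs[action] = (s + v, c + 1)
            l_arm = learner_choice(new_obs)  # learner then pulls its greedy arm
            for lv, lp in ARMS[l_arm]:
                s, c = new_obs[l_arm]
                next_obs = dict(new_obs)
                next_obs[l_arm] = (s + lv, c + 1)
                expected += p * lp * (v + lv + value(rounds_left - 1,
                                                     tuple(sorted(next_obs.items()))))
        best = max(best, expected)
    return best

if __name__ == "__main__":
    start = tuple(sorted({arm: (0.0, 0) for arm in LEARNER_ARMS}.items()))
    for horizon in (1, 2, 3):
        print(f"horizon {horizon}: optimal expected team payoff = "
              f"{value(horizon, start):.3f}")

With discrete payoff supports the learner's (sum, count) statistics take only finitely many values, which is what memoization exploits here; the paper's contribution is a polynomial dynamic programming algorithm over such a setting, together with a characterization of which actions can ever be optimal, neither of which this toy reproduces.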
@InProceedings{AAMAS10-adhoc,
  author    = {Peter Stone and Sarit Kraus},
  title     = {To Teach or not to Teach? Decision Making Under Uncertainty in Ad Hoc Teams},
  booktitle = {The Ninth International Conference on Autonomous Agents and Multiagent Systems (AAMAS)},
  location  = {Toronto, Canada},
  month     = {May},
  year      = {2010},
  publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
  abstract  = {In typical multiagent \emph{teamwork} settings, the teammates are either programmed together, or are otherwise provided with standard communication languages and coordination protocols. In contrast, this paper presents an \emph{ad hoc team} setting in which the teammates are not pre-coordinated, yet still must work together in order to achieve their common goal(s). We represent a specific instance of this scenario, in which a teammate has limited action capabilities and a fixed and known behavior, as a finite-horizon, cooperative $k$-armed bandit. In addition to motivating and studying this novel ad hoc teamwork scenario, the paper contributes to the $k$-armed bandits literature by characterizing the conditions under which certain actions are potentially optimal, and by presenting a polynomial dynamic programming algorithm that solves for the optimal action when the arm payoffs come from a discrete distribution.},
  wwwnote   = {<a href="http://www.cs.utexas.edu/~pstone/Papers/2010aamas/supplemental.pdf">supplemental material</a> cited in the paper, including a proof and an algorithm.<br>
               <a href="http://www.cse.yorku.ca/AAMAS2010/">AAMAS 2010</a>},
}