Peter Stone's Selected Publications



Value Function Decomposition for Iterative Design of Reinforcement Learning Agents

Value Function Decomposition for Iterative Design of Reinforcement Learning Agents.
James MacGlashan, Evan Archer, Alisa Devlic, Takuma Seno, Craig Sherstan, Peter R. Wurman, and Peter Stone.
In Conference on Neural Information Processing Systems (NeurIPS), December 2022.
5-minute Video Presentation; the slides

Download

[PDF] 11.6MB

Abstract

Designing reinforcement learning (RL) agents is typically a difficult process that requires numerous design iterations. Learning can fail for a multitude of reasons, and standard RL methods offer too few tools to provide insight into the exact cause. In this paper, we show how to integrate value decomposition into a broad class of actor-critic algorithms and use it to assist in the iterative agent-design process. Value decomposition separates a reward function into distinct components and learns value estimates for each. These value estimates provide insight into an agent's learning and decision-making process and enable new training methods to mitigate common problems. As a demonstration, we introduce SAC-D, a variant of soft actor-critic (SAC) adapted for value decomposition. SAC-D maintains similar performance to SAC, while learning a larger set of value predictions. We also introduce decomposition-based tools that exploit this information, including a new reward influence metric, which measures each reward component's effect on agent decision-making. Using these tools, we provide several demonstrations of decomposition's use in identifying and addressing problems in the design of both environments and agents. Value decomposition is broadly applicable and easy to incorporate into existing algorithms and workflows, making it a powerful tool in an RL practitioner's toolbox.
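
To make the core idea concrete, here is a minimal sketch (not taken from the paper) of a decomposed value estimate and a component-influence check. The array shapes, the component names, and the influence function below are illustrative assumptions; the paper's SAC-D critic and its reward influence metric are defined in the PDF above.

import numpy as np

rng = np.random.default_rng(0)
n_actions, n_components = 4, 3  # hypothetical components: progress, collision penalty, energy cost

# Hypothetical per-component Q-values for one state, one row per reward
# component; in a decomposed critic these would be separate value heads.
Q = rng.normal(size=(n_components, n_actions))

# The full value estimate is the sum over reward components.
Q_total = Q.sum(axis=0)
greedy_action = int(np.argmax(Q_total))
print(f"greedy action under summed value: {greedy_action}")

def influence(Q, i):
    # Illustrative influence score (an assumption, not the paper's exact
    # definition): does removing component i's values change the greedy action?
    without_i = Q.sum(axis=0) - Q[i]
    return float(np.argmax(without_i) != np.argmax(Q.sum(axis=0)))

for i in range(n_components):
    print(f"component {i}: influence = {influence(Q, i):.0f}")

Inspecting per-component values this way is what lets a designer see, for example, that a penalty term is dominating action selection, which is the kind of debugging workflow the abstract describes.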

BibTeX Entry

@InProceedings{NeurIPS22-James,
  author = {James MacGlashan and Evan Archer and Alisa Devlic and Takuma Seno and Craig Sherstan and Peter R.\ Wurman and Peter Stone},
  title = {Value Function Decomposition for Iterative Design of Reinforcement Learning Agents},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  location = {New Orleans, LA},
  month = {December},
  year = {2022},
  abstract = {
    Designing reinforcement learning (RL) agents is typically a
    difficult process that requires numerous design
    iterations. Learning can fail for a multitude of reasons, and
    standard RL methods offer too few tools to provide insight into
    the exact cause. In this paper, we show how to integrate value
    decomposition into a broad class of actor-critic algorithms and
    use it to assist in the iterative agent-design process. Value
    decomposition separates a reward function into distinct components
    and learns value estimates for each. These value estimates provide
    insight into an agent's learning and decision-making process and
    enable new training methods to mitigate common problems. As a
    demonstration, we introduce SAC-D, a variant of soft actor-critic
    (SAC) adapted for value decomposition. SAC-D maintains similar
    performance to SAC, while learning a larger set of value
    predictions. We also introduce decomposition-based tools that
    exploit this information, including a new reward influence metric,
    which measures each reward component's effect on agent
    decision-making. Using these tools, we provide several
    demonstrations of decomposition's use in identifying and
    addressing problems in the design of both environments and
    agents. Value decomposition is broadly applicable and easy to
    incorporate into existing algorithms and workflows, making it a
    powerful tool in an RL practitioner's toolbox.
    },
  wwwnote={<a href="https://recorder-v3.slideslive.com/#/share?share=73813&s=979b8d77-cd2c-46fe-87b7-13824cb666eb">5-minute Video Presentation</a>; <a href="https://docs.google.com/presentation/d/1IZeA05uGq03nn_THeY0Hlua4cnZjVF5mqcXg2s_pACo/edit#slide=id.g16c90e8925d_0_35">the slides</a>},
}
