Peter Stone's Selected Publications



Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning

Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning.
Zizhao Wang, Caroline Wang, Xuesu Xiao, Yuke Zhu, and Peter Stone.
In AAAI Conference on Artificial Intelligence, February 2024.

Download

[PDF] 4.5MB

Abstract

Two desiderata of reinforcement learning (RL) algorithms are the ability to learn from relatively little experience and the ability to learn policies that generalize to a range of problem specifications. In factored state spaces, one approach towards achieving both goals is to learn state abstractions, which only keep the necessary variables for learning the tasks at hand. This paper introduces Causal Bisimulation Modeling (CBM), a method that learns the causal relationships in the dynamics and reward functions for each task to derive a minimal, task-specific abstraction. CBM leverages and improves implicit modeling to train a high-fidelity causal dynamics model that can be reused for all tasks in the same environment. Empirical validation on manipulation environments and DeepMind Control Suite reveals that CBM's learned implicit dynamics models identify the underlying causal relationships and state abstractions more accurately than explicit ones. Furthermore, the derived state abstractions allow a task learner to achieve near-oracle levels of sample efficiency.
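
To make the abstraction step concrete, here is a minimal sketch of the general idea of deriving a task-specific abstraction from a learned causal graph: keep exactly the state variables that can reach the reward through the dynamics. This is an illustration under assumptions, not the paper's implementation; the graph encoding, the minimal_abstraction helper, and the toy variable names are all hypothetical.

# Hypothetical sketch (not CBM's actual code): derive a minimal task-specific
# state abstraction from a learned causal graph by keeping only the variables
# that are causal ancestors of the reward.

def minimal_abstraction(dynamics_parents, reward_parents):
    """Return the set of state variables needed for one task.

    dynamics_parents: dict mapping each state variable to the set of state
        variables that appear as its causal parents in the dynamics.
    reward_parents: set of state variables the task's reward depends on.
    """
    keep = set(reward_parents)
    frontier = list(reward_parents)
    # Walk backwards through the causal graph: any causal parent of a kept
    # variable can influence future reward, so it must be kept as well.
    while frontier:
        var = frontier.pop()
        for parent in dynamics_parents.get(var, ()):
            if parent not in keep:
                keep.add(parent)
                frontier.append(parent)
    return keep

if __name__ == "__main__":
    # Toy factored environment: the gripper moves the block; the distractor
    # evolves on its own and never influences the reward.
    dynamics = {
        "block_pos": {"block_pos", "gripper_pos"},
        "gripper_pos": {"gripper_pos"},
        "distractor": {"distractor"},
    }
    print(minimal_abstraction(dynamics, reward_parents={"block_pos"}))
    # -> {'block_pos', 'gripper_pos'}; 'distractor' is dropped.

Because the traversal keeps exactly the reward's causal ancestors, any variable outside the returned set cannot affect the return, which is the sense in which such an abstraction is minimal for the task.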

BibTeX Entry

@InProceedings{cbm-wang-aaai24,
  author    = {Zizhao Wang and Caroline Wang and Xuesu Xiao and Yuke Zhu and Peter Stone},
  title     = {Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  month     = {February},
  location  = {Vancouver, Canada},
  abstract = {
Two desiderata of reinforcement learning (RL) algorithms are the ability to
learn from relatively little experience and the ability to learn policies that
generalize to a range of problem specifications. In factored state spaces, one
approach towards achieving both goals is to learn state abstractions, which only
keep the necessary variables for learning the tasks at hand. 
This paper introduces Causal Bisimulation Modeling (CBM), a method that learns the
causal relationships in the dynamics and reward functions for each task to
derive a minimal, task-specific abstraction. 
CBM leverages and improves implicit modeling to train a high-fidelity causal 
dynamics model that can be reused for all tasks in the same environment. 
Empirical validation on manipulation environments and DeepMind Control Suite 
reveals that CBM's learned implicit dynamics models identify the underlying causal 
relationships and state abstractions more accurately than explicit ones. 
Furthermore, the derived state abstractions allow a task learner to achieve 
near-oracle levels of sample efficiency.
  },
}

Generated by bib2html.pl (written by Patrick Riley) on Tue Nov 19, 2024 10:24:41