Peter Stone's Selected Publications



LIBERO: Benchmarking Knowledge Transfer in Lifelong Robot Learning

LIBERO: Benchmarking Knowledge Transfer in Lifelong Robot Learning.
Bo Liu, Yifeng Zhu, Chongkai Gao, Yihao Feng, Qiang Liu, Yuke Zhu, and Peter Stone.
In 37th Conference on Neural Information Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks, December 2023.

Download

[PDF] 37.6MB   [poster.pdf] 2.6MB

Abstract

Lifelong learning offers a promising paradigm of building a generalist agent that learns and adapts over its lifespan. Unlike traditional lifelong learning problems in image and text domains, which primarily involve the transfer of declarative knowledge of entities and concepts, lifelong learning in decision-making (LLDM) also necessitates the transfer of procedural knowledge, such as actions and behaviors. To advance research in LLDM, we introduce LIBERO, a novel benchmark of lifelong learning for robot manipulation. Specifically, LIBERO highlights five key research topics in LLDM: 1) how to efficiently transfer declarative knowledge, procedural knowledge, or the mixture of both; 2) how to design effective policy architectures and 3) effective algorithms for LLDM; 4) the robustness of a lifelong learner with respect to task ordering; and 5) the effect of model pretraining for LLDM. We develop an extendible procedural generation pipeline that can in principle generate infinitely many tasks. For benchmarking purpose, we create four task suites (130 tasks in total) that we use to investigate the above-mentioned research topics. To support sample-efficient learning, we provide high-quality human-teleoperated demonstration data for all tasks. Our extensive experiments present several insightful or even unexpected discoveries: sequential fine-tuning outperforms existing lifelong learning methods in forward transfer, no single visual encoder architecture excels at all types of knowledge transfer, and naive supervised pretraining can hinder agents’ performance in the subsequent LLDM.
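
The abstract's headline finding is that plain sequential fine-tuning is a strong forward-transfer baseline. The sketch below illustrates what that baseline looks like in general: a single behavior-cloning policy fine-tuned on each task's demonstrations in order, with no explicit lifelong-learning machinery. This is not LIBERO's actual API; the network sizes, task stream, and synthetic data are illustrative assumptions (in LIBERO the data would be the human-teleoperated demonstrations mentioned above).

    # Minimal sequential fine-tuning sketch (illustrative; not LIBERO's API).
    import torch
    import torch.nn as nn

    obs_dim, act_dim = 32, 7   # hypothetical observation/action dimensions
    policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

    # Stand-in for a sequence of manipulation tasks, each with (obs, action) demos.
    task_stream = [(torch.randn(256, obs_dim), torch.randn(256, act_dim)) for _ in range(4)]

    for task_id, (obs, expert_actions) in enumerate(task_stream):
        # Fine-tune the same policy on the current task only, then move on.
        for _ in range(100):
            loss = nn.functional.mse_loss(policy(obs), expert_actions)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f"finished task {task_id}, final BC loss {loss.item():.4f}")

Because the policy is never revisits earlier tasks, this baseline trades backward transfer (it can forget old tasks) for the forward-transfer strength the abstract reports; dedicated lifelong-learning methods add mechanisms such as replay or regularization on top of this loop.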

BibTeX Entry

@InProceedings{liu_zhu_NeurIPS2023,
  author   = {Bo Liu and Yifeng Zhu and Chongkai Gao and Yihao Feng and Qiang Liu and Yuke Zhu and Peter Stone},
  title    = {LIBERO: Benchmarking Knowledge Transfer in Lifelong Robot Learning},
  booktitle = {37th Conference on Neural Information Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks},
  year     = {2023},
  month    = {December},
  location = {New Orleans, United States},
  abstract = {Lifelong learning offers a promising paradigm of building a generalist agent that
learns and adapts over its lifespan. Unlike traditional lifelong learning
problems in image and text domains, which primarily involve the transfer of
declarative knowledge of entities and concepts, lifelong learning in
decision-making (LLDM) also necessitates the transfer of procedural knowledge,
such as actions and behaviors. To advance research in LLDM, we introduce LIBERO,
a novel benchmark of lifelong learning for robot manipulation. Specifically,
LIBERO highlights five key research topics in LLDM: 1) how to efficiently
transfer declarative knowledge, procedural knowledge, or the mixture of both; 2)
how to design effective policy architectures and 3) effective algorithms for
LLDM; 4) the robustness of a lifelong learner with respect to task ordering; and
5) the effect of model pretraining for LLDM. We develop an extendible procedural
generation pipeline that can in principle generate infinitely many tasks. For
benchmarking purpose, we create four task suites (130 tasks in total) that we use
to investigate the above-mentioned research topics. To support sample-efficient
learning, we provide high-quality human-teleoperated demonstration data for all
tasks. Our extensive experiments present several insightful or even unexpected
discoveries: sequential fine-tuning outperforms existing lifelong learning
methods in forward transfer, no single visual encoder architecture excels at all
types of knowledge transfer, and naive supervised pretraining can hinder agents’
performance in the subsequent LLDM. 
  },
}
