Peter Stone's Selected Publications



FLaRe: Achieving Masterful and Adaptive Robot Policies with Large-Scale Reinforcement Learning Fine-Tuning

FLaRe: Achieving Masterful and Adaptive Robot Policies with Large-Scale Reinforcement Learning Fine-Tuning.
Jiaheng Hu, Rose Hendrix, Ali Farhadi, Aniruddha Kembhavi, Roberto Martín-Martín, Peter Stone, Kuo-Hao Zeng, and Kiana Ehsani.
In ICRA, May 2025.

Download

[PDF] 9.2MB

Abstract

In recent years, the Robotics field has initiated several efforts toward building generalist robot policies through large-scale multi-task Behavior Cloning. However, direct deployments of these policies have led to unsatisfactory performance, where the policy struggles with unseen states and tasks. How can we break through the performance plateau of these models and elevate their capabilities to new heights? In this paper, we propose FLaRe, a large-scale Reinforcement Learning fine-tuning framework that integrates robust pre-trained representations, large-scale training, and gradient stabilization techniques. Our method aligns pre-trained policies towards task completion, achieving state-of-the-art (SoTA) performance both on previously demonstrated and on entirely novel tasks and embodiments. Specifically, on a set of long-horizon mobile manipulation tasks, FLaRe achieves an average success rate of 79.5/100 in unseen environments, with absolute improvements of +23.6 in simulation and +30.7 on real robots over prior SoTA methods. By utilizing only sparse rewards, our approach can enable generalizing to new capabilities beyond the pretraining data with minimal human effort. Moreover, we demonstrate rapid adaptation to new embodiments and behaviors with less than a day of fine-tuning. Videos, code, and appendix can be found on the project website at robot-flare.github.io
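
As a rough illustration of the kind of pipeline the abstract describes (reinforcement-learning fine-tuning of a pre-trained policy using sparse task-completion rewards and gradient stabilization), the sketch below shows one simple variant in Python/PyTorch: a REINFORCE-style update with global gradient-norm clipping. This is not the authors' implementation; the Policy class, finetune_step function, dimensions, and hyperparameters are illustrative placeholders only.

# Minimal sketch (not the paper's code): fine-tune a "pre-trained" policy with
# on-policy RL on sparse rewards, stabilizing updates via gradient-norm clipping.
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Stand-in for a pre-trained policy backbone plus an action head."""
    def __init__(self, obs_dim=32, act_dim=8):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.head = nn.Linear(128, act_dim)

    def forward(self, obs):
        return self.head(self.backbone(obs))  # action logits

def finetune_step(policy, optimizer, obs, actions, returns, max_grad_norm=1.0):
    """One REINFORCE-style update on a batch of rollouts.

    `returns` would come from sparse task-completion rewards; clipping the
    global gradient norm is one simple stabilization technique.
    """
    logits = policy(obs)
    logp = torch.distributions.Categorical(logits=logits).log_prob(actions)
    loss = -(logp * returns).mean()  # policy-gradient objective
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(policy.parameters(), max_grad_norm)
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    policy = Policy()
    # In practice the backbone weights would be loaded from behavior-cloning
    # pre-training; here they are random, for illustration only.
    opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
    obs = torch.randn(64, 32)                     # placeholder observations
    actions = torch.randint(0, 8, (64,))          # placeholder discrete actions
    returns = torch.randint(0, 2, (64,)).float()  # sparse 0/1 task rewards
    print("loss:", finetune_step(policy, opt, obs, actions, returns))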

BibTeX Entry

@InProceedings{hu_flare25,
  author   = {Jiaheng Hu and Rose Hendrix and Ali Farhadi and Aniruddha Kembhavi and Roberto Mart{\'\i}n-Mart{\'\i}n and Peter Stone and Kuo-Hao Zeng and Kiana Ehsani},
  title    = {FLaRe: Achieving Masterful and Adaptive Robot Policies with Large-Scale Reinforcement Learning Fine-Tuning},
  booktitle = {ICRA},
  year     = {2025},
  month    = {May},
  location = {Atlanta, USA},
  abstract = {In recent years, the Robotics field has initiated several efforts toward building
generalist robot policies through large-scale multi-task Behavior Cloning.
However, direct deployments of these policies have led to unsatisfactory
performance, where the policy struggles with unseen states and tasks. How can we
break through the performance plateau of these models and elevate their
capabilities to new heights? In this paper, we propose FLaRe, a large-scale
Reinforcement Learning fine-tuning framework that integrates robust pre-trained
representations, large-scale training, and gradient stabilization techniques. Our
method aligns pre-trained policies towards task completion, achieving
state-of-the-art (SoTA) performance both on previously demonstrated and on
entirely novel tasks and embodiments. Specifically, on a set of long-horizon
mobile manipulation tasks, FLaRe achieves an average success rate of 79.5/100 in
unseen environments, with absolute improvements of +23.6 in simulation and
+30.7 on real robots over prior SoTA methods. By utilizing only sparse rewards,
our approach can enable generalizing to new capabilities beyond the pretraining
data with minimal human effort. Moreover, we demonstrate rapid adaptation to new
embodiments and behaviors with less than a day of fine-tuning. Videos, code, and
appendix can be found on the project website at robot-flare.github.io
  },
}

Generated by bib2html.pl (written by Patrick Riley) on Sat Mar 08, 2025 23:11:31