Peter Stone's Selected Publications

Walking and falling: Using robot simulations to model the role of errors in infant walking

Walking and falling: Using robot simulations to model the role of errors in infant walking.
Ori Ossmy, Danyang Han, Patrick MacAlpine, Justine Hoch, Peter Stone, and Karen E. Adolph.
Developmental Science, 27(2):e13449, September 2023.
Available from the publisher's webpage: https://onlinelibrary.wiley.com/doi/abs/10.1111/desc.13449

Abstract

What is the optimal penalty for errors in infant skill learning? Behavioral analyses indicate that errors are frequent but trivial as infants acquire foundational skills. In learning to walk, for example, falling is commonplace but appears to incur only a negligible penalty. Behavioral data, however, cannot reveal whether a low penalty for falling is beneficial for learning to walk. Here, we used a simulated bipedal robot as an embodied model to test the optimal penalty for errors in learning to walk. We trained the robot to walk using 12,500 independent simulations on walking paths produced by infants during free play and systematically varied the penalty for falling -- a level of precision, control, and magnitude impossible with real infants. When trained with lower penalties for falling, the robot learned to walk farther and better on familiar, trained paths and better generalized its learning to novel, untrained paths. Indeed, zero penalty for errors led to the best performance for both learning and generalization. Moreover, the beneficial effects of a low penalty were stronger for generalization than for learning. Robot simulations corroborate prior behavioral data and suggest that a low penalty for errors helps infants learn foundational skills (e.g., walking, talking, and social interactions) that require immense flexibility, creativity, and adaptability.

Research Highlights

During infant skill acquisition, errors are commonplace but appear to incur a low penalty; when learning to walk, for example, falls are frequent but trivial.
To test the optimal penalty for errors, we trained a simulated robot to walk using real infant paths and systematically manipulated the penalty for falling.
Lower penalties in training led to better performance on familiar, trained paths and on novel, untrained paths, and zero penalty was most beneficial.
Benefits of a low penalty were stronger for untrained than for trained paths, suggesting that discounting errors facilitates acquiring skills that require immense flexibility and generalization.
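
In reinforcement-learning terms, the study's key manipulation is a single coefficient: how much reward is subtracted when the robot falls. The Python sketch below illustrates only that reward structure, assuming a hypothetical distance-minus-penalty episode reward; the names (episode_reward, simulate_episode), the toy dynamics, and the swept penalty values are illustrative assumptions, not the paper's actual simulation or training algorithm.

# Illustrative sketch only: a walking reward with a tunable fall penalty.
# episode_reward, simulate_episode, and all constants are hypothetical
# stand-ins, not the paper's implementation.

import random


def episode_reward(distance_walked, fell, fall_penalty):
    """Distance covered in the episode, minus a penalty if the robot fell."""
    return distance_walked - (fall_penalty if fell else 0.0)


def simulate_episode(skill):
    """Toy stand-in for a physics simulation: higher skill walks farther and falls less."""
    distance = max(0.0, random.gauss(skill, 1.0))
    fell = random.random() > skill / 10.0
    return distance, fell


# Sweep the fall penalty, mirroring the study's manipulation (values made up).
random.seed(0)
for fall_penalty in (0.0, 1.0, 5.0, 10.0):
    rewards = [episode_reward(*simulate_episode(skill=5.0), fall_penalty)
               for _ in range(1000)]
    print(f"penalty={fall_penalty:5.1f}  mean episode reward={sum(rewards) / len(rewards):+.2f}")

With the walking skill held fixed, a larger penalty only lowers the reported score; in the study itself, the penalty instead shapes what the learner optimizes during training, which is how it ends up affecting the walking skill and its generalization to untrained paths.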

BibTeX Entry

@article{Ossmy2023,
  author = {Ossmy, Ori and Han, Danyang and MacAlpine, Patrick and Hoch, Justine and Stone, Peter and Adolph, Karen E.},
  title = {Walking and falling: Using robot simulations to model the role of errors in infant walking},
  journal = {Developmental Science},
  pages = {e13449},
  volume = {27},
  number = {2},
  year = {2023},
  month = {September},
  keywords = {error, falling, penalty, reinforcement learning, simulated robot, walking},
  doi = {10.1111/desc.13449},
  url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/desc.13449},
  eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1111/desc.13449},
  abstract = {
              What is the optimal penalty for errors in infant skill
              learning? Behavioral analyses indicate that errors are
              frequent but trivial as infants acquire foundational
              skills. In learning to walk, for example, falling is
              commonplace but appears to incur only a negligible
              penalty. Behavioral data, however, cannot reveal whether
              a low penalty for falling is beneficial for learning to
              walk. Here, we used a simulated bipedal robot as an
              embodied model to test the optimal penalty for errors in
              learning to walk. We trained the robot to walk using
              12,500 independent simulations on walking paths produced
              by infants during free play and systematically varied
              the penalty for falling -- a level of precision, control,
              and magnitude impossible with real infants. When trained
              with lower penalties for falling, the robot learned to
              walk farther and better on familiar, trained paths and
              better generalized its learning to novel, untrained
              paths. Indeed, zero penalty for errors led to the best
              performance for both learning and
              generalization. Moreover, the beneficial effects of a
              low penalty were stronger for generalization than for
              learning. Robot simulations corroborate prior behavioral
              data and suggest that a low penalty for errors helps
              infants learn foundational skills (e.g., walking,
              talking, and social interactions) that require immense
              flexibility, creativity, and adaptability. Research
              Highlights During infant skill acquisition, errors are
              commonplace but appear to incur a low penalty; when
              learning to walk, for example, falls are frequent but
              trivial. To test the optimal penalty for errors, we
              trained a simulated robot to walk using real infant
              paths and systematically manipulated the penalty for
              falling. Lower penalties in training led to better
              performance on familiar, trained paths and on novel
              untrained paths, and zero penalty was most
              beneficial. Benefits of a low penalty were stronger for
              untrained than for trained paths, suggesting that
              discounting errors facilitates acquiring skills that
              require immense flexibility and generalization.},
  wwwnote = {Available from the <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/desc.13449">publisher's webpage</a>},
}
