Peter Stone's Selected Publications



Dynamic Sparse Training for Deep Reinforcement Learning

Dynamic Sparse Training for Deep Reinforcement Learning.
Ghada Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, and Peter Stone.
In Proceedings of the 31st International Joint Conference on Artificial Intelligence, July 2022.
arXiv version with the appendix

Download

[PDF] 3.7MB   [slides.pptx] 16.7MB

Abstract

Deep reinforcement learning (DRL) agents are trained through trial-and-error interactions with the environment. As a result, dense neural networks require long training times to reach good performance and consume prohibitive computation and memory resources. Learning efficient DRL agents has recently received increasing attention, yet current methods focus on accelerating inference time. In this paper, we introduce, for the first time, a dynamic sparse training approach for deep reinforcement learning that accelerates the training process. The proposed approach trains a sparse neural network from scratch and dynamically adapts its topology to the changing data distribution during training. Experiments on continuous control tasks show that our dynamic sparse agents achieve higher performance than the equivalent dense methods, reduce the parameter count and floating-point operations (FLOPs) by 50%, and learn faster, reaching the performance of dense agents with a 40-50% reduction in training steps.
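
The topology adaptation described in the abstract follows the general dynamic sparse training recipe: periodically drop a fraction of the weakest active connections and regrow the same number elsewhere. The Python/NumPy fragment below is a minimal sketch of one such prune-and-regrow step, in the spirit of Sparse Evolutionary Training; the function names, the drop fraction, and the update schedule are illustrative assumptions, not the authors' exact implementation.

    import numpy as np

    def init_sparse_mask(shape, sparsity, rng):
        # Randomly keep (1 - sparsity) of the positions in a weight tensor.
        n_total = int(np.prod(shape))
        n_active = int(round((1.0 - sparsity) * n_total))
        flat = np.zeros(n_total, dtype=bool)
        flat[rng.choice(n_total, size=n_active, replace=False)] = True
        return flat.reshape(shape)

    def prune_and_grow(weights, mask, drop_fraction, rng):
        # Drop the smallest-magnitude active weights, then regrow the same
        # number of connections at randomly chosen inactive positions.
        # (Simplification: just-pruned positions may be regrown.)
        flat_mask = mask.ravel()
        flat_w = weights.ravel()
        active = np.flatnonzero(flat_mask)
        n_drop = int(round(drop_fraction * active.size))
        if n_drop == 0:
            return mask
        drop_idx = active[np.argsort(np.abs(flat_w[active]))[:n_drop]]
        flat_mask[drop_idx] = False          # prune weakest connections
        inactive = np.flatnonzero(~flat_mask)
        grow_idx = rng.choice(inactive, size=n_drop, replace=False)
        flat_mask[grow_idx] = True           # grow new connections
        flat_w[grow_idx] = 0.0               # new weights start at zero
        return mask

    # Usage: keep a layer 90% sparse and adapt its topology periodically.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(256, 256)).astype(np.float32)
    mask = init_sparse_mask(W.shape, sparsity=0.9, rng=rng)
    W *= mask                                # only active weights are trained
    # ... after every K gradient updates of the DRL agent:
    mask = prune_and_grow(W, mask, drop_fraction=0.2, rng=rng)
    W *= mask                                # re-apply the updated mask

In a DRL agent this step would be applied to each sparse layer of the policy and value networks at a fixed interval of gradient updates, keeping the overall parameter count constant while the connectivity pattern tracks the changing data distribution.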

BibTeX Entry

@InProceedings{IJCAI22,
  author="Ghada Sokar and Elena Mocanu and Decebal Constantin Mocanu and Mykola Pechenizkiy and Peter Stone",
  title="Dynamic Sparse Training for Deep Reinforcement Learning",
  booktitle="Proceedings of the 31st International Joint Conference on Artificial Intelligence",
  location="Vienna, Austria",
  month="July",
  year="2022",
  abstract={
            Deep reinforcement learning (DRL) agents are trained
            through trial-and-error interactions with the
            environment. As a result, dense neural networks require
            long training times to reach good performance and consume
            prohibitive computation and memory resources. Learning
            efficient DRL agents has recently received increasing
            attention, yet current methods focus on accelerating
            inference time. In this paper, we introduce, for the
            first time, a dynamic sparse training approach for deep
            reinforcement learning that accelerates the training
            process. The proposed approach trains a sparse neural
            network from scratch and dynamically adapts its topology
            to the changing data distribution during training.
            Experiments on continuous control tasks show that our
            dynamic sparse agents achieve higher performance than the
            equivalent dense methods, reduce the parameter count and
            floating-point operations (FLOPs) by 50%, and learn
            faster, reaching the performance of dense agents with a
            40-50% reduction in training steps.
  },
  wwwnote={<a href="https://arxiv.org/pdf/2106.04217.pdf">arXiv version with the appendix</a>},
}

Generated by bib2html.pl (written by Patrick Riley) on Tue Nov 19, 2024 10:24:41