Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks.
Lemeng Wu, Bo Liu, Peter Stone, and Qiang Liu.
In Advances in Neural Information Processing Systems 34 (2020), December 2020.
[PDF] (8.1MB)  [slides.pdf] (744.8kB)
We propose firefly neural architecture descent, a general framework for progressively and dynamically growing neural networks to jointly optimize the networks’ parameters and architectures. Our method works in a steepest descent fashion, which iteratively finds the best network within a functional neighborhood of the original network that includes a diverse set of candidate network structures. By using Taylor approximation, the optimal network structure in the neighborhood can be found with a greedy selection procedure. We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures that avoid catastrophic forgetting in continual learning. Empirically, firefly descent achieves promising results on both neural architecture search and continual learning. In particular, on a challenging continual image classification task, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
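To make the growing step in the abstract concrete, here is a minimal sketch (not the authors' code) of one width-growing iteration in PyTorch: candidate hidden neurons are attached to a fixed two-layer network at a small scale eps, each candidate is scored with a first-order Taylor approximation of the loss change (the gradient of the loss with respect to its scale), and the highest-scoring candidates are kept greedily. The network shape, the scoring rule's exact form, and all hyperparameters below are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

torch.manual_seed(0)

# A small fixed base network: x -> relu(x W1) W2.
d_in, d_hidden, d_out = 4, 8, 3
W1 = torch.randn(d_in, d_hidden) * 0.5
W2 = torch.randn(d_hidden, d_out) * 0.5

def base_forward(x):
    return torch.relu(x @ W1) @ W2

def grow_wider(x, y, loss_fn, n_candidates=16, n_keep=2, eps=1e-3):
    """Propose candidate neurons at scale eps and greedily keep the ones
    whose scale gradient |dL/ds_i| (a first-order Taylor estimate of the
    achievable loss decrease) is largest. Illustrative sketch only."""
    v = torch.randn(n_candidates, d_in)    # candidate incoming weights
    u = torch.randn(n_candidates, d_out)   # candidate outgoing weights
    s = torch.full((n_candidates,), eps, requires_grad=True)  # scales

    # Forward pass: base output plus eps-scaled candidate contributions.
    h_new = torch.relu(x @ v.t())          # (batch, n_candidates)
    out = base_forward(x) + (h_new * s) @ u
    loss = loss_fn(out, y)
    loss.backward()

    # Greedy selection over the candidate neighborhood.
    scores = s.grad.abs()
    keep = scores.topk(n_keep).indices
    return v[keep], u[keep], scores[keep]

# Toy usage on random data.
x = torch.randn(32, d_in)
y = torch.randint(0, d_out, (32,))
v_new, u_new, scores = grow_wider(x, y, nn.CrossEntropyLoss())
print("selected candidate scores:", scores)

In the full method the same idea covers a richer neighborhood (growing deeper as well as wider, and splitting existing neurons); this sketch only shows the wider-growth case to make the Taylor-scored greedy selection tangible.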
@InProceedings{NeurIPS2020-Wu,
  author    = {Lemeng Wu and Bo Liu and Peter Stone and Qiang Liu},
  title     = {Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks},
  booktitle = {Advances in Neural Information Processing Systems 34 (2020)},
  location  = {Vancouver, Canada},
  month     = {December},
  year      = {2020},
  abstract  = {We propose firefly neural architecture descent, a general framework for progressively and dynamically growing neural networks to jointly optimize the networks' parameters and architectures. Our method works in a steepest descent fashion, which iteratively finds the best network within a functional neighborhood of the original network that includes a diverse set of candidate network structures. By using Taylor approximation, the optimal network structure in the neighborhood can be found with a greedy selection procedure. We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures that avoid catastrophic forgetting in continual learning. Empirically, firefly descent achieves promising results on both neural architecture search and continual learning. In particular, on a challenging continual image classification task, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.},
}