Tutorial: Evolution of Neural Networks

Risto Miikkulainen
The University of Texas at Austin and Cognizant AI Labs

Description

Neuroevolution is an emerging area of reinforcement learning (RL). It is particularly useful in two areas:

  1. Tasks that require memory: Whereas the traditional value-function-based approach most naturally focuses on MDP problems and on maximizing lifetime reward, neuroevolution work focuses mostly on POMDP tasks and on maximizing reward at the end of learning. Knowledge of neuroevolution should therefore be valuable for researchers and students in robotics, intelligent agents, and multiagent systems.
  2. Deep learning: The performance of deep learning depends crucially on the network architecture and hyperparameters, and several techniques have been developed to optimize them. As a population-based search technique, neuroevolution can explore the search space widely, and therefore find innovative and surprising solutions that would be difficult to find with other techniques. Researchers and students working on deep learning in image processing, speech, language, and the prediction and modeling of complex systems should be able to use neuroevolution to improve their results.
In this tutorial, I will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes for POMDP tasks, (2) ways of combining gradient-based training with evolutionary methods to discover more powerful deep learning architectures, and (3) applications of these techniques in control, robotics, artificial life, games, image processing, and language.
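As a concrete illustration of the first topic above — evolving fixed-topology networks — the sketch below evolves the weights of a tiny 2-2-1 network to solve XOR with a simple elitist evolution strategy. This is an illustrative toy example written for this page, not code from the tutorial; the topology, population sizes, and mutation rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# The four XOR input/output cases -- a classic minimal test task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    """Fixed 2-2-1 topology; the 9-element genome w holds all weights and biases."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output in [0, 1]

def fitness(w):
    preds = np.array([forward(w, x) for x in X])
    return -np.mean((preds - y) ** 2)  # higher is better

# Elitist (mu + lambda)-style evolution: keep the 10 best genomes,
# then refill the population by mutating random elites.
pop = rng.normal(0.0, 1.0, size=(50, 9))
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]
    children = elite[rng.integers(0, 10, size=40)] + rng.normal(0.0, 0.3, size=(40, 9))
    pop = np.vstack([elite, children])

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```

No gradients are computed anywhere: the search operates only on episode-level fitness, which is why the same loop applies unchanged to POMDP tasks where backpropagation targets are unavailable.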

Presentation Materials

GECCO 2024 Slides (in 4-up pdf, with references).
GECCO 2024 Video (of the tutorial presentation)
Alife 2024 Slides (in 4-up pdf, jointly authored with Sebastian Risi and Yujin Tang)

Demos

The slides include numerous demos (i.e., animations, identified with the keyword "Demo" on the slides), but they do not run in the 4-up pdf. They are therefore collected in this demo directory.

More demos can be found on the Neuroevolution book website.

Neuroevolution Exercise (Colab)

This exercise (authored by Yujin Tang) can be run as a notebook in Google Colab. There are three parts:

(1) Neuroevolution for control
(2) Evolutionary Model Merging
(3) Quality Diversity for Model Merging.

Instructions are given in the notebook.
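Part (3) above uses quality diversity (QD) search. As background, the sketch below shows a minimal MAP-Elites-style QD loop on a toy problem — it is not the notebook's code, and the fitness function, behavior descriptor, and grid size are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(genome):
    """Toy problem (hypothetical): fitness is negative distance to the origin;
    the behavior descriptor is the genome's first two coordinates."""
    fit = -np.linalg.norm(genome)
    behavior = np.clip(genome[:2], -1.0, 1.0)
    return fit, behavior

def to_cell(behavior, bins=10):
    # Discretize the 2-D behavior descriptor into a grid cell index.
    idx = np.floor((behavior + 1.0) / 2.0 * bins).astype(int)
    return tuple(np.clip(idx, 0, bins - 1))

archive = {}  # cell -> (fitness, genome): best solution found per behavior niche
for step in range(2000):
    if archive and rng.random() < 0.9:
        # Mutate a randomly chosen elite from the archive.
        parent = archive[list(archive)[rng.integers(len(archive))]][1]
        genome = parent + rng.normal(0.0, 0.1, size=5)
    else:
        genome = rng.normal(0.0, 1.0, size=5)  # occasional random restart
    fit, beh = evaluate(genome)
    cell = to_cell(beh)
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, genome)

print("cells filled:", len(archive), "/ 100")
```

Unlike a standard objective-driven search, the archive keeps the best solution in each behavior niche, so the result is a diverse collection of high-performing solutions rather than a single optimum.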

Neuroevolution Exercise (NERO)

NERO is a video game where the player evolves neural network controllers for teams of non-player characters that engage in battle in a simulated environment. It takes about 30 minutes to get the idea, and up to a few hours of training to build complex teams. (NOTE: This is an earlier exercise; these instructions were last checked in 2021.)

Instructions for this exercise.

Further Reading

The Neuroevolution: Harnessing Creativity in AI Model Design book (MIT Press, 2025).
A survey article in Science on neuroevolution in neuroscience.
A short summary article on neuroevolution.
A survey article in Nature Machine Intelligence.
The NERO Game website.


Last modified: Sat Nov 23 21:39:21 CST 2024