Many other robotic soccer systems have been developed, both in simulation and with real robots. Using a simulator based closely upon the Dynasim system [28], we previously used Memory-based Learning to allow a player to learn when to shoot and when to pass the ball [31]. We then used Neural Networks to teach a player to shoot a moving ball into the goal [35]. In the soccer server, we then layered two learned behaviors to produce a higher-level multi-agent behavior: passing [34]. Also in the soccer server, Matsubara et al. used a Neural Network to allow a player to learn when to shoot and when to pass [23] (as opposed to the Memory-based technique we used for a similar task). The RoboCup-97 simulator competition included 29 teams, many of which demonstrated novel scientific contributions, particularly in the field of multi-agent learning [16].
Robotic soccer with real robots was pioneered by the Dynamo group [29] and Asada's laboratory [2]. Recent international competitions have motivated the creation of a wide variety of robot soccer teams [14, 16].
Most previous research, both in simulation and on real robots, has concentrated on individual skills, with little attention paid to team coordination. A rare exception is [19], in which team coordination is evolved using genetic programming. Unfortunately, no general teamwork structure can be extracted from this work, as it is evolved in a domain-specific setting.