Our current research is supported in part by the NIMH Human Brain Project under grant 1R01-MH66991 (and previously by the National Science Foundation under grants IIS-9811478 and IRI-9309273). For more details, see publications in Self-Organization.
LISSOM is a more biologically realistic implementation of the SOM idea: the weight-change neighborhood is determined through competition and cooperation mediated by lateral connections (instead of a global supervisor), and weights are changed through Hebbian learning and renormalization (instead of moving the weight vector toward the input). LISSOM was developed as a first step towards modeling biological maps (see the visual cortex page), but it also has useful properties in its own right. It is capable of self-organization roughly similar to that of the SOM model, but because the lateral connections decorrelate the activation patterns on the map, those patterns form a better internal representation for visual inputs such as handwritten digits.
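The following is a minimal sketch of this kind of update loop, not the actual LISSOM implementation: the parameter names (alpha, gamma_e, gamma_i, settle_iters) and the clipping nonlinearity are illustrative assumptions, and only the afferent weights are adapted here.

```python
import numpy as np

def lissom_style_step(x, W_aff, W_exc, W_inh, alpha=0.1,
                      gamma_e=0.9, gamma_i=0.9, settle_iters=10):
    """One LISSOM-style training step (illustrative sketch).

    x     : input vector, shape (d,)
    W_aff : afferent weights, shape (n_units, d)
    W_exc : short-range excitatory lateral weights, shape (n_units, n_units)
    W_inh : longer-range inhibitory lateral weights, shape (n_units, n_units)
    """
    # Initial response from the afferent connections alone.
    s = W_aff @ x
    eta = np.clip(s, 0.0, 1.0)

    # Settle the activity through lateral excitation and inhibition; the
    # surviving bubble of activity plays the role of the SOM neighborhood.
    for _ in range(settle_iters):
        eta = np.clip(s + gamma_e * (W_exc @ eta) - gamma_i * (W_inh @ eta),
                      0.0, 1.0)

    # Hebbian update of the afferent weights, followed by renormalization
    # so that each unit's weight vector keeps a constant total strength.
    W_aff += alpha * np.outer(eta, x)
    W_aff /= W_aff.sum(axis=1, keepdims=True) + 1e-12
    return eta, W_aff
```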
In IGG, the 2-D lattice of the SOM is grown gradually, one node at a time, as part of the self-organizing process. The resulting network structure represents both the clusters in the data and their topology, and unlike the structures produced by other growing SOM methods, it is planar (i.e. drawable in two dimensions). These properties make it a useful tool for visualizing high-dimensional data. We have applied the method to visualizing word semantics and human genetics, leading to potentially useful insights in both domains.
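The sketch below illustrates only the growth step under simplifying assumptions (integer node ids, dictionaries for positions, weights, and accumulated errors); the actual IGG algorithm also manages lateral connections and interleaves growth with SOM-style training.

```python
import numpy as np

def grow_one_node(positions, weights, errors):
    """Add one node next to the boundary node with the largest accumulated
    quantization error (illustrative sketch of incremental grid growing).

    positions : dict node_id -> (row, col) grid coordinate
    weights   : dict node_id -> weight vector (np.ndarray)
    errors    : dict node_id -> accumulated quantization error
    """
    occupied = set(positions.values())
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    def free_slots(node):
        r, c = positions[node]
        return [(r + dr, c + dc) for dr, dc in steps
                if (r + dr, c + dc) not in occupied]

    # Boundary nodes have at least one free grid neighbor; growth stays
    # on the 2-D lattice, which keeps the structure planar.
    boundary = [n for n in positions if free_slots(n)]
    if not boundary:
        return None
    parent = max(boundary, key=lambda n: errors.get(n, 0.0))
    new_pos = free_slots(parent)[0]
    new_id = max(positions) + 1

    # Initialize the new node's weights from its occupied grid neighbors,
    # so the growth is smooth in input space as well.
    r, c = new_pos
    adj = [n for n, p in positions.items()
           if p in {(r + dr, c + dc) for dr, dc in steps}]
    positions[new_id] = new_pos
    weights[new_id] = np.mean([weights[n] for n in adj], axis=0)
    errors[new_id] = 0.0
    return new_id
```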
In the barn owl, the self-organization of the auditory map is strongly influenced by vision. In this study we showed how visual attention could filter the learning in the auditory map, resulting in maps similar to those found in experimental studies. The result provides computational evidence for a particular, simple way in which the different modalities could interact during development.
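A minimal sketch of this kind of interaction is given below, assuming a scalar attention signal that scales a Hebbian update of the auditory map; the gating rule, normalization, and parameter names are illustrative assumptions rather than the model used in the study.

```python
import numpy as np

def attention_gated_update(W_aud, auditory_input, map_activity,
                           visual_attention, alpha=0.05):
    """Hebbian update of an auditory map, gated by visual attention (sketch).

    W_aud            : auditory map weights, shape (n_units, d)
    auditory_input   : input vector, shape (d,)
    map_activity     : current map activation, shape (n_units,)
    visual_attention : scalar in [0, 1]; learning proceeds only in
                       proportion to attention at the corresponding location.
    """
    # The attention signal filters the Hebbian term, so auditory learning
    # is driven toward visually attended locations.
    W_aud += alpha * visual_attention * np.outer(map_activity, auditory_input)
    W_aud /= np.linalg.norm(W_aud, axis=1, keepdims=True) + 1e-12
    return W_aud
```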
In SARDNET, each input in a sequence is mapped to a different location on the map and its activation is gradually decayed, resulting in a compact representation of the sequence. Similar sequences are mapped to similar patterns, making it possible to perform robust speech recognition and to implement a sequence memory for sentence processing (as described in the NLP page).
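A sketch of the SARDNET activation rule is shown below; training of the map weights with the SOM-style neighborhood update is omitted, and the decay constant is an illustrative choice.

```python
import numpy as np

def sardnet_encode(sequence, W, decay=0.9):
    """Encode an input sequence as a pattern of decayed activations (sketch).

    sequence : iterable of input vectors, each of shape (d,)
    W        : map weights, shape (n_units, d)
    """
    n_units = W.shape[0]
    activation = np.zeros(n_units)
    available = np.ones(n_units, dtype=bool)   # units still in competition

    for x in sequence:
        # Winner = closest still-available unit; it is then removed from
        # competition so later items in the sequence map to new locations.
        dist = np.linalg.norm(W - x, axis=1)
        dist[~available] = np.inf
        winner = int(np.argmin(dist))
        activation *= decay                    # decay earlier activations
        activation[winner] = 1.0
        available[winner] = False

    return activation                          # compact code for the sequence
```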
If the data is strongly hierarchical, visualizing it on a flat SOM may make the hierarchy hard to see. With HFM, a hierarchy of maps is self-organized, with the high-level categories separated at the top and increasingly fine distinctions made toward the bottom. For example, script-based story data can be visualized this way, and it forms a good foundation for episodic memory organization in story processing (see the NLP page).
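The following is a simplified two-level sketch of this routing idea, assuming winner-only updates rather than full SOM neighborhood training; the function and variable names are placeholders, not the actual HFM implementation.

```python
import numpy as np

def hfm_style_step(x, top_W, sub_Ws, lr=0.1):
    """One training step of a two-level hierarchical feature map (sketch).

    x      : input vector, shape (d,)
    top_W  : top-level map weights, shape (n_top, d)
    sub_Ws : list of lower-level maps, one per top-level unit,
             each of shape (n_sub, d)
    """
    # Top level: broad categories, trained on every input.
    top_winner = int(np.argmin(np.linalg.norm(top_W - x, axis=1)))
    top_W[top_winner] += lr * (x - top_W[top_winner])

    # Lower level: only the sub-map owned by the winning category sees the
    # input, so it specializes in finer distinctions within that category.
    sub_W = sub_Ws[top_winner]
    sub_winner = int(np.argmin(np.linalg.norm(sub_W - x, axis=1)))
    sub_W[sub_winner] += lr * (x - sub_W[sub_winner])
    return top_winner, sub_winner
```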