
Self-Organizing Nets

After a detailed look at supervised networks (see Perceptrons, Back-propagation and Associative Networks) we should look at a good example of unsupervised networks. The Kohonen network is probably the best example, because it is quite simple yet clearly introduces the concepts of self-organization and unsupervised training.


Many researchers require biological plausibility in proposed neural network models, especially since the aim of most such networks is to emulate the brain. It is generally accepted that perceptrons, back-propagation and many other techniques are not biologically plausible.

With the demand for biological plausibility rising, the concept of self-organizing networks became a point of interest among researchers. Self-organizing networks can be either supervised or unsupervised, and have four additional properties:

  • Each weight is representative of a certain input.
  • Input patterns are shown to all neurons simultaneously.
  • Competitive learning: the neuron with the largest response is chosen.
  • A method of reinforcing the competitive learning.

Unsupervised Learning

Unsupervised learning allows the network to find its own energy minima (see Associative Networks for an explanation of energy) and is therefore more efficient at pattern association. The disadvantage, of course, is that it is then up to the program or user to interpret the output.

There are quite a few types of self-organizing networks, such as the Instar-Outstar network, the ART series, and the Kohonen network. For simplicity, we will look at the Kohonen network.

Kohonen Networks

The term Kohonen network is slightly misleading, because the researcher Teuvo Kohonen in fact investigated many kinds of networks, only a small number of which are called Kohonen networks. We will look at the idea of self-organizing maps: networks that attempt to map their weights to the input data.

The Kohonen network is an n-dimensional network, where n is the number of inputs. For simplicity, we will look at a 2-dimensional network. A schematic architecture of an example network might look like this:

Partial network

The above picture shows only a small part of the network, but you can see how every neuron receives the same input, and there is one output per neuron. To help us visualize the problem of mapping the weights, imagine that all the weights of the network are initialized to random values. The weights are then graphed on a standard Cartesian graph and connected with those of adjacent neurons. These lines are merely schematic and do not represent connections within the net itself.

The network is trained by presenting it with random points. The neuron that has the largest response is reinforced by the learning algorithm. Furthermore, the surrounding neurons are also reinforced (this is explained in much greater depth later). This has the effect of "pulling" and "spreading" the network across the training data. Nothing beats a diagram and a Java applet at this point.

Figure 1: TL: Initial iteration, TR: 100 iterations, BL: 200 iterations, BR: 500 iterations.

This method can also be applied to other similar situations. For example, here are two Kohonen networks applied to an F-14 Tomcat and a cactus!

Figure 2: Mapping a Kohonen Network to a bitmapped image.

Rules and Operation

Now that you can visualize what the network is doing, let us look at how it does it. The basic idea behind the Kohonen network is competitive learning. The neurons are presented with the input, each calculates its net (weighted sum), and the neuron with the closest output magnitude is chosen to receive additional training. Training, though, does not affect just that one neuron but also its neighbours.

So, how does one judge what the 'closest output magnitude' is? One way is to find the distance between the input vector and the weight vector of the neuron:

    d = sqrt( Σᵢ (xᵢ − wᵢ)² )

Notice that when applied to our 2-dimensional network, this reduces to the standard Euclidean distance formula. So, the output that most closely represents the input pattern is that of the neuron with the smallest distance. Let us call the neuron with the least distance xd0. Now, remember that we change both the neuron and the neurons in its neighbourhood Nx. Nx is not constant; it can range from the entire network down to just the 8 adjacent neurons. We will talk about the neighbourhood soon.
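As a sketch of this winner-selection step, here is a small Python/NumPy function (illustrative, not the article's applet code) that finds the neuron whose weight vector lies nearest to the input. The array layout, one weight vector per grid cell, is an assumption:

```python
import numpy as np

def best_matching_unit(weights, x):
    """Return the grid index of the winning neuron, i.e. the one
    whose weight vector has the smallest Euclidean distance to x.

    weights -- array of shape (rows, cols, n_inputs)
    x       -- input vector of shape (n_inputs,)
    """
    # Squared distances from every neuron's weight vector to the input;
    # the square root is omitted since it does not change the argmin.
    d2 = np.sum((weights - x) ** 2, axis=-1)
    return np.unravel_index(np.argmin(d2), d2.shape)

# Example: a 3x3 grid of 2-input neurons with random weights
rng = np.random.default_rng(0)
w = rng.random((3, 3, 2))
winner = best_matching_unit(w, np.array([0.5, 0.5]))
```

Working with squared distances is a common shortcut: the neuron that minimizes the squared distance also minimizes the distance itself.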

Kohonen learning is very simple, following a familiar equation:

    Δwᵢ = k(xᵢ − wᵢ)

Where k is the learning coefficient. So all neurons in the neighbourhood Nx of neuron xd0 have their weights adjusted. So how do we adjust k and Nx during training? This is an area of much research, but Kohonen has suggested splitting the training into two phases. Phase 1 reduces the learning coefficient from 0.9 to 0.1 (or similar values), while the neighbourhood shrinks from half the diameter of the network down to the immediately surrounding cells (Nx = 1). Phase 2 then reduces the learning coefficient from perhaps 0.1 to 0.0, but over double (or more) the number of iterations used in Phase 1, with the neighbourhood fixed at 1. You can see that the two phases first allow the network to quickly 'fill out the space', with the second phase fine-tuning the network to a more accurate representation of the space. Referring back to the diagram, the bottom-left picture actually shows the network right after Phase 1 has finished, and the bottom-right one after the second phase is complete.
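The update rule and two-phase schedule can be sketched as follows. This is an illustrative Python/NumPy implementation rather than the article's applet: it assumes a square neighbourhood of radius Nx around the winner and a linear decay of both k and the neighbourhood within each phase:

```python
import numpy as np

def train_phase(weights, data, iters, k_start, k_end, n_start, n_end, rng):
    """Run one training phase of a 2-D Kohonen network.

    Each iteration: pick a random training point, find the winning
    neuron, then move the weights of every neuron within the current
    neighbourhood towards the input by w += k * (x - w).
    """
    rows, cols, _ = weights.shape
    for t in range(iters):
        frac = t / max(iters - 1, 1)
        k = k_start + (k_end - k_start) * frac              # learning coefficient
        n = int(round(n_start + (n_end - n_start) * frac))  # neighbourhood radius
        x = data[rng.integers(len(data))]
        # Winner: neuron with the smallest distance to the input
        d2 = np.sum((weights - x) ** 2, axis=-1)
        wi, wj = np.unravel_index(np.argmin(d2), d2.shape)
        # Kohonen update for the winner and its neighbourhood
        for i in range(max(0, wi - n), min(rows, wi + n + 1)):
            for j in range(max(0, wj - n), min(cols, wj + n + 1)):
                weights[i, j] += k * (x - weights[i, j])
    return weights

# Phase 1: coarse ordering; Phase 2: fine tuning over twice the iterations
rng = np.random.default_rng(1)
w = rng.random((10, 10, 2))
data = rng.random((500, 2))          # random points in the unit square
w = train_phase(w, data, 500, 0.9, 0.1, 5, 1, rng)
w = train_phase(w, data, 1000, 0.1, 0.0, 1, 1, rng)
```

Since each update is a convex combination of the old weight and the input (k stays below 1), the weights can never leave the region spanned by the training data, which is what lets the net 'spread' over the points without overshooting.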

Interpreting Output and Applications

Interpreting Kohonen networks is quite easy, since only one neuron will fire per input set after training. Therefore, it is simply a case of classifying the outputs: if this neuron fires, do this; if this group of neurons fires, do that; and so on.
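As a sketch of that lookup, assuming the network has already been trained and the user has assigned class labels to neurons by hand (the function and label names here are hypothetical):

```python
import numpy as np

def classify(weights, x, labels):
    """Map an input to a class via the winning neuron.

    labels -- dict from a grid index (i, j) to a class name, built by
              the user after observing which neurons fire for known inputs.
    """
    d2 = np.sum((weights - x) ** 2, axis=-1)
    winner = tuple(int(i) for i in np.unravel_index(np.argmin(d2), d2.shape))
    return labels.get(winner, "unknown")

# Example: a 1x2 grid where neuron (0, 1) has learned the 'high' region
w = np.zeros((1, 2, 2))
w[0, 1] = [1.0, 1.0]
labels = {(0, 0): "low", (0, 1): "high"}
```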

Kohonen networks have been successfully applied to speech recognition, a fitting application given that it was cognitive networks in the brain that inspired self-organizing networks in the first place. Kohonen networks can also be applied well to gaming: by expanding the dimensionality (the number of inputs) you can create much more complicated mappings, far beyond the simple example explained above.

Last Updated: 24/10/2004

Article content copyright © James Matthews, 2004.