
Teuvo Kohonen

Dr. Teuvo Kohonen is a key pioneer of self-organizing neural networks. His areas of research are associative memories, neural networks, and pattern recognition, in which he has published over 200 research papers. He is currently the head of neural network research at the Helsinki University of Technology, Finland. Here is a quote from his homepage that exemplifies what Kohonen has achieved: "Since the 1960s, Professor Kohonen has introduced several new concepts to neural computing: fundamental theories of distributed associative memory and optimal associative mappings, the learning subspace method, the self-organizing feature maps (SOMs), the learning vector quantization (LVQ), novel algorithms for symbol processing like the redundant hash addressing and dynamically expanding context, and recently, emergence of invariant-feature filters in the Adaptive-Subspace SOM (ASSOM)."


G5: A lot of your research stemmed from the search for biologically plausible neural networks. Why did you feel that biological plausibility was key to creating a neural network? On what biological phenomena did you base your research?

My interest in brain architectures and functions stems from the early 1960s, when I read about neural models for the first time. At that time everybody seemed to believe that one and the same theory could apply to both biological and artificial neural networks - and there are still people who think that way. However, I may have been among the very first on this new wave to realize that artificial networks need not imitate biology. By analogy, the freely rotating wheel does not occur in nature because it would be difficult to make it self-repairing: no blood vessels can be put through the bearing.

Today I tend to think that in information processing, too, there are mathematical ideals that nature is desperately trying to imitate by all the biological means it has available, but some technical solutions are still impossible for it.

A neural-network model, however, could be useful in explaining the principles of the brain, which is an intriguing objective. I think that the SOM is the most realistic of all models, since it explains the inherent order in the networks. This order is ubiquitous in the brain, but the other models don't take it into account. It is possible to make an SOM model using biologically plausible components, even using pure network structures only, but a far more effective mechanism ensues if one includes some kind of chemical messengers for local control. I have propounded the possible role of chemical agents in the neighborhood function, which is central to the implementation of the SOM algorithm.
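The role of the neighborhood function can be made concrete in code. Below is a minimal sketch of an online SOM update with a Gaussian neighborhood centred on the best-matching unit. This is my own illustration, not Kohonen's published code: the function name, grid size, and linear decay schedules for the learning rate and radius are all assumptions.

```python
import numpy as np

def train_som(data, grid_w=10, grid_h=10, epochs=100,
              lr0=0.5, sigma0=3.0, seed=0):
    """Online SOM with a Gaussian neighborhood function (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # One weight vector per grid node, initialized randomly.
    weights = rng.random((grid_h, grid_w, dim))
    # Grid coordinates, used to measure distances on the map itself.
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    coords = np.dstack([ys, xs]).astype(float)
    steps = epochs * len(data)
    t = 0
    for epoch in range(epochs):
        for x in data:
            # Learning rate and neighborhood radius shrink over time
            # (a simple linear schedule, assumed for illustration).
            frac = t / steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 0.5
            # Best-matching unit: the node whose weight is closest to x.
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood centred on the BMU, over grid distance.
            grid_d2 = ((coords - np.array(bmu, float)) ** 2).sum(axis=2)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))
            # Move every node toward x, weighted by the neighborhood.
            weights += lr * h[:, :, None] * (x - weights)
            t += 1
    return weights
```

Shrinking `sigma` first orders the map globally and then fine-tunes it locally, which is what produces the spatial ordering of representations Kohonen describes.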

Among my earlier neural-network models, the associative-memory formalism was based on biologically inspired components, the modifiable Hebbian synapses. Only after this kind of theoretical work did the biologists change their views on memory. As late as the 1960s, the prevailing view was that memory must be based on DNA and RNA.

G5: Self-organizing networks have received much attention over the years in many areas of Artificial Intelligence. What have been the most innovative uses of SOMs that you have seen? What do you see as the best applications of SOMs?

Since 1985 the SOM algorithm has been used to monitor and control industrial processes, especially in the Finnish forest industries, and recently also in the continuous casting of steel, also in Finland. The process engineers are enthusiastic, since they can now for the first time see process states they were not aware of. In Germany there has been much research on the implementation of flexible robots based on the SOM, for instance the development of a three-finger grip.

The new forest taxation legislation established in Finland in 1995 has details that were determined according to segmentation results obtained with the SOM: which forest parcels are taxed according to area and which according to income.

We have recently completed a big project in which we mapped all the electronically available patent abstracts onto a SOM. This system is called WEBSOM. The size of the text corpus is 20 times as large as all 34 volumes of the Encyclopaedia Britannica together. In general, knowledge discovery in databases is a promising area for the SOM. We also see a very promising future in biochemistry: there are masses of chemical data on the Internet, but nobody has been able to use them fully. We have developed illustrative maps to visualize the relationships between protein sequences, and new kinds of results are obtainable in analyzing gene expressions.

Guido Deboeck and I edited the book "Visual Explorations in Finance with Self-Organizing Maps" (Springer, London 1998), which has now been translated into Japanese and Russian. The book is full of innovative financial analyses, including customer profiling in Shanghai. I think that a SOM could be developed for many cases of interactive macroeconomic decision-making, where the interest groups sit around a table and look at the so-called component planes of the SOM, which display the input variables, and everybody makes changes in the input variables until a compromise is reached.

I recommend that you look into the list of almost 4000 scientific papers on the SOM. This list (at least its earlier version) can be seen at http://www.icsi.berkeley.edu/~jagota/NCS. It has an index where you can see the various application areas of the SOM.

G5: What tips would you give to programmers wanting to create self-organizing neural networks?

It would be best to start with ready-made software packages. I recommend our own, because they are error-free and embody all our know-how; many commercial packages, by contrast, are of no use. The packages can be downloaded from http://www.cis.hut.fi/research/ where we have one SOM package (for very large problems) written in ANSI C, and another based on Matlab (including effective graphics). The latter might be the easiest to start with. The Deboeck-Kohonen book mentioned above also has a chapter on available SOM software tools.

G5: In your opinion, what has been the most exciting advance in neural networks?

In the January 1998 issue of IEEE Spectrum there was an interview with Paul Werbos, who has followed neural networks from the governmental perspective. He mentioned three interesting applications: control of an unmanned Mach-5 aeroplane intended to transport satellites into orbit, the Ford project to optimize fuel economy and minimize pollution, and the emergency landing of an MD11 aeroplane when part of its hydraulics was frozen. And what would you think about the application of the SOM in which a new quantitative classification of thousands of galaxies was made on the basis of Hubble Space Telescope data?

G5: What advantages/disadvantages do Kohonen Networks have over other SOMs like the Instar-Outstar and ART-series networks?

I must warn you about a deliberate confusion of concepts and terms made by a certain school. The Instar-Outstar structure belongs to the associative networks and was originally invented by Karl Steinbuch in Germany; he wrote a book on it in 1963. The ART is a competitive-learning method, which makes use of vector-quantization methods invented around 1960. If we consider the automatic spatial ordering of the abstract representations as the central feature of the SOM, then no other network has this feature.

The advantages of the SOM over the other networks were mentioned above: they can form similarity diagrams automatically, and they can produce abstractions. The SOM is also generalizable to nonvectorial, say symbolic, data, whereas the other networks are not. I am not aware of any particular disadvantages of the SOM. I sometimes hear that they learn slowly (which would be no problem with contemporary computers), but even this is not true, because there exist fast-learning (batch) versions of the SOM, and in our WEBSOM project we increased the computing speed 50,000-fold by special programming tricks. You can read about these tricks in the forthcoming IEEE Transactions on Neural Networks special issue on data mining.
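The batch version Kohonen mentions recomputes every weight from the whole data set at once instead of updating sample by sample, so there is no learning-rate parameter and each iteration is a closed-form weighted mean. The sketch below is my own illustration of that idea, not the WEBSOM code or its specific speed-up tricks; the function name and decay schedule are assumptions.

```python
import numpy as np

def batch_som(data, grid_w=4, grid_h=4, iters=20, sigma0=2.0, seed=0):
    """Batch SOM: each node becomes a neighborhood-weighted mean of the data."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    weights = rng.random((grid_h * grid_w, dim))
    # Grid coordinates of every node, as (row, col) pairs.
    ys, xs = np.divmod(np.arange(grid_h * grid_w), grid_w)
    coords = np.stack([ys, xs], axis=1).astype(float)
    for it in range(iters):
        # Neighborhood radius shrinks over iterations (assumed schedule).
        sigma = sigma0 * (1.0 - it / iters) + 0.3
        # Assign every sample to its best-matching unit in one vectorized pass.
        d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
        bmu = np.argmin(d, axis=1)                                    # (n,)
        # Gaussian neighborhood between each node and each sample's BMU.
        grid_d2 = ((coords[:, None, :] - coords[bmu][None, :, :]) ** 2).sum(axis=2)
        h = np.exp(-grid_d2 / (2 * sigma ** 2))                       # (nodes, n)
        # Closed-form update: weighted mean of all samples, no learning rate.
        weights = (h @ data) / h.sum(axis=1, keepdims=True)
    return weights.reshape(grid_h, grid_w, dim)
```

Because each iteration is a single matrix computation over the full data set, the batch SOM parallelizes and converges in far fewer passes than the online rule, which is one reason the "SOMs learn slowly" criticism does not hold.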

G5: There has been much interest in using Kohonen Networks to control artificially intelligent agents in games -- ranging from Quake2 bots to board-game and real-time-strategy opponents. How successfully do you believe such networks can be applied to the various genres of gaming?

I wish I could answer this question, because it would give me good standing in the eyes of the younger generation. Unfortunately I do not know anything about games, but I have the feeling that when you cannot define the situations analytically, but only in terms of given examples, the SOM is extremely effective in interpolating and extrapolating between them in a robust and smooth, yet nonlinear way. Maybe the game programmers are not yet aware of my nonvectorial SOM version.

Submitted: 17/08/2000

All content copyright © 1998-2007, Generation5 unless otherwise noted.