
Applying Artificial Intelligence, Search Algorithms and Neural Networks to Games

Course: BSc (Hons) Software Engineering

Supervisor: Peggy Gregory


Artificial Intelligence has been developing for many years, but until recently game developers have largely ignored it. As computer processing speed has increased, games have started to include more advanced levels of artificial intelligence.

A general description of Artificial intelligence is given to introduce the reader to the topic and to the focus of the paper.

Search algorithms are described in general with a more detailed description of A* and minimax algorithms which are particularly useful for games.

The topic of neural networks is introduced because they have the ability to ‘learn’ and adapt to a situation.

The role of artificial intelligence in games is described, with discussion of how it is applied to make games more interesting.

Search algorithms and neural networks are independently evaluated, then compared to illustrate the similarities and differences between them.

1 Introduction

The following paper will describe Artificial Intelligence (AI) in general as well as looking specifically at search algorithms, including A* and minimax. Neural networks will be examined so that they can be compared with search algorithms. The use of AI in games will also be discussed to provide an understanding of how it is applied in this environment. Finally, search algorithms will be evaluated in general and again for their applicability to games. A brief description of what each section of the paper contains is given below.

Section 2 provides an introduction to AI and the various types that are being researched. Search algorithms are discussed in more detail due to the fact that they are particularly useful for games. The A* algorithm is introduced as it is one of the best for path finding problems which is useful for many types of games. Next the minimax algorithm is introduced because it tries to anticipate what an opponent will do. Lastly, neural networks are introduced to allow a comparison with search algorithms to be made later in the paper.

Section 3 looks at what AI is when used for games and how it can be applied to them.

Section 4 evaluates search algorithms and neural networks separately as well as providing a comparison between them. In addition, the application of AI to games is evaluated towards the end of the paper. Concluding remarks for the paper are also contained within.

Section 5 includes all the references that are used within the paper.

2 Artificial Intelligence

2.1 What is Artificial Intelligence?

AI is the study and development of computer systems that exhibit properties that are similar to the intelligence of humans (Barr, 1981), while McCarthy (2001a) describes AI as the creation of intelligent machines or programs, which is similar to the use of computers to understand the intelligence of humans. These opinions are both correct, however the author believes that AI is a generic term for software that simulates an advanced level of intelligence to perform complex tasks.

2.2 Artificial Intelligence Types

Dumm (2001) and McCarthy (2001b) both describe various types and areas of AI, including search algorithms, frames, pattern recognition, genetic algorithms and neural networks. AI can be applied in many areas, including games; however, some types of AI are more suitable for a given area than others. For example, search algorithms can be used for path finding problems, whereas for Optical Character Recognition (OCR) pattern recognition would be more appropriate.

2.2.1 Search Algorithms

A wide collection of search algorithm techniques is discussed in detail by Korf (1996) as a method in AI for solving complex problems that require trial and error. All search algorithms have a problem space model, which is the environment where the search is performed (Newell, 1972, as cited by Korf, 1996). This consists of a set of states of the problem and a set of operators that can alter the state (Korf, 1996). For example, in a game of draughts the set of states would be the positions of the pieces on the board, and one of the operators would move a piece into an empty square.

Brute force algorithms are the most common form of search and only require a starting state, a state description, a set of legal operators and the goal state (Korf, 1996). Although these algorithms are simple to implement, they can require a lot of memory and are inefficient, as they may explore a large number of possible states before the goal state is reached. However, given no limit on how long the search can run or on the memory available, a brute force algorithm will always reach the goal state.

Two basic types of brute force algorithm are depth-first and breadth-first. Depth-first searches are expanded down first, then back up and across (Matthews, 2000a). Figure 1, adapted from Matthews (2000a), illustrates the order in which the nodes are expanded.

Figure 1 – Depth-First Algorithm Node Expansion

Breadth-first searches are expanded across before moving down (Matthews, 2000a). Figure 2, adapted from Matthews (2000a), illustrates the order in which the nodes are explored.
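Both expansion orders can be sketched in a few lines of Python. The tree below is an illustrative assumption, not the tree from the figures: a depth-first search uses a stack, a breadth-first search uses a queue.

```python
from collections import deque

# A small example tree; node labels are illustrative.
TREE = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F", "G"],
    "D": [], "E": [], "F": [], "G": [],
}

def depth_first(root):
    """Expand down first, then back up and across (stack-based)."""
    order, stack = [], [root]
    while stack:
        node = stack.pop()
        order.append(node)
        # Push children in reverse so the leftmost child is expanded first.
        stack.extend(reversed(TREE[node]))
    return order

def breadth_first(root):
    """Expand across each level before moving down (queue-based)."""
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(TREE[node])
    return order

print(depth_first("A"))    # ['A', 'B', 'D', 'E', 'C', 'F', 'G']
print(breadth_first("A"))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G']
```

The only difference between the two searches is the data structure holding the unexpanded nodes, which is why they explore the same states in different orders.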

Figure 2 – Breadth-First Algorithm Node Expansion

A Star Algorithm (A*)

Search algorithms are used for a multitude of AI tasks, one of which is path finding. The A* algorithm finds the lowest cost path between the start state and the goal state (Barr, 1981), where changing from one state to another involves some cost. A* requires a heuristic function to evaluate the cost of a path that passes through a particular state. This is defined below as:

f(n) = g(n) + h(n)

Source: (Barr, 1981) and (Korf, 1996)

Where g(n) is the cost of the path from the start state to node n, and h(n) is the estimated cost of a path from node n to the goal state. A node is a state between the start state and the goal state. As can be seen, f(n) provides an estimate of the lowest cost path from the start to the goal state that passes through node n. The states with the lowest f(n) are therefore evaluated first, and the algorithm ends when the goal state is chosen for evaluation (Korf, 1996).
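A minimal A* sketch in Python, ordering states by f(n) = g(n) + h(n) exactly as defined above. The 4×4 grid, the blocked squares and the Manhattan-distance heuristic are illustrative assumptions, not examples from the paper.

```python
import heapq

def a_star(start, goal, neighbours, h):
    """Best-first search ordered by f(n) = g(n) + h(n).

    `neighbours(state)` yields (next_state, step_cost) pairs and
    `h(state)` estimates the remaining cost to the goal.
    """
    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to reach each state
    while open_heap:
        f, g, state, path = heapq.heappop(open_heap)
        if state == goal:
            return path
        for nxt, cost in neighbours(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_heap, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None  # no path exists

# Hypothetical 4x4 grid with two blocked squares.
WALLS = {(1, 1), (1, 2)}

def grid_neighbours(pos):
    x, y = pos
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 4 and 0 <= ny < 4 and (nx, ny) not in WALLS:
            yield (nx, ny), 1  # each step costs 1

def manhattan(pos, goal=(3, 3)):
    return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

path = a_star((0, 0), (3, 3), grid_neighbours, manhattan)
print(path)  # a lowest-cost path of 7 squares from (0, 0) to (3, 3)
```

The priority queue guarantees that the state with the lowest f(n) is always expanded next, which is what makes A* find a lowest-cost path when the heuristic never overestimates.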

While the theory of A* is relatively simple, implementing the algorithm can be very complicated (Matthews, 2000c); however, once this is done you have an efficient path finding algorithm that can be reused in many projects.

The Minimax Algorithm

The minimax search algorithm looks ahead to try and anticipate what the opponent will do. This information is used to choose the best move that gives the computer the advantage. For example, when you play a game against another player you are trying to maximise your score whilst trying to minimise theirs (Matthews, 2000e).

To give an example, suppose a game has two possible moves, where one move results in a win and the other results in a draw. Figure 3 shows how the nodes are expanded; they are explored in exactly the same way as in a depth-first algorithm (see section 2.2.1).

Figure 3 – Minimax Search Tree Node Expansion

A1 is the current state of the game, B2-B3 are the possible moves that can be made and A4-A7 are the follow-up moves the opponent could make. Figure 4 shows the values that a heuristic function assigns to the four possible final moves.

Figure 4 - Minimax Search Tree Heuristic Values

Since minimax assumes that the opponent will be trying to minimise our score, the minimum value is calculated for each node B2-B3 (Matthews, 2000e). B2 will equal 0 and B3 will equal 1. When the first player is looking for a move, they want to maximise their score so B3 would be chosen because it offers the best advantage over the opponent.
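The selection described above can be sketched directly in Python. The tree shape follows Figure 3; the leaf scores below are illustrative assumptions chosen so that B2 evaluates to 0 and B3 to 1, matching the text (the actual values in Figure 4 are not reproduced here).

```python
def minimax(node, maximising, values, children):
    """Return the best achievable heuristic value from `node`,
    assuming the opponent always minimises our score."""
    if node in values:  # leaf: return its heuristic value
        return values[node]
    scores = [minimax(child, not maximising, values, children)
              for child in children[node]]
    return max(scores) if maximising else min(scores)

# Tree shape from Figure 3; leaf scores are hypothetical.
CHILDREN = {"A1": ["B2", "B3"], "B2": ["A4", "A5"], "B3": ["A6", "A7"]}
LEAVES = {"A4": 0, "A5": 2, "A6": 1, "A7": 3}

# The opponent moves next, so each move is scored with minimising=False.
best = max(CHILDREN["A1"],
           key=lambda move: minimax(move, False, LEAVES, CHILDREN))
print(best)  # B3 -- the move with the better guaranteed score
```

B2 evaluates to min(0, 2) = 0 and B3 to min(1, 3) = 1, so the first player chooses B3, exactly as in the worked example.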

Increasing the depth to which the search is performed will improve the chance that the winning node will be in the search. However, there is a trade-off between the depth that can be searched and the computer memory and processing time available. Korf (1996) discusses modifications that can be made to the minimax algorithm, which are outside the focus of this paper.

2.3 Neural Networks

2.3.1 What is A Neural Network?

A neural network is an attempt to simulate the inner workings of biological neurons with software (Smith, 1996); in effect, neural networks attempt to model the inner workings of the human brain (Jain, 1996). Biological neurons themselves are not relevant to the focus of this paper and will not be examined; for a detailed explanation of biological neural networks see Seaman (2001). To avoid confusion, the terms ‘neural network’ and ‘artificial neural network’ are treated as synonymous in this paper.

Neural networks are made up of interconnected neurons. “Each neuron has a certain number of inputs, each of which have a weight assigned to them. The weights simply are an indication of how 'important' the incoming signal for that input is. The net value of the neuron is then calculated - the net is simply the weighted sum, the sum of all the inputs multiplied by their specific weight. Each neuron has its own unique threshold value, and if the net is greater than the threshold, the neuron fires (or outputs a 1), otherwise it stays quiet (outputs a 0). The output is then fed into all the neurons it is connected to.” (Matthews, 2000d).
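The weighted-sum-and-threshold behaviour quoted above can be written as a single function. The input values, weights and threshold below are hypothetical numbers chosen only to show the neuron firing and staying quiet.

```python
def neuron_output(inputs, weights, threshold):
    """Fire (output 1) when the weighted sum of the inputs exceeds
    the neuron's threshold; otherwise stay quiet (output 0)."""
    net = sum(i * w for i, w in zip(inputs, weights))
    return 1 if net > threshold else 0

# Two inputs, the first weighted more heavily (illustrative values).
print(neuron_output([1, 0], [0.7, 0.3], 0.5))  # 1: net = 0.7 > 0.5
print(neuron_output([0, 1], [0.7, 0.3], 0.5))  # 0: net = 0.3 <= 0.5
```

In a full network, each such output would then be fed forward as an input to every neuron it is connected to.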

Figure 5 – A Simple Neural Network

Neural networks are made up of several layers of neurons: the input layer, the output layer and one or more hidden layers (Smith, 1996). Figure 5 shows an example of a neural network with three layers and the connections between its five neurons.

2.3.2 Neural Network Learning

Neural networks have the ability to adapt to a situation and therefore ‘learn’ how to solve a problem. This feature makes them an interesting addition to any game, as they can gradually adjust themselves to provide a more challenging experience for the player.

There are several types of learning algorithm that can be applied to train a neural network. One of the simplest to implement is the backpropagation training algorithm, which was developed in the 1970s and early 1980s (Caudill, 1992). Backpropagation is a two-stage process. First, the input is presented to the network's input layer and a response is produced at the output layer (Caudill, 1992). Second, the network's output is compared to the desired output, and any error is passed backward through the network (Caudill, 1992).

Repeating backpropagation trains the neural network to recognise the inputs, and it therefore ‘learns’ how to deal with them. Once enough training has been carried out, the neural network is able to identify the inputs and produce the correct output. Even with errors in the inputs, the neural network can often still distinguish the intended input from other possibilities and arrive at the same conclusion.
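The two-stage process described above can be sketched for a tiny network. This is a minimal example under stated assumptions: a hypothetical 2-2-1 network with sigmoid neurons and random starting weights, trained on the XOR function; it is not the network or training data from the paper.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical 2-2-1 network. w_h[j] holds the two input weights and a
# bias for hidden neuron j; w_o holds the two hidden weights and a bias
# for the single output neuron.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    """Stage 1: present the input and produce a response at the output."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    return h, sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])

def train_step(x, target, rate=0.5):
    """Stage 2: compare with the desired output, pass the error backward."""
    h, out = forward(x)
    d_out = (target - out) * out * (1 - out)
    for j in range(2):
        d_h = d_out * w_o[j] * h[j] * (1 - h[j])  # error passed backward
        w_h[j][0] += rate * d_h * x[0]
        w_h[j][1] += rate * d_h * x[1]
        w_h[j][2] += rate * d_h
        w_o[j] += rate * d_out * h[j]
    w_o[2] += rate * d_out

def total_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in DATA)

before = total_error()
for _ in range(2000):
    for x, t in DATA:
        train_step(x, t)
print(total_error() < before)  # whether repeated training reduced the error
```

Each pass through the data nudges every weight in the direction that reduces the output error, which is why repeating backpropagation gradually teaches the network the correct outputs.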

3 Artificial Intelligence in Games

3.1 What is Artificial Intelligence in Games?

AI in games takes the form of computer-controlled opponents that seem to exhibit some kind of cognitive reasoning when performing tasks or responding to player actions (LaMothe, 1995). Until recently, AI had been ignored in games because of its inherent complexity and the amount of processing time required to simulate a realistic level of intelligence. With the continued improvement in processor speed, developers can finally start to include an advanced level of AI in their games.

3.2 Applying Artificial Intelligence to Games

How AI is applied to a game depends on the type of game being developed. For the purposes of this paper, strategy games will be used as the example in discussion, but the techniques can be adapted for other types of game as well.

A strategy game is usually grid-based with multiple characters, whose movement and battles are almost entirely computer controlled (Archmage, 2001).

There are many elements of a strategy game that can be enhanced by using AI, which is why most games have an AI engine, the simplest of which is a finite state machine (FSM) (Matthews, 2000b). An FSM has several states, each with its own behaviour and triggers. For example, a game may have two states, attack and moving, which would be triggered depending on the position of the player in the game. When the enemy is in the moving state, a path finding algorithm is used to create the illusion of realistic movement between two points.
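The two-state example above can be sketched as a minimal FSM. The state names match the example in the text; the trigger distance and class design are illustrative assumptions.

```python
class EnemyFSM:
    """A minimal finite state machine for an enemy unit with two
    states, 'moving' and 'attack', triggered by the player's distance."""

    ATTACK_RANGE = 2  # hypothetical trigger distance, in grid squares

    def __init__(self):
        self.state = "moving"

    def update(self, distance_to_player):
        # The trigger: switch state based on the player's position.
        if distance_to_player <= self.ATTACK_RANGE:
            self.state = "attack"
        else:
            self.state = "moving"
        return self.state

enemy = EnemyFSM()
print(enemy.update(10))  # moving: player is far, so path-find toward them
print(enemy.update(1))   # attack: player has come within range
```

In a real engine the moving state would invoke a path finding algorithm such as A*, while the attack state would run combat behaviour; the FSM simply decides which behaviour is active.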

AI can also be applied in more complicated areas of games, such as opponent intelligence. Neural networks could be used to create an enemy agent that learns how to defeat the player. Minimax can allow a game to look ahead at the possible moves that a human player can make and therefore it can come up with a move that allows the computer to win.

4 Evaluations and Conclusion

4.1 Evaluation of Search Algorithms

Search algorithms provide a very efficient way to find a solution to a problem. For path finding, efficiency and speed are essential, as most users of search algorithms want to find the best path in the least amount of time; this is why the main research in this area is the development of ever faster algorithms (Korf, 1996). Since computer speed has also increased, these algorithms are getting even quicker at finding a solution. For example, the A* algorithm can now be used on home computers to provide developers with a reliable way of finding the best path between two points. Minimax can also be used effectively if the depth of the search is limited and the game has a limited set of moves, as in chess. Therefore, search algorithms are well suited to problems that require trial and error to find the correct answer.

4.2 Evaluation of Neural Networks

Using neural networks allows the development of software and systems that can ‘learn’ and adapt to solve a problem. A program that can learn will be able to cope with unexpected situations instead of just generating an error. This is an immensely powerful tool that can be applied to a wide number of areas, including games. Neural networks therefore add an attribute to programs that is not usually available. This is valuable both to the developers of software and to the people who use it, because they have a system that can ‘learn’ for itself.

4.3 Comparison of Search Algorithms and Neural Networks

Search algorithms attempt to find the optimum solution to a problem, while neural networks have to ‘learn’ to solve the problem before they can provide an answer. Where a search algorithm should eventually find the best solution, a neural network adapts and consequently may not always give the best solution. However, being able to adapt means that if the problem changes slightly, the neural network is more likely to give a good solution because of what it has ‘learnt’ from the problem. A search algorithm would still find the best solution, but for a path-finding problem the heuristic function may have to be adapted to fit the new problem.

A neural network is similar to a breadth-first search algorithm in the way that the layers interact. This means that neural networks can be considered to be a complex searching algorithm that has the ability to ‘learn’.

While minimax tries to anticipate the moves of a player, it does not ‘learn’ how to play the game, and it does not account for a player who may not want to minimise their opponent's score. Otherwise it is a basic algorithm for guessing the moves of the other player, and there are many improvements that can be made to it.

4.4 Evaluation of Applying Artificial Intelligence to Games

As the size of game projects has increased, the overall complexity of games has introduced many problems. The use of AI in games allows developers to produce a more complex and detailed environment, because it solves many of the problems that can be encountered when creating larger games.

Using a search algorithm such as A* allows developers to implement path finding in their games. For a strategy game, a way to find the best path is needed to make character movement appear intelligent. As processing speed increases, A* can now be utilised without slowing down the game.

Using minimax in a game gives developers the ability to look ahead and try to anticipate the moves the other player will make. It is well suited to games like chess, where there is a limited number of moves that can be made, whereas a strategy game has an effectively unlimited number of possible moves and no clearly defined winning move.

Adding neural networks to a game gives developers the opportunity to give enemy characters the ability to learn. This means that the enemy can adapt to the strategy of the human opponent and ‘learn’ how to beat them. The illusion of playing against another human makes a game more enjoyable, because you cannot predict the actions a human-controlled enemy will take. Creating enemy agents with believable AI is a difficult challenge, although neural networks provide a useful method of implementing these features in games.

Depending on the complexity of the game, a developer could potentially add both search algorithms and neural networks, thus allowing the game to find the best paths and ‘learn’ to play against the opponent. The computer could play the game against itself in order to train the neural network before taking on a human adversary.

There is no point in adding AI to a game unless it makes the game better as a result. Simply adding AI without considering how it should be applied makes the game overly complex and, more than likely, less enjoyable for the player.

4.5 Conclusions

Search algorithms are a fundamental part of AI, providing accomplished solutions to problems that require trial and error to solve.

Neural networks simulate the ability to ‘learn’ and adapt to deal with involved problems, and are therefore useful where not every situation can be anticipated.

Applying AI to games allows developers to add an element of realism to challenge a human opponent and enhance the quality of the game.

The aim of using AI in games is to provide a better and more realistic experience for the player.

5 References And Bibliography

Archmage (2001) The Definition Of A Role-Playing Game.
Available from (Accessed 7th December 2001)
Barr, Avron & Feigenbaum, Edward (1981) The Handbook Of Artificial Intelligence. Pitman
Caudill, Maureen & Charles Butler (1992) Understanding Neural Networks, Volume 1. Bradford
Dumm, Tim (2001) Methods Used To Create Artificial Intelligence.
Available from (Accessed 5th December 2001)
Jain, A., Mao, J. and Mohiuddin, K. (1996) Artificial Neural Networks: A Tutorial. IEEE Computer, vol. 29
Available from (Accessed 6th December 2001)
Korf, Richard E. (1996) Artificial Intelligence Search Algorithms. University Of California
Available from (Accessed 5th December 2001)
LaMothe, André (1995) Building Brains Into Your Games.
Available from (Accessed 6th December 2001)
Matthews, J. (2000a) Simple Tree Searches.
Available from (Accessed 5th December 2001)
Matthews, J. (2000b) AI In Gaming.
Available from (Accessed 5th December 2001)
Matthews, J. (2000c) A* For The Masses.
Available from (Accessed 8th December 2001)
Matthews, J. (2000d) Introduction To Neural Networks.
Available from (Accessed 8th December 2001)
Matthews, J. (2000e) Minimax Trees
Available from (Accessed 8th December 2001)
McCarthy, John (2001a) What Is Artificial Intelligence?. Stanford University
Available from (Accessed 27th November 2001)
McCarthy, John (2001b) Branches Of AI. Stanford University
Available from (Accessed 27th November 2001)
Newell, A & Simon, H. (1972) Human Problem Solving. Englewood Cliffs
Seaman, James (2001) Artificial Neural Networks Vs Biological Neural Networks, Can A Computer Learn?
University Of Central Lancashire
Smith, Prof. Leslie (1996) An Introduction To Neural Networks. University Of Stirling
Available from (Accessed 8th December 2001)
Figure 1: Depth-First Algorithm Node Expansion, adapted from Matthews (2000a)
Figure 2: Breadth-First Algorithm Node Expansion, adapted from Matthews (2000a)
Figure 3: Minimax Search Tree Node Expansion, adapted from Matthews (2000e)
Figure 4: Minimax Search Tree Heuristic Values, adapted from Matthews (2000e)
Figure 5: Kelly, Stuart J. (2001) A Simple Neural Network.

Submitted: 13/01/2003

Article content copyright © Stuart James Kelly, 2003.