In the previous lesson, we saw an example of pattern recognition implemented with an artificial neural network. An artificial neural network (ANN) is a collection of processing units connected together in a manner similar to neurons in the brain. Recall that our brains are constructed of billions of neurons, each interconnected with thousands of neighboring neurons. The electrochemical signals that travel through these connections are altered based on the configuration of the neurons and the type of neuron encountered. To mimic this design, ANNs simulate neurons with processing units that have connections with other processing units. The diagram below shows an example of a processing unit [Brookshear 1997].
In our example, the processing unit has three inputs labeled X1 - X3, each of which can be either 1 or 0. Each input is associated with a weight (W1 - W3) representing the relative strength of that input. The effective input to the processing unit is computed by taking the weighted sum of the inputs: X1W1 + X2W2 + X3W3. Finally, the effective input is compared with a threshold value stored in the processing unit. If the effective input is greater than the threshold value, the processing unit produces an output of 1; otherwise it produces an output of 0. Consider the following example. Suppose our inputs are 1, 0, 1 respectively, our weights are 1, .5, -2 respectively, and our threshold value is 1. The effective input for this unit would be (1*1) + (0*.5) + (1*-2), or -1. Since this is less than the threshold value of 1, the unit would output a zero.
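The computation above can be sketched in a few lines of Python (a minimal illustration of a single threshold unit, using the inputs, weights, and threshold from the example):

```python
def process(inputs, weights, threshold):
    """A single processing unit: weighted sum of inputs compared to a threshold."""
    # Effective input: X1*W1 + X2*W2 + X3*W3
    effective = sum(x * w for x, w in zip(inputs, weights))
    # Output 1 if the effective input exceeds the threshold, else 0
    return 1 if effective > threshold else 0

# Inputs 1, 0, 1 with weights 1, 0.5, -2 and threshold 1
print(process([1, 0, 1], [1, 0.5, -2], 1))  # effective input is -1, so prints 0
```

Changing the inputs to 1, 1, 0 gives an effective input of 1.5, which exceeds the threshold, so the unit would output 1 instead.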
The power of neural networks comes from linking these processing units together so that the output of one unit becomes the input of the next unit. Such networks can be trained for specific applications through a process of fine-tuning the weights in each processing unit. The training process works as follows:
- The network is given a set of inputs for which the correct output is known.
- The output of the network is compared with the known correct output, and the error is measured.
- The weights of the network are adjusted in order to reduce the output error, and the training process is repeated.
- The training continues until the network reaches an acceptable error level on the test input.
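The four steps above can be sketched as a simple perceptron-style training loop. This is only an illustration: the two-input AND data set, the learning rate, and the stopping criterion are assumptions made for this sketch, not details from the lesson.

```python
def output(inputs, weights, threshold):
    """Threshold unit: 1 if the weighted sum of inputs exceeds the threshold."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) > threshold else 0

# Step 1: inputs for which the correct output is known (a two-input AND function)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, threshold, rate = [0.0, 0.0], 0.5, 0.1

for _ in range(100):                       # repeat the training process
    total_error = 0
    for inputs, correct in data:
        # Step 2: compare the output with the known correct output
        error = correct - output(inputs, weights, threshold)
        total_error += abs(error)
        # Step 3: adjust the weights to reduce the error
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    if total_error == 0:                   # Step 4: acceptable error level reached
        break

print(weights)
```

After a few passes over the data, the weights settle at values for which the unit classifies all four input patterns correctly.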
While artificial neural networks are only a very crude model of the complexity of biological networks (like the brain), they are still powerful tools for solving certain problems in AI such as pattern recognition that are difficult to describe with algorithms. Consider the following images:
We can easily recognize that each of these images is a picture of the number three. Even the last image, with all the noise, is still recognizable without difficulty. For a computer, however, the task of recognizing the similarity of these images is notoriously difficult; such a process is not easily described by an algorithm. If you were to ask a friend how he or she can recognize the three, your friend might simply reply, "Because I've seen a three before!" At first this seems like a simplistic answer, but as we will see, it is really at the heart of how neural networks perform pattern recognition.
Neural networks are effective for solving problems with the following characteristics [Smith 1996]:
- Problems where we can't formulate an algorithmic solution.
- Problems where we can get lots of examples of the behavior we require.
- Problems where we need to pick out the structure from existing data.
Pattern recognition tasks such as handwriting and speech recognition fit all three characteristics: they lack clear algorithmic descriptions, have an abundance of example data (i.e., samples of speech and writing), and have clear structures that must be recognized (e.g., the number three or the sound "hello"). Using the example data, a neural network can be "trained" to recognize certain patterns. Of course, writing and speech are not the only data that can be used for pattern recognition. The table below [AI Intelligence 2000] shows other possibilities for neural networks.
| Input to the network | Output from the network |
|---|---|
| Digitized image of a face | The person's name |
| Digitized image of an aircraft | The category of aircraft: friendly or hostile |
| Digitized image of a typed or handwritten character | The ASCII value of the character |
| Digitized image of a solder joint | The quality level of the joint |
| Sensor readings from an industrial process | The adjustments needed to keep the process within quality and safety limits |
| Sensor readings from a gas turbine | Whether or not maintenance is due on the turbine |
| Sensor readings from an infrared detector | How many people are in a room |
| Recent share price values | A buy/sell indicator |
| Personal financial details | The creditworthiness of a customer |
| Exchange rates and inflation trends | The predicted movement of exchange rates in four hours' time |
| A customer's historical buying patterns | The likely response of the customer to a direct mail campaign |
In the previous lesson, you saw a simple handwriting recognition applet that used a simulated neural network for training and recognition of characters. The following description explains the basic logic behind the applet:
"Assume that we want a network to recognize handwritten digits. We might use an array of, say, 256 sensors, each recording the presence or absence of ink in a small area of a single digit. The network would therefore need 256 input units (one for each sensor), 10 output units (one for each kind of digit) and a number of hidden units. For each kind of digit recorded by the sensors, the network should produce high activity in the appropriate output unit and low activity in the other output units. To train the network, we present an image of a digit and compare the actual activity of the 10 output units with the desired activity. We then calculate the error, which is defined as the square of the difference between the actual and the desired activities. Next we change the weight of each connection so as to reduce the error. We repeat this training process for many different images of each kind of digit until the network classifies every image correctly" [Stergiou 1996].
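The error measure described in the quote can be sketched directly: sum the squared differences between the actual and desired activities across the 10 output units. The activity values below are illustrative, not taken from the applet.

```python
# Desired activity: the output unit for the digit "3" should be high (1),
# all other units low (0).
desired = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]

# Hypothetical actual activities produced by an untrained network
actual = [0.1, 0.0, 0.2, 0.6, 0.1, 0.0, 0.0, 0.1, 0.0, 0.0]

# Error: the sum over all output units of the squared difference
# between actual and desired activity
error = sum((a - d) ** 2 for a, d in zip(actual, desired))
print(round(error, 2))  # prints 0.23
```

Training adjusts the connection weights so that this error shrinks toward zero for every image in the training set.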
Now launch the JRec applet, and let's watch this process happen.
- Use the mouse to draw a character in the applet window. Note that our input is already digitized since we are using the mouse.
- Store the character by pressing one of the label buttons (0-9). This saves our digitized input.
- Press "Clear" to erase your character.
- Press the "Train" button to train the applet with the character set. Notice that the graph that appears measures the error. As training takes place, the network is repeatedly trained until the error falls below 0.1%.
- Redraw one of the characters and press "Test" to recognize the character.
- Brookshear, J. G. (1997), Computer Science: An Overview, Fifth Edition, Addison-Wesley, Reading, MA, pp.378.
- Smith, L. (1996), "An Introduction to Neural Networks," http://www.cs.stir.ac.uk/~lss/NNIntro/InvSlides.html.
- AI Intelligence (2000), "Neural Networks," http://aiintelligence.com/aii-info/techs/nn.htm.
- Stergiou, C. (1996), "Neural Networks, the Human Brain and Learning," http://www-dse.doc.ic.ac.uk/~nd/surprise_96/journal/vol2/cs11/article2.html.