
LECTURE 4
*
Modelling aspects of language understanding
*
Essay (May 30th)
*
Exam (June 19th)
*
Connectionism (cf Green et al. Chapter 2)

---------------------------------------------------------

ESSAY
*
``A Cognitive Model of ...''
*
2000 words, counts 50%
*
Thursday May 30th 4pm
*
Plagiarism (see Course Outline and Exam Handbook)

---------------------------------------------------------

EXAMINATION
Candidates must attempt TWO questions
  1. Using a simple neural network program that you have studied as an example:
    1. Explain with an example how the network computes output values, given input values.
    2. Explain how the network can be taught.
    3. Briefly discuss the strengths and weaknesses of the program as a model of human performance.
  2. Compare and contrast ACT* and Soar as explanatory tools in cognitive science.
  3. Using a Production System that you have studied as an example:
    1. Explain how a production system works.
    2. Explain the difference between Working Memory and Production Memory.
    3. Briefly discuss the strengths and weaknesses of Production Systems as tools for cognitive modelling.

---------------------------------------------------------

WORDS AND NON-WORDS
vague               gauve               ugvae
boats               batso               bstoa
smoke               kemos               ekmso
maize               zamie               imzai

---------------------------------------------------------

CONNECTIONISM
*
Systems produce some ``human-like'' behaviour
*
generalize from examples: ``I goed home''
*
cope sensibly with examples they have not been trained on
*
can be made to malfunction in ways evidenced by humans with brain damage
*
Non-symbolic, but still runnable cognitive models
*
Based loosely on analogy to structure of brain
*
Many interconnected simple processing units
*
Overall behaviour a function of
*
what each of the simple processing units does
*
as well as how they mutually communicate

---------------------------------------------------------

NEURAL NETWORKS
*
Strongly associated with modelling the learning of ``patterns'' (discrimination tasks, e.g. cows vs tanks, rules of pronunciation)
*
Relatively robust (contrast symbolic models)
*
Nodes with very simple behaviours, linked by
*
Arcs with strengths
*
Activation flows from node to node along arcs, depending on the strengths of the arcs and on the behaviour at each node.
*
Neural network -- analogy to brain behaviour
*
Many different kinds of network architecture

---------------------------------------------------------

DISCRIMINATING WORDS FROM NON-WORDS
Enter a word with 5 letters (or RETURN to quit): offal
   The word offal scores 0.732206 in favour, and 0.264867 against

Enter a word with 5 letters (or RETURN to quit): okkly
   The word okkly scores 0.000437 in favour, and 0.999486 against

---------------------------------------------------------

THREE LAYER FEED FORWARD NETWORKS -- 1
[Figure: diagram of a three-layer feed-forward network]

---------------------------------------------------------

THREE LAYER FEED FORWARD NETWORKS -- 2
*
Input Layer of nodes -- linked to input values e.g. a word, a sentence, a picture, a pattern
*
An intermediate or Hidden Layer of nodes via which generalisations and discriminations are computed
*
Output Layer from which the output is taken e.g. yes/no, transformation of word or sentence, classification of pattern or picture.
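
The sketch below is one way to realise this structure in code (it is not the lecture program itself): each layer of arcs becomes a weight matrix, each node gets a threshold (bias), and activation is pushed forward from input to output. The layer sizes (130, 20, 2) and the smooth sigmoid firing function are assumptions for illustration.

import numpy as np

def sigmoid(x):
    # Smooth firing function: output between 0 and 1 rather than a hard yes/no
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_input, n_hidden, n_output = 130, 20, 2              # illustrative sizes

w_ih = rng.uniform(-0.5, 0.5, (n_hidden, n_input))    # arcs: input -> hidden
w_ho = rng.uniform(-0.5, 0.5, (n_output, n_hidden))   # arcs: hidden -> output
b_h = np.zeros(n_hidden)                              # hidden-node thresholds
b_o = np.zeros(n_output)                              # output-node thresholds

def forward(x):
    # Activation flows from the input layer, through the hidden layer,
    # to the output layer, weighted by the strength of each arc
    hidden = sigmoid(w_ih @ x + b_h)
    output = sigmoid(w_ho @ hidden + b_o)
    return output

x = rng.integers(0, 2, n_input).astype(float)         # an arbitrary 0/1 input pattern
print(forward(x))                                     # two output values, e.g. "in favour" / "against"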

---------------------------------------------------------

BASIC BEHAVIOUR OF NETWORK -- 1
To determine if a node will fire
  1. See what arcs feed into that node
  2. See what nodes these arcs are linked to
  3. Compute $ \sum weight\_on\_arc \times activation\_of\_node $
  4. If result greater than threshold for that node, fire it
  5. Propagate calculations forward from all input nodes to output nodes
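
The firing rule above for a single node can be written directly as a small Python function (a sketch; the function and argument names are illustrative, not taken from any particular program):

def node_fires(activations, weights, threshold):
    # Sum activation-of-node times weight-on-arc over every incoming arc,
    # then fire (output 1) only if the total exceeds this node's threshold
    total = sum(a * w for a, w in zip(activations, weights))
    return 1 if total > threshold else 0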

---------------------------------------------------------

CALCULATION AT NODE N
[Figure: calculation at node N -- inputs 1, 0 and 1 arrive on arcs weighted 0.7, 0.8 and 0.5; node N has threshold 0.4]


$1 \times 0.7 + 0 \times 0.8 + 1 \times 0.5 > 0.4$
so fire node N
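
The same arithmetic checked in code (the numbers come straight from the inequality above):

total = 1 * 0.7 + 0 * 0.8 + 1 * 0.5   # sum of activation x weight over the three arcs
print(total)                          # 1.2
print(total > 0.4)                    # True: 1.2 exceeds the threshold, so node N fires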

---------------------------------------------------------

TEACHING THE NETWORK
  1. Choose number of nodes in each layer according to problem
  2. Assign random weights to arcs and random thresholds to nodes
  3. Select sample of training examples (input/output pairs)
  4. For each example
    1. Compare pattern on output layer with what it should have been
    2. Make small adjustments to weights and thresholds (e.g. by back propagation) so as to make the actual output a bit closer to the desired output
  5. Repeat 4 many times
  6. Test on novel examples
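
Steps 1 to 5 can be sketched as a small training loop. The code below is a toy illustration of back propagation on a tiny three-layer sigmoid network, not the lecture program: the layer sizes, learning rate and the made-up training pairs are all placeholder values.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 10, 5, 2                     # step 1: layer sizes (toy values here)
lr = 0.5                                          # learning rate: how "small" each adjustment is

# Step 2: random initial weights and thresholds (biases)
w1 = rng.uniform(-0.5, 0.5, (n_hid, n_in))
b1 = rng.uniform(-0.5, 0.5, n_hid)
w2 = rng.uniform(-0.5, 0.5, (n_out, n_hid))
b2 = rng.uniform(-0.5, 0.5, n_out)

# Step 3: a made-up sample of input/output training pairs
inputs = rng.integers(0, 2, (50, n_in)).astype(float)
targets = np.stack([inputs[:, 0], 1 - inputs[:, 0]], axis=1)   # toy target rule

# Steps 4 and 5: repeatedly nudge weights and thresholds towards the desired outputs
for sweep in range(500):
    for x, t in zip(inputs, targets):
        h = sigmoid(w1 @ x + b1)                  # forward pass through the hidden layer...
        y = sigmoid(w2 @ h + b2)                  # ...to the output layer
        err_out = (y - t) * y * (1 - y)           # 4.1: compare actual with desired output
        err_hid = (w2.T @ err_out) * h * (1 - h)  # propagate the error back to the hidden layer
        w2 -= lr * np.outer(err_out, h)           # 4.2: small adjustments to weights...
        b2 -= lr * err_out                        # ...and thresholds
        w1 -= lr * np.outer(err_hid, x)
        b1 -= lr * err_hid

# Test on a novel example
x_new = rng.integers(0, 2, n_in).astype(float)
print(sigmoid(w2 @ sigmoid(w1 @ x_new + b1) + b2))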

---------------------------------------------------------

DISCRIMINATING WORDS FROM NON-WORDS -- 1
*
Assume 5 letter words
*
Examples: hotel, swiss; Non-examples: kaamt, jomet
*
Input layer: $ 5 \times 26 $ nodes
*
Hidden layer: 20 nodes
*
Output layer: 2 nodes -- ``in favour'', ``against''
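
One plausible encoding for the $ 5 \times 26 $ input layer (an assumption; the lecture program may encode letters differently): one block of 26 units per letter position, with just the unit for that position's letter switched on.

import numpy as np

def encode_word(word):
    # 5 positions x 26 letters = 130 input units; exactly one unit per
    # position is set to 1, namely the unit for that position's letter
    assert len(word) == 5 and word.isalpha()
    x = np.zeros(5 * 26)
    for pos, letter in enumerate(word.lower()):
        x[pos * 26 + ord(letter) - ord('a')] = 1.0
    return x

print(int(encode_word("hotel").sum()))   # 5: one active unit per letter position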

---------------------------------------------------------

DISCRIMINATING WORDS FROM NON-WORDS -- 2
Number of letters in each word (5):
Number of different real words to train on (500):
Number of random "words" to train on (500):
Number of examples to show to the net in training (2000):
Number of units in the net's hidden layer (20):
Initial weight range for net (0.5):
Learning rate constant for net (0.5):
Momentum constant for net (0.9):
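
For reference, the defaults shown in this transcript can be gathered into a single configuration record; the key names below are illustrative, not the program's own.

# Default training settings from the transcript above
net_defaults = {
    "word_length": 5,
    "real_training_words": 500,
    "random_training_words": 500,
    "training_examples": 2000,
    "hidden_units": 20,
    "initial_weight_range": 0.5,
    "learning_rate": 0.5,
    "momentum": 0.9,
}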

---------------------------------------------------------

DISCRIMINATING WORDS FROM NON-WORDS -- 3
The first 40 training words, marked as + or - examples, were:
agony+ offal+ tkqbr- whmvt- asher+ kaamt- gngpu- ttqmc- faber+ wstki-
akyli- cprqq- swiss+ estes+ midge+ exact+ blair+ zqpgp- xmefi- acton+
arena+ drown+ supra+ eppis- linen+ chirp+ sjfhv- xplou- axial+ rlsut-
krnnc- wylie+ hotel+ bineb- cohen+ kwgwc- ndbry- bobby+ jomet- dwypv-

Enter a word with 5 letters (or RETURN to quit): offal
   The word offal scores 0.732206 in favour, and 0.264867 against

Enter a word with 5 letters (or RETURN to quit): okkly
   The word okkly scores 0.000437 in favour, and 0.999486 against

---------------------------------------------------------

REPRESENTATION ISSUES
*
Representation distributed throughout weights and thresholds
*
Can make a decent guess for examples it has not been trained on
*
Experiments with other systems have shown that ``damaged'' but trained networks can be made to produce similar behaviour to brain-damaged patients

---------------------------------------------------------


Benedict du Boulay, Cognitive Modelling web pages updated on Saturday 11 May 2002