Sem V slide 1

 

Today’s session:

 

1. Connectionism/Neural Nets

2. Learning in Neural Networks

3. Modelling Problem-Solving:

      -ACT

      -SOAR

 


Learning in Connectionist/Neural Networks/PDP systems

Source: D. Green, Cognitive Science: An Introduction, p. 39

 

1. Hebbian Learning

 

-increase weights of connection between:

      -two nodes when they are both active

      -two nodes when they are both inactive

-decrease weights of connection between:

      -an active and an inactive node

 

-is highly effective and biologically plausible

 

-is unsupervised: no explicit teacher or error signal guides the weight changes; they depend only on local node activity

-for use in associative networks

 

 

Associative network: input and output are given, network produces pattern that associates them.
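The Hebbian rule above can be sketched in a few lines of Python; the binary activities (1 = active, 0 = inactive) and the single learning rate are illustrative assumptions, not details from the slide.

```python
# A minimal sketch of the Hebbian rule, assuming binary node activities
# (1 = active, 0 = inactive) and a fixed learning rate.
def hebbian_update(weight, a, b, rate=0.1):
    """Update the weight of the connection between two nodes with activities a and b."""
    if a == b:            # both active or both inactive: strengthen
        return weight + rate
    else:                 # one active, one inactive: weaken
        return weight - rate

# No teacher is involved: each update depends only on the two activities.
w = 0.0
w = hebbian_update(w, 1, 1)   # both active -> weight increases
w = hebbian_update(w, 0, 0)   # both inactive -> weight increases
w = hebbian_update(w, 1, 0)   # mismatch -> weight decreases
```

Because no target output is ever consulted, this is unsupervised learning in exactly the sense described above.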

 

2. Backpropagation:

-present the network with an input pattern

-compare the output produced by the network with what it should have been

-calculate difference

-propagate the difference backwards through the network & make small adjustments to the weights on the way

-when the pattern is next presented, the output produced will more closely resemble the desired output

 

-is supervised learning: an explicit mechanism compares the output with the target and propagates the difference

-used for feed-forward and recurrent networks
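The cycle described above can be sketched as a toy feed-forward network; the 2-2-1 architecture, sigmoid units, learning rate, and training pattern are all illustrative assumptions, not details from the slide.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
W2 = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output

def forward(x):
    """Present an input pattern and compute the network's output."""
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2))) for j in range(2)]
    y = sigmoid(sum(W2[j] * h[j] for j in range(2)))
    return h, y

def train_step(x, target, rate=0.5):
    h, y = forward(x)
    # compare the output with what it should have been; calculate the difference
    delta_out = (target - y) * y * (1 - y)
    # propagate the difference backwards, making small weight adjustments
    for j in range(2):
        delta_h = delta_out * W2[j] * h[j] * (1 - h[j])
        W2[j] += rate * delta_out * h[j]
        for i in range(2):
            W1[j][i] += rate * delta_h * x[i]
    return abs(target - y)

x, target = [1.0, 0.0], 1.0
errors = [train_step(x, target) for _ in range(200)]
# After repeated presentations, the output resembles the target more closely,
# so the recorded error shrinks over the 200 training steps.
```

Note that the weight changes here are driven by an explicit error signal, which is what makes backpropagation supervised, in contrast to the Hebbian rule.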


MODELLING PROBLEM-SOLVING

Sources:    Green, Cognitive Science: An Introduction;

Sharples et al., Computers and Thought; Finlay & Dix, An Introduction to AI

 

 

1. ACT   “Adaptive Control of Thought”

John Anderson, 1976; 1983: ACT*; 1993: ACT-R

 

-intended as a general model of cognition

 

Consists of:

-large long-term memory in the form of a semantic net

-small working memory of active items

-production system which operates on the memories

Only a small part of LT memory can be activated at any one time (cf. human memory) and productions only operate on active memory

 

 

 

Works as follows:

Productions make changes in memory:

-activate new items in memory

-deactivate other parts

Activation gradually decays in elements that are not probed by the production rules.

Only items that are being used remain in active memory.

-memory elements can spread activation to their neighbours in the semantic network (cf. association of ideas)
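The activation dynamics above can be sketched as follows; the toy semantic net, the decay factor, the spreading fraction, and the activation threshold are all illustrative numbers, not Anderson's actual equations.

```python
# A sketch of ACT-style activation: items probed by productions stay active,
# unprobed items decay, and activation spreads to semantic-net neighbours.
DECAY = 0.5        # illustrative decay factor per cycle
SPREAD = 0.2       # illustrative fraction of activation spread to neighbours
THRESHOLD = 0.1    # illustrative cut-off for counting as "active"

neighbours = {"dog": ["bone", "bark"], "bone": ["dog"], "bark": ["dog"]}
activation = {"dog": 1.0, "bone": 0.0, "bark": 0.0}

def cycle(probed):
    """One cycle: probed items keep their activation, others decay, then spread."""
    new = {}
    for item, a in activation.items():
        a = a if item in probed else a * DECAY     # decay unless probed by a rule
        # receive a fraction of each neighbour's (old) activation
        a += sum(SPREAD * activation[n] for n in neighbours[item])
        new[item] = a
    activation.update(new)

cycle(probed={"dog"})
active = {item for item, a in activation.items() if a > THRESHOLD}
# "dog" stays active, and its neighbours pick up spread activation
# (cf. association of ideas), while unprobed, unconnected items would fade.
```

Only the small set of items above threshold plays the role of active memory on which productions can operate.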

 

ACT models learning or skill development: Knowledge about a new domain

+ general problem solving rules (modelled as production rules)

+ a mechanism for deciding which rules to apply

⇒ acquisition of procedures to carry out highly specialised activities.

 

 

Skill acquisition:

 * In three stages:

1. using general-purpose rules to make sense of facts known about a problem. Each fact is initially retrieved from declarative memory, stored in working memory, and used to work out a sequence of actions.

= a slow process, great demands on working memory

2. development of productions specific to the new task encountered in 1.

In other words, successful sequences are compiled into procedures for action, allowing specific actions to be retrieved instead of having to be worked out.

3. Tuning the procedures thus formed to improve performance.

 


 *2 methods of transition:

-Proceduralisation: turning general rules/procedures into new, more specific rules by replacing variables with specific values. E.g. learning to cook

-Generalisation (cf. inductive learning): the range of the rule is broadened to cope with novel situations. E.g. children's subtraction
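The two transition methods can be sketched over rules represented as (condition, action) strings with "?X"-style variables; the representation and the cooking example are purely illustrative.

```python
# A sketch of the two transition methods, assuming rules are (condition, action)
# string pairs and variables are written "?NAME". Names are illustrative.

def proceduralise(rule, bindings):
    """Replace variables with specific values, yielding a more specific rule."""
    condition, action = rule
    for var, value in bindings.items():
        condition = condition.replace(var, value)
        action = action.replace(var, value)
    return (condition, action)

def generalise(rule, replacements):
    """Broaden a rule by turning specific values back into variables."""
    condition, action = rule
    for value, var in replacements.items():
        condition = condition.replace(value, var)
        action = action.replace(value, var)
    return (condition, action)

general = ("heat ?FOOD in ?DEVICE", "?FOOD is cooked")
# Proceduralisation: a specialised rule for one concrete case (learning to cook)
specific = proceduralise(general, {"?FOOD": "rice", "?DEVICE": "a pan"})
# Generalisation runs the other way, widening a rule to cover novel situations.
```

The two operations are inverses in spirit: one trades flexibility for speed, the other trades specificity for coverage.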


2. SOAR

Laird, Newell & Rosenbloom, 1987

idea: Problem-solving is like traversing a problem space (state space) from initial state to goal state.

 

Given: initial state, goal state,

how to get from initial state to goal state?

 

By making subgoals.

 

This idea of cognition as the traversing of a problem space is implemented in a production system.

How does this work?

WM:     -representation of current goal (+ all higher goals)

-representation of current goal's problem space

-current state

-operator which is to be applied next to that state

 

LT M:   -rules/productions for selecting problem spaces

            -states

            -operators

-rules for evaluating & applying operators

 

Processing = cyclic

      Each cycle: 2 phases

1. All long-term memory is brought to bear on the current representation of the task (i.e. the contents of WM)

⇒ This yields a set of potential WM modifications, each tagged with an indication of how suitable it is

 

2. Selection of the most appropriate of these modifications.

Modification of WM accordingly.
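The two-phase cycle can be sketched as follows; the rules, the suitability scores, and the bread/toast states are illustrative stand-ins for SOAR's actual long-term memory contents.

```python
# A sketch of SOAR's two-phase decision cycle. Long-term memory is a list of
# rules; each matching rule proposes a WM modification tagged with a
# suitability score. All names and numbers are illustrative.
wm = {"state": "bread", "goal": "toast"}

def rule_slice(wm):
    if wm["state"] == "bread":
        return ("state", "sliced bread"), 0.6   # (modification, suitability)

def rule_toast(wm):
    if wm["state"] == "sliced bread":
        return ("state", "toast"), 0.9

long_term_memory = [rule_slice, rule_toast]

def decision_cycle(wm):
    # Phase 1: bring all of LTM to bear on the current contents of WM,
    # collecting every proposed modification (non-matching rules return None).
    proposals = [p for rule in long_term_memory if (p := rule(wm))]
    if not proposals:
        return False                 # nothing to choose: an impasse
    # Phase 2: select the most suitable modification and apply it to WM.
    (slot, value), _ = max(proposals, key=lambda p: p[1])
    wm[slot] = value
    return True

# Repeated cycles traverse the problem space from initial state to goal state.
while wm["state"] != wm["goal"]:
    if not decision_cycle(wm):
        break
```

Each pass through the loop is one traversal step in the problem space: phase 1 generates candidate moves, phase 2 commits to the best one.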


Impasse: When SOAR is unable to choose an appropriate operator or state.

⇒ SOAR’s response: create a new sub-goal of resolving the impasse.

 

This sub-goal is then solved in the same way as the processing described above.

 

SOAR may have to set up further sub-goals within the sub-goal until the impasse is resolved. When this is done, it returns to the original goal.
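Impasse-driven sub-goaling can be sketched as a recursive procedure; the goal names and the idea that solving a sub-goal makes its parent directly achievable are illustrative simplifications.

```python
# A sketch of impasse-driven sub-goaling: a goal is either directly
# achievable (an operator exists for it) or triggers an impasse, which is
# resolved by solving sub-goals with the same machinery. Names illustrative.
subgoals = {"make toast": ["sliced bread"], "sliced bread": []}
achievable = {"sliced bread"}          # operators exist directly for these

def solve(goal, trace):
    trace.append(goal)
    if goal in achievable:
        return True
    # Impasse: no operator can be chosen -> set up sub-goals, solve each
    # with the same processing, then return to the original goal.
    if all(solve(sub, trace) for sub in subgoals.get(goal, [])):
        achievable.add(goal)           # impasse resolved
        return True
    return False

trace = []
solve("make toast", trace)
# The trace records the descent into the sub-goal and the return.
```

The recursion mirrors the text above: sub-goals can nest within sub-goals to arbitrary depth before control returns to the original goal.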


E.g. Making toast

 

 

IF               the goal is to make toast          (goal)

                  and there is sliced bread          (precondition)

                  and there is a toaster             (precondition)

THEN         toast the bread                         (action)


IF               the goal is sliced bread           (goal)

                  and there is bread                 (precondition)

                  and there is a knife               (precondition)

THEN         slice the bread                         (action)
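The two productions above can be run as a tiny forward-chaining system; representing facts as strings in a set, and adding the sliced-bread sub-goal by hand, are illustrative simplifications of what SOAR's impasse mechanism would do automatically.

```python
# The two toast productions, sketched as a tiny forward-chaining system.
# Facts are strings; a rule fires when its goal and preconditions all hold.
facts = {"goal: make toast", "bread", "a knife", "a toaster"}

rules = [
    # IF goal is to make toast AND sliced bread AND a toaster THEN toast the bread
    ({"goal: make toast", "sliced bread", "a toaster"}, "toast"),
    # IF goal is sliced bread AND bread AND a knife THEN slice the bread
    ({"goal: sliced bread", "bread", "a knife"}, "sliced bread"),
]

# The toast rule cannot fire yet (no sliced bread), so the missing
# precondition becomes a sub-goal (cf. SOAR's impasse-driven sub-goaling).
facts.add("goal: sliced bread")

changed = True
while changed:
    changed = False
    for conditions, result in rules:
        if conditions <= facts and result not in facts:
            facts.add(result)          # carry out the rule's action
            changed = True

# The slicing rule fires first, producing "sliced bread", which then
# satisfies the remaining precondition of the toast rule.
```

Note how the second rule's action satisfies a precondition of the first: chaining productions through sub-goals is exactly how SOAR decomposes a task.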