The aim of the paper is to suggest that although connectionist learning mechanisms such as back-propagation can often form internal representations enabling satisfactory performance on higher-order mappings, these representations may well have the character of kludges. Because they are typically not expressed in terms of the concepts and features pertinent to the regularities underlying the mappings, they are usually ineffective with respect to similar or closely related mappings. To improve on learning mechanisms such as back-propagation, it will be necessary, I feel, to introduce a creative aspect into the learning; i.e. to devise mechanisms for producing the required higher-level concepts and features within the learning process. This, unfortunately, turns out to be an extremely difficult task, on which much more work is needed.
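To make the setting concrete, the following is a minimal sketch (not taken from the paper; all names and parameters are my own illustration) of the kind of mechanism under discussion: a one-hidden-layer network trained by back-propagation on the XOR mapping, a task the network can only solve by forming internal, hidden-layer representations. Whether those learned representations correspond to the concepts underlying the mapping, or are merely kludges, is exactly the question the abstract raises.

```python
import numpy as np

rng = np.random.default_rng(0)

# The XOR mapping: a simple task requiring internal representations.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units (an illustrative choice).
W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Loss before training, for comparison.
init_loss = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))

lr = 1.0
for _ in range(5000):
    # Forward pass: h is the network's internal representation of the input.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of mean squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

final_loss = float(np.mean((out - y) ** 2))
print(init_loss, final_loss)
```

The hidden activations `h` are the learned internal representation; nothing in the training procedure constrains them to reflect the features actually relevant to XOR, which is why such representations need not transfer to closely related mappings.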