From COGS to Consciousness

Professor Anil Seth

I first arrived at Sussex 15 years ago, as an M.Sc. student on the ‘knowledge based systems’ (KBS) course, with a view to developing an understanding of Cognitive Science and Artificial Intelligence (AI). My first degree, in Experimental Psychology at Cambridge, had left me with a strong desire to follow an academic path, but also dissatisfied with prevailing approaches in psychology and neuroscience: with how psychological phenomena were linked to underlying brain mechanisms, and above all with the almost total disregard for the central problem of consciousness.

KBS offered a challenging diversity of topics and seemed to be in the midst of an important transition. On the one hand, a good deal of content was devoted to what is now called ‘good old-fashioned AI’ (GOFAI), for example the use of PROLOG to create backgammon-playing software agents. On the other hand, there was a strong emphasis on ‘connectionism’, the idea that psychological processes are underpinned at the neural level by distributed networks of neurons, and not by explicit ‘symbol processing’ mechanisms as assumed by the chess and backgammon routines of GOFAI. I’d first learned about connectionism at Cambridge from Andy Clark’s seminal Microcognition, written in part during his time at Sussex. [i]

Two new directions were particularly striking. The first, emphasised by Inman Harvey, was the challenge to the notion of a ‘mental representation’, so dear to GOFAI and so often used to provide an illusory link between neural and psychological levels of description. According to Inman, the circumstances under which one can claim that neural phenomenon X ‘represents’ non-neural phenomenon Y (e.g. a part of the environment, or a concept) are much more limited than normally assumed, and indeed the word itself should probably be banished for a while (amusingly, George Miller – a founding father of cognitive science – said much the same about ‘consciousness’ in the 1970s). Understanding how representation happens in the brain should be the outcome of an explanation, not an assumed component of an explanation.

The second was the challenge of understanding how neural networks give rise to behaviour. Most connectionist research had focused on pattern recognition and classification, not on the generation of behaviour. Here, Sussex was very much at the forefront of research, combining the conceptual framework of ‘dynamical systems theory’ with the methodological approach of ‘evolutionary robotics’. Dynamical systems theory emphasises the importance of tightly coupled reciprocal interactions among neural systems, body morphology, and environmental structure. [ii] Conceptually, it supplies a clear link between neural and behavioural phenomena, without recourse to intermediate (and perhaps illusory) levels of description such as computation, information processing, and specifically ‘representation’. In practice, however, its challenging mathematics seemed to limit its application to simple behaviours such as phonotaxis in crickets. [iii] In this light, the Sussex speciality of evolutionary robotics seemed especially promising. [iv] Evolutionary robotics, pioneered in COGS by Phil Husbands, Inman Harvey, and Dave Cliff, makes use of so-called ‘genetic algorithms’ to automatically specify neural networks that can generate desired behaviours, by analogy with natural selection (or, more accurately, by analogy with goal-directed breeding by farmers).
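
To give a flavour of how this works, below is a minimal sketch of the genetic-algorithm loop used in evolutionary robotics. It is a toy illustration rather than code from the Sussex experiments: the genome encodes the connection weights of a small neural controller, and the fitness function here is a placeholder for what would, in practice, be a score derived from running a simulated or physical robot.

import random

GENOME_LENGTH = 20      # one gene per connection weight in a small controller network
POPULATION_SIZE = 50
MUTATION_RATE = 0.05    # per-gene probability of mutation
GENERATIONS = 100

def random_genome():
    # a genome is simply a vector of network weights
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Placeholder: a real experiment would build a neural network from the
    # genome, run the robot, and score the resulting behaviour.
    return -sum((gene - 0.5) ** 2 for gene in genome)

def mutate(genome):
    # perturb a small random subset of the weights
    return [gene + random.gauss(0.0, 0.1) if random.random() < MUTATION_RATE else gene
            for gene in genome]

def crossover(parent_a, parent_b):
    # single-point crossover between two parent genomes
    cut = random.randrange(1, GENOME_LENGTH)
    return parent_a[:cut] + parent_b[cut:]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for _ in range(GENERATIONS):
    # rank by fitness and let the fitter half 'breed' the next generation,
    # i.e. the goal-directed selection described above
    population.sort(key=fitness, reverse=True)
    parents = population[:POPULATION_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POPULATION_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best):.4f}")

What makes this evolutionary robotics rather than plain function optimisation is the fitness measure: it comes from behaviour in a body and an environment, not from a mathematical target.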

The combination of evolutionary robotics for synthesis and dynamical systems theory for analysis seemed to offer a powerful new approach to cognitive science. Drawn by these exciting new developments, I stayed at Sussex to work towards a D.Phil. under the guidance of Phil Husbands. My research at this time focused on characterising the evolution of complexity, and on integrating evolutionary robotics with existing frameworks in theoretical biology, such as optimal foraging theory. These four years were intellectually rewarding, and great fun too. But at the end of my thesis, in 2000, I had the nagging feeling that I’d wandered far from my central objective of understanding the neural basis of conscious experience.

The scientific investigation of consciousness has of course struggled for legitimacy over the past hundred years. Originally at the centre of psychology in the late nineteenth century, it was ousted by behaviourism and failed to revive even following the cognitive revolution in the mid-twentieth century, remaining largely within the purview of philosophy. At the time, one of the few places with consciousness on the scientific menu was southern California, so in 2001 I packed my bags to work with Gerald Edelman at the Neurosciences Institute (NSI) in San Diego. Edelman, a Nobel laureate for his work in immunology, developed ‘Neural Darwinism’ and in those days was one of the few advocates for a true biology of consciousness. [v]

At the NSI I became aligned with what I still think is a productive approach to the neuroscience of consciousness, which is to identify neural processes that don’t just correlate with, but actually account for, fundamental properties of conscious experience. For example, conscious scenes are both highly differentiated (every conscious scene is different from every other) and highly integrated (every conscious scene is experienced ‘all of a piece’), implying that the underlying neural dynamics should also exhibit coexisting differentiation and integration. This approach can be called the search for ‘explanatory correlates’ of consciousness. [vi]
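
To make this concrete, the toy calculation below shows one way in which both properties can be quantified from multivariate neural signals under a Gaussian approximation: differentiation indexed by the joint entropy of the signals (the size of the repertoire of states), and integration by their multi-information (how strongly the parts depend on one another). This is my gloss in the spirit of the complexity measures developed at the NSI, not the Institute’s own methods; the simulated data and the 0.6 coupling strength are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Simulated 'neural' data: a common driver induces integration across channels,
# while independent noise keeps the channels differentiated.
n_channels, n_samples = 8, 5000
shared = rng.standard_normal(n_samples)
data = 0.6 * shared + rng.standard_normal((n_channels, n_samples))

cov = np.cov(data)  # channels x channels covariance matrix

# Differentiation: joint Gaussian entropy, indexing the size of the
# repertoire of states the system can visit.
sign, logdet = np.linalg.slogdet(cov)
joint_entropy = 0.5 * (n_channels * np.log(2 * np.pi * np.e) + logdet)

# Integration: multi-information, i.e. the sum of marginal entropies minus
# the joint entropy (zero if all channels are independent).
marginal_entropies = 0.5 * np.log(2 * np.pi * np.e * np.diag(cov))
integration = marginal_entropies.sum() - joint_entropy

print(f"differentiation (joint entropy):  {joint_entropy:.2f} nats")
print(f"integration (multi-information): {integration:.2f} nats")

On this view, neural dynamics supporting consciousness should score highly on both quantities at once: a system of fully independent parts would show high differentiation but no integration, and a fully synchronised system the reverse.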

After five years in San Diego, I was given the opportunity to return to Sussex as a lecturer in the new Department of Informatics (replacing the much-missed COGS).

Shortly after returning, several people, including Phil Husbands, Mick O’Shea, and myself, started talking about new research strategies that could recapitulate the earlier success of the Centre for Computational Neuroscience and Robotics. One idea that cropped up was ‘consciousness’, but although there was some enthusiasm, there was no obvious way to make anything concrete happen. Nonetheless, it was clear that Sussex had within its faculty an exceptional assemblage of researchers focusing directly or indirectly on the subject, among them Zoltan Dienes and Jamie Ward in Psychology; Ron Chrisley and Maggie Boden in Informatics; and Hugo Critchley and Nick Medford in the new Brighton and Sussex Medical School (BSMS). In the life sciences, Mick O’Shea and Daniel Osorio also lent strong support. The feeling was that consciousness science was fresh and edgy, yet scientifically concrete, in just the same way that the combination of computational neuroscience and robotics had been ten years earlier.

The new senior management, led by the then Vice-Chancellor, Michael Farthing, was casting about for novel research directions. It turned out that Farthing had connections with the Dr. Mortimer and Theresa Sackler Foundation, a major philanthropic institution with interests across the arts and sciences. After some negotiations the Foundation agreed to kick-start a new Centre focusing specifically on consciousness science, and in April 2010 the Sackler Centre for Consciousness Science (SCCS) enjoyed its official opening.

The SCCS, co-directed by myself and Hugo Critchley (Professor of Psychiatry at BSMS), is the first centre of its kind in the world and exemplifies the Sussex tradition of defining new research frontiers. Over the last ten years, consciousness science has gained increasing prominence and is set to become one of the major research questions of twenty-first-century science. [vii] The SCCS is well placed to ensure that Sussex remains at its forefront. The centre has two research strands: first, a basic science strand, which seeks to unravel the complex brain mechanisms that generate consciousness by combining theoretical and experimental work with large-scale computational modelling; and second, a clinical application strand, which focuses on neuropsychiatry as well as on brain-injured patients with neurological deficits. Clearly, the two strands strongly interact: the disturbances of conscious mental life that occur in psychiatric and neurological disorders involve radical and disabling shifts in the contents and quality of conscious experience.

[i] A. Clark, Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing (Cambridge, MA: MIT Press, 1989).

[ii] A. Clark, Being There: Putting Brain, Body, and World Together Again (Cambridge, MA: MIT Press, 1997).

[iii] B. Webb, ‘Robots, crickets and ants: models of neural control of chemotaxis and phonotaxis’, Neural Networks, 11 (1998), pp. 1479-96.

[iv] P. Husbands, I. Harvey, D. Cliff, and G. Miller, ‘Artificial evolution: a new path for artificial intelligence?’, Brain and Cognition, 34 (1997), pp. 130-59.

[v] G. M. Edelman, ‘Naturalizing consciousness: a theoretical framework’, Proceedings of the National Academy of Sciences of the USA, 100 (2003), pp. 5520-4.

[vi] A. K. Seth, ‘The grand challenge of consciousness’, Frontiers in Psychology, 1 (2010), pp. 1-2.

[vii] F. Crick and C. Koch, ‘Towards a neurobiological theory of consciousness’, Seminars in the Neurosciences, 2 (1990), pp. 263-75.