Centre for Cognitive Science (COGS)

Seminars

COGS Seminars provide a forum for internationally recognised researchers from all corners of cognitive science research to present and discuss their latest findings. All are welcome to attend.

Spring 2023

Tuesdays 16:00-17:30


Mar 7

Cognitivism and Consciousness: A Degenerating Research Programme?
Dr Adrian Downey
Trinity College Dublin

Abstract: It is sometimes argued that Descartes invented the modern mind-body problem: by defining "mind" in direct opposition to "physical", he created a gap between the two which could never subsequently be closed. To avoid the modern mind-body problem, the story then goes, we have only to adopt a non-Cartesian understanding of these concepts. In this talk, I shall argue that cognitivist cognitive science commits to precisely such a conceptual Cartesianism, and that this has resulted in an inevitably degenerating research programme. Accordingly, I suggest that we are best off conducting a non-Cartesian cognitive science (for instance, of the kind found in ecological and enactive research paradigms).

I begin my argument by providing a Lakatosian construal of the cognitivist paradigm. First, I explain its two ‘hard core’ theses, the unfalsifiable fundaments of the paradigm: 1) mind is constituted by the publicly accessible processing of unconscious brain-based representations; 2) consciousness is inherently subjective. Then, I highlight a representative range of the dispensable and revisable auxiliary hypotheses making up its ‘protective belt’ (representations are symbolic or sub-symbolic entities; perception is a bottom-up or top-down process; and so on). I next contend that cognitivism’s ‘hard core’ tenets are, in all relevant respects, indistinguishable from Descartes' concepts of "physical" (tenet one) and "mental" (tenet two). It is therefore unsurprising that research within cognitivist cognitive science increasingly falls into the binary categories of "illusionism" or "qualia realism": since Descartes' conceptual distinctions do not allow for a physical understanding of consciousness, they force realism about either physicalism (illusionism) or consciousness (qualia realism), but never both. A good theory of consciousness should, however, allow for realism about both physicalism and consciousness. Since extant cognitivist theorising shows no sign of allowing for this, I label it a degenerating research programme.

In response to such arguments, cognitivists often contend that future theoretical and empirical advances will obviate them. Whilst granting this may be the case writ large, I retort that cognitivism cannot avail of such a response. This is because, by Lakatos' strictures, changes within a paradigm can be made only to its ‘protective belt’. But the problem with cognitivism lies in its ‘hard core’, and to revise those tenets would require giving up on cognitivism altogether. I recommend precisely this course of action, suggesting that the explicitly non-Cartesian ecological and enactive research paradigms constitute better bets for successfully pursuing a science of consciousness.

Arts A 108

Zoom ID: 943 6047 9417

Passcode: 8264

Mar 14

The entangled brain: the integration of emotion, motivation, and cognition
Dr Luiz Pessoa
University of Maryland

Abstract: Research on the “emotional brain” often focuses on particular structures, such as the amygdala and the ventral striatum. In this presentation, I will discuss research that embraces a distributed view of both emotion- and motivation-related processing, as well as efforts to unravel the impact of emotion and motivation across large-scale cortical-subcortical brain networks. In the framework presented, emotion and motivation have broad effects on brain and behavior, leading to the idea of the “entangled brain” where brain parts dynamically assemble into coalitions that support complex cognitive-emotional behaviors. According to this view, it is argued that decomposing brain and behavior in terms of standard mental categories (perception, cognition, emotion, etc.) is counterproductive.

Online

Zoom ID: 912 9509 3360

Passcode: 1563

Mar 23

Generative AI: What it might reveal about 4E cognitive science and the shape of the human mind
Dr Rob Clowes
New University of Lisbon

Abstract: Until recently, philosophers and cognitive scientists have been surprisingly quiet about the new AI. The new transformer-based deep learning systems will likely change not only how we think about AI, but also how we think about intelligence and cognition more generally.

One vantage-point on the new AI is the 1990s discussion over 4E cognitive science. The (programmatic) work of Rodney Brooks with creatures, mobots and subsumption architectures seems to suggest notions about intelligence, and about how to build AI systems, which are, on the face of it, deeply at odds with today's generative AI. ChatGPT, DALL-E and LaMDA seem to be more directly inheritors of connectionism than of 4E cog sci, and are, if anything, examples of 0E cognition. Moreover, the analyses of such systems by their creators seem to be shot through with (versions of) representationalist assumptions that many believed needed to be transcended. One might think that, with the advent of deep learning and generative AI, at least the radically anti-representationalist flavours of 4E cognitive science now face a severe challenge.

So, does the new AI really clash with 4E cog sci, behaviour-based robotics and their inheritors, and if so, what does that say about the broader 4E programme? I will offer three possible paths forward for the 4E programme in the wake of generative AI and predictive processing, and ask what they might tell us about the structure of the human mind and some possible shapes for our cognitive futures.

Robert W Clowes is the coordinator of the Lisbon Mind, Cognition and Knowledge Group and a senior researcher at IFILNOVA, Universidade Nova de Lisboa.

Fulton 205

Zoom ID: 944 3294 8834

Passcode: 4375

Mar 28

Assembled bias: Biased judgments and exaggerated images in machine learning
Dr Robyn Repko Waller (Sussex) & Russell L. Waller (Machine Learning Contractor)

Abstract: Machine learning (ML) models already drive much of contemporary society, and newer ML models, such as ChatGPT and DALL-E, demonstrate impressive competence in tasks such as text and image generation once outside the bounds of artificial intelligence (AI). However, when algorithmic systems are applied to social data, flags have been raised about the occurrence of algorithmic bias against historically marginalised groups. Further, some users of the popular portrait-creating app LENSA have reported misogynistic and distorted body images generated from head-only selfies. Those working in AI and the broader algorithmic fairness community point to human biases against marginalised groups, and social stereotypes that algorithms inherit from the data set on which they operate, as a source of such bias and distortion in AI output.

Here we argue that such bias and exaggeration have a further, more epistemically pernicious source that has been overlooked: the distinctive generative process of feature creation in ML. Specifically, we make the case for the emergence of a novel kind of exaggeration and bias, which we term assembled bias, arising with the use of ML. To do so, we make the process of ML more transparent using a combination of visualisation in topological data analysis and sociopolitical concepts. We demonstrate that ML constructs a non-interpretable, exaggerated representation of its target population as a whole (e.g., human bodies, parole applicants) due to the kinds of features favoured and massively reconfigured in ML feature space. We introduce the notions of assembled privilege and assembled marginalisation to explain the effect this has on the representation of interpretable social groups. We contend, therefore, that assembled bias in part drives the biased and exaggerated outputs of operative ML systems. Such bias is epistemically opaque and distinct in both source and content from the kinds under discussion in the algorithmic fairness literature.
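The abstract contrasts assembled bias with the more familiar source of algorithmic bias: skew inherited from training data. A minimal sketch of that familiar phenomenon (an illustration only, with entirely hypothetical toy data, not the speakers' assembled-bias mechanism or methods) shows how even a trivial model can amplify a disparity it inherits from its training set:

```python
from collections import Counter

# Hypothetical toy data: each record is (group, outcome label).
# Group "A" received positive outcomes 70% of the time, group "B" only 30%.
train = [("A", 1)] * 7 + [("A", 0)] * 3 + [("B", 1)] * 3 + [("B", 0)] * 7

def base_rate(group):
    """Fraction of positive outcomes for a group in the training data."""
    labels = [y for g, y in train if g == group]
    return sum(labels) / len(labels)

# A minimal "model": predict the majority outcome observed for each group.
majority = {}
for group in {g for g, _ in train}:
    labels = [y for g, y in train if g == group]
    majority[group] = Counter(labels).most_common(1)[0][0]

# The 70%/30% skew in the data becomes a 100%/0% split in predictions:
# the model does not merely reproduce the disparity it inherited, it
# amplifies it to an extreme.
print(base_rate("A"), base_rate("B"))  # 0.7 0.3
print(majority["A"], majority["B"])    # 1 0
```

Real ML systems are far more complex, but the pattern generalises: optimising predictive accuracy on skewed data tends to sharpen, not soften, the skew. Assembled bias, as the abstract describes it, is a distinct further source beyond this data-inherited kind.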

Chichester 3 - 3R143

Zoom ID: 985 2138 9426

Passcode: 2863

Contact COGS

For suggestions for speakers, contact Simon Bowes

For publicity and questions regarding the website, contact Simon Bowes.

Please mention COGS and COGS seminars to all potentially interested newcomers to the university.

A good way to keep informed about COGS Seminars is to be a member of COGS. Any member of the university may join COGS and the COGS mailing list. Please contact Simon Bowes if you would like to be added.

Follow us on Twitter: @SussexCOGS