Centre for Cognitive Science (COGS)

Seminars

COGS Seminars provide a forum for internationally recognised researchers from all corners of cognitive science research to present and discuss their latest findings. All are welcome to attend.

Spring 2026

Tuesdays 16:00-17:30

Date | Seminar | Venue

Jan 27

Mind-body dualism: A perceptual core
Ehud Ahissar
Weizmann Institute of Science

Abstract: Could the abstract ideas of our minds originate from neuronal interactions within our brains? In addressing this long-standing question, we analyze interactions within the 'brain-world' (BW) and 'brain-brain' (BB) domains, representing the brain's physical interactions with its environment and the mental interactions between brains, respectively. BW interactions are characterized as analog—continuous in time and value—while BB interactions are digital—discrete in time and value. Digital signaling allows BB interactions to facilitate effective, albeit information-limited, communication through categorization. We review existing data showing that cascades of neural loops can convert between analog and digital signals, thereby linking physical and mental processes. Importantly, we show that these circuits cannot reduce one domain to the other, suggesting that the mind-brain duality can be mapped onto the BB-BW duality. This mapping, supported by both behavioral and neuronal data, indicates that the mind's foundation is inherently social. Thus, the BWBB scheme offers a novel account of the physical-mental gap, acknowledging the coexistence of the physical body and the non-physical mind while eliminating the need for a recursive homunculus in the brain or an independent mental foundation in the universe.

 

Fulton 202

Zoom ID: 821 9927 2300

Passcode: 985900

Feb 3

TBA
Pedro Mediano
Imperial College London

Abstract: TBA

 

Jubilee G36

Zoom ID:

Passcode:

Feb 10

Can we assess phenomenal consciousness in artificial systems?
Tobias Schlicht
Ruhr-University Bochum

Abstract: In light of rapid technological progress, several philosophers and scientists discuss the possibilities of creating and assessing phenomenally conscious AI, building on Putnam’s conjecture that computational functionalism is more plausible than a biological view of consciousness (Bayne et al. 2024, Birch 2025, Block 2025, Chalmers 2023, Dung 2025, Schneider et al. 2025). Butlin et al. (2025) outline a research program for distilling computational indicators of consciousness via neuroscientific theories of consciousness, based on experimental research on the neural correlates of consciousness in humans.

In this talk, I will distinguish three related but different questions in this context, argue that Chalmers’ epiphenomenal conception of phenomenal consciousness is unfit to serve as a target in this debate, and then evaluate whether computationalism and functionalism should be assumed to be empirically more plausible than a biological theory of consciousness. While computational functionalism allows for artificial consciousness in principle, it is questionable whether artificial conscious systems are practically possible, and whether we are in an epistemic position to judge whether a specific AI is conscious. To address this, I connect debates about AI consciousness with debates about the required medium independence of neural computation (Piccinini 2020, Maley 2025, Williams 2025) and about multiple realizability (Chirimuuta 2022, 2025, Cao 2022, Seth 2025). I argue that in the case of non-biological artificial systems, all possible markers of consciousness typically used to assess consciousness in humans and non-human animals are either absent, ambiguous, or question-begging.

Jubilee G36

Zoom ID:

Passcode:

Feb 17

TBA
Anna Ciaunica
University of Lisbon

Abstract: TBA

Jubilee G36

Zoom ID:

Passcode:

Mar 3

TBA
Nicholas Shea
University of London

Abstract: TBA

Jubilee G36

Zoom ID:

Passcode:

Mar 10

Inferring the presence (or absence) of consciousness in artificial systems
Wanja Wiese
Ruhr-University Bochum

Abstract: How should we assess which artificial systems could be conscious? Given uncertainty about the nature and distribution of consciousness, it is promising to look for indicators of consciousness that provide evidence for (or against) consciousness in artificial systems. A challenge is that there are hard cases in which the evidence pulls in different directions. In particular, cognitive and behavioural similarities between artificial and biological systems may speak for the hypothesis that a given artificial system is conscious; differences regarding the underlying mechanisms and substrates may speak against it.

In this talk, I introduce a taxonomy of indicators of consciousness and distinguish between approaches that manage uncertainty about indicators (reaching rational verdicts in the light of uncertainty) and approaches that seek to reduce uncertainty (improving our understanding of what counts as evidence). I argue that hard cases of possible artificial systems require that we reduce uncertainty before we can rationally infer the presence or absence of consciousness. Furthermore, I discuss ways in which a reduction of uncertainty may be achieved.

Preprint: https://philarchive.org/rec/WIEITP

Jubilee G36

Zoom ID:

Passcode:

Mar 17

TBA
Olivia Guest
Radboud University

Abstract: TBA

Online

Zoom ID:

Passcode:

Apr 14

TBA
Xabier Barandiaran
University of the Basque Country

Abstract: TBA

Jubilee G36

Zoom ID:

Passcode:

Contact COGS

For speaker suggestions, contact Simon Bowes.

For publicity and questions regarding the website, contact Simon Bowes.

Please mention COGS and COGS seminars to all potentially interested newcomers to the university.

A good way to keep informed about COGS Seminars is to become a member of COGS. Any member of the university may join COGS and the COGS mailing list. Please contact Simon Bowes if you would like to be added.

Follow us on Twitter: @SussexCOGS