Centre for Cognitive Science (COGS)

Autumn 2022

Tuesdays 16:00-17:30

Date | Seminar | Venue

Sept 6

The Grieving Brain: Mechanisms of Neuroscience that Inform Prolonged Grief Disorder
Mary Frances O’Connor
Arizona

Abstract: Held in collaboration with the School of Psychology, Sussex Partnership NHS Foundation Trust, and Brighton and Sussex Medical School. Professor O’Connor specialises in understanding grief from multiple perspectives, including neuroscience and brain-body interactions.

Using an integrative view of clinical psychology and cognitive neuroscience, Professor O’Connor describes how the brain is critical in understanding that a loved one has died, updating one’s view of the world while carrying the absence of this person, and learning what the loss means for one’s own self and future. Cognitive neuroscience can help clarify why grieving takes so long and is so painful. This view of bereavement adds to a history of studying the trajectory of grieving, and Professor O’Connor clarifies why older stage models of grief are no longer used. In addition, empirical research (including neuroscience) has helped to define prolonged grief disorder (previously called complicated grief) and to show that targeted psychotherapy is an effective treatment for this disabling condition.

Pevensey 1 1A6

Zoom ID: 961 8239 8475

Passcode: 210537

Sept 16

Predictive coding and neurodiversity: a robotic approach
Prof Yukie Nagai
Tokyo

Abstract: What neural mechanisms underlie human cognitive development? What causes individual diversity between typical and atypical development? My research group has been addressing these questions using a robotic approach. Inspired by the human brain, we design artificial neural networks based on predictive coding and investigate why and how modifications in predictive processing produce diverse cognitive behaviours.
This talk will first present an experiment comparing the development of representational drawing in robots and children. Our results demonstrate that aberrant precisions of prediction and sensation produce various immature drawings like those observed in young children. I will then discuss our new challenge of integrating interoceptive sensation into our computational studies. We aim to investigate how the internal state of the body, such as heartbeat and breathing, affects (and is affected by) sensorimotor experiences.

Biog: Professor Yukie Nagai investigates the neural mechanisms underlying social cognitive development by means of computational approaches. She designs neural network models that enable robots to acquire cognitive functions such as self-other cognition, estimation of others’ intentions and emotions, and altruism, based on the theory of predictive coding. The simulator reproducing atypical perception in autism spectrum disorder (ASD), developed by her group, has had a significant societal impact, as it enables people with and without ASD to better understand potential causes of social difficulties.
She was named one of the “30 women in robotics you need to know about” in 2019 and one of the “World’s 50 Most Renowned Women in Robotics” in 2020.
She has served as principal investigator of the JST CREST project “Cognitive Mirroring” since December 2016 and of the CREST project “Cognitive Feeling” since October 2021.
She has also been a member of the International Research Center for Neurointelligence at the University of Tokyo since 2019, and of the Next Generation Artificial Intelligence Research Center and the Forefront Physics and Mathematics Program to Drive Transformation at the University of Tokyo since 2020.

tbc

Zoom ID: 919 2706 6407

Passcode: 7936

Oct 4

Explanatory Pragmatism and Philosophy for the Science of Explainable AI
Rune Nyrup
Cambridge

Abstract: AI systems are often accused of being ‘opaque’, ‘uninterpretable’ or ‘unexplainable’. Explainable AI (XAI) is a subfield of AI research that seeks to develop tools for overcoming this challenge. To guide this field, philosophers and AI researchers alike have looked to the philosophy of explanation and understanding. In this paper, I examine the relation between philosophy and this new Science of Explainable AI. I argue that there is a gap between typical philosophical theories of explanation and understanding and the motivations underlying XAI research: such theories are either too abstract, or else too narrowly focused on specific scientific contexts, to address the varied ethical concerns that motivate XAI. I instead propose an alternative model for how philosophers can contribute to XAI, focused on articulating “mid-level” theories of explainability, i.e., theories which specify what kinds of understanding are important to preserve and promote in specific contexts involving AI-supported decision making. This programme, which I call philosophy for the science of XAI, is conceived as an inherently interdisciplinary endeavour, integrating normative, empirical and technical research on the nature and value of explanation and understanding.

Online

Zoom ID: 967 4191 3745

Passcode: 8936

Oct 18

POSTPONED

Doing without the concept of mind?
Joseph Gough
Sussex

Abstract: I suggest that the concept of mind appears to be systematically misleading and unhelpful in cognitive science, psychology, and psychiatry. I offer some argument for this claim, and a diagnosis of why it might hold, before considering how we ought to respond to this situation, and indeed, what exactly this situation amounts to. In particular, I run through several tentative arguments that might be based on the claim, which I think point towards some kind of ‘eliminativism’ – the position that the concept ought to be abandoned, perhaps entailing that there is no such thing as a mind.

Arts A005

Zoom ID: 915 5638 0352

Passcode: 4912

Nov 1

Neural Coding and Visual Perception – Back to Basics
Prof Daniel Osorio
Sussex

Abstract: Accounts of human perception of colour and pattern draw on various principles and mechanisms, from photoreceptor properties to inference based on experience and expectation. The wealth of accounts presents a problem because the explanatory power of any given principle is unclear, and it remains difficult to predict colour and lightness in natural conditions. Coding efficiency is an established principle in sensory neuroscience, but its relevance to perception - on both mechanistic and philosophical grounds - is unclear. Recently we have found that colour appearance, along with many familiar illusions, is indeed predicted by a simple, physiologically plausible model of efficient coding in early vision. What does this mean for understanding our experience of the visual world?

Arts A103

Zoom ID: 978 9420 8728

Passcode: 4838

Nov 15

Towards an evolutionary theory for the development of resilient and sustainable economic and financial systems
Dr Eduardo Viegas
Imperial College London

Abstract: The seminar is divided into three parts. The first part will look at Complexity Science from a historical and philosophical perspective, essentially arguing that the science requires a dialogical approach, as opposed to the more common Hegelian-type and positivist frameworks traditionally used in economics and finance. We will then move to the second part, where - inspired by real-world events as well as by Gould’s structure of evolutionary theory and the essence of Minsky’s Financial Instability Hypothesis - we will describe and propose the fundamental principles of a complexity theory for analysing the essential business dynamics of financial markets and economies. Lastly, we will move beyond this general framework in order to apply specific mathematical methods from Information Theory, Graph and Network Theory, and Evolutionary Dynamics, showing how these methods may capture some of the real-world market dynamics that run contrary to the existing economic and financial orthodoxy.

Arts A005

Zoom ID: 942 2603 5830

Passcode: 5018

Nov 29

POSTPONED

Assembled bias: Beyond transparent algorithmic bias
Dr Robyn Waller
Sussex

Abstract: In this talk, we'll make the case for the emergence of a novel kind of bias with the use of algorithmic decision-making systems. We will argue that the distinctive generative process of feature creation, characteristic of machine learning (ML), contorts feature parameters in ways that can lead to emerging feature spaces that encode novel algorithmic bias involving already marginalized groups. We term this bias assembled bias. Moreover, assembled biases are distinct from the much-discussed algorithmic bias, both in source (training data versus feature creation) and in content (mimics of extant societal bias versus reconfigured categories). As such, this problem is distinct from issues arising from bias-encoding training feature sets or proxy features. Assembled bias is not epistemically transparent in source or content. Hence, when these ML models are used as a basis for decision-making in social contexts, algorithmic fairness concerns are compounded.

Chichester 3-3R24

Zoom ID: 993 7915 9975

Passcode: 3705

Dec 6

Plant cognition: Lessons from cognitive neuroscience
Dr Jonny Lee
Murcia

Abstract: Growing evidence indicates that plants possess abilities associated with cognition, such as decision-making, learning and memory, and achieve these (in part) via mechanisms shared with animal nervous systems. However, whether or in what sense plants count as cognitive remains controversial. For example, there are reasons to be sceptical that plants perform computation in a robust sense, whilst computation remains key to orthodox explanations of cognition. This talk explores how contemporary cognitive neuroscience, in contrast to classical cognitive science, can help solve puzzles surrounding so-called plant cognition. I focus on (1) whether the ‘cognitive neuroscience revolution’ can transform our thinking about the cognitive status of plants, and, more practically, (2) how cognitive neuroscience and the study of plant cognition, especially ‘plant neurobiology’, inform each other. I suggest that cognitive neuroscience can inspire a shift away from abstract and dichotomous questions concerning whether plants are truly cognitive, and toward precise, piecemeal questions pitched at different organisational levels. Moreover, dialogue between animal and plant studies promises to be mutually beneficial, for instance around anaesthetics research.

Arts A005

Zoom ID: 937 1360 0785

Passcode: 3492

Contact COGS

For suggestions for speakers, contact Simon Bowes.

For publicity and questions regarding the website, contact Simon Bowes.

Please mention COGS and COGS seminars to all potentially interested newcomers to the university.

A good way to keep informed about COGS Seminars is to be a member of COGS. Any member of the university may join COGS and the COGS mailing list. Please contact Simon Bowes if you would like to be added.

Follow us on Twitter: @SussexCOGS