COGS Seminars provide a forum for internationally recognised researchers from all corners of cognitive science research to present and discuss their latest findings. All are welcome to attend.
Autumn 2025
Tuesdays 16:00-17:30
| Date | Seminar | Venue |
|---|---|---|
| Oct 7 | How Words Help Us Think. Abstract: There is general agreement that a capacity to act for reasons is a mark of intentionality. Views differ widely, however, on how ‘acting for reasons’ unpacks. According to the cognitivist tradition, in which individuals are the central units of investigation, intentional agents make sense of their world via internal representations variously construed as neural, mental, or, on some reductive accounts, both. On these views, to act for a reason is to be responsive to some representation of how the world is, was, or could be. How behaviour is guided by explicit use of representations—e.g., deliberation between whether to pick answer A or B on a multiple-choice exam—is taken to be continuous with the way that implicitly representational processes such as perception guide behaviour. For 4E theorists, in contrast, intentional agents are not individuals so much as they are continually shifting agent-situation couplings to and from which responses develop, often reciprocally. Intentional agents learn to cope within their world as they move and act within it; their needs and wants develop in accordance with their capacity to skillfully “operate” within ongoing situation landscapes. To ‘act for a reason’ here is to be agentive and responsive in a codeveloping agent-situation. In the context of a comprehensive account of cognition, both views offer important insights: the representational approach brings attention to the cognitive power of explicitly deliberative activity, while 4E views explain how operative intentionality grounds actions. In How Words Help Us Think, the book on which this talk is based, these insights are merged. Representations do have a powerful role to play in deliberative processes, but not as internal structures that agents “recur” on; rather, they are external tools for spatiotemporally extending the ongoing situations in which intentional agents are always embedded. On this view, a deliberative capacity, what I will be calling “representational intentionality,” is a strongly scaffolded skill rather than a fundamental capacity: while neural activity plays a critical role here, the development of representational intentionality requires, in addition, a certain kind of environment—one in which there are language practices—and a particular skill with it. My task in this book is to give an account of how representational intentionality develops when the requisite endogenous and exogenous factors are present. In this talk, I will present the arc of the view along with some of the key arguments that support it. | Arts A103 |
| Oct 21 | Can GPTs be more than Stochastic Parrots? Informationally Grounded Utterances. Abstract: Despite the impressive ability of large language models (LLMs) to generate outputs that are often correct, informative, and apparently insightful or creative, there remains a persistent suspicion that these systems do not truly understand what is said to them—or what they themselves say. LLMs, it is often claimed, are merely stochastic parrots: they don’t know what they are talking about. One response is to attempt to modify LLMs so as to endow them with genuine linguistic understanding. I argue that this would only succeed if such modifications made LLMs into full agents and subjects. For those of us sceptical about the near-term likelihood of that, I suggest a different path: rather than aiming to make LLMs more like human speakers, we can examine the (normal) semantic properties of the text itself and ask whether these are supported by the causal and informational processes that generate it. Specifically, what informational relations are required for reference to particulars—objects, events—and does current LLM technology and practice satisfy these requirements? I argue that in many cases it does not. In response, I propose that we should (1) adapt our use of LLMs to match their actual linguistic capacities, and/or (2) modify and augment LLM technology so that the informational relations underlying their outputs more closely match those outputs’ semantic ambitions—without requiring the creation of robust artificial subjects. | Fulton Building FUL-104 |
| Nov 18 | A NeuroEcological Theory of Organism-Environment Cognitive Systems. Abstract: Gibsonian ecological psychology exerted a major influence on radical embodied conceptions of cognition. It offered an alternative to behaviorism’s fixation on the individual’s overt actions and cognitivism’s solipsistic flavor. Instead, the target of inquiry is the organism-environment system and affordances. A long-standing criticism of this approach is the apparent absence of even a sketch of the contributions made by the brain, which has led to the caricatured Gibsonian creature as being filled with “foam rubber” and “wonder tissue.” Here, I provide a path forward to understanding the brain’s contribution to affordances: the NeuroEcological Nexus Theory (NExT). NExT hypothesizes that affordances emerge via the coordination of low-dimensional features of environmental (ecological) information, body synergies, and neural population manifolds. This approach is motivated by recent neuroscience research demonstrating that neural population dynamics are preserved in low-dimensional manifolds within and across animals performing similar actions. Accordingly, it is hypothesized that neural population dynamics map to particular affordance events with regularity. Taken together, the theory of affordances successfully appealed to for decades by Gibsonians is complemented by methods from manifold theory. In this way, ecological psychologists and other proponents of radical embodiment will no longer be accused of believing in creatures filled with foam rubber and wonder tissue. Biography: Dr. Luis (“Louie”) H. Favela is an Associate Professor in the Department of History and Philosophy of Science and Medicine & Cognitive Science Program at Indiana University Bloomington (IUB). Prior to IUB, he was an Associate Professor in the Department of Philosophy & Cognitive Sciences Program at the University of Central Florida. He has been a fellow at the Research Corporation for Science Advancement, University of Pittsburgh’s Center for Philosophy of Science, and Duke University’s Summer Seminars in Neuroscience and Philosophy. He earned graduate degrees in Philosophy (Life Sciences Track) and Experimental Psychology at the University of Cincinnati. His research is interdisciplinary, situated at the intersections of cognitive science, experimental psychology, and the history and philosophy of mind and science. His recent book is The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment (Routledge, 2024). | Online (Passcode: 406098) |
| Nov 25 | Consciousness in LLMs? Why MOI Says the INNER WORLD Is the Wrong Answer. Abstract: The recent enthusiasm surrounding consciousness in LLMs is driven by a familiar—yet rarely questioned—assumption, namely that consciousness is an inner, private, representational world produced by the brain (or by an artificial system) and accessible only from a first-person perspective. This talk challenges that framework at its roots. I present the Mind-Object Identity (MOI) theory, an ontological account according to which consciousness is not an internal model, a computational state, or a phenomenal medium. Instead, consciousness is the external object itself, existing relative to the causal circumstances offered by the physical structure of the body. From this standpoint, the so-called “hard problem” evaporates, and the very idea of engineering artificial consciousness by reproducing inner representations becomes conceptually misguided. Consciousness, both in humans and in LLMs, is a pseudoproblem. I will apply MOI to large language models (LLMs) and argue that current debates about emergent subjectivity, inner monologue, self-modeling, and synthetic phenomenology rest on an outdated metaphysics of appearance. LLMs do not instantiate consciousness as long as they lack the physical, world-involving identity relations that constitute experience. At the same time, MOI offers a constructive framework for understanding what AI systems are doing, why they appear increasingly agent-like, and why misattributions of consciousness have become so compelling. By reframing consciousness not as an inner theater but as a relation between real objects, MOI dissolves the conceptual space that makes artificial consciousness appear both possible and problematic. The talk proposes a shift from internalist ontology to a relational, world-based account of mind, and shows how this shift clarifies the capabilities and limits of current and future AI systems. Bio: Riccardo Manzotti is a philosopher of mind and cognitive science at IULM University, Milan. He is internationally known for developing the Mind-Object Identity (MOI) theory, a radical externalist account of consciousness that rejects internal representations and reframes experience as the existence of external objects relative to the body. He has published extensively on perception, embodiment, intentionality, artificial intelligence, and the metaphysics of mind (The Spread Mind, OR Books, 2019). His recent work applies MOI to contemporary debates in AI and machine consciousness, arguing for a world-based rather than subject-based ontology of experience. Beyond academia, he works across disciplines—neuroscience, art, technology, and media—to promote a naturalistic yet non-reductionist understanding of the mind. | Bramber House 243 (Passcode: 457077) |
| Dec 2 | Open-ended cultural evolution. Abstract: TBA | Fulton 101 (Passcode: 518650) |
Contact COGS
For suggestions for speakers, contact Simon Bowes.
For publicity and questions regarding the website, contact Simon Bowes.
Please mention COGS and COGS seminars to all potentially interested newcomers to the university.
A good way to keep informed about COGS Seminars is to be a member of COGS. Any member of the university may join COGS and the COGS mailing list. Please contact Simon Bowes if you would like to be added.
Follow us on Twitter: @SussexCOGS

