Can GPTs be more than Stochastic Parrots? Informationally Grounded Utterances
Tuesday 21 October, 16:00 until 17:30
University of Sussex Campus: Fulton Building, FUL-104
Speaker: Prof Ron Chrisley
Part of the series: COGS Research Seminars

Despite the impressive ability of large language models (LLMs) to generate outputs that are often correct, informative, and apparently insightful or creative, there remains a persistent suspicion that these systems do not truly understand what is said to them—or what they themselves say. LLMs, it is often claimed, are merely stochastic parrots: they don’t know what they are talking about. One response is to attempt to modify LLMs so as to endow them with genuine linguistic understanding. I argue that this would succeed only if such modifications made LLMs into full agents and subjects. For those of us sceptical about the near-term likelihood of that, I suggest a different path: rather than aiming to make LLMs more like human speakers, we can examine the (normal) semantic properties of the text itself and ask whether these are supported by the causal and informational processes that generate it. Specifically, what informational relations are required for reference to particulars—objects, events—and does current LLM technology and practice satisfy these requirements? I argue that in many cases it does not. In response, I propose that we should (1) adapt our use of LLMs to match their actual linguistic capacities, and/or (2) modify and augment LLM technology so that the informational relations underlying their outputs more closely match those outputs’ semantic ambitions—without requiring the creation of robust artificial subjects.
Zoom Meeting ID: 864 6576 1259
Passcode: 527588
By: Simon Bowes
Last updated: Thursday, 16 October 2025