News
Call for abstracts: workshop on AI Consciousness and Ethics at AISB-2026, University of Sussex, UK, 1 to 2 July 2026
By: Aleks Kossowska
Last updated: Friday, 9 January 2026
We invite you to present your work at the workshop on AI CONSCIOUSNESS AND ETHICS, which will be part of the AISB conference to be held at the University of Sussex, UK, 1 to 2 July 2026.
The conference is organised by the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB): http://www.aisb.org.uk/
Symposium chair: Steve Torrance
Submissions and any questions should be sent to Steve Torrance via sbtorrance@outlook.com
OVERVIEW:
Are we likely soon to recognise a new constituency of non-biological ethical beings – AI agents which are either moral receivers (that have ethical rights) or moral doers (that have ethical duties)? How does the development of AI consciousness (AIC) fit in with such a prospect? How might such ethically significant agents be implemented? How might this change our view of human existence, of living creatures, or of our technologies?
This one-day AISB-26 workshop invites contributions covering the many ethical issues that arise from AI Consciousness research – in particular the potential for such research to produce systems that have their own moral standing (either as agents or as patients), or that give an illusory, but highly persuasive, appearance of having such status.
Machine Consciousness[1] and Machine Ethics[2] have both been active AI subfields for over two decades. Recently, many have speculated that AI agents with the conversational abilities of LLMs may, with some further development, come to be seen as conferring moral status on the systems on which they operate, on the grounds that their performance evidences a potential to experience suffering and other states of conscious awareness. Work in AIC development goes hand in hand with work in Consciousness Science more broadly; various theories from that domain have been considered as possible groundings for accounts of AIC.[3] Principles have been proposed for ethically responsible research into AI-based consciousness, designed to guide researchers in avoiding vulnerability or victimisation in any putative AIC agents that may be created.[4] Still others have examined the potential effects on human society and experience of a spread of strong but mistaken belief in artificial sentience in the systems we interact with on social media[5], or have proposed a moratorium on AIC research while we properly assess its ethical and societal ramifications.[6]
Questions arise of many different sorts. Some concern the conceptual or theoretical foundations of notions such as consciousness, suffering, needs, etc., as well as foundational questions concerning the ethical notions implicated in the debate. Other questions cover the social, legal and economic impacts and policy issues raised by such research – for example, in the context of its rapid proliferation (and potential for marketisation), or of current and future mass consumer reception of “conscious-seeming” beings. There are also many issues in psychology, biology, neuroscience, etc. concerning the nature of consciousness itself – for example, debates between those who see biological constitution as essential to consciousness and those who favour functional accounts of consciousness that are open to digital implementation: positions in such debates may have far-reaching consequences for how seriously we should take the idea of attributing moral claims or duties to AI agents.
PROGRAMME COMMITTEE FOR THE WORKSHOP
Prof Steve Torrance (Sussex) – Symposium Chair
Prof Antonio Chella (Palermo)
Dr Rob Clowes (Lisbon, Bochum)
Prof Mark Coeckelbergh (Vienna)
Dr John Dorsch (Prague)
Dr Alexei Grinbaum (CEA, Paris)
Prof Anil Seth (Sussex)
Dr Blay Whitby (Sussex)
SUBMISSION DETAILS AND KEY DATES
Potential contributors should submit a 1000-word extended abstract by 28 February 2026. Please ALSO provide a 300-word abstract – this will be used in the conference guide given to attendees, so that they can choose which parallel sessions to attend.
Submissions in the form of draft papers (max 8 pages, including notes and references) are also welcomed, but please supply a 1000-word extended abstract, and a 300-word short abstract.
Authors of accepted submissions will be informed by 21 March 2026 whether they have been invited to participate as a speaker or as a poster-presenter.
Full papers submitted to AISB workshops are normally published as part of the AISB-26 Proceedings. Completed camera-ready copy (using the AISB format) should be sent by 28 April 2026; papers not sent by that date may not be included in the Proceedings. Papers published in the Proceedings will not have a DOI.
Authors of accepted papers (one author per co-authored paper) will be expected to give an in-person presentation at the workshop; remote presentations will not be accepted in the absence of a presenter at the workshop. Presentations (including set-up and Q&A) will normally last 30 minutes, to allow conference participants to move between different symposia. Exceptionally, presentation time may be extended to one hour.
Submissions and any questions should be sent to Steve Torrance via sbtorrance@outlook.com
A submission management platform (e.g. EasyChair) may be announced soon.
USE OF GEN-AI TOOLS IN PREPARATION
In view of the potential problems concerning original authorship raised by AI paper-generation tools, authors of submissions may be asked to include a brief career summary and a declaration of how, if at all, AI resources have been employed in producing the material submitted.
INDICATIVE TOPICS OF INTEREST
(This is a non-exhaustive list. Submissions should specify the topic(s) being addressed.)
1. FOUNDATIONS
- What are/aren’t ethically relevant aspects of consciousness in the AI domain?
- Is ethical status in the AI domain an “objective”, or rationally grounded, matter?
- Is “consciousness” an inherently ethical category in relation to AI?
- AC (artificial consciousness) and ethical patiency – machines with moral welfare claims
- AC and ethical agency – when could machines bear responsibility, accountability or blame?
- Ethical agency and patiency as mutually-entailing or independent properties?
- AC and the “expanding ethical circle” – AC ethics versus eco-ethics, and other non-anthropic ethical schemes.
- AI agents without consciousness as potential bearers of moral agency or patiency, or other ethically normative properties
- Consciousness/sentience versus alternative criteria for ethical concern (e.g. precarity, autopoiesis, deep cognitive properties, mental models, specific types of functional or neurocomputational organization, etc.)
- AC and suffering – ethical hazards of false negatives and false positives.
- Ethical consequences of Illusionism about consciousness
2. IMPACTS, POLICY
- How imminent is the emergence of an AC suffering crisis?
- Could consciousness in AI agents make them more (or less) trustworthy, efficient or operationally transparent?
- Would super-intelligent AI be likely to lead to super-conscious AI? Should that imply super-rights?
- The ethical, social and legal responsibilities of AI consciousness researchers (present or future)
- Is the objectivity of AC research skewed by the mega-corporate context, marketisation, ROI considerations, etc.?
- Regional/international regulatory frameworks for AC R&D – in the UK, EU, US or other global contexts
- Massive and rapid proliferation: The “ethics of scale” in relation to AI and consciousness
- The ethical, social and legal impacts of seeming-AC (seeming-and-actual vs seeming-and-non-actual)
- ACs/AIs as property/wealth owners; bearers of citizenship rights, criminal charges, etc.
- Should there be a moratorium on creating AC/synthetic phenomenology?
3. IMPLEMENTATION: BIOCENTRISM vs SUBSTRATE INDEPENDENCE
- Biological naturalism vs computational functionalism in the context of AIC ethics.
- AI super-intelligence and (ethically relevant) consciousness – the “AC drop-out” thesis
- AC, ethics and embodied / humanoid robotics
- Ethics of AC with non-standard implementations: e.g. brain organoids, xenobots, synthetic biology, virtual conscious beings, matrix beings, etc.
- Onboard vs distributed AC – ethical ramifications
The planned length of the symposium is currently one full day. However, an extra half-day may be added if there is a large number of exceptional-quality submissions.
[1] Vintage collections include: O. Holland (ed) (2003). Special issue on Machine Consciousness, Journal of Consciousness Studies, 10 (4-5); S. Torrance, R. Clowes, R. Chrisley (eds) (2007). Special issue on Machine Consciousness, Embodiment & Imagination. Journal of Consciousness Studies 14 (7).
[2] Vintage collections include: S. Torrance (ed) (2008). Special Issue on Ethics and Artificial Agents, AI & Society 22(4), April 2008; M. Anderson, S. L. Anderson (eds) (2011). Machine Ethics, Cambridge University Press.
[3] P. Butlin, R. Long, et al. (2023). “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness.” arXiv:2308.08708.
[4] P. Butlin, T. Lappas (2025). "Principles for responsible AI consciousness research." Journal of Artificial Intelligence Research 82.
[5] M. Suleyman (2025). “We must build AI for people, not to be a person: Seemingly conscious AI is coming.” https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming
[6] T. Metzinger, (2021). “Artificial suffering: An argument for a global moratorium on synthetic phenomenology.” Journal of Artificial Intelligence and Consciousness, 8(1), 43-66.