AISB 2026 Symposium on the Ethics of Gen-AI Assisted Authoring
Posted on behalf of: Sussex Digital Humanities Lab (SHL Digital)
Last updated: Friday, 16 January 2026
Members of SHL Digital are organising a symposium on the Ethics of Generative-AI Assisted Authoring at AISB 2026, University of Sussex, 1–2 July 2026. The Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB) is the largest Artificial Intelligence society in the United Kingdom.
Whilst there is no shortage of discussion of AI ethics, the recent widespread uptake of Gen-AI tools to support authoring raises new questions (Formosa et al., 2025; Murray, 2024). The primarily deontological frameworks that attempt to provide general principles for AI ethics tend to be aimed at decision-making contexts, and to redefine ethics in terms of checklists for privacy protection, safety, accountability and fairness (Hagendorff, 2020).
This symposium will examine the ethics of Gen-AI assisted authoring in education, work and culture.
Within authoring, we include the production of written texts, multimodal texts and software. It is easy to think of case studies of Gen-AI assisted authoring that most would find ethical (such as a dyslexic writer using a Gen-AI tool to correct grammatical errors and improve the clarity of a draft) and that most would find unethical (such as a student submitting code generated from a prompt based on an assignment brief for assessed coursework).
We invite discussions and explorations of how various factors influence perceptions of ethicality, including the purpose and context of the authoring, the extent to which the output communicates the author’s own ideas, the impact on the author’s own cognitive abilities, the impact on the author’s wellbeing at work, whether the original task was appropriate, the environmental and resource costs of its use, and the specific models and servers used for generation.
We invite contributions that explore the extent to which earlier innovations in authoring technology (such as typewriters and word processors, or high-level programming languages) are similar to the current rapidly evolving situation with Gen-AI assisted authoring, and the extent to which they differ.
Contributions reflecting on the following topics and questions are very welcome:
- Examining Gen-AI assisted authorship through the lens of ethics of care, rights-based ethics and other contemporary ethical frameworks (see Villegas-Galaviz and Martin 2024 for an examination of AI decision-making from an ethics of care perspective).
- The implications of Gen-AI usage for models and definitions of writing and authoring, and its status as a cognitive and social activity.
- How opportunities for self-expression and creativity are being impacted by Gen-AI assisted authoring; for instance, with regard to emergent paradigms such as ‘vibe coding’.
- The potential for improving inclusivity, accessibility and equitability in education and workplaces using Gen-AI (see Glazko et al. 2025).
- The changing nature of work and education – what cognitive abilities and skills will be important in the mid-21st century, and how should we teach them?
- Risks and harms of Gen-AI in authoring, including environmental impacts, perpetuation of biases, stagnation of ideas / loss of transformative ideas (see Lee et al. 2025).
- Frameworks for measuring and balancing potential benefits and harms, mitigating risks, and assessing impacts on wellbeing at work.
- How Gen-AI interfaces hide and reveal information that is important for ethical usage (such as computational and resource cost of queries, models and servers used), potentially increasing moral distance.
- How can Gen-AI assisted authoring systems be designed to support reflection and critical thinking rather than instant resolution?
- Legal issues around Gen-AI authoring, including intellectual property, copyright and liability.
- Emerging governance frameworks and technical solutions for copyright and IP in the AI era, especially their ethical and philosophical underpinnings.
- Imagining alternative futures for the intersection of authorship and AI.
- Gen-AI and authorship seen through theoretical lenses such as disability studies, postcolonial studies, queer and gender studies, political ecology, critical political economy, etc.
Submission information: We invite extended abstracts of up to 1500 words, to be presented during the symposium through panels and short talks (to be curated by the programme committee after reading submissions). Visit the Symposium webpage for more information.
Deadline: 27 February 2026
Symposium format: 1-day symposium
Submission link: https://easychair.org/conferences/?conf=egaaaaisb26
Programme Committee
The symposium PC is led by members of the Sussex Digital Humanities Lab, in collaboration with other University of Sussex colleagues from across the disciplines.
- Kate Howland, SHL Digital / Informatics, University of Sussex
- Maria Teresa Llano Rodriguez, SHL Digital / Informatics, University of Sussex
- Sharon Webb, SHL Digital / History, University of Sussex
- Jo Walton, SHL Digital / Media and Film, University of Sussex
- Emma Russell, Business School, University of Sussex
- Becky Faith, Institute of Development Studies at the University of Sussex
- Robyn Repko Waller, Philosophy, University of Sussex
- Salvatore Fasciana, Law, University of Sussex
References
Formosa, P., Bankins, S., Matulionyte, R., & Ghasemi, O. (2025). Can ChatGPT be an author? Generative AI creative writing assistance and perceptions of authorship, creatorship, responsibility, and disclosure. AI & Society, 40(5), 3405-3417. https://doi.org/10.1007/s00146-024-02081-0
Glazko, K. S., Huh, M., Johnson, J., Pavel, A., & Mankoff, J. (2025). Generative AI and Accessibility Workshop: Surfacing Opportunities and Risks. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1-6). https://doi.org/10.1145/3706599.3706734
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99-120. https://doi.org/10.1007/s11023-020-09517-8
Lee, H. P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1-22). https://doi.org/10.1145/3706598.3713778
Murray, M. D. (2024). Tools do not create: human authorship in the use of generative artificial intelligence. Case Western Reserve Journal of Law, Technology & the Internet, 15, 76.
Villegas-Galaviz, C., & Martin, K. (2024). Moral distance, AI, and the ethics of care. AI & Society, 39, 1695–1706. https://doi.org/10.1007/s00146-023-01642-z

