Events diary
Tracing Archives with Voice, Memory and Machine Learning
Friday 12 June, 15:00 to 17:00
University of Sussex Campus: The Meeting House, Falmer BN1 9RH
Jonathan Chaim Reus is an artist-researcher working at the intersection of experimental music, embodied computation, and critical AI practice. His work explores the human-technology relationship through live performance, self-made instruments, and collaborative upstream technical intervention. He is an Affiliate Researcher at the Sussex Digital Humanities Lab, a researcher at the EMUTE Lab at the University of Sussex, and an Associate Researcher at the Intelligent Instruments Lab in Reykjavik. Recent projects include In Search of Good Ancestors / Ahnen in Arbeit, a year-long generative radio broadcast commissioned by CTM Festival, Deutschlandfunk Kultur, and ORF Kunstradio, and Dadasets, an EU S+T+ARTS-funded investigation into convivial voice data ecologies. His performance cycle Bla Blavatar vs. Jaap Blonk received an Award of Distinction in Digital Musics and Sound Art at the 2025 Prix Ars Electronica. He is a co-founder of the instrument inventors initiative [iii] and Netherlands Coding Live, and was the recipient of a W.J. Fulbright Fellowship for his research into new electronic music instruments at the former STEIM [Studio for Electro-Instrumental Music].
Info about the work
AI voice systems have proliferated rapidly: they can clone a voice from seconds of audio and generate speech signals indistinguishable from recordings. But how do these systems remember the voices they learn from? This is not merely a rote technical question but a complex archival one. Rather than storing recordings as a traditional archive does, a machine learning model operates through a kind of intermediary machinic remembering, compressing human voices and textual materials into statistical patterns. These probabilistic traces constitute a unique kind of middle-memory, fixed in underlying pattern yet unfixed in retrieval.
Artist-researcher Jonathan Reus has worked for the past eight years with data-driven voice technologies, creating numerous works for the international stage that deal with the poetics and politics of the data-mediated voice: voice reframed through the mythology of data. His artistic work proposes thinking of machine learning as a fragile constitution of voice-like phenomena, whose outputs are not replicas of the past but new interpretations of it, enacted in the present. Taking an archival approach to machine learning and voice, we might ask what it means to treat the datafied voice as a relationship to be maintained and an object of care. The event includes a live performance and an archival listening session, exploring how voice data, gift economies, and machine learning intersect in practices of consent, care, and intergenerational memory.
Link to NIME paper “Re-animation Instruments” (in review): Re-Animating the Archive: Performing a Machine Learning System as Living Memory
Link to Frontiers paper “The Data-driven Voice-Body in Performance”:
Frontiers | The data-driven voice-body in performance: AI voices as materials, mediators, and gifts
This event is organised by the Making & Using Social Archives Network and the Sussex Digital Humanities Lab
By: Kate Malone
Last updated: Wednesday, 13 May 2026