AIMC 2025

Israel Neuman (Texas Southern University). Beat-Flow-Interplay (BFI): Interactive Tools for Responsive Beat-Flow Production

Xinyue Hu (KTH Royal Institute of Technology); Bob Sturm (KTH Royal Institute of Technology). Assessing the Alignment of Valence and Arousal between Text Prompts and the Resulting AI-Generated Music

Sebastian Murgul (Klangio GmbH); Michael Heizmann (Karlsruhe Institute of Technology). Exploring Procedural Data Generation for Automatic Acoustic Guitar Fingerpicking Transcription

Ge Liu (Simon Fraser University); Keon Ju Lee (Simon Fraser University); Miles Thorogood (University of British Columbia); Christopher Anderson (University of British Columbia); Philippe Pasquier (Simon Fraser University). AI-assisted Sound Design with Audio Metaphor (AuMe): An Evaluation with Novice Sound Designers

Vincenzo Madaghiele (University of Oslo); Stefano Fasciani (University of Oslo); Tejaswinee Kelkar (University of Oslo); Çağrı Erdem (University of Oslo). MAAL: a multi-agent autonomous live looper for improvised co-creation of musical structures

Elizabeth Wilson (Creative Computing Institute; University of the Arts, London); Anna Wszeborowska (Creative Computing Institute; University of the Arts, London); Nick Bryan-Kinns (Creative Computing Institute; University of the Arts, London). A Short Review of Responsible AI Music Generation

Yiren Zhao (KTH Royal Institute of Technology); Elin Kanhov (KTH Royal Institute of Technology); Bob L. T. Sturm (KTH Royal Institute of Technology). Methodological Considerations of Digital Ethnographic Studies in the AI Music Field

Nicolas Jonason (KTH Royal Institute of Technology); Luca Casini (KTH Royal Institute of Technology); Bob Sturm (KTH Royal Institute of Technology). SMART: Tuning a symbolic music generation system with an audio domain aesthetic reward

Benjamin Saldías (Pontificia Universidad Católica de Chile); Denis Parra (Pontificia Universidad Católica de Chile); Marcelo Mendoza (Pontificia Universidad Católica de Chile); Rodrigo Cadiz (Pontificia Universidad Católica de Chile). Generative Variational Autoencoder model of musical scales based on Slonimsky’s thesaurus

Agustín Macaya (Pontificia Universidad Católica de Chile); Denis Parra (Pontificia Universidad Católica de Chile); Rodrigo Cadiz (Pontificia Universidad Católica de Chile). Unsupervised generative chord representation learning and its effect on novelty-creativity and fidelity-standards

Matthew Peachey (Dalhousie University); Sageev Oore (Dalhousie University); Joseph Malloch (Dalhousie University). Evaluating Low-Dimensional Latent Representations as a Creative Interface for Digital Synthesizers

Andrea Martelloni (University of Sussex); Chris Kiefer (University of Sussex). Towards an Ecosystem of Instruments of Tunable Machine Learning

Olivier Anoufa (Centrale Lille); Alexandre D’Hooge (Université de Lille); Ken Déguernel (CNRS). Conditional Generation of Bass Guitar Tablature for Guitar Accompaniment in Western Popular Music

Ted Moore (Composer). Learned Navigation of StyleGAN3 Latent Space from Audio Descriptors

Riccardo Ancona (Università di Bologna). Ontologies of Sound in Neural Network Engineering

Fabian Ostermann (TU Dortmund); Jonas Kramer (TU Dortmund); Günter Rudolph (TU Dortmund). Using Large Language Models as Fitness Functions in Evolutionary Algorithms for Music Generation

Alexandre Saunier (LUCA School of Arts, KU Leuven); Federico Visi (Luleå University of Technology / Universität der Künste Berlin); Maurice Jones (Concordia University). Large Language Models to generate sonic behaviors: the case of Wilding AI in exploring creative co-agency

Guilherme Coelho (Technische Universität Berlin). The Artist Is Present: Traces of Artists Residing and Spawning in Text-to-Audio AI

Guilherme Coelho (Technische Universität Berlin). AI in Music and Sound: Pedagogical Reflections, Post-Structuralist Approaches, and Creative Outcomes in Seminar Practice

Balthazar Bujard (STMS IRCAM-CNRS-Sorbonne Université); Jérôme Nika (STMS IRCAM-CNRS-Sorbonne Université); Nicolas Obin (STMS IRCAM-CNRS-Sorbonne Université); Frédéric Bevilacqua (STMS IRCAM-CNRS-Sorbonne Université). Learning Relationships Between Separate Audio Tracks for Creative Applications

Eric Browne (MTU). The Shape of Surprise: Structured Uncertainty and Co-Creativity in AI Music Tools

Sri Hanuraga (UPH Conservatory of Music); Stevie J. Sutanto (UPH Conservatory of Music). Jazz in the Age of Algorithmic Alienation: AI as a Catalyst for Critical Improvisation

Błażej Kotowski (Universitat Pompeu Fabra); Nicholas Evans (Universitat Pompeu Fabra); Behzad Haki (Universitat Pompeu Fabra); Frederic Font (Universitat Pompeu Fabra); Sergi Jorda (Universitat Pompeu Fabra). Exploring Situated Stabilities of a Rhythm Generation System Through Variational Cross-Examination

Jérôme Nika (IRCAM); Diemo Schwarz (IRCAM); Augustin Müller (IRCAM). Crafting Musical Agents: Interactive Generation as a Catalyst for Artistic Formalization

Adam Štefunko (Charles University); Suhit Chiruthapudi (Johannes Kepler University); Jan Hajič (Charles University); Carlos Eduardo Cancino-Chacón (Johannes Kepler University). Basso Continuo Goes Digital: Collecting and Aligning a Symbolic Dataset of Continuo Performance

Ashley Noel-Hirst (Queen Mary, University of London); Charalampos Saitis (Queen Mary, University of London); Nick Bryan-Kinns (University of the Arts London). Sampling the Latent Space: Exploring the Creative Potential of Generative AI Through the Lens of Sample-Based Music Making

Dominic Thibault (Université de Montréal); David Piazza (Université de Montréal); Mimi Allard (Université de Montréal); Mathieu Arseneault (Université de Montréal); Gaël Moriceau (Université de Montréal). Navigating Abstract Timbre Spaces: Instrumental Affordances of Concatenative Synthesis in Mosaïque

Juan Parra Cancino (Orpheus Institute); Jonathan Impett (Orpheus Institute). Deviations in Forking Paths: Mapping Nonlinear Agencies in Live Electronic Improvisation

Mingyang Yao (University of California San Diego); Ke Chen (University of California San Diego). From Generality to Mastery: Composer-Style Symbolic Music Generation via Large-Scale Pre-training

Maximos Kaliakatsos-Papakostas (Hellenic Mediterranean University); Dimos Makris (Hellenic Mediterranean University); Konstantinos Soiledis (Hellenic Mediterranean University); Konstantinos-Theodoros Tsamis (Hellenic Mediterranean University); Vassilis Katsouros (Athena Research Centre); Emilios Cambouropoulos (Aristotle University of Thessaloniki). Incorporating Structure and Chord Constraints in Symbolic Transformer-based Melodic Harmonization

Fábio Maria Pereira (Goldsmiths University of London); Mathew Yee-King (Goldsmiths University of London); Jenn Kirby (University of Liverpool). Entangled Voices: AI, Intra-action, and the Body Multiple in Creative Practice

Evan O’Donnell (Goldsmiths, University of London); Patrick Hartono (Goldsmiths, University of London). A Practice-Based Methodology for Capturing Embodied Gesture-Rhythm Relations in Small Datasets

Gabriel Levine (Department of Applied Mathematics, Hunter College, The City University of New York); Drew Thurlow (Opening Ceremony Media); Jon Arfa (Independent Researcher); Sarah Ita Levitan (Hunter College). HuBERT Ensemble Models for Singing Voice Deepfake Detection

Zachary Cooper (Vrije Universiteit Amsterdam). She’s Lost Control Again: The Next Generation Of Copyright Challenges Over Next-Generation Interactive Music

Renaud Bougueng Tchemeube (Simon Fraser University); Philippe Pasquier (Simon Fraser University). Building Calliope and Apollo: Engineer-Designer Reflections on Computer-Assisted Composition Systems

Simon Colton; Louis Bradshaw (Queen Mary University of London); Berker Banar (Queen Mary University of London); Keshav Bhandari (Queen Mary University of London); Aikaterini Primenta (Queen Mary University of London). Composer as Constrainer in Model-Led Piano Sheet Music Generation

Tak-ai Lam (The Chinese University of Hong Kong); Wing-fung Lo (The Chinese University of Hong Kong); Chuck-jee Chau (The Chinese University of Hong Kong). Automated Pop-to-A Cappella Score Generation

Michael Oehler (Osnabrück University); Benedict Saurbier (Osnabrück University); Jan Schepmann (Osnabrück University). Assessing Expectancy Bias in Listener Evaluations of AI and Human Music Compositions

Anuradha Chopra (Singapore University of Technology and Design); Abhinaba Roy (Singapore University of Technology and Design); Dorien Herremans (Singapore University of Technology and Design). SonicVerse: Multi-Task Learning for Music Feature-Informed Captioning