Dear AIMC Community,
We are looking for two PhD candidates, preferably with a music background and an interest in research-creation, to join us in the CICM (Research Centre in Informatics and Musical Creation) group at Université Paris 8.
The enrollment process for the PhD program starts in November, so candidates should ideally have already obtained (or be close to obtaining) their Master's degree, even though the employment start date is set to January 2026.
Please feel free to disseminate to potential candidates.
Many thanks!
Emma Frid
- - -
The ERC Advanced Grant G3S project “Generative Spatial Synthesis of Sound and Music,” combining generative machine learning and spatial audio (see description below), began on September 1, 2025, at Paris 8 University (Paris, France).
We are pleased to announce the opening of two three-year doctoral contracts within this framework:
Presentation of the ERC Advanced Grant G3S “Generative Spatial Synthesis of Sound and Music”
The ERC Advanced Grant G3S (Generative Spatial Synthesis of Sound and Music) project takes an original approach to artificial intelligence, combining it with spatial audio. We will be developing frugal, local, open-source AI to generate sound spaces. Until now, spatialization has often been neglected in the application of AI to music and sound, even though 3D spatial audio is undergoing significant industrial development and standardization.
The ERC G3S project, designed by and for musicians and sound creators, will explore generative approaches to spatialization based on machine learning. In doing so, it will seek to break away from the standards that shape how we create and perceive spatiality, opening the field to the full diversity of spatial audio practices and processes.
As part of an international collaborative research and creation network, we will assemble a large, varied set of musical pieces with electronics whose sound spaces have been carefully crafted: for each piece, we will collect the sound engines (represented as signal operations), multichannel recordings, and semantic descriptions of spatiality. We will train low-dimensional learning models, combining existing neural techniques. These models will enable us to generate sound spaces and explore them through user requests that are prompt-based (describing the desired space), functional (describing the desired processing), or based on imitation of an audio result. The generated spatial engines can be exported as interoperable plugins.
Over five years, the G3S project will implement and articulate four major research objectives:
We will produce open-source environments that are compatible with audio standards and shared with the computer music community; we will evaluate them through commissions to composers, new creations, workshops, and concerts.