We are looking for 2 PhD candidates, preferably with a music background
and interest in research-creation, to join us in the CICM (Research
Centre in Informatics and Musical Creation) group at Université Paris 8.
In particular, the second PhD position
(https://euraxess.ec.europa.eu/jobs/372624)
might be of interest to the Auditory community, as it will focus on
perceptual aspects of sound spatiality through the development of a
thesaurus, i.e., a structured directory of descriptive terms, as well as
quantitative descriptors related to the spatial qualities of sound.
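For readers less familiar with quantitative spatial descriptors, here is a
minimal, purely illustrative Python/NumPy sketch (not taken from the project)
of two classic binaural measures: the broadband interaural level difference
(ILD), a lateralization cue, and the interaural cross-correlation coefficient
(IACC), often related to perceived source width. The thesis itself would
define descriptors and terms suited to the project's corpus.

import numpy as np

def ild_db(left, right, eps=1e-12):
    # Broadband interaural level difference in dB (positive = left louder).
    rms_l = np.sqrt(np.mean(left ** 2) + eps)
    rms_r = np.sqrt(np.mean(right ** 2) + eps)
    return 20.0 * np.log10(rms_l / rms_r)

def iacc(left, right):
    # Maximum of the normalized interaural cross-correlation (0 to 1).
    l, r = left - left.mean(), right - right.mean()
    norm = np.sqrt(np.sum(l ** 2) * np.sum(r ** 2))
    if norm == 0:
        return 0.0
    return float(np.max(np.abs(np.correlate(l, r, "full"))) / norm)

if __name__ == "__main__":
    fs = 48_000
    t = np.arange(fs // 10) / fs                            # 100 ms test tone
    src = np.sin(2 * np.pi * 440 * t)
    left = src                                              # source lateralized left:
    right = 0.5 * np.concatenate([np.zeros(24), src[:-24]]) # quieter, later on the right
    print(f"ILD  = {ild_db(left, right):.1f} dB")           # about +6 dB
    print(f"IACC = {iacc(left, right):.2f}")                # close to 1 for a coherent source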
The enrollment process for the PhD program starts in November, so
ideally candidates should have already obtained (or be close to
obtaining) their Master's degree, even though the employment start date is
set to January 2026.
Please feel free to disseminate to potential candidates.
Many thanks!
Emma Frid
- - -
The ERC Advanced Grant G3S project “Generative Spatial Synthesis of
Sound and Music,” combining generative machine learning and spatial
audio (see description below), began on September 1, 2025, at Paris 8
University (Paris, France).
We are pleased to announce the opening of two three-year doctoral
contracts within this framework:
* A doctoral thesis on the topic of “Unified operational
representation of spatial audio engines”: Euraxess announcement
372616:
https://euraxess.ec.europa.eu/jobs/372616
* A doctoral thesis on the topic of “Descriptions of sound
spatiality”: Euraxess announcement 372624:
https://euraxess.ec.europa.eu/jobs/372624
*Presentation of the ERC Advanced Grant G3S ‘Generative Spatial
Synthesis of Sound and Music’*
The ERC Advanced Grant G3S (Generative Spatial Synthesis of Sound and
Music) project takes an original approach to artificial intelligence,
combining it with spatial audio. We will be developing frugal, local,
open-source AI to generate sound spaces. Until now, spatialization has
often been neglected in the application of AI to music and sound, even
though 3D spatial audio is undergoing significant industrial development
and standardization.
The ERC G3S project, designed by and for musicians and sound creators,
will explore generative approaches to spatialization based on machine
learning. In doing so, it will seek to break away from the standards
that shape the way we create and perceive spatiality, opening up to the
full diversity of spatial audio practices and processes.
As part of an international collaborative research and creation network,
we will collect a large and varied corpus of musical works with electronics
whose sound spaces have been carefully crafted: we will gather the sound
engines (represented as signal operations),
multichannel recordings, and semantic descriptions of spatiality. We
will train low-dimensional learning models, combining existing neural
techniques. These models will enable us to generate sound spaces and
explore them through user requests that are either prompt-based
(describing the desired space), functional (describing the desired
processing), or imitation-based (matching a reference audio result). The generated spatial
engines can be exported as interoperable plugins.
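To make "sound engines represented as signal operations" concrete, here is a
deliberately minimal, hypothetical Python/NumPy sketch: a constant-power
stereo panner as the simplest imaginable spatialization engine, accompanied
by a small parameter description of the kind that could sit alongside the
recordings and semantic annotations in a corpus entry. The field names and
the representation are illustrative only; a unified operational
representation would have to generalize well beyond this to cover VBAP,
Ambisonics, wave field synthesis, and custom engines.

import json
import numpy as np

def constant_power_pan(mono, azimuth):
    # Simplest possible spatialization "engine": constant-power stereo panning.
    # azimuth in [-1, 1]: -1 = full left, 0 = centre, +1 = full right.
    theta = (azimuth + 1.0) * np.pi / 4.0              # map [-1, 1] to [0, pi/2]
    gains = np.array([np.cos(theta), np.sin(theta)])   # cos^2 + sin^2 = 1 (constant power)
    return gains[:, None] * mono[None, :]              # output shape: (2, n_samples)

# A minimal, engine-agnostic parameter description (field names are hypothetical)
# that could accompany the multichannel audio and its semantic annotations.
engine_description = {
    "engine": "constant_power_pan",
    "parameters": {"azimuth": 0.5},
    "channels_out": 2,
}

if __name__ == "__main__":
    fs = 48_000
    mono = np.sin(2 * np.pi * 220 * np.arange(fs // 10) / fs)
    stereo = constant_power_pan(mono, **engine_description["parameters"])
    print(json.dumps(engine_description))
    print("output shape:", stereo.shape)               # (2, 4800)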
Over five years, the G3S project will implement and articulate four major
research objectives:
1. the design of a unified operational representation of existing
spatialization engines
2. the proposal of a thesaurus and quantitative measures to describe
the spatiality of sound
   3. the generation of sound spaces using machine learning
4. the design of user interfaces to explore spatial audio.
We will produce open-source environments that are compatible with audio
standards and shared with the computer music community; we will evaluate
them through composer commissions, artistic creations, workshops, and concerts.