We will develop methods to learn Causally Grounded Concepts from unstructured data, i.e., concepts with theoretical guarantees such as error bounds and sample complexity, in challenging settings: continuous- and discrete-valued concepts (e.g., symbols in neuro-symbolic methods), correlations between concepts, and settings where we do not have (all) labels for the concepts but only a weak supervision signal (e.g., labels of a downstream task) together with background knowledge about the concepts. We also envision practical applications of this framework in cross-species translation (transferring findings from animal studies to humans) for drug discovery, dynamical systems for long-horizon time-series forecasting, and verifiably safe reinforcement learning.
While both PhD positions are part of the same project, each has an independent research direction, allowing a degree of autonomy. The first project (“Discrete concepts”) will focus on developing strong theoretical guarantees that go beyond current work on independent continuous concepts, covering dependent and discrete concepts as well. The second project (“Weak supervision”) will instead focus on providing similar guarantees by integrating weak supervision (e.g., based on labels of downstream tasks) with background knowledge, which can take the form of, for example, logical constraints.
Deadline: 20 April 2026
Details and application link: https://www.academictransfer.com/en/jobs/359354/2-phd-positions-on-learning-causally-grounded-concepts-for-safe-ai