Join the Interpretable Machine Learning & Explainable AI research group, part of the Chair of Statistical Learning and Data Science at the Ludwig-Maximilians-Universität (LMU) München. The chair is led by Prof. Dr. Bernd Bischl, who is also one of the directors of the Munich Center for Machine Learning (MCML), a leading competence center designed to consolidate machine learning activities in Munich.
Project description:
The main research will focus on advancing interpretation methods for machine learning (ML) models trained on tabular data. Existing interpretation methods produce either local explanations, which provide insights into individual observations, or global explanations, which characterize the overall behavior of a model. During your Ph.D., you will conduct research on a wide range of interpretation methods, including regional explanations, which strike a balance between local and global interpretability, as well as other innovative methods aimed at enhancing the overall explainability of ML models.
How to apply (please prepare a single PDF file named firstname_lastname.pdf containing the following):
Your Responsibilities:
Your profile:
What we offer: