Trustworthy AI aims to provide explainable, robust, and fair decision-making processes. Transparency and security also play a significant role in improving the adoption and impact of ML solutions. In developing countries in particular, data and models are often imported from external sources, which introduces potential security risks. The divergence of such data and models from the population at hand also reduces the transparency and explainability of the decision-making process. This workshop at Deep Learning Indaba 2022 therefore aims to bring together researchers, policy makers, and regulators to discuss ways of ensuring security and transparency while addressing fundamental problems in developing countries, particularly when data and models are imported and/or collected locally with little attention to ethical considerations and governance guidelines.
We’re looking for short presentations (10 to 15 minutes) on topics including:
Audit techniques for data and ML models.
Advances in algorithms and metrics for robust ML.
Uncertainty quantification techniques and fairness studies.
Applications and research in data and model privacy/security.
Methodologies and case studies for explainable and transparent AI.