Dear all,
My name is Yuta Takahashi, of Aomori University. On behalf of Professors Mitsuhiro Okada and Koji Mineshima of Keio University, I would like to share the announcement below.
Thank you very much.
ーーーーーーーーーーーーーーーーーーーーーーーーー
Dear members of the Sonoteno mailing list,
An interim (hybrid) meeting of the Japan-France Formal Methods Working Group will be held on the 27th, with the following two talks on machine learning and the verification of Fairness, Explainability, and Privacy scheduled from 15:45. Although this is short notice, we are pleased to announce it below.
Koji Mineshima and Mitsuhiro Okada (Keio University)
(Note: this Japan-France FM-WG meets annually within the framework of the Japan-France Cybersecurity Conference, but the present interim meeting is held independently of it.)
ーーーーーーーーーーーーーーーーーーー
Working Seminar on Fairness and Explainability in the Digital and ML
Environments and Special Seminar on Fairness, Explainability, and
Privacy
Part 2: Two invited talks on machine learning fairness, held with a visiting group affiliated with Inria (the French National Institute for Research in Digital Science and Technology).
(Note: the program has been partially revised.)
(Hybrid meeting: pre-registration is required, and remains open until just before the start of Part 2.)
Thursday, March 27th, 13:00-18:00 (Part 2, two invited talks: 15:45-18:00)
Open Lab, 4th floor, East Building at the East Gate, Mita Campus, Keio University
Web page:
https://abelard.flet.keio.ac.jp/2025/0327_Workshop
Hybrid meeting; pre-registration is required.
Pre-registration form:
https://forms.gle/ygQb8x4fwdcNTzoY8
On the occasion of a visit by machine-learning fairness and digital-ethics experts from our partner groups at Inria, we are holding these sessions as a follow-up to the one held on March 24th. In this seminar we discuss basic issues of fairness and explainability, as well as issues of the fair development of digital environments in developing countries and regions.
Two invited talks on Fairness, Explainability, and Privacy are scheduled in Part 2.
Date: March 27th, 2025, 13:00 – 18:00
Venue: Open Lab, 4th floor of the East Building at the East Gate of the Mita Campus of Keio University, and Zoom
The East Building is #13 on the campus map below:
https://www.keio.ac.jp/en/maps/
Date and time (Part 2): Thursday, March 27th, 15:45-18:00 (hybrid)
Venue: Open Lab, 4th floor of the East Building, Mita Campus, Keio University (building #13 on the campus map), and Zoom
Campus map (Japanese):
https://www.keio.ac.jp/ja/maps/mita.html
Part 2 Program
15:45 Part 2
Special Invited Talks on Fairness, Explainability, Privacy, and Causal Methods
(The two abstracts are attached below)
1. Ruta Binkyte (CISPA Helmholtz Center for Information Security)
Causal Methods for AI Fairness and Explainability (See the abstract below)
2. Catuscia Palamidessi (Inria Saclay)
Trade-off between Fairness and Privacy in Machine Learning. (See the
abstract below)
Discussion
18:00 Closing
Supported by
Keio University Global Research Institute (KGRI) Challenge Grant: Toward the Realization of Explainable Computing Environments
Organizing Committee
Mitsuhiro Okada, Koji Mineshima, and Hirohiko Abe
Contact
lo...@abelard.flet.keio.ac.jp
Abstracts
Ruta Binkyte (CISPA Helmholtz Center for Information Security)
Title: Causal Methods for AI Fairness and Explainability
Abstract: In this presentation, we provide a concise introduction to
causal concepts—including structural causal models, interventions, and
mediation analyses—and show how these tools can inform fairness in
decision-making. We focus especially on path-specific fairness, which
distinguishes legitimate from illegitimate pathways by which a
sensitive attribute can influence an outcome. Through illustrative
examples, we demonstrate how to (1) construct a causal graph that
encodes relevant variables, confounders, and mediators; (2) identify
and estimate direct and indirect (mediated) effects. We also touch
upon recent advances in model interpretability, highlighting the
potential of causal frameworks to clarify how complex models make
decisions at a mechanistic level. We conclude by discussing the
practical challenges of implementing path-specific fairness—such as
defining “legitimate” vs. “illegitimate” routes, managing unmeasured
confounders, and coping with high-dimensional feature spaces—and
emphasize why these nuanced causal approaches can be more aligned with
real-world fairness goals than simpler group-level parity metrics.
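The mediation analysis described above can be illustrated with a minimal numerical sketch. The model, variable names, and coefficients below are our own illustrative assumptions, not taken from the talk: a linear structural causal model with a sensitive attribute A, a mediator M, and an outcome Y, where the direct and indirect (mediated) effects are recovered by the product-of-coefficients method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative linear structural causal model (not from the talk):
#   A - binary sensitive attribute
#   M - mediator influenced by A (the A -> M -> Y path)
#   Y - outcome influenced by A directly and by M
A = rng.integers(0, 2, n).astype(float)
M = 1.5 * A + rng.normal(0, 1, n)            # A -> M, true coefficient 1.5
Y = 0.8 * A + 2.0 * M + rng.normal(0, 1, n)  # direct A -> Y effect 0.8, M -> Y effect 2.0

# Product-of-coefficients mediation analysis via least squares:
# regress M on A for the A -> M effect (alpha), then Y on A and M
# jointly for the direct effect (beta) and the M -> Y effect (gamma).
alpha = np.linalg.lstsq(np.c_[np.ones(n), A], M, rcond=None)[0][1]
beta, gamma = np.linalg.lstsq(np.c_[np.ones(n), A, M], Y, rcond=None)[0][1:]

direct_effect = beta             # path A -> Y        (true value 0.8)
indirect_effect = alpha * gamma  # path A -> M -> Y   (true value 1.5 * 2.0 = 3.0)
total_effect = direct_effect + indirect_effect

print(f"direct   ~ {direct_effect:.2f}")
print(f"indirect ~ {indirect_effect:.2f}")
print(f"total    ~ {total_effect:.2f}")
```

In a path-specific fairness analysis, one would then judge the direct path (and each mediated path) separately as legitimate or illegitimate, rather than constraining only the total effect.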
Catuscia Palamidessi (Inria Saclay)
Title: Trade-off between Fairness and Privacy in Machine Learning.
Abstract: Privacy and Fairness are two important ethical issues in
machine learning, and several research efforts have been dedicated to
trying to understand how they interact, and how they affect accuracy.
In this talk, I will summarize the main results in the literature, and
present our own study on the effect of local differential privacy on
some of the main notions of fairness: Statistical Parity, Conditional
Statistical Parity, and Equality of Opportunity.
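The interaction between local differential privacy and statistical parity can be sketched numerically. The setup below is our own illustrative assumption, not the study presented in the talk: the sensitive attribute is privatized with randomized response (a standard epsilon-LDP mechanism), and the statistical parity gap measured on the noisy attribute is attenuated relative to the true gap.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
eps = 1.0  # local differential privacy budget (illustrative value)

# Synthetic data (illustrative): binary sensitive attribute A and a
# binary decision Yhat whose positive rate differs by group.
A = rng.integers(0, 2, n)
p_pos = np.where(A == 1, 0.7, 0.4)  # group 1 receives positives more often
Yhat = (rng.random(n) < p_pos).astype(int)

def statistical_parity_gap(a, yhat):
    """|P(Yhat=1 | A=1) - P(Yhat=1 | A=0)|."""
    return abs(yhat[a == 1].mean() - yhat[a == 0].mean())

# Randomized response: report the true bit with probability
# e^eps / (e^eps + 1), the flipped bit otherwise; this satisfies eps-LDP.
p_keep = np.exp(eps) / (np.exp(eps) + 1)
flip = rng.random(n) >= p_keep
A_noisy = np.where(flip, 1 - A, A)

true_gap = statistical_parity_gap(A, Yhat)        # about 0.7 - 0.4 = 0.3
noisy_gap = statistical_parity_gap(A_noisy, Yhat) # attenuated toward 0

print(f"gap on true A:  {true_gap:.3f}")
print(f"gap on noisy A: {noisy_gap:.3f}")
```

The attenuation illustrates one side of the trade-off: under LDP, fairness audits based on the reported attribute can understate the true disparity.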
ーーーーーーーーーーーーーーーーーーー