ECAI 2025 Tutorial – Towards Adversarially Robust ML in the Age of the AI Act


Lorenzo Cazzaro
Sep 1, 2025
to AIxIA mailing list
(Apologies for multiple copies due to cross-posting)

Dear colleagues,


We are pleased to announce that the tutorial "Towards Adversarially Robust ML in the Age of the AI Act" has been accepted for presentation at ECAI 2025, which will be held in Bologna from 25 to 30 October 2025. The tutorial is scheduled for 25 October.


This tutorial is dedicated to the emerging intersection of machine learning security and AI regulation, with a particular focus on the European AI Act and its implications for developing trustworthy, robust AI systems.

The session is designed for researchers, practitioners, and policymakers interested in understanding the risks that adversarial threats pose to machine learning systems and the strategies available to mitigate them within the context of the upcoming European regulatory framework.


The tutorial is structured into five parts, combining foundational insights with hands-on guidance:


1. Opening & Motivation

• Welcome and introduction to the goals of the tutorial

• Real-world failures of AI in high-risk domains

• Trust, safety, and regulatory challenges

• Overview of the EU AI Act: scope, goals, and robustness requirements

2. Understanding Adversarial Threats in ML

• Taxonomy of adversarial threats (e.g., evasion, poisoning, indirect prompt injection)

• Threat modeling and evaluation frameworks

3. Mitigation Strategies & Regulatory Alignment

• Overview of practical defense strategies

• Compliance-oriented mitigation techniques

• Tools for empirical and formal robustness verification

4. Future Directions & Best Practices

• Current limitations and open challenges in the field

• Best practices for AI developers and policymakers

• Towards regulation-aware AI development pipelines

5. Summary & Outlook

• Key takeaways from the session

• The road ahead for trustworthy and compliant AI

• Final discussion and Q&A


The tutorial is jointly organized by Antonio Emanuele Cinà (University of Genoa, SAIfer Lab) and Lorenzo Cazzaro (Ca’ Foscari University of Venice).


More information is available on the official website: https://sites.google.com/view/robust-ai-ecai2025/home. If you are interested in attending, please consider filling out this short, optional form: https://forms.gle/1YikMUgc6pZr8Veu5.


We look forward to welcoming you to Bologna for an engaging and timely discussion on adversarial robustness and regulatory compliance in the age of trustworthy AI.

