Description of the Bridge
This bridge addresses one of the most urgent challenges in AI: how to make embodied AI systems, such as autonomous vehicles, UAVs, and robots, interpretable, testable, and formally verifiable. While modern AI models excel in perception and decision-making, they pose significant challenges for traditional verification techniques, creating critical risks in safety-sensitive domains. The goal of this bridge is to unite diverse communities, including AI/ML, formal methods, software engineering, robotics, and cyber-physical systems, in developing a shared roadmap for reliable embodied AI. Participants will engage with cutting-edge approaches in neurosymbolic reasoning, LLM-guided specification mining, scenario-based testing, compositional verification, and robustness under uncertainty, and will explore their potential to support certification and assurance of AI-enabled autonomy.
Topics
- Testability and verifiability of AI-enabled autonomy
- LLM-guided specification mining and scenario generation
- White-box and compositional verification of neural and neurosymbolic components
- Neurosymbolic architectures for modular reasoning and distillation
- Robustness under sensing noise, ambiguous instructions, and human-robot interaction
- Formal guardrails for LLMs and VLMs in embodied decision-making
Format of the Bridge
This one-day event will combine tutorials, keynote talks, technical presentations, panel discussions, and interactive breakout sessions. The morning program will feature tutorials and invited keynotes from leaders in neurosymbolic AI, formal verification, and embodied autonomy, followed by selected paper presentations. The afternoon will include a panel on certification challenges and breakout discussions organized around open problems such as scenario-based testing, compositional verification, and robustness under uncertainty.
Attendance
We anticipate 50–80 participants, with priority given to researchers and practitioners in AI, robotics, formal methods, and autonomous cyber-physical systems. Early-career researchers and graduate students are strongly encouraged to participate, and mentoring opportunities will be provided.
Submission Requirements
We invite 2–4 page extended abstracts or position papers describing research advances, tools, or case studies relevant to reliable embodied AI. Submissions should emphasize how the work enhances the testability, interpretability, or verifiability of embodied AI. Accepted contributions will be presented as talks, posters, or lightning talks.
Submission Site Information
Submissions should be made via https://easychair.org/conferences/?conf=aaai26bridgereai
Bridge Chairs
Assoc. Prof. Xi (James) Zheng
Macquarie University
james...@mq.edu.au
Prof. Corina S. Pasareanu
Carnegie Mellon University / NASA Ames
corina.p...@cmu.edu
Asst. Prof. Ivan Ruchkin
University of Florida
iruc...@ece.ufl.edu
Prof. Archan Misra
Singapore Management University
arc...@smu.edu.sg
Bridge Program Committee
Ziyang Li (Johns Hopkins University)
David Lo (Singapore Management University)
Djordje Žikelić (Singapore Management University)
Anna Lukina (Delft University of Technology)
Aloysius K. Mok (University of Texas at Austin)
Kenneth Kwok (A*STAR, Singapore)
Daniel Neider (TU Dortmund University)
Vijay Ganesh (Georgia Institute of Technology)
Biplav Srivastava (University of South Carolina)
Guy Van den Broeck (University of California, Los Angeles)
Basura Fernando (A*STAR, Singapore)
Bridge URL
https://www.tacps.org
Important Dates
- Submissions due: October 31, 2025
- Notifications: November 14, 2025