GS AI Seminar January 2025 – Using PDDL Planning to Ensure Safety in LLM-based Agents (Agustín Martinez Suñé)


Orpheus Lummis

Dec 12, 2024, 10:47:50 PM
to guarantee...@googlegroups.com
You are invited to the January 2025 edition of the Guaranteed Safe AI Seminars:

Using PDDL Planning to Ensure Safety in LLM-based Agents
Agustín Martinez Suñé – Ph.D. in Computer Science | Postdoctoral Researcher (Starting Soon), OXCAV, University of Oxford

Thu, January 9, 18:00-19:00 UTC

Large Language Model (LLM)-based agents have demonstrated impressive capabilities but still face significant safety challenges, with even the most advanced approaches often failing in critical scenarios. In this talk, I'll explore how integrating PDDL symbolic planning with LLM-based agents can help address these issues. By leveraging LLMs' ability to translate natural language instructions into formal PDDL specifications, we enable symbolic planning algorithms to enforce safety constraints throughout the agent's execution. Our experimental results demonstrate how this approach ensures safety even under severe input perturbations and adversarial attacks, situations where traditional LLM-based planning falls short. This work, a collaboration with Tan Zhi Xuan (MIT), suggests a potential pathway for deploying safer autonomous agents in real-world applications.
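
For readers new to PDDL: PDDL 3.0 supports trajectory constraints such as "always", which let a planner reject any plan that ever enters an unsafe state; this is the kind of declaratively enforced safety condition the abstract refers to. Below is a minimal illustrative sketch, not taken from the talk; all names (gridworld, move, the cells c1-c5) are invented for the example.

    ; Toy domain: an agent moves between adjacent cells.
    (define (domain gridworld)
      (:requirements :strips :constraints)
      (:predicates (at ?c) (adjacent ?a ?b))
      (:action move
        :parameters (?from ?to)
        :precondition (and (at ?from) (adjacent ?from ?to))
        :effect (and (not (at ?from)) (at ?to))))

    (define (problem reach-goal-safely)
      (:domain gridworld)
      (:objects c1 c2 c3 c4 c5)
      (:init (at c1)
             (adjacent c1 c2) (adjacent c2 c3) (adjacent c3 c4)  ; short route via c3
             (adjacent c2 c5) (adjacent c5 c4))                  ; detour avoiding c3
      (:goal (at c4))
      ; PDDL3 trajectory constraint: no state of the plan may place
      ; the agent on c3, so the planner must take the detour.
      (:constraints (always (not (at c3)))))

A symbolic planner then guarantees by construction that any plan it returns satisfies the constraint, however the original natural language instruction was phrased or perturbed.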

Orpheus Lummis

Jan 9, 2025, 4:02:44 PM
to guaranteed-safe-ai