Large Language Model (LLM)-based agents have demonstrated impressive capabilities but still face significant safety challenges, with even the most advanced approaches often failing in critical scenarios. In this talk, I'll explore how integrating PDDL symbolic planning with LLM-based agents can help address these issues. By leveraging LLMs' ability to translate natural-language instructions into formal PDDL specifications, we enable symbolic planning algorithms to enforce safety constraints throughout the agent's execution. Our experimental results demonstrate that this approach preserves safety even under severe input perturbations and adversarial attacks—situations where traditional LLM-based planning falls short—suggesting a potential pathway for deploying safer autonomous agents in real-world applications. This is joint work with Tan Zhi Xuan (MIT).
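To give a flavor of the pipeline the talk describes, here is a minimal sketch of the core idea: once an instruction has been translated into a formal goal and safety constraint, a symbolic planner can refuse to expand any state that violates the constraint, so every plan it returns is safe by construction. The toy grid domain, the specific goal, and the `safe`/`plan` names below are all illustrative assumptions, not the actual system presented in the talk.

```python
from collections import deque

# Hypothetical formal specification, as an LLM might produce from
# "reach the top-right cell, but never enter the hazardous cell":
GOAL = (2, 2)          # illustrative goal condition
UNSAFE = {(1, 1)}      # illustrative safety constraint
MOVES = ((1, 0), (-1, 0), (0, 1), (0, -1))

def safe(state):
    """Safety constraint, checked at every step of the search."""
    x, y = state
    return state not in UNSAFE and 0 <= x <= 2 and 0 <= y <= 2

def plan(start):
    """Breadth-first search that only expands constraint-satisfying
    states, so any returned plan is safe by construction."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == GOAL:
            return path
        for dx, dy in MOVES:
            nxt = (state[0] + dx, state[1] + dy)
            if nxt not in seen and safe(nxt):
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # no safe plan exists under the constraint
```

Because the constraint is enforced inside the search rather than by the LLM, perturbing the instruction text cannot cause an unsafe plan to be emitted; at worst, translation errors change the goal or make the problem unsolvable.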
Orpheus Lummis
Jan 9, 2025, 4:02:44 PM