Thank you for joining this research special interest group, SIGSEC, covering work on LLM and NLP security. SIGSEC is part of the Association for Computational Linguistics (ACL).
We host regular talks on NLP & LLM security, a mailing list for people interested in the field, and an annual research workshop. If you would like to give a talk, or would like us to invite a speaker, please get in touch!
📅 >>> ACTION FOR YOU: Follow the SIGSEC talks calendar (link at the top of
https://sig.llmsecurity.net/talks/). We aim to present excellent speakers with breaking, often pre-publication, research, and our launch schedule does not disappoint.
The ACL Special Group on NLP Security exists to:
* provide infrastructure and community for those many ACL members working in NLP Security;
* establish a serious research body that represents NLP and ACL interests in the burgeoning field of LLM and NLP security; and
* bridge the Information Security and Computational Linguistics communities — a link the Information Security community is already actively pursuing.
Membership is free, and there's an exciting talks series. Video links are posted on
https://sig.llmsecurity.net/talks/. We start with:
* Thursday November 2nd, 10.00 ET / 15.00 CET - Text Embeddings Reveal (Almost) As Much As Text - John X. Morris
* Thursday November 9th, 11.00 ET / 17.00 CET - LLM-Deliberation: Evaluating LLMs with Interactive Multi-Agent Negotiation Games - Sahar Abdelnabi
* Thursday November 23rd, 11.00 ET / 17.00 CET - Privacy Side Channels in Machine Learning Systems - Edoardo Debenedetti
All talks present cutting-edge research on LLM security vulnerabilities and assessment methods. 🌶️
We will hold a research workshop in 2024, with both peer-reviewed and non-archival tracks, for presenting and discussing LLM & NLP security work; details to follow.
We look forward to seeing you all :)
SIGSEC President: Leon Derczynski, ITU Copenhagen / NVIDIA Corp
SIGSEC Secretary: Muhao Chen, University of Southern California
SIGSEC Expert Advisor: Jekaterina Novikova, AI Risk and Vulnerability Alliance / Cambridge Cognition