CFP for SAGAI'24: 1st Workshop on Security Architectures for GenAI Systems


Mihai Christodorescu

Dec 20, 2023, 2:22:47 PM
to ias-oppo...@googlegroups.com
We are excited to announce a new workshop in the Security and Privacy Workshops series, collocated with IEEE S&P in May 2024 in San Francisco, CA, on the topic of security for GenAI systems and applications.

Call for Papers
Security Architectures for GenAI Systems (SAGAI) 2024
May 23, 2024, collocated with IEEE S&P

Overview
Generative AI (GenAI) is quickly advancing and fast becoming a widely deployed technology. GenAI-based systems rely on machine-learning (ML) models trained on large amounts of data using deep-learning techniques. As the power and flexibility of the models advance, the architectural complexity of GenAI-based systems is advancing too. Current architectures may combine multiple models, using sequences of model queries to complete a task, with external (non-ML) components leveraged to enhance the model’s operation via database queries or API calls. These architectures may be vulnerable to a variety of attacks that use adversarial inputs to create malicious outputs.
This workshop invites new contributions to the broader understanding of security for GenAI systems and applications. Contributions may address security threats and defenses for individual models, or for systems and architectures that may employ one or more generative ML models as subcomponents. The workshop welcomes discussion of new GenAI security concerns, as well as new approaches to architecting GenAI-based systems for safety, security, and privacy.

Topics of Interest
SAGAI welcomes contributions on all aspects of safety, security, and privacy of GenAI-based systems, including text, image, audio, video, code, and other modalities. Topics of interest include, but are not limited to:

Mechanisms for Safety, Security, and Privacy of GenAI
  • Input sanitization, normalization, and deobfuscation
  • Protections against prompt injection
  • Output validation and sanitization
  • Secure and private tool use
  • Secure and private retrieval-augmented generation
  • Secure and private multi-model/multi-agent systems
  • Mechanisms for white-box vs. black-box models
  • Security and performance of on-device model training and inference
  • Reliable watermarking techniques
  • Attacks against GenAI safety, security, and privacy mechanisms
Security Architectures for GenAI
  • In-model vs. out-of-model security approaches
  • Secure sequential and parallel composition of GenAI-based systems
  • Layered security for multi-agent GenAI-based systems
  • Composition of provenance mechanisms (system and GenAI)
  • Composition of security and privacy mechanisms (system and GenAI)
  • Security uses of watermarked GenAI outputs
  • Model explainability for security and privacy
Out of Scope
Because many other conferences and workshops cover these topics, we consider techniques for pre-training or fine-tuning the model(s) used by a GenAI-based system, as well as techniques for curating the data used in such training or tuning, to be out of scope for the workshop. This includes training techniques to achieve model alignment and techniques to prevent data poisoning. However, submissions that consider alignment, robustness, new forms of attack, and novel defenses of system architectures that combine individual models with other components are welcome.

Submission Guidelines
We accept full-length papers of up to 10 pages, with additional pages permitted for references. To be considered, papers must be received by the submission deadline (see Important Dates).

Paper Format
Papers must be formatted for US letter (not A4) size paper. The text must be formatted in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly recommended to use the latest IEEE "compsoc" conference proceedings templates.
Failure to adhere to the page limit and formatting requirements is grounds for rejection without review. Submissions must be in English and properly anonymized.
IEEE S&P’s criteria for anonymous submissions, conflicts of interest, ethical considerations, and competing interests (all available at https://sp2024.ieee-security.org/cfpapers.html) apply.
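For LaTeX users, the requirements above correspond roughly to the standard IEEEtran class in its Computer Society conference configuration. A minimal preamble sketch follows; the class options shown are standard IEEEtran options, but authors should confirm details against the official IEEE template rather than rely on this sketch.

```latex
% Minimal sketch of a preamble matching the stated requirements
% (US letter, two-column, 10pt Times); verify against the
% official IEEE "compsoc" conference template before submitting.
\documentclass[10pt,conference,compsoc]{IEEEtran}

\begin{document}

\title{Your Anonymized Title}
\author{\IEEEauthorblockN{Anonymous Author(s)}}
\maketitle

% Paper body goes here.

\end{document}
```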

Presentation Form
All accepted submissions will be presented at the workshop. All papers will be included in the IEEE workshop proceedings. One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.

Important Dates
Paper submissions due: February 5, 2024 AoE
Acceptance notice to authors: February 19, 2024 AoE
Publication-ready papers due: March 5, 2024 AoE
Workshop: May 23, 2024

Mihai Christodorescu

Feb 12, 2024, 11:49:40 AM
to ias-opportunities
The deadline for submissions to SAGAI'24 (the workshop on security of GenAI systems at IEEE S&P) has been extended to March 1, thanks to our publisher's flexibility on the camera-ready deadline.

If you missed the original deadline, this is your chance to submit on the security of GenAI systems: threats and defenses for individual models, security of systems that employ one or more generative ML models as subcomponents, and new approaches to architecting GenAI-based systems for safety, security, and privacy. We look forward to your submission.


Cheers,
Mihai