[Call For Papers] Deadline Extended: BUGS @ NeurIPS 2023 Workshop

Haripriya Harikumar
Sep 30, 2023, 10:27:10 AM
to Machine Learning News

Hello Everyone, 

Due to several requests, we have decided to extend the deadline for the workshop “Backdoors in Deep Learning: The Good, the Bad, and the Ugly” @ NeurIPS 2023 to Oct 3rd, 2023.

Extended submission deadline: October 3rd, 2023, Anywhere on Earth (AoE)

Author notification: October 27th, 2023, Anywhere on Earth (AoE)

Camera-ready deadline: December 1st, 2023, Anywhere on Earth (AoE)

Workshop date: December 15th, 2023 (full-day event)

Submission link: https://openreview.net/group?id=NeurIPS.cc/2023/Workshop/BUGS

We welcome submissions related to any aspect of backdoor research, including but not limited to:

  • Backdoor attacks
    • Poisoning attacks
    • Dirty-label backdoor attacks
    • Clean-label backdoor attacks
    • Backdoors in various learning paradigms (e.g., supervised, semi-supervised, self-supervised)
    • Backdoors in various computer vision tasks (e.g., object detection, segmentation)
    • Backdoors in multimodal models (e.g., vision+language)
    • Backdoors in federated learning
    • Backdoors in NLP and less-studied domains (e.g., speech, graphs)
    • Backdoors in generative models (e.g., Diffusion models) 
    • Backdoors in Large Language Models
  • Backdoor defenses
    • Backdoor detection (poisoned inputs, poisoned models)
    • Backdoor mitigation (data sanitization, model repair)
    • Understanding backdoor behaviors
  • Backdoors for social good
    • Watermarking (for IP protection, ownership verification, generative data marking, etc.)
    • Trapdoor/Honeypot defenses
    • Model unlearning
    • Deep model behavior understanding

Submitted papers may be up to 6 pages long (excluding references, acknowledgments, and appendices). Please use the NeurIPS submission template provided at https://neurips.cc/Conferences/2023/PaperInformation/StyleFiles

Submissions must be anonymized in accordance with the NeurIPS double-blind reviewing guidelines and must adhere to the NeurIPS Code of Conduct and Code of Ethics. Accepted papers will be hosted on the workshop website but are considered non-archival and may be submitted to other workshops, conferences, or journals if their submission policies allow.

We also encourage participation in discussions with our amazing lineup of speakers, which includes:
 

  • Bo Li (UIUC)
  • Ruoxi Jia (Virginia Tech)
  • Adam Dziedzic (CISPA)
  • Florian Tramèr (ETH Zürich)
  • Dawn Song (UC Berkeley)
  • Baoyuan Wu (CUHK-Shenzhen)
  • Yiming Li (Zhejiang University)

Tutorial topics include backdoors in LLMs, model stealing and defenses (model ownership resolution), backdoor attacks beyond misclassification, watermarking, and more.


More details of the workshop can be found at: https://neurips2023-bugs.github.io/

If you have any further questions, please feel free to contact us at neurips2...@googlegroups.com

 

Organizers:

Khoa D Doan, VinUniversity, Vietnam

Aniruddha Saha, University of Maryland, College Park, USA

Anh Tuan Tran, VinAI Research, Vietnam

Yingjie Lao, Clemson University, USA

Kok-seng Wong, VinUniversity, Vietnam

Ang Li, Simular Research, USA

Haripriya Harikumar, Deakin University, Australia

Eugene Bagdasaryan, Cornell Tech, USA

Micah Goldblum, New York University, USA

Tom Goldstein, University of Maryland, College Park, USA

 

Cheers,

BUGS@NeurIPS2023 Organizing Committee
