On Wednesday - Jingxuan He (ETH) on enhancing LLMs for secure code generation


Nadav Timor

May 26, 2023, 8:26:33 AM
to LLMs for Code Seminar

Hi all,


We are thrilled to invite you to our upcoming research seminar featuring Jingxuan He from ETH Zurich. To join the virtual event, please register here, and you will receive the Zoom link.


Title: Large Language Models for Code: Security Hardening and Adversarial Testing


Abstract: Large language models (LMs) trained on massive code corpora are increasingly used to generate programs. However, LMs lack security awareness and have been found to frequently produce unsafe code. In this talk, I will present our recent work addressing LMs' limitations in security along two important axes: (i) security hardening, which aims to enhance LMs' reliability in generating secure code, and (ii) adversarial testing, which seeks to evaluate LMs' security from an adversarial standpoint.


Bio: Jingxuan He is a final-year PhD student at ETH Zurich. His research focuses on programming languages, machine learning, and security.


The seminar will take place on May 31 (Wednesday) at

  • 7:00 AM PDT (Pacific Time)
  • 10:00 AM EDT (Eastern Time)
  • 4:00 PM CEST (Central European Summer Time)


We hope to see you there.


Best,

Nadav
