Hi all,
We are thrilled to invite you to our upcoming research seminar featuring Jingxuan He from ETH Zurich. To join the virtual event, please register here, and you will receive the Zoom link.
Title: Large Language Models for Code: Security Hardening and Adversarial Testing
Abstract: Large language models (large LMs) trained on massive code corpora are increasingly used to generate programs. However, LMs lack awareness of security and are found to frequently produce unsafe code. In this talk, I will present our recent work that addresses LMs' limitations in security along two important axes: (i) security hardening, which aims to enhance LMs' reliability in generating secure code, and (ii) adversarial testing, which seeks to evaluate LMs' security from an adversarial standpoint.
Bio: Jingxuan He is a final-year PhD student at ETH Zurich. His research focuses on programming languages, machine learning, and security.
The seminar will take place on May 31 (Wednesday) at
We hope to see you there!
Best,
Nadav