Security: A Next Frontier in AI Coding
Jingxuan He
Thursday, November 6, 2025
Talk at 4:00pm
Abstract:
AI is revolutionizing programming, but is it also creating a new
generation of security debt? This talk confronts this critical question
through a comprehensive analysis of AI’s impact on the software security
landscape. I will begin by introducing two novel benchmarks to quantify
the risks: CyberGym, to assess AI’s offensive capabilities in
vulnerability reproduction and discovery, and BaxBench, to measure AI’s
propensity to introduce security flaws when generating code. Drawing on
insights from these evaluations, I will then present two proactive
approaches to make AI-generated code secure by design. I will detail
our work on fine-tuning models with curated datasets of secure code
patterns to minimize the generation of vulnerabilities. Furthermore, I
will describe an inference-time constraining mechanism that enforces
type safety in generated code, provably eliminating entire classes of
bugs.
Bio:
Jingxuan He is a Postdoctoral Researcher at UC Berkeley, working with
Prof. Dawn Song. He completed his PhD at ETH Zurich, where he was
advised by Prof. Martin Vechev. His research lies at the intersection
of security, AI, and programming languages, with a current focus on
evaluating AI’s impact on cybersecurity and mitigating associated risks.
His work has been recognized with an ACM CCS Distinguished Paper Award
and an ETH Medal for Outstanding Doctoral Thesis, and has been adopted
for model evaluations by leading AI labs such as Anthropic. For more