Security Lunch 🍂 Ed. — Wednesday, Oct 1st, 2025, 12:00 pm @ CoDa E160
Preventing AI model-weight exfiltration with FPGAs and inference verification — with live demo!
Jacob Lagerros
Can't make it in person? Join us on Zoom.
See our past & upcoming events on our website!
Abstract:
AI intellectual property is on track to become some of the most valuable IP on earth. To steal and reproduce a frontier AI model, an attacker would need access to its weights, its code, and a dozen accelerators. Code and compute are easy to get, but the weights can be terabytes in size, live in the data center, and have no reason to ever leave their storage or accelerators. The defender's options are much better. However, a frontier cluster needs to serve inference traffic to tens of millions of users. How can one lock down AI model weights while simultaneously serving such a massive volume of inference? This talk will introduce a novel solution combining FPGA middle-boxes with machine-learning techniques for verifying the integrity of inference traffic. We aim for this solution to be at once: 1) highly performant, 2) highly secure against very sophisticated adversaries, and 3) highly retrofittable to modern clusters with their various bespoke architectures and optimizations.
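To give a flavor of what "verifying the integrity of inference traffic" could mean, here is a minimal sketch of one family of such techniques: spot-checking a sample of served requests against a trusted reference model and flagging divergent outputs. All names, the sampling scheme, and the comparison rule below are assumptions for illustration; they are not the talk's actual design or Ulyssean's API.

```python
import random

def verify_inference_sample(requests, served_outputs, reference_model,
                            sample_rate=0.1, tol=1e-3, seed=0):
    """Spot-check served inference outputs (illustrative sketch).

    Replays a random fraction of requests through a trusted reference
    model and returns the indices of requests whose served output
    diverges from the reference beyond `tol`. Hypothetical interface:
    `requests` is a list of inputs, `served_outputs[i]` is the list of
    logits the (untrusted) serving path returned for `requests[i]`, and
    `reference_model` maps a request to its reference logits.
    """
    rng = random.Random(seed)  # seeded so audits are reproducible
    flagged = []
    for i, req in enumerate(requests):
        if rng.random() > sample_rate:
            continue  # only a sample is re-checked, to bound audit cost
        expected = reference_model(req)
        observed = served_outputs[i]
        if len(expected) != len(observed) or any(
            abs(a - b) > tol for a, b in zip(expected, observed)
        ):
            flagged.append(i)
    return flagged

# Toy usage: a "reference model" that echoes its input, and a served
# batch where the second response has been tampered with.
reference = lambda req: list(req)
reqs = [[1.0, 2.0], [3.0, 4.0]]
served = [[1.0, 2.0], [3.0, 4.5]]
print(verify_inference_sample(reqs, served, reference, sample_rate=1.0))
# → [1]
```

The design point this sketch gestures at: sampling lets verification cost stay far below the cost of serving, which is what makes checking "a massive volume of inference" plausible at all.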
Bio:
Jacob Lagerros is the CEO of Ulyssean, an AI startup building security and verification for large AGI clusters, backed by leaders from Anthropic, DeepMind, Meta, CrowdStrike, and more.