Starkly Speaking next paper: Towards Fast, Specialized Machine Learning Force Fields: Distilling Foundation Models via Energy Hessians

Hannes Stärk

Apr 17, 2025, 8:57:24 PM
to lo...@googlegroups.com
Hi everyone,

Heyho, we are back with the reading group! New name, new papers :)
Next week, Ezra and I will discuss: 

A General Framework for Inference-time Scaling and Steering of Diffusion Models https://arxiv.org/abs/2501.06848 (Raghav Singhal, Zachary Horvitz, Ryan Teehan, Mengye Ren, Zhou Yu, Kathleen McKeown, Rajesh Ranganath)
Diffusion models produce impressive results in modalities ranging from images and video to protein design and text. However, generating samples with user-specified properties remains a challenge. Recent research proposes fine-tuning models to maximize rewards that capture desired properties, but these methods require expensive training and are prone to mode collapse. In this work, we propose Feynman-Kac (FK) steering, an inference-time framework for steering diffusion models with reward functions. FK steering works by sampling a system of multiple interacting diffusion processes, called particles, and resampling particles at intermediate steps based on scores computed using functions called potentials. Potentials are defined using rewards for intermediate states and are selected such that a high value indicates that the particle will yield a high-reward sample. We explore various choices of potentials, intermediate rewards, and samplers. We evaluate FK steering on text-to-image and text diffusion models. For steering text-to-image models with a human preference reward, we find that FK steering a 0.8B parameter model outperforms a 2.6B parameter fine-tuned model on prompt fidelity, with faster sampling and no training. For steering text diffusion models with rewards for text quality and specific text attributes, we find that FK steering generates lower perplexity, more linguistically acceptable outputs and enables gradient-free control of attributes like toxicity. Our results demonstrate that inference-time scaling and steering of diffusion models, even with off-the-shelf rewards, can provide significant sample quality gains and controllability benefits. Code is available at this https URL.
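To give a feel for the mechanism described in the abstract (particles, potentials, resampling at intermediate steps), here is a minimal toy sketch of FK-style steering. It is not the paper's implementation: the "diffusion" is just a 1-D shrink-toward-zero process, and the reward (preferring values near 1.0) and the exponential potential are hypothetical stand-ins for the rewards and potential choices the paper actually studies.

```python
import math
import random


def fk_steering(num_particles=64, num_steps=20, lam=4.0, seed=0):
    """Toy sketch of Feynman-Kac steering on a 1-D surrogate process.

    Particles follow a simple 'denoising' update that shrinks them toward 0;
    a hypothetical reward prefers values near 1.0. At each intermediate step,
    particles are resampled with weights given by an exponential potential,
    biasing the population toward high-reward samples.
    """
    rng = random.Random(seed)

    def reward(x):
        # Hypothetical intermediate reward: higher when x is close to 1.0.
        return -abs(x - 1.0)

    particles = [rng.gauss(0.0, 3.0) for _ in range(num_particles)]
    for _ in range(num_steps):
        # One surrogate 'denoising' step: shrink plus a little fresh noise.
        particles = [0.9 * x + rng.gauss(0.0, 0.1) for x in particles]
        # Potentials: exp(lam * reward), so higher reward -> higher weight.
        weights = [math.exp(lam * reward(x)) for x in particles]
        # Multinomial resampling at this intermediate step.
        particles = rng.choices(particles, weights=weights, k=num_particles)
    return particles
```

Setting `lam=0` makes all weights equal, recovering unsteered sampling, so comparing the two runs shows the selection pressure the potentials introduce; the paper's contribution is studying principled choices of these potentials and rewards for real diffusion models.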

Speaker:
Ezra Erives and Hannes Stärk discuss the paper.


Meeting Details:
Every Monday at 12:00 ET / 9:00 PT / 18:00 CE(S)T.  
https://zoom.us/j/5775722530?pwd=ZzlGTXlDNThhUDZOdU4vN2JRMm5pQT09

Add it to your calendar:
Subscribe via Google Calendar, or subscribe via iCal.
Alternatively, add the events, or add this single event.

Slack Workspace for discussion and paper voting:
https://join.slack.com/t/logag/shared_invite/zt-2zuxi7gd1-rLUgxg6gnCkhO7WlRsyElg

All information: Schedule of upcoming papers, recordings, etc.:
https://portal.valencelabs.com/logg