Starkly Speaking next paper: Towards Fast, Specialized Machine Learning Force Fields: Distilling Foundation Models via Energy Hessians
Hannes Stärk
Apr 17, 2025, 8:57:24 PM
to lo...@googlegroups.com
Hi all,
We are back with the reading group! New name, new papers :) Next week, Ezra and I will discuss:
A General Framework for Inference-time Scaling and Steering of Diffusion Models
https://arxiv.org/abs/2501.06848
Raghav Singhal, Zachary Horvitz, Ryan Teehan, Mengye Ren, Zhou Yu, Kathleen McKeown, Rajesh Ranganath

Abstract: Diffusion models produce impressive results in modalities ranging from images and video to protein design and text. However, generating samples with user-specified properties remains a challenge. Recent research proposes fine-tuning models to maximize rewards that capture desired properties, but these methods require expensive training and are prone to mode collapse. In this work, we propose Feynman-Kac (FK) steering, an inference-time framework for steering diffusion models with reward functions. FK steering works by sampling a system of multiple interacting diffusion processes, called particles, and resampling particles at intermediate steps based on scores computed using functions called potentials. Potentials are defined using rewards for intermediate states and are selected such that a high value indicates that the particle will yield a high-reward sample. We explore various choices of potentials, intermediate rewards, and samplers. We evaluate FK steering on text-to-image and text diffusion models. For steering text-to-image models with a human preference reward, we find that FK steering a 0.8B parameter model outperforms a 2.6B parameter fine-tuned model on prompt fidelity, with faster sampling and no training. For steering text diffusion models with rewards for text quality and specific text attributes, we find that FK steering generates lower perplexity, more linguistically acceptable outputs and enables gradient-free control of attributes like toxicity. Our results demonstrate that inference-time scaling and steering of diffusion models, even with off-the-shelf rewards, can provide significant sample quality gains and controllability benefits. Code is available at this https URL.
Speakers: Ezra Erives and Hannes Stärk.
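If you want a feel for the method before the session, below is a minimal sketch of the particle-resampling idea the abstract describes. It is not the authors' implementation: `fk_steering`, `denoise_step`, and `reward` are hypothetical placeholders (a reverse-diffusion step and an intermediate reward model), and the exponentiated reward-increment potential is just one of several potential choices the paper explores.

```python
# Minimal sketch of FK steering as described in the abstract (assumptions:
# `denoise_step` and `reward` are user-supplied callables, NOT the authors' API).
import numpy as np

def fk_steering(denoise_step, reward, x_T, num_steps, num_particles, rng):
    """Steer a diffusion sampler by resampling a system of interacting
    particles at intermediate steps according to reward-based potentials."""
    # All particles start from the same initial noise x_T.
    particles = np.stack([x_T.copy() for _ in range(num_particles)])
    prev_reward = np.zeros(num_particles)
    for t in reversed(range(num_steps)):
        # One reverse-diffusion step per particle.
        particles = np.stack([denoise_step(x, t) for x in particles])
        # Intermediate rewards for the current states.
        r = np.array([reward(x, t) for x in particles])
        # Potential = exponentiated reward increment; a high value marks a
        # particle likely to yield a high-reward final sample.
        weights = np.exp(r - prev_reward)
        weights /= weights.sum()
        # Resample: duplicate promising particles, drop poor ones.
        idx = rng.choice(num_particles, size=num_particles, p=weights)
        particles, prev_reward = particles[idx], r[idx]
    # Return the final particle with the highest reward.
    final_rewards = np.array([reward(x, 0) for x in particles])
    return particles[int(np.argmax(final_rewards))]
```

Plugging in a pretrained denoiser and an off-the-shelf reward model, nothing here requires training or gradients through the reward, which is the inference-time appeal the abstract highlights.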