Starkly Speaking: Meta Flow Maps enable scalable reward alignment
Hannes Stärk
Feb 16, 2026, 9:42:55 AM
Hi everyone,
Today:
Speaker: Peter Potaptchik, a DPhil student in the Department of Statistics at the University of Oxford, advised by George Deligiannidis, Saifuddin Syed, and Yee Whye Teh. He is currently visiting Michael Albergo at Harvard University.
Paper: Meta Flow Maps enable scalable reward alignment (Peter Potaptchik, Adhi Saravanan, Abbas Mammadov, Alvaro Prat, Michael S. Albergo, Yee Whye Teh)
https://arxiv.org/abs/2601.14430

Abstract: Controlling generative models is computationally expensive. This is because optimal alignment with a reward function--whether via inference-time steering or fine-tuning--requires estimating the value function. This task demands access to the conditional posterior p_{1|t}(x_1 | x_t), the distribution of clean data x_1 consistent with an intermediate state x_t, a requirement that typically compels methods to resort to costly trajectory simulations. To address this bottleneck, we introduce Meta Flow Maps (MFMs), a framework extending consistency models and flow maps into the stochastic regime. MFMs are trained to perform stochastic one-step posterior sampling, generating arbitrarily many i.i.d. draws of clean data x_1 from any intermediate state. Crucially, these samples provide a differentiable reparametrization that unlocks efficient value function estimation. We leverage this capability to solve bottlenecks in both paradigms: enabling inference-time steering without inner rollouts, and facilitating unbiased, off-policy fine-tuning to general rewards. Empirically, our single-particle steered-MFM sampler outperforms a Best-of-1000 baseline on ImageNet across multiple rewards at a fraction of the compute.
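To make the core idea in the abstract concrete: if a model can draw i.i.d. clean samples x_1 from the posterior p_{1|t}(x_1 | x_t) in one step, the value function V(x_t) = E[r(x_1) | x_t] reduces to a cheap Monte Carlo average, with no inner trajectory rollouts. The sketch below is my own illustration, not the paper's implementation; `posterior_sampler` is a hypothetical stand-in for an MFM one-step sampler, and the Gaussian toy posterior and quadratic reward are invented for demonstration only.

```python
import numpy as np

def estimate_value(posterior_sampler, reward, x_t, n_samples=64):
    """Monte Carlo estimate of V(x_t) = E[r(x_1) | x_t].

    posterior_sampler: maps (x_t, n) to n i.i.d. clean draws x_1,
    standing in for a Meta Flow Map's one-step posterior sampler.
    """
    x1 = posterior_sampler(x_t, n_samples)  # shape (n_samples, dim)
    return reward(x1).mean()

# Toy stand-ins (hypothetical): a narrow Gaussian "posterior" around x_t
# and a quadratic reward that prefers samples near the origin.
rng = np.random.default_rng(0)
toy_sampler = lambda x_t, n: x_t + 0.1 * rng.standard_normal((n, x_t.shape[-1]))
toy_reward = lambda x1: -np.sum(x1 ** 2, axis=-1)

x_t = np.zeros(4)
v = estimate_value(toy_sampler, toy_reward, x_t, n_samples=1000)
# v is close to -4 * 0.1**2 = -0.04 for this toy posterior
```

The same averaging, applied through a differentiable (reparametrized) sampler, is what would let gradients of the value estimate flow back for steering or fine-tuning, as the abstract describes.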