LoGG tomorrow's paper: One-step Diffusion Models with f-Divergence Distribution Matching


Hannes Stärk

Mar 23, 2025, 5:45:50 PM
to lo...@googlegroups.com
Hi all,

Tomorrow's reading group session (March 24th) is at a different time than usual. It will be at 3pm PT / 6pm ET / midnight CEST:

One-step Diffusion Models with f-Divergence Distribution Matching https://arxiv.org/abs/2502.15681 (Yilun Xu, Weili Nie, Arash Vahdat)
Sampling from diffusion models involves a slow iterative process that hinders their practical deployment, especially for interactive applications. To accelerate generation speed, recent approaches distill a multi-step diffusion model into a single-step student generator via variational score distillation, which matches the distribution of samples generated by the student to the teacher's distribution. However, these approaches use the reverse Kullback-Leibler (KL) divergence for distribution matching which is known to be mode seeking. In this paper, we generalize the distribution matching approach using a novel f-divergence minimization framework, termed f-distill, that covers different divergences with different trade-offs in terms of mode coverage and training variance. We derive the gradient of the f-divergence between the teacher and student distributions and show that it is expressed as the product of their score differences and a weighting function determined by their density ratio. This weighting function naturally emphasizes samples with higher density in the teacher distribution, when using a less mode-seeking divergence. We observe that the popular variational score distillation approach using the reverse-KL divergence is a special case within our framework. Empirically, we demonstrate that alternative f-divergences, such as forward-KL and Jensen-Shannon divergences, outperform the current best variational score distillation methods across image generation tasks. In particular, when using Jensen-Shannon divergence, f-distill achieves current state-of-the-art one-step generation performance on ImageNet64 and zero-shot text-to-image generation on MS-COCO. Project page: this https URL
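To make the abstract's gradient concrete: it says the f-divergence gradient factors into a weighting function of the teacher/student density ratio times the score difference. A minimal sketch of what those weightings look like for the divergences mentioned, assuming the standard generator functions f for each divergence (the exact conventions and constants in the paper may differ), with the weighting h(r) = f''(r) * r^2 evaluated at density ratio r = p_teacher / p_student:

```python
def weight(divergence: str, r: float) -> float:
    """Gradient weighting h(r) = f''(r) * r^2 at density ratio r > 0.

    Per the abstract, the per-sample gradient direction is
    h(r) * (teacher score - student score). Standard generator choices:
      reverse-KL:     f(u) = -log u   -> f''(u) = 1/u^2 -> h(r) = 1
      forward-KL:     f(u) = u log u  -> f''(u) = 1/u   -> h(r) = r
      Jensen-Shannon: h(r) = r / (1 + r), up to a constant factor
    """
    if divergence == "reverse-kl":
        # Constant weight: recovers plain variational score distillation.
        return 1.0
    if divergence == "forward-kl":
        # Weight grows with r: emphasizes samples dense under the teacher.
        return r
    if divergence == "js":
        # Bounded in (0, 1): still teacher-seeking, but with lower variance.
        return r / (1.0 + r)
    raise ValueError(f"unknown divergence: {divergence}")
```

This illustrates the abstract's point: the less mode-seeking divergences (forward-KL, JS) give larger weight where the teacher density dominates, while reverse-KL weights all samples equally.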

Speaker:
Yilun Xu who got his PhD from MIT and now works at NVIDIA research.

Meeting Details:
Every Monday at 12:00 ET / 9:00 PT / 18:00 CE(S)T.  
https://zoom.us/j/5775722530?pwd=ZzlGTXlDNThhUDZOdU4vN2JRMm5pQT09

Add it to your calendar:
Subscribe via Google Calendar, or subscribe via iCal.
Alternatively, add the events, or add this single event.

Slack Workspace for discussion and paper voting:
https://join.slack.com/t/logag/shared_invite/zt-2zuxi7gd1-rLUgxg6gnCkhO7WlRsyElg

All information: Schedule of upcoming papers, recordings, etc.:
https://portal.valencelabs.com/logg