Learning to Integrate Diffusion ODEs by Averaging the Derivatives
Abstract
Secant losses, derived from the derivative-integral relationship and inspired by Monte Carlo and Picard methods, enable efficient diffusion model inference with improved FID scores on CIFAR-10 and ImageNet.
In accelerating diffusion model inference, numerical solvers perform poorly with extremely few steps, while distillation techniques often introduce complexity and instability. This work presents an intermediate strategy that balances performance and cost: learning the ODE integration with loss functions derived from the derivative-integral relationship, inspired by Monte Carlo integration and Picard iteration. From a geometric perspective, the losses operate by gradually extending the tangent to the secant, hence the name secant losses. The training target of the secant losses is the same as that of diffusion models, or the diffusion model itself, which yields great training stability. Via fine-tuning or distillation, the secant version of EDM achieves a 10-step FID of 2.14 on CIFAR-10, while the secant version of SiT-XL/2 attains a 4-step FID of 2.27 and an 8-step FID of 1.96 on ImageNet 256×256. Code is available at https://github.com/poppuppy/secant-expectation.
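The derivative-integral relationship the abstract invokes can be made concrete with a short, hedged sketch. Assuming a probability-flow ODE dx_u/du = v(x_u, u) with velocity field v (the notation here is illustrative, not taken from the paper), the fundamental theorem of calculus relates the secant slope between two times s < t to an average of tangents:

\[
x_t - x_s = \int_s^t v(x_u, u)\, du
\quad\Longrightarrow\quad
\frac{x_t - x_s}{t - s} = \mathbb{E}_{u \sim \mathcal{U}(s, t)}\big[ v(x_u, u) \big].
\]

The right-hand expectation admits a Monte Carlo estimate from sampled times u, and as s → t the secant slope collapses back to the tangent v(x_t, t), the ordinary diffusion training target. This is consistent with the abstract's description of gradually extending the tangent to the secant while keeping the same target as the diffusion model.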