SF-V
Single Forward Video Generation Model

arXiv Project Page

This repository contains the code for the NeurIPS 2024 paper SF-V: Single Forward Video Generation Model. For more visualization results, please check our project page.

SF-V: Single Forward Video Generation Model
Zhixing Zhang 1,2, Yanyu Li 1, Yushu Wu 1, Yanwu Xu 1, Anil Kag 1, Ivan Skorokhodov 1, Willi Menapace 1, Aliaksandr Siarohin 1, Junli Cao 1, Dimitris Metaxas 2, Sergey Tulyakov 1, and Jian Ren 1
1 Snap Inc. 2 Rutgers University

TL;DR: SF-V is a video generation method that can generate high-quality and motion-consistent videos by performing the sampling only once during inference.

Diffusion-based video generation models have demonstrated remarkable success in obtaining high-fidelity videos through the iterative denoising process. However, these models require multiple denoising steps during sampling, resulting in high computational costs. In this work, we propose a novel approach to obtain single-step video generation models by leveraging adversarial training to fine-tune pre-trained video diffusion models. We show that, through adversarial training, the multi-step video diffusion model, i.e., Stable Video Diffusion (SVD), can be trained to perform a single forward pass to synthesize high-quality videos, capturing both temporal and spatial dependencies in the video data. Extensive experiments demonstrate that our method synthesizes videos of competitive generation quality with significantly reduced computational overhead for the denoising process (i.e., around 23x speedup compared with SVD and 6x speedup compared with existing works, with even better generation quality), paving the way for real-time video synthesis and editing.
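The core idea above, fine-tuning a generator so that one forward pass replaces iterative denoising, using an adversarial loss, can be illustrated with a toy PyTorch sketch. This is NOT the paper's implementation: the tiny MLPs, shapes, and plain BCE GAN loss below are stand-ins for SVD's UNet, the paper's spatial-temporal discriminator heads, and its full training objective.

```python
# Toy sketch of adversarial single-step fine-tuning (illustrative only).
# A one-step "student" generator maps noise directly to a video tensor,
# and a discriminator pushes its single forward pass toward real samples.
# Tensor layout assumed here: (batch, frames, channels, height, width).
import torch
import torch.nn as nn

B, T, C, H, W = 2, 4, 3, 8, 8          # tiny shapes for a fast CPU demo
D = T * C * H * W                      # flattened video dimension

student = nn.Sequential(nn.Linear(D, 64), nn.SiLU(), nn.Linear(64, D))  # one-step generator
disc = nn.Sequential(nn.Linear(D, 64), nn.SiLU(), nn.Linear(64, 1))     # toy critic
opt_g = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(B, D)                # stand-in for real video clips
noise = torch.randn(B, D)              # stand-in for the initial latent noise

# Discriminator step: real clips vs. single-forward-pass fakes.
fake = student(noise).detach()
d_loss = bce(disc(real), torch.ones(B, 1)) + bce(disc(fake), torch.zeros(B, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: one forward pass, adversarial loss only
# (the actual method trains from a pre-trained SVD and uses a richer objective).
fake = student(noise)
g_loss = bce(disc(fake), torch.ones(B, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Inference is then a single forward pass: noise in, video out.
video = student(noise).view(B, T, C, H, W)
print(video.shape)  # torch.Size([2, 4, 3, 8, 8])
```

The speedup claim follows directly from this structure: inference costs one generator evaluation instead of the tens of denoising steps an iterative sampler performs.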

Reference

If our work helps you, please consider citing our paper. Thanks!

@inproceedings{zhang2024sfv,
  title={{SF}-V: Single Forward Video Generation Model},
  author={Zhixing Zhang and Yanyu Li and Yushu Wu and Yanwu Xu and Anil Kag and Ivan Skorokhodov and Willi Menapace and Aliaksandr Siarohin and Junli Cao and Dimitris N. Metaxas and Sergey Tulyakov and Jian Ren},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024},
  url={https://openreview.net/forum?id=PVgAeMm3MW}
}
