---
license: apache-2.0
size_categories:
- 10K<n<100K
---
## Abstract
Scaling video diffusion transformers (DiTs) is limited by their quadratic 3D attention, even though most of the attention mass concentrates on a small subset of positions. We turn this observation into VSA, a trainable, hardware-efficient sparse attention that replaces full attention at *both* training and inference. In VSA, a lightweight coarse stage pools tokens into tiles and identifies high-weight *critical tokens*; a fine stage computes token-level attention only inside those tiles, subject to a block computation layout to ensure hardware efficiency. This leads to a single differentiable kernel that trains end-to-end, requires no post-hoc profiling, and sustains 85% of FlashAttention3's MFU. We perform a large sweep of ablation studies and scaling-law experiments by pretraining DiTs from 60M to 1.4B parameters. VSA reaches a Pareto point that cuts training FLOPs by 2.53× with no drop in diffusion loss. Retrofitting the open-source Wan-2.1 model speeds up attention by 6× and lowers end-to-end generation time from 31s to 18s with comparable quality. These results establish trainable sparse attention as a practical alternative to full attention and a key enabler for further scaling of video diffusion models.
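For intuition, the sketch below illustrates the coarse-to-fine selection idea in plain PyTorch: pool queries and keys into tiles, keep the top-k highest-scoring key tiles per query tile, then run token-level attention restricted to those tiles. This is not the fused VSA kernel (which never materializes a dense score matrix and computes only the selected blocks in a block-sparse layout); tile size, top-k, and tensor shapes here are illustrative assumptions.

```python
# Naive PyTorch sketch of coarse-to-fine tile selection (illustrative only,
# NOT the fused VSA kernel). It materializes full score tensors for clarity.
import torch
import torch.nn.functional as F


def coarse_to_fine_attention(q, k, v, tile_size=64, topk=4):
    # q, k, v: [batch, heads, seq_len, dim]; seq_len assumed divisible by tile_size.
    B, H, S, D = q.shape
    T = S // tile_size  # number of tiles

    # Coarse stage: mean-pool queries/keys into tiles and score tile pairs.
    q_tiles = q.view(B, H, T, tile_size, D).mean(dim=3)            # [B, H, T, D]
    k_tiles = k.view(B, H, T, tile_size, D).mean(dim=3)            # [B, H, T, D]
    tile_scores = q_tiles @ k_tiles.transpose(-1, -2) / D ** 0.5   # [B, H, T, T]

    # Keep only the top-k "critical" key tiles per query tile.
    topk_idx = tile_scores.topk(topk, dim=-1).indices              # [B, H, T, topk]
    tile_mask = torch.zeros_like(tile_scores, dtype=torch.bool)
    tile_mask.scatter_(-1, topk_idx, True)

    # Fine stage: token-level attention restricted to the selected tiles.
    token_mask = tile_mask.repeat_interleave(tile_size, dim=-2)
    token_mask = token_mask.repeat_interleave(tile_size, dim=-1)   # [B, H, S, S]
    scores = q @ k.transpose(-1, -2) / D ** 0.5
    scores = scores.masked_fill(~token_mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v


if __name__ == "__main__":
    x = torch.randn(1, 2, 1024, 64)        # toy self-attention input
    out = coarse_to_fine_attention(x, x, x)
    print(out.shape)                       # torch.Size([1, 2, 1024, 64])
```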
## Dataset Overview
- The prompts were randomly sampled from the [Vchitect_T2V_DataVerse](https://huggingface.co/datasets/Vchitect/Vchitect_T2V_DataVerse) dataset.
- Each sample was generated using the **Wan2.2-TI2V-5B-Diffusers** model, and the resulting latents were stored.
- Each latent sample corresponds to **121 frames**, with each frame sized **704×1280**.
- It includes all preprocessed latents required for the **Text-to-Video (T2V)** task (the first-frame image is also included).
- The dataset is fully compatible with the [FastVideo](https://github.com/hao-ai-lab/FastVideo) repository and can be directly loaded and used without any additional preprocessing.
## Sample Usage
To download this dataset, ensure you have Git LFS installed, then clone the repository:
```bash
git lfs install
git clone https://huggingface.co/datasets/FastVideo/Wan2.2-Syn-121x704x1280_32k
```
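Alternatively, the files can be fetched programmatically with `huggingface_hub` (a minimal sketch; the repo id matches the clone URL above, and `local_dir` is an arbitrary destination):

```python
# Download the dataset snapshot without git-lfs (sketch; destination path is arbitrary).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="FastVideo/Wan2.2-Syn-121x704x1280_32k",
    repo_type="dataset",
    local_dir="Wan2.2-Syn-121x704x1280_32k",
)
```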
This dataset contains preprocessed latents ready for Text-to-Video (T2V) tasks and is designed to be directly used with the [FastVideo repository](https://github.com/hao-ai-lab/FastVideo) without further preprocessing. Refer to the FastVideo [documentation](https://hao-ai-lab.github.io/FastVideo) for detailed instructions on how to load and use the dataset for training or finetuning.
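Before pointing a training or finetuning config at the data, it can help to sanity-check what was downloaded. The sketch below only walks the cloned directory and prints file sizes; it makes no assumption about the on-disk latent format, for which the FastVideo documentation is the reference.

```python
# Sanity-check the downloaded dataset by listing its files and sizes.
# The directory name matches the clone/download commands above.
from pathlib import Path

root = Path("Wan2.2-Syn-121x704x1280_32k")
for path in sorted(p for p in root.rglob("*") if p.is_file()):
    print(f"{path.relative_to(root)}  {path.stat().st_size / 1e6:.1f} MB")
```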
If you use the FastVideo synthetic Wan2.2 dataset in your research, please cite our papers:
```bibtex
@article{zhang2025vsa,
  title={VSA: Faster Video Diffusion with Trainable Sparse Attention},
  author={Zhang, Peiyuan and Huang, Haofeng and Chen, Yongqi and Lin, Will and Liu, Zhengzhong and Stoica, Ion and Xing, Eric and Zhang, Hao},
  journal={arXiv preprint arXiv:2505.13389},
  year={2025}
}
@article{zhang2025fast,
  title={Fast Video Generation with Sliding Tile Attention},
  author={Zhang, Peiyuan and Chen, Yongqi and Su, Runlong and Ding, Hangliang and Stoica, Ion and Liu, Zhengzhong and Zhang, Hao},
  journal={arXiv preprint arXiv:2502.04507},
  year={2025}
}
```