---
license: mit
task_categories:
- image-to-video
tags:
- video-generation
- motion-control
- point-trajectory
---

# MoveBench of Wan-Move

[![Paper](https://img.shields.io/badge/ArXiv-Paper-brown)](https://arxiv.org/abs/2512.08765)
[![Code](https://img.shields.io/badge/GitHub-Code-blue)](https://github.com/ali-vilab/Wan-Move)
[![Model](https://img.shields.io/badge/HuggingFace-Model-yellow)](https://huggingface.co/Ruihang/Wan-Move-14B-480P)
[![Model](https://img.shields.io/badge/ModelScope-Model-violet)](https://modelscope.cn/models/churuihang/Wan-Move-14B-480P)
[![Model](https://img.shields.io/badge/HuggingFace-MoveBench-cyan)](https://huggingface.co/datasets/Ruihang/MoveBench)
[![Video](https://img.shields.io/badge/YouTube-Video-red)](https://www.youtube.com/watch?v=_5Cy7Z2NQJQ)
[![Website](https://img.shields.io/badge/Demo-Page-brown)](https://wan-move.github.io/)



## MoveBench: A Comprehensive, Well-Curated Benchmark for Assessing Motion Control in Videos


MoveBench evaluates fine-grained point-level motion control in generated videos. We categorize the video library from [Pexels](https://www.pexels.com/videos/) into 54 content categories with 10-25 videos each, yielding 1,018 cases that ensure broad scenario coverage. All video clips are 5 seconds long to facilitate evaluation of long-range dynamics. Every clip is paired with detailed motion annotations for a single object; an additional 192 clips carry motion annotations for multiple objects.

To annotate a clip, annotators click on a target region in the first frame, prompting SAM to generate an initial mask. When the mask exceeds the desired area, annotators add negative points to exclude irrelevant regions. After annotation, each video contains at least one point trajectory describing a representative motion, and the 192 multi-object clips additionally include trajectories for multiple objects.

Everyone is welcome to use it!


## Statistics

<p align="center" style="border-radius: 10px">
  <img src="assets/construction.png" width="100%" alt="logo"/>
<strong>The construction pipeline of MoveBench</strong>
</p>

<p align="center" style="border-radius: 10px">
  <img src="assets/statistics_1.png" width="100%" alt="logo"/>
<strong>Balanced sample number per video category </strong>
</p>

<p align="center" style="border-radius: 10px">
  <img src="assets/statistics_2.png" width="100%" alt="logo"/>
<strong>Comparison with related benchmarks </strong>
</p>

## Download


Download MoveBench from Hugging Face:
``` sh
huggingface-cli download Ruihang/MoveBench --local-dir ./MoveBench --repo-type dataset
```

Extract the files below:
``` sh
tar -xzvf en.tar.gz
tar -xzvf zh.tar.gz
```

The file structure will be:

```
MoveBench
├── en             # English version
│   ├── single_track.txt
│   ├── multi_track.txt
│   ├── first_frame
│   │   ├── Pexels_videoid_0.jpg
│   │   ├── Pexels_videoid_1.jpg
│   │   ├── ...
│   ├── first_frame_mask
│   │   ├── single
│   │   │   ├── Pexels_videoid_0_mask_idx.png
│   │   │   ├── Pexels_videoid_1_mask_idx.png
│   │   │   ├── ...
│   │   ├── multi
│   │   │   ├── Pexels_videoid_0_mask_idx.png
│   │   │   ├── Pexels_videoid_1_mask_idx.png
│   │   │   ├── ...
│   ├── video
│   │   ├── Pexels_videoid_0.mp4
│   │   ├── Pexels_videoid_1.mp4
│   │   ├── ...
│   ├── track
│   │   ├── single
│   │   │   ├── Pexels_videoid_0_tracks.npy
│   │   │   ├── Pexels_videoid_0_visibility.npy
│   │   │   ├── ...
│   │   ├── multi
│   │   │   ├── Pexels_videoid_0_tracks.npy
│   │   │   ├── Pexels_videoid_0_visibility.npy
│   │   │   ├── ...
├── zh             # Chinese version
│   ├── single_track.txt
│   ├── multi_track.txt
│   ├── first_frame
│   │   ├── Pexels_videoid_0.jpg
│   │   ├── Pexels_videoid_1.jpg
│   │   ├── ...
│   ├── first_frame_mask
│   │   ├── single
│   │   │   ├── Pexels_videoid_0_mask_idx.png
│   │   │   ├── Pexels_videoid_1_mask_idx.png
│   │   │   ├── ...
│   │   ├── multi
│   │   │   ├── Pexels_videoid_0_mask_idx.png
│   │   │   ├── Pexels_videoid_1_mask_idx.png
│   │   │   ├── ...
│   ├── video
│   │   ├── Pexels_videoid_0.mp4
│   │   ├── Pexels_videoid_1.mp4
│   │   ├── ...
│   ├── track
│   │   ├── single
│   │   │   ├── Pexels_videoid_0_tracks.npy
│   │   │   ├── Pexels_videoid_0_visibility.npy
│   │   │   ├── ...
│   │   ├── multi
│   │   │   ├── Pexels_videoid_0_tracks.npy
│   │   │   ├── Pexels_videoid_0_visibility.npy
│   │   │   ├── ...
├── bench.py   # Evaluation script
├── utils      # Evaluation code modules
```
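Once extracted, the per-clip annotations can be inspected with NumPy. The shapes below are an assumption based on common point-tracking conventions (tracks as `frames × points × 2` pixel coordinates, visibility as `frames × points` booleans), not something this card specifies, so the sketch uses synthetic stand-in arrays and you should verify the shapes against your local copy:

```python
import numpy as np

# Assumed layout (not confirmed by this card): tracks are
# (num_frames, num_points, 2) pixel coordinates, visibility is
# (num_frames, num_points) booleans for the same points.
num_frames, num_points = 81, 4
rng = np.random.default_rng(0)
tracks = rng.uniform(0, 480, size=(num_frames, num_points, 2)).astype(np.float32)
visibility = rng.random((num_frames, num_points)) > 0.1

# Saving and reloading mirrors the Pexels_videoid_*_tracks.npy /
# Pexels_videoid_*_visibility.npy pairs under track/single or track/multi.
np.save("demo_tracks.npy", tracks)
np.save("demo_visibility.npy", visibility)
tracks = np.load("demo_tracks.npy")
visibility = np.load("demo_visibility.npy")

# Example use: keep only the points visible in the first frame,
# e.g. to seed a motion-control model with valid starting points.
seed_points = tracks[0][visibility[0]]
print(tracks.shape, visibility.shape, seed_points.shape)
```

For the real files, replace the synthetic arrays with `np.load("en/track/single/Pexels_videoid_0_tracks.npy")` and the matching `_visibility.npy`.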


For evaluation, please refer to the [Wan-Move](https://github.com/ali-vilab/Wan-Move) codebase. Enjoy!


## Citation
If you find our work helpful, please cite us.

```
@article{chu2025wan,
  title={Wan-move: Motion-controllable video generation via latent trajectory guidance},
  author={Chu, Ruihang and He, Yefei and Chen, Zhekai and Zhang, Shiwei and Xu, Xiaogang and Xia, Bin and Wang, Dingdong and Yi, Hongwei and Liu, Xihui and Zhao, Hengshuang and others},
  journal={arXiv preprint arXiv:2512.08765},
  year={2025}
}
```


## Contact Us
If you would like to reach our research team, feel free to drop me an [email](mailto:ruihangchu@gmail.com).