SiyouLi committed on
Commit 267aeb3 · verified · 1 Parent(s): 3c3f0e8

Add files using upload-large-folder tool
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,201 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ library_name: transformers
+ pipeline_tag: image-text-to-text
+ language:
+ - en
+ tags:
+ - multimodal
+ - vision
+ - video
+ - long-video
+ - token-selection
+ - compression
+ - qwen2.5-vl
+ - qtsplus
+ ---
+
+ [![arXiv](https://img.shields.io/badge/arXiv-2511.11910-grey?labelColor=B31B1B&logo=arxiv&logoColor=white)](https://arxiv.org/abs/2511.11910)
+ [![Website](https://img.shields.io/badge/Website-QTSplus-grey?labelColor=3776AB&logo=GoogleChrome&logoColor=white)](https://qtsplus.github.io/)
+ [![Github](https://img.shields.io/badge/Github-QTSplus-grey?labelColor=000&logo=github)](https://github.com/Siyou-Li/QTSplus)
+
+ ## Model Description
+ ![](./assets/qtsplus.svg)
+
+ QTSplus-7B is a Qwen2.5-VL–based multimodal LLM fine-tuned with the Query‑Aware Token Selector (QTSplus), a lightweight visual token selection module that acts as an information gate between the vision encoder and the LLM.
+
+ - Query‑aware selection: scores vision tokens via cross‑attention against the input text query.
+ - Adaptive retention: predicts an instance‑specific budget and keeps only the most relevant tokens.
+ - Temporal reasoning: a small re‑encoder preserves temporal order with absolute time cues.
+ - Efficient long‑video understanding: up to 89% vision‑token compression and a 28% end‑to‑end latency reduction on long videos (see the paper for details).
+
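The selection step above can be sketched as a toy top-n filter. This is an illustrative simplification (single head, dot-product scoring, a fixed `keep_n`), not the released module; the actual selector in `modeling_qts_plus_qwen2_5_vl.py` uses Qwen-style multi-head cross-attention and a learned adaptive budget:

```python
import numpy as np

def select_tokens(vision, query, keep_n, tau=0.5):
    """Score vision tokens by cross-attention against the query and keep
    the top-n, returned in original (temporal) order."""
    logits = query @ vision.T / tau                  # [L, M] query-to-vision scores
    logits -= logits.max(axis=-1, keepdims=True)     # numeric stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)         # softmax over vision tokens
    relevance = attn.max(axis=0)                     # [M]: best match over query positions
    kept = np.sort(np.argpartition(-relevance, keep_n)[:keep_n])  # top-n, order preserved
    return vision[kept], kept

rng = np.random.default_rng(0)
vision = rng.standard_normal((100, 32))  # M=100 vision tokens, d=32
query = rng.standard_normal((8, 32))     # L=8 text-query tokens
kept_tokens, kept_idx = select_tokens(vision, query, keep_n=10)
print(kept_tokens.shape)  # (10, 32)
```

Sorting the surviving indices keeps the compressed sequence in temporal order, which the re-encoder then processes with absolute time cues.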
32
+ ## Intended Uses & Limitations
+
+ Intended uses
+ - Long‑video question answering and captioning
+ - Multi‑image reasoning and story understanding
+ - Efficient multimodal chat with reduced latency on long inputs
+
+ Limitations
+ - May miss fine details if the predicted retention budget is too small.
+ - Inherits biases and failure modes from the base Qwen2.5‑VL model and training data.
+ - Not a safety‑aligned system; outputs may be inaccurate or unsafe without human oversight.
+
+ ## Quick Start
+
+ The repository is designed around a conda‑based Python 3.11 environment with a CUDA‑enabled GPU.
+
+ 1. **Create and activate the conda environment**
+
+ ```bash
+ conda create -n qtsplus python=3.11 -y
+ conda activate qtsplus
+ ```
+
+ 2. **Install toolchain and CUDA toolkit**
+
+ ```bash
+ conda install conda-forge::gcc=11 conda-forge::gxx=11 -y
+ conda install nvidia/label/cuda-12.8.1::cuda-toolkit -y
+ conda install av -c conda-forge -y
+ ```
+
+ 3. **Install PyTorch with CUDA 12.8 support**
+
+ ```bash
+ pip3 install torch==2.9.0 torchvision --index-url https://download.pytorch.org/whl/cu128
+ ```
+
+ 4. **Install core Python libraries**
+
+ ```bash
+ pip install transformers==4.57.1
+ DS_BUILD_CUTLASS_OPS=0 DS_BUILD_RAGGED_DEVICE_OPS=0 DS_BUILD_EVOFORMER_ATTN=0 pip install deepspeed
+ pip install accelerate pandas wandb matplotlib scikit-learn datasets evaluate ftfy sentencepiece bitsandbytes
+ pip install qwen-vl-utils
+ ```
+
+ (`qwen-vl-utils` provides the `process_vision_info` helper used in the examples below.)
+
+ 5. **Install FlashAttention (prebuilt wheel)**
+
+ ```bash
+ pip install https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.4.22/flash_attn-2.8.1+cu128torch2.9-cp311-cp311-linux_x86_64.whl
+ ```
+
+ This wheel is specific to Linux x86_64, CUDA 12.8, PyTorch 2.9.0, and Python 3.11; if you deviate from this configuration, install a compatible FlashAttention build instead.
+
+ 6. **Verify installation**
+
+ After installation, you should be able to run:
+
+ ```bash
+ python -c "import torch, transformers, deepspeed, accelerate; print(torch.cuda.is_available())"
+ ```
+
+ which should print `True` on a correctly configured GPU machine.
+
+ Video example
+ ```python
+ import torch, glob, os
+ from transformers import AutoModelForCausalLM, AutoProcessor
+ from qwen_vl_utils import process_vision_info
+
+ model_id = "AlpachinoNLP/QTSplus-7B"
+ device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
+ dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float16
+
+ model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to(dtype=dtype, device=device).eval()
+ processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
+
+ question = "Summarize the key events in this video."
+ video_path = "/path/to/video.mp4"
+
+ messages = [{
+     "role": "user",
+     "content": [
+         {"type": "video", "video": video_path, "max_pixels": 360*420, "fps": 1.0},
+         {"type": "text", "text": question},
+     ],
+ }]
+
+ chat = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ _, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
+
+ inputs = processor(text=[chat], images=None, videos=video_inputs, padding=True, return_tensors="pt", **video_kwargs)
+ inputs = inputs.to(dtype=dtype, device=device)  # match the model dtype; integer tensors are left untouched
+
+ # Pack vision inputs for QTSplus
+ pixel_values_videos = inputs.pop("pixel_values_videos", None)
+ video_grid_thw = inputs.pop("video_grid_thw", None)
+ inputs.pop("second_per_grid_ts", None)
+ vision_input = None
+ if pixel_values_videos is not None and video_grid_thw is not None:
+     vision_input = {"pixel_values_videos": pixel_values_videos, "video_grid_thw": video_grid_thw}
+
+ # Text ids from the question only (exclude special/system/vision tokens)
+ question_ids = processor.tokenizer(question, return_tensors="pt", add_special_tokens=False).input_ids.to(dtype=torch.long, device=device)
+
+ out_ids = model.generate(vision_input=vision_input, input_ids=inputs.input_ids, question_input_ids=question_ids, max_new_tokens=256)
+ trimmed = [o[len(i):] for i, o in zip(inputs.input_ids, out_ids)]
+ text = processor.batch_decode(trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=True)
+ print(text[0])
+ ```
+
+ Multiple images (treated as a video sequence)
+ ```python
+ images_dir = "/path/to/images"
+ image_list = sorted(glob.glob(os.path.join(images_dir, "*.jpg"))) or sorted(glob.glob(os.path.join(images_dir, "*.jpeg")))
+ messages = [{
+     "role": "user",
+     "content": [
+         {"type": "video", "video": image_list},
+         {"type": "text", "text": "What story do these images tell?"},
+     ],
+ }]
+
+ chat = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ _, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
+ inputs = processor(text=[chat], images=None, videos=video_inputs, padding=True, return_tensors="pt", **video_kwargs).to(dtype=dtype, device=device)
+
+ pixel_values_videos = inputs.pop("pixel_values_videos", None)
+ video_grid_thw = inputs.pop("video_grid_thw", None)
+ inputs.pop("second_per_grid_ts", None)
+ vision_input = {"pixel_values_videos": pixel_values_videos, "video_grid_thw": video_grid_thw}
+
+ out = model.generate(vision_input=vision_input, input_ids=inputs.input_ids, max_new_tokens=256)
+ print(processor.decode(out[0], skip_special_tokens=True))
+ ```
+
+ Notes
+ - The chat template is applied via `processor.apply_chat_template` and expects the messages schema shown above.
+ - QTSplus expects the vision payload under the `vision_input` keyword argument during generation.
+ - For fully offline use, pass `local_files_only=True` to `from_pretrained` calls once the files are cached locally.
+
+ ## Efficiency & Controls
+
+ The following QTSplus hyperparameters in `config.json` control compression and selection behavior:
+ - `qts_plus_rho_min` / `qts_plus_rho_max`: minimum/maximum retention-ratio bounds (default: 0.05 / 0.5).
+ - `qts_plus_tau_s`: scoring temperature for cross‑attention (default: 0.5).
+ - `qts_plus_nmax`: hard cap on selected tokens per sample (default: 25600).
+ These trade off detail against speed and memory. See the paper for guidance, ablations, and latency/throughput measurements.
+
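How these hyperparameters interact can be sketched in plain Python, following the budget formula in `modeling_qts_plus_qwen2_5_vl.py` (`n = clamp(round(rho * M), 1, nmax)`, with `rho` squeezed into `[rho_min, rho_max]` by a sigmoid); the `logit` argument here stands in for the learned budget head's output:

```python
import math

# Sketch of the adaptive retention budget. Mirrors BudgetHead plus the
# n = clamp(round(rho * M), 1, nmax) step; `logit` is a stand-in for the
# learned budget head's scalar output.
def retained_tokens(logit, n_vision_tokens, rho_min=0.05, rho_max=0.5, nmax=25600):
    rho = rho_min + (rho_max - rho_min) / (1.0 + math.exp(-logit))  # sigmoid into [rho_min, rho_max]
    return max(1, min(nmax, round(rho * n_vision_tokens)))

print(retained_tokens(0.0, 1000))    # 275 (rho = 0.275 at logit 0)
print(retained_tokens(10.0, 40000))  # near the rho_max bound (~half the tokens)
```

Raising `qts_plus_rho_max` or `qts_plus_nmax` preserves more detail at the cost of speed and memory; lowering them compresses harder.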
+ ## Safety, Bias, and Limitations
+
+ - Outputs may be factually incorrect, biased, or unsafe. Do not use without human oversight.
+ - QTSplus compresses the vision stream; extremely small budgets may drop rare but important details.
+ - Inherits safety/bias characteristics from the underlying Qwen2.5‑VL model and training data.
+
+ ## Citation
+
+ If you find this work helpful, please cite:
+
+ ```bibtex
+ @misc{li2025seeingforesttreesqueryaware,
+   title = {Seeing the Forest and the Trees: Query-Aware Tokenizer for Long-Video Multimodal Language Models},
+   author = {Siyou Li and Huanan Wu and Juexi Shao and Yinghao Ma and Yujian Gan and Yihao Luo and Yuwei Wang and Dong Nie and Lu Wang and Wengqing Wu and Le Zhang and Massimo Poesio and Juntao Yu},
+   year = {2025},
+   eprint = {2511.11910},
+   archivePrefix = {arXiv},
+   primaryClass = {cs.CV},
+   url = {https://arxiv.org/abs/2511.11910}
+ }
+ ```
added_tokens.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "</tool_call>": 151658,
+   "<tool_call>": 151657,
+   "<|box_end|>": 151649,
+   "<|box_start|>": 151648,
+   "<|endoftext|>": 151643,
+   "<|file_sep|>": 151664,
+   "<|fim_middle|>": 151660,
+   "<|fim_pad|>": 151662,
+   "<|fim_prefix|>": 151659,
+   "<|fim_suffix|>": 151661,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644,
+   "<|image_pad|>": 151655,
+   "<|object_ref_end|>": 151647,
+   "<|object_ref_start|>": 151646,
+   "<|quad_end|>": 151651,
+   "<|quad_start|>": 151650,
+   "<|repo_name|>": 151663,
+   "<|video_pad|>": 151656,
+   "<|vision_end|>": 151653,
+   "<|vision_pad|>": 151654,
+   "<|vision_start|>": 151652
+ }
assets/dataset.svg ADDED
assets/logo_with_glasses.svg ADDED
assets/qtsplus.svg ADDED
assets/system_load.svg ADDED
assets/training_process.svg ADDED
chat_template.jinja ADDED
@@ -0,0 +1,7 @@
+ {% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system
+ You are a helpful assistant.<|im_end|>
+ {% endif %}<|im_start|>{{ message['role'] }}
+ {% if message['content'] is string %}{{ message['content'] }}<|im_end|>
+ {% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|vision_start|><|image_pad|><|vision_end|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|vision_start|><|video_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>
+ {% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant
+ {% endif %}
config.json ADDED
@@ -0,0 +1,82 @@
+ {
+   "architectures": [
+     "QTSplusQwen2_5_VLTextForCausalLM"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_qts_plus_qwen2_5_vl.QTSplusQwen2_5_VL_CausalLM_Config",
+     "AutoModelForCausalLM": "modeling_qts_plus_qwen2_5_vl.QTSplusQwen2_5_VLTextForCausalLM",
+     "AutoProcessor": "processing_qts_plus_qwen2_5_vl.QTSplusQwen2_5_VLProcessor"
+   },
+   "vision_tower": "qwen2_5_vl_vision",
+   "enable_qts_plus": true,
+   "qts_plus_n_heads": 28,
+   "qts_plus_tau_s": 0.5,
+   "qts_plus_nmax": 25600,
+   "qts_plus_rho_min": 0.05,
+   "qts_plus_rho_max": 0.5,
+   "qts_plus_block_dropout": 0.0,
+   "qts_plus_reencode": true,
+   "qts_plus_scoring_layers": 1,
+   "qts_plus_reencode_layers": 2,
+   "project_text_if_needed": false,
+   "freeze_qts_scoring_layers": false,
+   "lambda_t": 0,
+   "lambda_m": 0,
+   "lambda_s": 0,
+   "attention_dropout": 0.0,
+   "bos_token_id": 151643,
+   "eos_token_id": 151645,
+   "vision_start_token_id": 151652,
+   "vision_end_token_id": 151653,
+   "vision_token_id": 151654,
+   "image_token_id": 151655,
+   "video_token_id": 151656,
+   "hidden_act": "silu",
+   "hidden_size": 3584,
+   "initializer_range": 0.02,
+   "intermediate_size": 18944,
+   "max_position_embeddings": 128000,
+   "max_window_layers": 28,
+   "model_type": "qts_plus_qwen2_5_vl_causal_lm",
+   "num_attention_heads": 28,
+   "num_hidden_layers": 28,
+   "num_key_value_heads": 4,
+   "rms_norm_eps": 1e-06,
+   "rope_theta": 1000000.0,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.57.1",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vision_config": {
+     "depth": 32,
+     "hidden_act": "silu",
+     "hidden_size": 1280,
+     "intermediate_size": 3420,
+     "num_heads": 16,
+     "in_chans": 3,
+     "out_hidden_size": 3584,
+     "patch_size": 14,
+     "spatial_merge_size": 2,
+     "spatial_patch_size": 14,
+     "window_size": 112,
+     "fullatt_block_indexes": [
+       7,
+       15,
+       23,
+       31
+     ],
+     "tokens_per_second": 2,
+     "temporal_patch_size": 2
+   },
+   "rope_scaling": {
+     "type": "mrope",
+     "mrope_section": [
+       16,
+       24,
+       24
+     ]
+   },
+   "vocab_size": 152064
+ }
configuration_qts_plus_qwen2_5_vl.py ADDED
@@ -0,0 +1,21 @@
+ """
+ Self-contained config shim for trust_remote_code.
+
+ This file defines the minimal configuration class expected by
+ `config.json` without importing from a local `src` package.
+ """
+ from transformers import AutoConfig
+ from transformers.models.qwen2_5_vl.configuration_qwen2_5_vl import Qwen2_5_VLTextConfig
+
+
+ class QTSplusQwen2_5_VL_CausalLM_Config(Qwen2_5_VLTextConfig):
+     """Config alias for QTS+ Qwen2.5-VL Causal LM.
+
+     It inherits from the upstream Qwen2.5-VL text config and only sets a
+     distinct `model_type` so that Transformers can resolve the proper
+     architecture via `auto_map`.
+     """
+
+     model_type = "qts_plus_qwen2_5_vl_causal_lm"
+
+
+ AutoConfig.register("qts_plus_qwen2_5_vl_causal_lm", QTSplusQwen2_5_VL_CausalLM_Config)
+ __all__ = ["QTSplusQwen2_5_VL_CausalLM_Config"]
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 151643,
+   "eos_token_id": [
+     151645
+   ],
+   "pad_token_id": 151643,
+   "transformers_version": "4.57.1"
+ }
latest ADDED
@@ -0,0 +1 @@
+ global_step30000
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1366365f50c6d1464cec1b067f4a1d600b8fa05c4a22911c1e46aa8b81fd5b14
+ size 17379187170
modeling_qts_plus_qwen2_5_vl.py ADDED
@@ -0,0 +1,1090 @@
+ """
+ Self-contained modeling shim for trust_remote_code.
+
+ Implements the QTS+ Qwen2.5‑VL Causal LM architecture locally by
+ composing upstream Transformers' Qwen2.5‑VL text and vision modules
+ with a lightweight QTS+ selector. This avoids importing any local `src`
+ package while preserving checkpoint compatibility (including
+ `model.vision_tower.*` and `model.qts_plus.selector.*` parameters).
+ """
+ import json
+ import os
+ from dataclasses import dataclass
+ from typing import Any, Dict, List, Optional, Tuple, Union
+
+ import torch
+ import torch.nn as nn
+ from transformers import AutoConfig, AutoModelForCausalLM, logging
+ from transformers.modeling_flash_attention_utils import is_flash_attn_available
+ from transformers.modeling_outputs import CausalLMOutputWithPast
+ from transformers.generation import GenerationMixin
+
+ from transformers.models.qwen2_5_vl.modeling_qwen2_5_vl import Qwen2_5_VisionTransformerPretrainedModel as Qwen2_5_VisionTransformerPretrainedModelBase
+ from transformers.models.qwen2_5_vl.modeling_qwen2_5_vl import (
+     Qwen2_5_VLTextModel,
+     Qwen2_5_VLPreTrainedModel,
+ )
+ from transformers.models.qwen2_5_vl.configuration_qwen2_5_vl import Qwen2_5_VLTextConfig, Qwen2_5_VLVisionConfig
+ from .configuration_qts_plus_qwen2_5_vl import (
+     QTSplusQwen2_5_VL_CausalLM_Config
+ )
+
+ logger = logging.get_logger(__name__)
+
+
+ # ------------------------------
+ # Utilities: embedding integration
+ # ------------------------------
+ def qts_integrate_embeddings(
+     vision_features: torch.Tensor,
+     input_ids: torch.Tensor,
+     attention_mask: torch.Tensor,
+     labels: Optional[torch.Tensor] = None,
+     image_token_id: Optional[int] = None,
+     video_token_id: Optional[int] = None,
+     image_grid_thw: Optional[torch.Tensor] = None,
+     video_grid_thw: Optional[torch.Tensor] = None,
+     text_model_embed_layer: Optional[nn.Embedding] = None,
+     kept_indices: Optional[torch.Tensor] = None,
+ ) -> Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]:
+     """Integrate visual features into text embeddings (single-sample batch).
+
+     This mirrors the behavior of the full Qwen2.5‑VL generation path, but
+     works with pre-computed visual features and placeholder tokens in the
+     text sequence. It supports both the single <|video_pad|> token case and
+     multi-placeholder templates.
+     """
+     if text_model_embed_layer is None:
+         raise ValueError("text_model_embed_layer is required for text embedding integration")
+     if input_ids.dtype is not torch.long:
+         input_ids = input_ids.long()
+
+     inputs_embeds = text_model_embed_layer(input_ids)
+     if vision_features.shape[0] <= 0:
+         raise ValueError("vision_features must contain at least one feature vector")
+     if video_token_id is None:
+         raise ValueError("video_token_id must be provided for video feature integration")
+
+     B, S = input_ids.shape
+     assert B == 1, "Sequence-trimming currently assumes batch_size == 1."
+
+     vid_pos = (input_ids[0] == video_token_id).nonzero(as_tuple=False).flatten()
+     n_feats = int(vision_features.shape[0])
+
+     if vid_pos.numel() == 1 and n_feats >= 1:
+         insert_idx = int(vid_pos.item())
+         vision_features = vision_features.to(inputs_embeds.device, inputs_embeds.dtype)
+
+         pre_embeds = inputs_embeds[:, :insert_idx, :]
+         post_embeds = inputs_embeds[:, insert_idx + 1 :, :]
+
+         feats_embeds = vision_features.unsqueeze(0)
+         inputs_embeds = torch.cat([pre_embeds, feats_embeds, post_embeds], dim=1)
+
+         feats_mask = torch.ones((1, n_feats), dtype=attention_mask.dtype, device=attention_mask.device)
+         pre_mask = attention_mask[:, :insert_idx]
+         post_mask = attention_mask[:, insert_idx + 1 :]
+         attention_mask = torch.cat([pre_mask, feats_mask, post_mask], dim=1)
+
+         if labels is not None:
+             labels = labels.clone()
+             if labels.size(1) > insert_idx:
+                 pre_labels = labels[:, :insert_idx]
+                 post_labels = labels[:, insert_idx + 1 :]
+                 pad = torch.full((1, n_feats), -100, dtype=labels.dtype, device=labels.device)
+                 labels = torch.cat([pre_labels, pad, post_labels], dim=1)
+         return inputs_embeds, attention_mask, labels
+
+     # Fallback: multi-placeholder handling
+     M = int(vid_pos.numel())
+     if M == 0:
+         raise ValueError("No video placeholder tokens found in input_ids for provided vision_features")
+
+     vision_features = vision_features.to(inputs_embeds.device, inputs_embeds.dtype)
+     N = n_feats
+     if N > M:
+         raise NotImplementedError(
+             "Number of vision features exceeds video placeholders; use a single <|video_pad|> token template."
+         )
+     if N < M:
+         drop_pos = vid_pos[N:]
+         if drop_pos.numel() > 0:
+             keep_seq = torch.ones(S, dtype=torch.bool, device=input_ids.device)
+             keep_seq[drop_pos] = False
+             input_ids = input_ids[:, keep_seq]
+             attention_mask = attention_mask[:, keep_seq]
+             inputs_embeds = inputs_embeds[:, keep_seq, :]
+             if labels is not None:
+                 labels = labels[:, keep_seq]
+             vid_pos = (input_ids[0] == video_token_id).nonzero(as_tuple=False).flatten()
+             M = int(vid_pos.numel())
+
+     for i in range(N):
+         pos = int(vid_pos[i].item())
+         inputs_embeds[0, pos, :] = vision_features[i, :]
+     if labels is not None and N > 0:
+         labels = labels.clone()
+         labels[0, vid_pos[:N]] = -100
+
+     return inputs_embeds, attention_mask.to(inputs_embeds.device), labels
+
+
+ # ------------------------------
+ # QTS+ modules (selector + tokenizer)
+ # ------------------------------
+ class RMSNorm(nn.Module):
+     def __init__(self, d: int, eps: float = 1e-6):
+         super().__init__()
+         self.weight = nn.Parameter(torch.ones(d))
+         self.eps = eps
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         norm = x.pow(2).mean(dim=-1, keepdim=True)
+         x = x * torch.rsqrt(norm + self.eps)
+         return self.weight * x
+
+
+ class FeedForward(nn.Module):
+     def __init__(self, d_model: int, d_ff: int, dropout: float = 0.0):
+         super().__init__()
+         self.net = nn.Sequential(
+             nn.Linear(d_model, d_ff),
+             nn.GELU(),
+             nn.Linear(d_ff, d_model),
+             nn.Dropout(dropout),
+         )
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         return self.net(x)
+
+
+ class Qwen2_5_ScoringCrossAttentionLayer(nn.Module):
+     """Qwen2.5-style cross-attention used in QTS+ scoring.
+
+     Separate q/k/v projections (with optional multi-query kv heads) followed by
+     an output projection and a small FFN on the query path.
+     """
+
+     def __init__(
+         self,
+         d_model: int,
+         num_heads: int,
+         num_key_value_heads: Optional[int] = None,
+         dropout: float = 0.0,
+         d_ff: Optional[int] = None,
+         rms_norm_eps: float = 1e-6,
+         use_qwen_rms: bool = True,
+     ) -> None:
+         super().__init__()
+         assert d_model % num_heads == 0
+         self.hidden_size = d_model
+         self.num_heads = int(num_heads)
+         self.head_dim = d_model // self.num_heads
+         self.num_key_value_heads = int(num_key_value_heads) if num_key_value_heads else self.num_heads
+         self.num_key_value_groups = self.num_heads // self.num_key_value_heads
+         self.attention_dropout = dropout
+
+         # Minimal Qwen-like RMS norms
+         class _Qwen2RMSNorm(nn.Module):
+             def __init__(self, hidden_size: int, eps: float = 1e-6):
+                 super().__init__()
+                 self.weight = nn.Parameter(torch.ones(hidden_size))
+                 self.eps = float(eps)
+
+             def forward(self, x: torch.Tensor) -> torch.Tensor:
+                 dtype = x.dtype
+                 x = x.float()
+                 variance = x.pow(2).mean(-1, keepdim=True)
+                 x = x * torch.rsqrt(variance + self.eps)
+                 x = x.to(dtype)
+                 return self.weight * x
+
+         self.q_norm = _Qwen2RMSNorm(d_model, eps=rms_norm_eps) if use_qwen_rms else RMSNorm(d_model, eps=rms_norm_eps)
+         self.kv_norm = _Qwen2RMSNorm(d_model, eps=rms_norm_eps) if use_qwen_rms else RMSNorm(d_model, eps=rms_norm_eps)
+         self.ffn_norm = _Qwen2RMSNorm(d_model, eps=rms_norm_eps) if use_qwen_rms else RMSNorm(d_model, eps=rms_norm_eps)
+
+         self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=True)
+         self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=True)
+         self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=True)
+         self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)
+
+         self.ffn = FeedForward(d_model, d_ff or (4 * d_model), dropout=dropout)
+
+     @staticmethod
+     def _repeat_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor:
+         b, h_kv, t, dh = x.shape
+         if n_rep == 1:
+             return x
+         x = x[:, :, None, :, :].expand(b, h_kv, n_rep, t, dh)
+         return x.reshape(b, h_kv * n_rep, t, dh)
+
+     def forward(
+         self,
+         q: torch.Tensor,  # [B, L, D]
+         kv: torch.Tensor,  # [B, M, D]
+         kv_key_padding_mask: Optional[torch.Tensor] = None,  # [B, M]
+         need_weights: bool = False,
+     ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
+         B, L, _ = q.shape
+         _, M, _ = kv.shape
+
+         qn = self.q_norm(q)
+         kvn = self.kv_norm(kv)
+
+         q_states = self.q_proj(qn)
+         k_states = self.k_proj(kvn)
+         v_states = self.v_proj(kvn)
+
+         q_states = q_states.view(B, L, self.num_heads, self.head_dim).transpose(1, 2)
+         k_states = k_states.view(B, M, self.num_key_value_heads, self.head_dim).transpose(1, 2)
+         v_states = v_states.view(B, M, self.num_key_value_heads, self.head_dim).transpose(1, 2)
+
+         if self.num_key_value_groups > 1:
+             k_states = self._repeat_kv(k_states, self.num_key_value_groups)
+             v_states = self._repeat_kv(v_states, self.num_key_value_groups)
+
+         attn_weights = torch.matmul(q_states, k_states.transpose(2, 3)) / (self.head_dim ** 0.5)
+         if kv_key_padding_mask is not None:
+             mask = kv_key_padding_mask[:, None, None, :].to(dtype=attn_weights.dtype)
+             attn_weights = attn_weights.masked_fill(mask > 0.5, float("-inf"))
+         attn_dtype = attn_weights.dtype
+         attn_weights = torch.softmax(attn_weights, dim=-1, dtype=torch.float32).to(attn_dtype)
+         attn_output = torch.matmul(attn_weights, v_states)
+         attn_output = attn_output.transpose(1, 2).contiguous().view(B, L, self.num_heads * self.head_dim)
+
+         out = self.o_proj(attn_output)
+         q = q + out
+         q = q + self.ffn(self.ffn_norm(q))
+         return q, (attn_weights if need_weights else None)
+
+
+ class Qwen2_5_SelfReencodeLayer(nn.Module):
+     def __init__(
+         self,
+         d_model: int,
+         num_heads: int,
+         num_key_value_heads: Optional[int] = None,
+         dropout: float = 0.0,
+         d_ff: Optional[int] = None,
+         rms_norm_eps: float = 1e-6,
+         use_qwen_rms: bool = True,
+     ) -> None:
+         super().__init__()
+         self.core = Qwen2_5_ScoringCrossAttentionLayer(
+             d_model=d_model,
+             num_heads=num_heads,
+             num_key_value_heads=num_key_value_heads or num_heads,
+             dropout=dropout,
+             d_ff=d_ff,
+             rms_norm_eps=rms_norm_eps,
+             use_qwen_rms=use_qwen_rms,
+         )
+
+     def forward(self, x: torch.Tensor, key_padding_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
+         y, _ = self.core(x, x, kv_key_padding_mask=key_padding_mask, need_weights=False)
+         return y
+
+     def init_from_qwen_attn(self, qwen_attn: nn.Module, qwen_input_ln: Optional[nn.Module] = None) -> None:
+         self.core.init_from_qwen_attn(qwen_attn, qwen_input_ln)
+
+
+ class BudgetHead(nn.Module):
+     def __init__(self, d_model: int, hidden: int = 256, rho_min: float = 0.05, rho_max: float = 0.5) -> None:
+         super().__init__()
+         self.rho_min = rho_min
+         self.rho_max = rho_max
+         self.mlp = nn.Sequential(
+             nn.Linear(d_model + 3, hidden),
+             nn.GELU(),
+             nn.Linear(hidden, 1),
+         )
+
+     def forward(self, sq: torch.Tensor, logM: torch.Tensor, r_max: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
+         B, D = sq.shape
+         x = torch.cat([sq, logM.view(B, 1), r_max.view(B, 1), H.view(B, 1)], dim=1)
+         # Ensure input dtype matches layer weights to avoid Float/Half mismatch
+         x = x.to(dtype=self.mlp[0].weight.dtype)
+         logits = self.mlp(x).squeeze(1)
+         rho = self.rho_min + (self.rho_max - self.rho_min) * torch.sigmoid(logits)
+         return rho
+
+
+ class QTSplus(nn.Module):
+     """Query‑Aware Token Selector with Adaptive Budget."""
+
+     def __init__(
+         self,
+         d_model: int,
+         n_heads: int = 8,
+         n_kv_heads: Optional[int] = None,
+         tau_s: float = 0.1,
+         nmax: int = 2560,
+         rho_min: float = 0.05,
+         rho_max: float = 0.5,
+         block_dropout: float = 0.0,
+         use_reencode: bool = True,
+         n_scoring_layers: int = 1,
+         n_reencode_layers: int = 1,
+     ) -> None:
+         super().__init__()
+         assert d_model % n_heads == 0
+         self.d_model = d_model
+         self.n_heads = int(n_heads)
+         self.d_head = d_model // self.n_heads
+         self.tau_s = float(tau_s)
+         self.nmax = int(nmax)
+         self.use_reencode = bool(use_reencode)
+         self.n_scoring_layers = max(int(n_scoring_layers), 1)
+         self.n_reencode_layers = max(int(n_reencode_layers), 1)
+
+         n_kv_heads_eff = int(n_kv_heads) if (n_kv_heads is not None and int(n_kv_heads) > 0) else self.n_heads
+         self.scoring_layers = nn.ModuleList(
+             [
+                 Qwen2_5_ScoringCrossAttentionLayer(
+                     d_model,
+                     num_heads=self.n_heads,
+                     num_key_value_heads=n_kv_heads_eff,
+                     dropout=0.0,
+                     rms_norm_eps=1e-6,
+                     use_qwen_rms=True,
+                 )
+                 for _ in range(self.n_scoring_layers)
+             ]
+         )
+
+         self.budget = BudgetHead(d_model, rho_min=rho_min, rho_max=rho_max)
+
+         if self.use_reencode:
+             self.reencode_layers = nn.ModuleList(
+                 [
+                     Qwen2_5_SelfReencodeLayer(
+                         d_model,
+                         num_heads=self.n_heads,
+                         num_key_value_heads=n_kv_heads_eff,
+                         dropout=0.0,
+                         rms_norm_eps=1e-6,
+                         use_qwen_rms=True,
+                     )
+                     for _ in range(self.n_reencode_layers)
+                 ]
+             )
+         else:
+             self.reencode_layers = None
+
+     def _score(self, Xv: torch.Tensor, Qt: torch.Tensor) -> torch.Tensor:
+         # Simple cross-attention based scoring aggregated across heads and query positions
+         B, M, D = Xv.shape
+         q = Qt
+         kv = Xv
+         for layer in self.scoring_layers:
+             q, attn = layer(q, kv, need_weights=True)
+         # attn: [B, H, L, M]; aggregate -> [B, M]
+         r = attn.amax(dim=2).mean(dim=1)
+         return r
+
+     def _predict_budget(self, q: torch.Tensor, r: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
+         B, L, D = q.shape
+         M = r.shape[-1]
+         sq = q.mean(dim=1)
+         # Create logM with same dtype/device as q to keep types consistent
+         logM = torch.log(torch.tensor(float(M), device=q.device, dtype=q.dtype)).expand(B)
+         r_max = r.max(dim=-1).values
+         # entropy H over token scores after softmax
+         p = torch.softmax(r, dim=-1)
+         H = -(p * (p.clamp(min=1e-12).log())).sum(dim=-1)
+         rho = self.budget(sq, logM, r_max, H)
+         # n = clamp(round(rho * M), 1, nmax)
+         n = torch.clamp((rho * float(M)).round(), min=1.0, max=float(self.nmax)).to(torch.long)
396
+ return rho, n
397
+
398
+ def forward(self, Xv: torch.Tensor, Qt: torch.Tensor, mode: str = "train") -> Dict[str, Any]:
399
+ assert mode in ("train", "infer")
400
+ B, M, D = Xv.shape
401
+ r = self._score(Xv, Qt)
402
+ rho, n = self._predict_budget(Qt, r)
403
+
404
+ # Hard top-n with original order preserved
405
+ kept_idx_list: List[torch.Tensor] = []
406
+ Z_out: List[torch.Tensor] = []
407
+ for b in range(B):
408
+ kb = torch.topk(r[b], k=int(n[b].item()), dim=0).indices
409
+ kb, _ = torch.sort(kb)
410
+ kept_idx_list.append(kb)
411
+ Z_out.append(Xv[b, kb])
412
+
413
+ if self.use_reencode:
414
+ max_keep = int(max(z.size(0) for z in Z_out))
415
+ Zb = []
416
+ for z in Z_out:
417
+ if z.size(0) < max_keep:
418
+ pad = z[-1:].repeat(max_keep - z.size(0), 1)
419
+ z = torch.cat([z, pad], dim=0)
420
+ Zb.append(z.unsqueeze(0))
421
+ Zb = torch.cat(Zb, dim=0)
422
+ for layer in self.reencode_layers or []:
423
+ Zb = layer(Zb)
424
+ Z_final = [Zb[b, : kept_idx_list[b].numel()] for b in range(B)]
425
+ else:
426
+ Z_final = Z_out
427
+
428
+ # Simple training proxies: quadratic FLOPs cost and linear KV-cache cost in kept tokens
430
+ flops_proxy = ((rho * float(M)) ** 2) / float(self.nmax ** 2)
431
+ kv_proxy = (rho * float(M)) / float(self.nmax)
432
+
433
+ return {
434
+ "indices": kept_idx_list,
435
+ "Z": Z_final,
436
+ "rho": rho,
437
+ "r": r,
438
+ "n": n,
439
+ "add_loss": {
440
+ "flops": flops_proxy.mean(),
441
+ "kv": kv_proxy.mean(),
442
+ "smooth": torch.tensor(0.0, device=Xv.device, dtype=Xv.dtype),
443
+ },
444
+ }
445
+
446
+
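`_predict_budget` summarizes the score vector r with two statistics fed to the budget head: its maximum and the entropy of softmax(r). A small pure-Python sketch of those statistics (illustrative only; the module computes them with torch):

```python
import math

def budget_features(scores):
    """Return (r_max, entropy) over a list of relevance scores, as in _predict_budget."""
    r_max = max(scores)
    # softmax with max-subtraction for numerical stability
    exps = [math.exp(s - r_max) for s in scores]
    z = sum(exps)
    p = [e / z for e in exps]
    entropy = -sum(pi * math.log(max(pi, 1e-12)) for pi in p)
    return r_max, entropy

r_max, h = budget_features([0.1, 0.1, 0.1, 0.1])  # uniform scores
print(round(h, 4))  # ln(4) ≈ 1.3863: maximal entropy, i.e. no token stands out
```

High entropy (flat scores) signals that relevance is spread across many tokens, while a peaked distribution signals that a small budget suffices.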
447
+ class QTSplusTokenizerConfig:
448
+ def __init__(
449
+ self,
450
+ embedding_dim: int,
451
+ n_heads: int = 8,
452
+ num_kv_heads: Optional[int] = None,
453
+ tau_s: float = 0.1,
454
+ nmax: int = 2560,
455
+ rho_min: float = 0.05,
456
+ rho_max: float = 0.5,
457
+ block_dropout: float = 0.0,
458
+ reencode: bool = True,
459
+ scoring_layers: int = 1,
460
+ reencode_layers: int = 1,
461
+ lambda_t: float = 1.0,
462
+ lambda_m: float = 1.7,
463
+ lambda_s: float = 0.05,
464
+ project_text_if_needed: bool = False,
465
+ ) -> None:
466
+ self.embedding_dim = embedding_dim
467
+ self.n_heads = n_heads
468
+ self.num_kv_heads = num_kv_heads
469
+ self.tau_s = tau_s
470
+ self.nmax = nmax
471
+ self.rho_min = rho_min
472
+ self.rho_max = rho_max
473
+ self.block_dropout = block_dropout
474
+ self.reencode = reencode
475
+ self.scoring_layers = scoring_layers
476
+ self.reencode_layers = reencode_layers
477
+ self.lambda_t = lambda_t
478
+ self.lambda_m = lambda_m
479
+ self.lambda_s = lambda_s
480
+ self.project_text_if_needed = project_text_if_needed
481
+
482
+
483
+ class QTSplusTokenizer(nn.Module):
484
+ def __init__(self, cfg: QTSplusTokenizerConfig) -> None:
485
+ super().__init__()
486
+ self.cfg = cfg
487
+ self.selector = QTSplus(
488
+ d_model=cfg.embedding_dim,
489
+ n_heads=cfg.n_heads,
490
+ n_kv_heads=cfg.num_kv_heads or cfg.n_heads,
491
+ tau_s=cfg.tau_s,
492
+ nmax=cfg.nmax,
493
+ rho_min=cfg.rho_min,
494
+ rho_max=cfg.rho_max,
495
+ block_dropout=cfg.block_dropout,
496
+ use_reencode=cfg.reencode,
497
+ n_scoring_layers=cfg.scoring_layers,
498
+ n_reencode_layers=cfg.reencode_layers,
499
+ )
500
+ self.text_proj: Optional[nn.Linear] = None
501
+
502
+ def forward(self, X_v: torch.Tensor, Q_t: torch.Tensor, mode: str = "train") -> Dict[str, Any]:
503
+ B, M, D = X_v.shape
504
+ D_txt = Q_t.shape[-1]
505
+ if D_txt != D:
506
+ if self.cfg.project_text_if_needed:
507
+ if self.text_proj is None:
508
+ self.text_proj = nn.Linear(D_txt, D, bias=False).to(device=Q_t.device, dtype=Q_t.dtype)
509
+ Q_proj = self.text_proj(Q_t)
510
+ else:
511
+ raise ValueError(f"QTS+ expects text dim {D}, got {D_txt}. Set project_text_if_needed=True.")
512
+ else:
513
+ Q_proj = Q_t
514
+ sel = self.selector(X_v, Q_proj, mode=mode)
515
+ # Add simple proxies for train-time regularization
516
+ M_tensor = torch.tensor(float(M), device=X_v.device, dtype=X_v.dtype)
517
+ rho = sel["rho"]
518
+ flops_proxy = ((rho * M_tensor) ** 2) / float(self.cfg.nmax ** 2)
519
+ kv_proxy = (rho * M_tensor) / float(self.cfg.nmax)
520
+ sel["add_loss"] = {
521
+ "flops": flops_proxy.mean() * self.cfg.lambda_t,
522
+ "kv": kv_proxy.mean() * self.cfg.lambda_m,
523
+ "smooth": torch.tensor(0.0, device=X_v.device, dtype=X_v.dtype),
524
+ }
525
+ return sel
526
+
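The tokenizer's train-time regularizers penalize the expected compute of the kept tokens: a FLOPs proxy quadratic in ρ·M (attention cost) and a KV proxy linear in ρ·M (cache size), weighted by λ_t and λ_m. A scalar sketch of that arithmetic (the module applies it to batched torch tensors):

```python
def qts_aux_losses(rho: float, num_tokens: int, nmax: int = 2560,
                   lambda_t: float = 1.0, lambda_m: float = 1.7):
    """Compute the train-time FLOPs/KV regularizers used by QTSplusTokenizer."""
    kept = rho * num_tokens
    flops_proxy = (kept ** 2) / (nmax ** 2)   # quadratic in kept tokens (attention cost)
    kv_proxy = kept / nmax                    # linear in kept tokens (KV-cache size)
    return lambda_t * flops_proxy, lambda_m * kv_proxy

flops, kv = qts_aux_losses(rho=0.25, num_tokens=2560)
print(flops, kv)  # 0.0625 0.425
```

Both proxies are normalized by `nmax`, so they stay in a comparable range regardless of the raw video length M.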
527
+ class Qwen2_5_VisionTransformerPretrainedModel(Qwen2_5_VisionTransformerPretrainedModelBase):
528
+ def __init__(self, config, *inputs, **kwargs) -> None:
529
+ super().__init__(config, *inputs, **kwargs)
530
+
531
+ def forward(self, hidden_states: torch.Tensor, grid_thw: torch.Tensor, **kwargs) -> torch.Tensor:
532
+ # Return the output from the base implementation.
533
+ # Without this return, callers receive None and downstream code fails.
534
+ return super().forward(hidden_states, grid_thw, **kwargs)
535
+
536
+ def get_video_features(
537
+ self, pixel_values_videos: torch.FloatTensor, video_grid_thw: Optional[torch.LongTensor] = None
538
+ ):
539
+ """
540
+ Encodes videos into continuous embeddings that can be forwarded to the language model.
541
+
542
+ Args:
543
+ pixel_values_videos (`torch.FloatTensor` of shape `(batch_size, num_channels, image_size, image_size)`):
544
+ The tensors corresponding to the input videos.
545
+ video_grid_thw (`torch.LongTensor` of shape `(num_videos, 3)`, *optional*):
546
+ The temporal, height and width of feature shape of each video in LLM.
547
+ """
548
+ pixel_values_videos = pixel_values_videos.type(self.dtype)
549
+ video_embeds = self.forward(pixel_values_videos, grid_thw=video_grid_thw)
550
+ # split_sizes = (video_grid_thw.prod(-1) // self.spatial_merge_size**2).tolist()
551
+ # video_embeds = torch.split(video_embeds, split_sizes)
552
+ return video_embeds
553
+
554
+ def _try_load_vision_config_from_path(path: str) -> Optional[Dict[str, Any]]:
555
+ """Best-effort load of Qwen2.5-VL vision `config.json`.
556
+
557
+ Accepts either a directory containing `config.json` or a file path to a
558
+ weights file. In the latter case, attempts to locate a sibling
559
+ `config.json` in the same directory.
560
+ """
561
+ if not path:
562
+ return None
563
+
564
+ cfg_path = None
565
+ if os.path.isdir(path):
566
+ candidate = os.path.join(path, "config.json")
567
+ if os.path.isfile(candidate):
568
+ cfg_path = candidate
569
+ else:
570
+ # If a file is given (e.g., .../model.safetensors), look next to it
571
+ base_dir = os.path.dirname(path)
572
+ candidate = os.path.join(base_dir, "config.json")
573
+ if os.path.isfile(candidate):
574
+ cfg_path = candidate
575
+
576
+ if cfg_path is None:
577
+ return None
578
+
579
+ try:
580
+ with open(cfg_path, "r", encoding="utf-8") as f:
581
+ return json.load(f)
582
+ except Exception:
583
+ return None
584
+
585
+
586
+ def build_vision_tower_from_pretrained(vision_tower_cfg, **kwargs):
+     # Path-aware builder that infers vision dimensions from a checkpoint's
+     # config.json. Renamed so it is not silently shadowed by the config-based
+     # `build_vision_tower` defined further below.
+ vision_tower = getattr(vision_tower_cfg, 'mm_vision_tower', getattr(vision_tower_cfg, 'vision_tower', None))
588
+ if vision_tower != "qwen2_5_vl_vision":
589
+ raise ValueError(f"Unknown vision tower type: {vision_tower}")
590
+
591
+ # Attempt to infer correct dimensions from the provided pretrained path
592
+ pretrained_path = getattr(vision_tower_cfg, 'pretrain_vision_model', None)
593
+ cfg_json = _try_load_vision_config_from_path(pretrained_path) if pretrained_path else None
594
+
595
+ if cfg_json is not None:
596
+ # Map json fields to Qwen2_5_VLVisionConfig kwargs (use json defaults when available)
597
+ config = Qwen2_5_VLVisionConfig(
598
+ hidden_size=cfg_json.get("hidden_size", 1280),
599
+ out_hidden_size=cfg_json.get("out_hidden_size", cfg_json.get("hidden_size", 1280)),
600
+ depth=cfg_json.get("depth", 32),
601
+ intermediate_size=cfg_json.get("intermediate_size", 3420),
602
+ num_heads=cfg_json.get("num_heads", 16),
603
+ fullatt_block_indexes=cfg_json.get("fullatt_block_indexes", [7, 15, 23, 31]),
604
+ in_channels=cfg_json.get("in_channels", cfg_json.get("in_chans", 3)),
605
+ patch_size=cfg_json.get("patch_size", cfg_json.get("spatial_patch_size", 14)),
606
+ spatial_merge_size=cfg_json.get("spatial_merge_size", 2),
607
+ temporal_patch_size=cfg_json.get("temporal_patch_size", 2),
608
+ tokens_per_second=cfg_json.get("tokens_per_second", 2),
609
+ window_size=cfg_json.get("window_size", 112),
610
+ initializer_range=cfg_json.get("initializer_range", 0.02),
611
+ )
612
+ else:
613
+ # Fallback to a safe default (3B) when no config file is available
614
+ # This keeps backwards-compatibility but different-scale checkpoints
615
+ # should always provide a config.json alongside the weights.
616
+ config = Qwen2_5_VLVisionConfig(
617
+ hidden_size=1280,
618
+ out_hidden_size=2048,
619
+ depth=32,
620
+ intermediate_size=3420,
621
+ num_heads=16,
622
+ fullatt_block_indexes=[7, 15, 23, 31],
623
+ )
624
+
625
+ return Qwen2_5_VisionTransformerPretrainedModel(config)
626
+
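The sibling-`config.json` lookup above accepts either a directory or a weights-file path. A simplified, self-contained sketch of that lookup (without the best-effort `try/except` guards of the original):

```python
import json
import os
import tempfile

def find_sibling_config(path: str):
    """Locate and load config.json next to a weights file or inside a directory,
    mirroring the core lookup in _try_load_vision_config_from_path."""
    base = path if os.path.isdir(path) else os.path.dirname(path)
    candidate = os.path.join(base, "config.json")
    if not os.path.isfile(candidate):
        return None
    with open(candidate, "r", encoding="utf-8") as f:
        return json.load(f)

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "config.json"), "w", encoding="utf-8") as f:
        json.dump({"hidden_size": 1280}, f)
    # Works with the directory itself, or with a (possibly missing) weights file path
    assert find_sibling_config(d)["hidden_size"] == 1280
    assert find_sibling_config(os.path.join(d, "model.safetensors"))["hidden_size"] == 1280
```

This is why shipping a `config.json` alongside differently-scaled checkpoints matters: without it, `build_vision_tower_from_pretrained`-style code falls back to hard-coded 3B dimensions.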
627
+ # ------------------------------
628
+ # Builders used by the meta model
629
+ # ------------------------------
630
+ def build_vision_tower(config: Qwen2_5_VLTextConfig) -> Qwen2_5_VisionTransformerPretrainedModel:
631
+ vcfg_dict = getattr(config, "vision_config", None) or {}
632
+ vcfg = Qwen2_5_VLVisionConfig(**vcfg_dict) if vcfg_dict else Qwen2_5_VLVisionConfig()
633
+ return Qwen2_5_VisionTransformerPretrainedModel(vcfg)
634
+
635
+
636
+ def build_qts_plus_tower(config: Qwen2_5_VLTextConfig) -> QTSplusTokenizer:
637
+ lm_heads = getattr(config, "num_attention_heads", None)
638
+ vision_dim = getattr(config, "vision_embed_size", None)
639
+ if not isinstance(lm_heads, int) or lm_heads <= 0:
640
+ raise ValueError("num_attention_heads must be provided by the Qwen2.5‑VL config")
641
+ if not isinstance(vision_dim, int) or vision_dim <= 0:
642
+ raise ValueError("vision_embed_size must be a positive int before building QTS+")
643
+ if vision_dim % lm_heads != 0:
644
+ raise ValueError(
645
+ f"vision_embed_size ({vision_dim}) must be divisible by LM num_attention_heads ({lm_heads})"
646
+ )
647
+ kv_heads = getattr(config, "num_key_value_heads", None)
648
+ cfg = QTSplusTokenizerConfig(
649
+ embedding_dim=vision_dim,
650
+ n_heads=lm_heads,
651
+ num_kv_heads=kv_heads if isinstance(kv_heads, int) and kv_heads > 0 else None,
652
+ tau_s=getattr(config, "qts_plus_tau_s", 0.1),
653
+ nmax=getattr(config, "qts_plus_nmax", 2560),
654
+ rho_min=getattr(config, "qts_plus_rho_min", 0.05),
655
+ rho_max=getattr(config, "qts_plus_rho_max", 0.5),
656
+ block_dropout=getattr(config, "qts_plus_block_dropout", 0.0),
657
+ reencode=getattr(config, "qts_plus_reencode", True),
658
+ scoring_layers=getattr(config, "qts_plus_scoring_layers", 1),
659
+ reencode_layers=getattr(config, "qts_plus_reencode_layers", 1),
660
+ lambda_t=getattr(config, "lambda_t", 1.0),
661
+ lambda_m=getattr(config, "lambda_m", 1.7),
662
+ lambda_s=getattr(config, "lambda_s", 0.05),
663
+ project_text_if_needed=getattr(config, "project_text_if_needed", False),
664
+ )
665
+ return QTSplusTokenizer(cfg)
666
+
667
+
668
+ # ------------------------------
669
+ # Meta classes to build vision/QTS+ towers and preprocessing hook
670
+ # ------------------------------
671
+ class QTSplusMetaModel:
672
+ def __init__(self, config):
673
+ super(QTSplusMetaModel, self).__init__(config)
674
+ self.config = config
675
+
676
+ # Vision tower: build early so weights under `model.vision_tower.*` load
677
+ if hasattr(config, "vision_tower"):
678
+ self.vision_tower = build_vision_tower(config)
679
+ try:
680
+ vt = getattr(self, "vision_tower", None)
681
+ out_hidden = getattr(getattr(vt, "config", None), "out_hidden_size", None)
682
+ if isinstance(out_hidden, int) and out_hidden > 0:
683
+ self.config.vision_embed_size = out_hidden
684
+ except Exception:
685
+ pass
686
+
687
+ # QTS+ tower: build early if enabled so parameters exist during load
688
+ if getattr(self.config, "enable_qts_plus", False) and getattr(self, "qts_plus", None) is None:
689
+ try:
690
+ self.qts_plus = build_qts_plus_tower(self.config)
691
+ except Exception:
692
+ pass
693
+
694
+ def get_qts_plus_tower(self):
695
+ return getattr(self, "qts_plus", None)
696
+
697
+ def get_vision_tower(self):
698
+ return getattr(self, "vision_tower", None)
699
+
700
+
701
+ class QTSplusMetaForCausalLM:
702
+ def get_model(self): # pragma: no cover - abstract in practice
703
+ raise NotImplementedError
704
+
705
+ def get_vision_tower(self):
706
+ return self.get_model().get_vision_tower()
707
+
708
+ def get_qts_plus_tower(self):
709
+ return self.get_model().get_qts_plus_tower()
710
+
711
+ def encode_visions(self, vision):
712
+ return self.get_model().get_vision_tower()(vision)
713
+
714
+ def prepare_inputs_for_multimodal(
715
+ self,
716
+ vision_input,
717
+ input_ids,
718
+ position_ids,
719
+ attention_mask,
720
+ past_key_values,
721
+ labels,
722
+ question_input_ids: Optional[torch.Tensor] = None,
723
+ video_token_id: Optional[int] = None,
724
+ mode: str = "train",
725
+ ):
726
+ vision_tower = self.get_vision_tower()
727
+ qts_plus_tower = self.get_qts_plus_tower()
728
+ text_embed_layer = self.get_model().get_input_embeddings()
729
+
730
+ if vision_tower is None or vision_input is None or input_ids.shape[1] == 1:
731
+ # Match text embedding dtype for scalar placeholders
732
+ z = torch.tensor(0.0, device=input_ids.device, dtype=text_embed_layer.weight.dtype)
733
+ return input_ids, position_ids, attention_mask, past_key_values, None, labels, z, z, z
734
+
735
+ if self.config.enable_qts_plus:
736
+ if self.config.vision_tower == "qwen2_5_vl_vision":
737
+ if isinstance(vision_input, list):
738
+ if len(vision_input) == 0:
739
+ z = torch.tensor(0.0, device=input_ids.device, dtype=text_embed_layer.weight.dtype)
740
+ return input_ids, position_ids, attention_mask, past_key_values, None, labels, z, z, z
741
+ vision_input = vision_input[0]
742
+
743
+ vision_features = vision_tower.get_video_features(
744
+ vision_input["pixel_values_videos"].to(vision_tower.device),
745
+ vision_input["video_grid_thw"].to(vision_tower.device),
746
+ )
747
+ video_grid_thw = vision_input["video_grid_thw"]
748
+ if isinstance(vision_features, list) and len(vision_features) > 0:
749
+ vision_features = vision_features[0]
750
+ if vision_features.ndim == 2:
751
+ vision_features = vision_features.unsqueeze(0)
752
+
753
+ if question_input_ids is None:
754
+ raise AssertionError("question_input_ids must be provided in training to avoid data leakage")
755
+ if question_input_ids.dtype is not torch.long:
756
+ question_input_ids = question_input_ids.long()
757
+
758
+ text_embeddings = text_embed_layer(question_input_ids)
759
+ vision_features = vision_features.to(dtype=text_embeddings.dtype)
760
+
761
+ qts_plus_out = qts_plus_tower(vision_features, text_embeddings, mode=mode)
762
+ vision_features = qts_plus_out["Z"]
763
+ flops_loss = qts_plus_out["add_loss"]["flops"]
764
+ kv_loss = qts_plus_out["add_loss"]["kv"]
765
+ smooth_loss = qts_plus_out["add_loss"]["smooth"]
766
+
767
+ if video_token_id is None:
768
+ video_token_id = getattr(self.config, "video_token_id", None) or 151656
769
+
770
+ inputs_embeds, attention_mask, labels = qts_integrate_embeddings(
771
+ vision_features=vision_features[0],
772
+ input_ids=input_ids,
773
+ attention_mask=attention_mask,
774
+ labels=labels,
775
+ video_token_id=video_token_id,
776
+ text_model_embed_layer=text_embed_layer,
777
+ video_grid_thw=video_grid_thw,
778
+ )
779
+ return (
780
+ vision_input,
781
+ position_ids,
782
+ attention_mask,
783
+ past_key_values,
784
+ inputs_embeds,
785
+ labels,
786
+ flops_loss,
787
+ kv_loss,
788
+ smooth_loss,
789
+ )
790
+ else:
791
+ raise ValueError(f"Unsupported vision tower for QTS+: {self.config.vision_tower!r}; expected 'qwen2_5_vl_vision'")
792
+
793
+ # QTS+ disabled: just embed tokens
794
+ z = torch.tensor(0.0, device=input_ids.device, dtype=text_embed_layer.weight.dtype)
795
+ return input_ids, position_ids, attention_mask, past_key_values, None, labels, z, z, z
796
+
797
+ def vision_features_count_qtsplus(
798
+ self,
799
+ pixel_values_videos: Optional[torch.Tensor],
800
+ video_grid_thw: Optional[torch.Tensor],
801
+ question_input_ids: Optional[torch.Tensor],
802
+ ) -> int:
803
+ try:
804
+ if pixel_values_videos is None or video_grid_thw is None or question_input_ids is None:
805
+ return 0
806
+ vision_tower = self.get_vision_tower()
807
+ qts_tower = self.get_qts_plus_tower()
808
+ text_embed = self.get_model().get_input_embeddings()
809
+ if vision_tower is None or qts_tower is None or text_embed is None:
810
+ return 0
811
+ if question_input_ids.dtype is not torch.long:
812
+ question_input_ids = question_input_ids.long()
813
+ try:
814
+ vt_device = next(vision_tower.parameters()).device
815
+ except StopIteration:
816
+ vt_device = text_embed.weight.device
817
+ vf = vision_tower.get_video_features(
818
+ pixel_values_videos.to(vt_device),
819
+ video_grid_thw.to(vt_device),
820
+ )
821
+ if isinstance(vf, list) and len(vf) > 0:
822
+ vf = vf[0]
823
+ if isinstance(vf, torch.Tensor) and vf.ndim == 2:
824
+ vf = vf.unsqueeze(0)
825
+ te = text_embed(question_input_ids.to(text_embed.weight.device))
826
+ if isinstance(vf, torch.Tensor):
827
+ vf = vf.to(device=te.device, dtype=te.dtype)
828
+ with torch.inference_mode():
829
+ qpo = qts_tower(vf, te, mode="infer")
830
+ Z = qpo.get("Z")
831
+ if isinstance(Z, list) and len(Z) > 0:
832
+ return int(Z[0].shape[0])
833
+ if isinstance(Z, torch.Tensor):
834
+ return int(Z.shape[1] if Z.ndim == 3 else Z.shape[0])
835
+ return 0
836
+ except Exception:
837
+ return 0
838
+
839
+
840
+ # ------------------------------
841
+ # Base text-only CausalLM for Qwen2.5‑VL (local copy)
842
+ # ------------------------------
843
+ class Qwen2_5_VL_CausalLM_Config(Qwen2_5_VLTextConfig):
844
+ model_type = "qwen2_5_vl_causal_lm"
845
+
846
+
847
+ class Qwen2_5_VLTextForCausalLM(Qwen2_5_VLPreTrainedModel, GenerationMixin):
848
+ config_class = Qwen2_5_VL_CausalLM_Config
849
+ _tied_weights_keys = ["lm_head.weight"]
850
+
851
+ def __init__(self, config: Qwen2_5_VL_CausalLM_Config):
852
+ super().__init__(config)
853
+ self.model = Qwen2_5_VLTextModel(config)
854
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
855
+ self.post_init()
856
+
857
+ def get_input_embeddings(self):
858
+ return self.model.embed_tokens
859
+
860
+ def set_input_embeddings(self, value):
861
+ self.model.embed_tokens = value
862
+
863
+ def get_output_embeddings(self):
864
+ return self.lm_head
865
+
866
+ def set_output_embeddings(self, new_embeddings):
867
+ self.lm_head = new_embeddings
868
+
869
+ def get_decoder(self):
870
+ return self.model
871
+
872
+ def set_decoder(self, decoder):
873
+ self.model = decoder
874
+
875
+ def forward(
876
+ self,
877
+ input_ids: Optional[torch.LongTensor] = None,
878
+ attention_mask: Optional[torch.Tensor] = None,
879
+ position_ids: Optional[torch.LongTensor] = None,
880
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
881
+ inputs_embeds: Optional[torch.FloatTensor] = None,
882
+ labels: Optional[torch.LongTensor] = None,
883
+ use_cache: Optional[bool] = None,
884
+ output_attentions: Optional[bool] = None,
885
+ output_hidden_states: Optional[bool] = None,
886
+ return_dict: Optional[bool] = None,
887
+ cache_position: Optional[torch.LongTensor] = None,
888
+ num_logits_to_keep: int = 0,
889
+ **loss_kwargs,
890
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
891
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
892
+ output_hidden_states = (
893
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
894
+ )
895
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
896
+
897
+ outputs = self.model(
898
+ input_ids=input_ids,
899
+ attention_mask=attention_mask,
900
+ position_ids=position_ids,
901
+ past_key_values=past_key_values,
902
+ inputs_embeds=inputs_embeds,
903
+ use_cache=use_cache,
904
+ output_attentions=output_attentions,
905
+ output_hidden_states=output_hidden_states,
906
+ return_dict=return_dict,
907
+ cache_position=cache_position,
908
+ )
909
+
910
+ hidden_states = outputs[0]
911
+ logits = self.lm_head(hidden_states[:, -num_logits_to_keep:, :])
912
+
913
+ loss = None
914
+ if labels is not None:
915
+ # Defer to simple cross-entropy with ignore_index set by caller
916
+ shift_logits = logits[..., :-1, :].contiguous()
917
+ shift_labels = labels[..., 1:].contiguous()
918
+ loss_fct = nn.CrossEntropyLoss(ignore_index=-100)
919
+ loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
920
+
921
+ if not return_dict:
922
+ output = (logits,) + outputs[1:]
923
+ return (loss,) + output if loss is not None else output
924
+
925
+ return CausalLMOutputWithPast(
926
+ loss=loss,
927
+ logits=logits,
928
+ past_key_values=outputs.past_key_values,
929
+ hidden_states=outputs.hidden_states,
930
+ attentions=outputs.attentions,
931
+ )
932
+
933
+
934
+ # ------------------------------
935
+ # QTS+ Qwen2.5‑VL Causal LM (text model + QTS+ + vision)
936
+ # ------------------------------
937
+ class QTSplusQwen2_5_VLModel(QTSplusMetaModel, Qwen2_5_VLTextModel):
938
+ config_class = QTSplusQwen2_5_VL_CausalLM_Config
939
+
940
+ def __init__(self, config: Qwen2_5_VLTextConfig):
941
+ super(QTSplusQwen2_5_VLModel, self).__init__(config)
942
+
943
+
944
+ class QTSplusQwen2_5_VLTextForCausalLM(QTSplusMetaForCausalLM, Qwen2_5_VLTextForCausalLM):
945
+ config_class = QTSplusQwen2_5_VL_CausalLM_Config
946
+
947
+ def __init__(self, config):
948
+ try:
949
+ cfg_attn = getattr(config, "attn_implementation", None)
950
+ if (cfg_attn is None or str(cfg_attn) == "auto") and is_flash_attn_available():
951
+ setattr(config, "attn_implementation", "flash_attention_2")
952
+ setattr(config, "_attn_implementation", "flash_attention_2")
953
+ except Exception:
954
+ pass
955
+
956
+ super(Qwen2_5_VLTextForCausalLM, self).__init__(config)
957
+ self.model = QTSplusQwen2_5_VLModel(config)
958
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
959
+ self.post_init()
960
+
961
+ def get_model(self):
962
+ return self.model
963
+
964
+ def forward(
965
+ self,
966
+ vision_input: Optional[torch.FloatTensor] = None,
967
+ input_ids: torch.LongTensor = None,
968
+ labels: Optional[torch.LongTensor] = None,
969
+ attention_mask: Optional[torch.Tensor] = None,
970
+ position_ids: Optional[torch.LongTensor] = None,
971
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
972
+ inputs_embeds: Optional[torch.FloatTensor] = None,
973
+ use_cache: Optional[bool] = None,
974
+ output_attentions: Optional[bool] = None,
975
+ output_hidden_states: Optional[bool] = None,
976
+ return_dict: Optional[bool] = None,
977
+ cache_position: Optional[torch.LongTensor] = None,
978
+ question_input_ids: Optional[torch.LongTensor] = None,
979
+ video_token_id: Optional[int] = None,
980
+ ):
981
+ # Default the auxiliary losses so they are defined even when the multimodal
+ # preparation branch below does not run (e.g. `inputs_embeds` was passed in).
+ flops_loss = kv_loss = smooth_loss = 0.0
+ if inputs_embeds is not None:
982
+ input_ids = None
983
+
984
+ if inputs_embeds is None:
985
+ (
986
+ vision_input,
987
+ position_ids,
988
+ attention_mask,
989
+ past_key_values,
990
+ inputs_embeds,
991
+ labels,
992
+ flops_loss,
993
+ kv_loss,
994
+ smooth_loss,
995
+ ) = self.prepare_inputs_for_multimodal(
996
+ vision_input,
997
+ input_ids,
998
+ position_ids,
999
+ attention_mask,
1000
+ past_key_values,
1001
+ labels,
1002
+ question_input_ids,
1003
+ video_token_id,
1004
+ mode="train" if self.training else "infer",
1005
+ )
1006
+ if inputs_embeds is None:
1007
+ inputs_embeds = self.get_model().embed_tokens(input_ids)
1008
+
1009
+ input_ids = None
1010
+ try:
1011
+ outputs = super().forward(
1012
+ attention_mask=attention_mask,
1013
+ position_ids=position_ids,
1014
+ past_key_values=past_key_values,
1015
+ inputs_embeds=inputs_embeds,
1016
+ labels=labels,
1017
+ use_cache=use_cache,
1018
+ output_attentions=output_attentions,
1019
+ output_hidden_states=output_hidden_states,
1020
+ return_dict=return_dict,
1021
+ cache_position=cache_position,
1022
+ )
1023
+ except ValueError as error:
1024
+ raise ValueError(
1025
+ f"{error} (input_ids is None: {input_ids is None}, inputs_embeds is None: {inputs_embeds is None})"
1026
+ ) from error
1027
+
1028
+ add_loss = {
1029
+ "flops_loss": flops_loss if vision_input is not None else 0.0,
1030
+ "kv_loss": kv_loss if vision_input is not None else 0.0,
1031
+ "smooth_loss": smooth_loss if vision_input is not None else 0.0,
1032
+ }
1033
+ if labels is None and not self.training:
1034
+ return outputs
1035
+ return (outputs, add_loss)
1036
+
1037
+ @torch.no_grad()
1038
+ def generate(
1039
+ self,
1040
+ vision_input: Optional[torch.Tensor] = None,
1041
+ input_ids: Optional[torch.Tensor] = None,
1042
+ question_input_ids: Optional[torch.Tensor] = None,
1043
+ video_token_id: Optional[int] = None,
1044
+ **kwargs,
1045
+ ):
1046
+ position_ids = kwargs.pop("position_ids", None)
1047
+ attention_mask = kwargs.pop("attention_mask", None)
1048
+ if attention_mask is None and input_ids is not None:
1049
+ attention_mask = torch.ones_like(input_ids, dtype=torch.long, device=input_ids.device)
1050
+ if "inputs_embeds" in kwargs:
1051
+ raise NotImplementedError("`inputs_embeds` is not supported")
1052
+
1053
+ if vision_input is not None:
1054
+ (
1055
+ vision_input,
1056
+ position_ids,
1057
+ attention_mask,
1058
+ _,
1059
+ inputs_embeds,
1060
+ _,
1061
+ *_unused_losses,
1062
+ ) = self.prepare_inputs_for_multimodal(
1063
+ vision_input,
1064
+ input_ids,
1065
+ position_ids,
1066
+ attention_mask,
1067
+ None,
1068
+ None,
1069
+ question_input_ids,
1070
+ video_token_id,
1071
+ mode="infer",
1072
+ )
1073
+ else:
1074
+ inputs_embeds = self.get_model().embed_tokens(input_ids)
1075
+
1076
+ kwargs["attention_mask"] = attention_mask
1077
+ if position_ids is not None:
1078
+ kwargs["position_ids"] = position_ids
1079
+ kwargs.pop("input_ids", None)
1080
+ if "use_cache" not in kwargs:
1081
+ kwargs["use_cache"] = True
1082
+ output_ids = super().generate(inputs_embeds=inputs_embeds, **kwargs)
1083
+ if input_ids is not None:
1084
+ input_ids = input_ids.to(output_ids.device)
1085
+ output_ids = torch.cat([input_ids, output_ids], dim=1)
1086
+ return output_ids
1087
+
1088
+ # Register for Auto* resolution
1089
+ AutoModelForCausalLM.register(QTSplusQwen2_5_VL_CausalLM_Config, QTSplusQwen2_5_VLTextForCausalLM)
1090
+ __all__ = ["QTSplusQwen2_5_VLTextForCausalLM"]
preprocessor_config.json ADDED
@@ -0,0 +1,39 @@
+ {
+   "crop_size": null,
+   "data_format": "channels_first",
+   "default_to_square": true,
+   "device": null,
+   "disable_grouping": null,
+   "do_center_crop": null,
+   "do_convert_rgb": true,
+   "do_normalize": true,
+   "do_pad": null,
+   "do_rescale": true,
+   "do_resize": true,
+   "image_mean": [
+     0.48145466,
+     0.4578275,
+     0.40821073
+   ],
+   "image_processor_type": "Qwen2VLImageProcessorFast",
+   "image_std": [
+     0.26862954,
+     0.26130258,
+     0.27577711
+   ],
+   "input_data_format": null,
+   "max_pixels": 12845056,
+   "merge_size": 2,
+   "min_pixels": 3136,
+   "pad_size": null,
+   "patch_size": 14,
+   "processor_class": "Qwen2_5_VLVisionProcessor",
+   "resample": 3,
+   "rescale_factor": 0.00392156862745098,
+   "return_tensors": null,
+   "size": {
+     "longest_edge": 12845056,
+     "shortest_edge": 3136
+   },
+   "temporal_patch_size": 2
+ }
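The preprocessor rescales 0-255 pixel values by `rescale_factor` (1/255) and then normalizes per channel with the CLIP mean/std above. A quick single-pixel sketch of that pipeline (illustrative only; the real processor operates on tensors and also resizes/patches):

```python
def normalize_pixel(rgb, mean=(0.48145466, 0.4578275, 0.40821073),
                    std=(0.26862954, 0.26130258, 0.27577711),
                    rescale_factor=1 / 255):
    """Rescale 0-255 RGB values and normalize per channel, as the image processor does."""
    return tuple((v * rescale_factor - m) / s for v, m, s in zip(rgb, mean, std))

out = normalize_pixel((255, 0, 128))
print([round(c, 3) for c in out])
```

Channels equal to 255·mean map to zero; pure white and pure black map to roughly ±2 standard deviations, which keeps inputs well-scaled for the vision transformer.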
processing_qts_plus_qwen2_5_vl.py ADDED
@@ -0,0 +1,260 @@
+ """
+ Self-contained processor shim for trust_remote_code.
+
+ Exports `QTSplusQwen2_5_VLProcessor`, a local copy of the upstream
+ Qwen2.5-VL processor from Transformers. This avoids importing a local
+ `src` package while keeping the same class name referenced in
+ `processor_config.json`.
+ """
9
+ from typing import Optional, Union
10
+
11
+ import numpy as np
12
+
13
+ from transformers.feature_extraction_utils import BatchFeature
14
+ from transformers.image_utils import ImageInput
15
+ from transformers.processing_utils import ImagesKwargs, MultiModalData, ProcessingKwargs, ProcessorMixin, Unpack, VideosKwargs
16
+ from transformers.tokenization_utils_base import PreTokenizedInput, TextInput
17
+ from transformers.video_utils import VideoInput
18
+ from transformers import AutoProcessor
19
+
20
+ class Qwen2_5_VLVideosProcessorKwargs(VideosKwargs, total=False):
21
+ fps: Union[list[float], float]
22
+
23
+
24
+ class Qwen2_5_VLImagesKwargs(ImagesKwargs):
25
+ min_pixels: Optional[int]
26
+ max_pixels: Optional[int]
27
+ patch_size: Optional[int]
28
+ temporal_patch_size: Optional[int]
29
+ merge_size: Optional[int]
30
+
31
+
32
+ class Qwen2_5_VLProcessorKwargs(ProcessingKwargs, total=False):
33
+ images_kwargs: Qwen2_5_VLImagesKwargs
34
+ videos_kwargs: Qwen2_5_VLVideosProcessorKwargs
35
+ _defaults = {
36
+ "text_kwargs": {
37
+ "padding": False,
38
+ "return_mm_token_type_ids": False,
39
+ },
40
+ }
41
+
42
+
+ class QTSplusQwen2_5_VLProcessor(ProcessorMixin):
+     r"""
+     Constructs a Qwen2.5-VL processor which wraps a Qwen2.5-VL image processor and a Qwen2 tokenizer into a single processor.
+     [`Qwen2_5_VLProcessor`] offers all the functionalities of [`Qwen2VLImageProcessor`] and [`Qwen2TokenizerFast`]. See the
+     [`~Qwen2_5_VLProcessor.__call__`] and [`~Qwen2_5_VLProcessor.decode`] for more information.
+
+     Args:
+         image_processor ([`Qwen2VLImageProcessor`], *optional*):
+             The image processor is a required input.
+         tokenizer ([`Qwen2TokenizerFast`], *optional*):
+             The tokenizer is a required input.
+         video_processor ([`Qwen2_5_VLVideoProcessor`], *optional*):
+             The video processor is a required input.
+         chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages
+             in a chat into a tokenizable string.
+     """
+
+     attributes = ["image_processor", "tokenizer", "video_processor"]
+     image_processor_class = "AutoImageProcessor"
+     video_processor_class = "AutoVideoProcessor"
+     tokenizer_class = ("Qwen2Tokenizer", "Qwen2TokenizerFast")
+
+     def __init__(self, image_processor=None, tokenizer=None, video_processor=None, chat_template=None, **kwargs):
+         self.image_token = "<|image_pad|>" if not hasattr(tokenizer, "image_token") else tokenizer.image_token
+         self.video_token = "<|video_pad|>" if not hasattr(tokenizer, "video_token") else tokenizer.video_token
+         self.image_token_id = (
+             tokenizer.image_token_id
+             if getattr(tokenizer, "image_token_id", None)
+             else tokenizer.convert_tokens_to_ids(self.image_token)
+         )
+         self.video_token_id = (
+             tokenizer.video_token_id
+             if getattr(tokenizer, "video_token_id", None)
+             else tokenizer.convert_tokens_to_ids(self.video_token)
+         )
+         super().__init__(image_processor, tokenizer, video_processor, chat_template=chat_template)
+
+     def __call__(
+         self,
+         images: Optional[ImageInput] = None,
+         text: Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]] = None,
+         videos: Optional[VideoInput] = None,
+         **kwargs: Unpack[Qwen2_5_VLProcessorKwargs],
+     ) -> BatchFeature:
+         """
+         Main method to prepare one or several sequence(s) and image(s) for the model. This method forwards the `text`
+         and `kwargs` arguments to Qwen2TokenizerFast's [`~Qwen2TokenizerFast.__call__`] if `text` is not `None` to encode
+         the text. To prepare the vision inputs, this method forwards the `vision_infos` and `kwargs` arguments to
+         Qwen2VLImageProcessor's [`~Qwen2VLImageProcessor.__call__`] if `vision_infos` is not `None`.
+
+         Args:
+             images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `list[PIL.Image.Image]`, `list[np.ndarray]`, `list[torch.Tensor]`):
+                 The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
+                 tensor. Both channels-first and channels-last formats are supported.
+             text (`str`, `list[str]`, `list[list[str]]`):
+                 The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
+                 (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
+                 `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
+             videos (`np.ndarray`, `torch.Tensor`, `list[np.ndarray]`, `list[torch.Tensor]`):
+                 The video or batch of videos to be prepared. Each video can be a 4D NumPy array or PyTorch
+                 tensor, or a nested list of 3D frames. Both channels-first and channels-last formats are supported.
+             return_tensors (`str` or [`~utils.TensorType`], *optional*):
+                 If set, will return tensors of a particular framework. Acceptable values are:
+                 - `'tf'`: Return TensorFlow `tf.constant` objects.
+                 - `'pt'`: Return PyTorch `torch.Tensor` objects.
+                 - `'np'`: Return NumPy `np.ndarray` objects.
+                 - `'jax'`: Return JAX `jnp.ndarray` objects.
+
+         Returns:
+             [`BatchFeature`]: A [`BatchFeature`] with the following fields:
+
+             - **input_ids** -- List of token ids to be fed to a model. Returned when `text` is not `None`.
+             - **attention_mask** -- List of indices specifying which tokens should be attended to by the model (when
+               `return_attention_mask=True` or if *"attention_mask"* is in `self.model_input_names` and if `text` is not
+               `None`).
+             - **pixel_values** -- Pixel values to be fed to a model. Returned when `images` is not `None`.
+             - **pixel_values_videos** -- Pixel values of videos to be fed to a model. Returned when `videos` is not `None`.
+             - **image_grid_thw** -- List of image 3D grid in LLM. Returned when `images` is not `None`.
+             - **video_grid_thw** -- List of video 3D grid in LLM. Returned when `videos` is not `None`.
+             - **second_per_grid_ts** -- List of video seconds per time grid. Returned when `videos` is not `None`.
+         """
+         output_kwargs = self._merge_kwargs(
+             Qwen2_5_VLProcessorKwargs,
+             tokenizer_init_kwargs=self.tokenizer.init_kwargs,
+             **kwargs,
+         )
+
+         image_inputs = videos_inputs = {}
+         if images is not None:
+             image_inputs = self.image_processor(images=images, **output_kwargs["images_kwargs"])
+             image_grid_thw = image_inputs["image_grid_thw"]
+
+         if videos is not None:
+             fps = output_kwargs["videos_kwargs"].get("fps", 2.0)
+             videos_inputs = self.video_processor(videos=videos, **output_kwargs["videos_kwargs"])
+             video_grid_thw = videos_inputs["video_grid_thw"]
+
+             if isinstance(fps, (int, float)):
+                 second_per_grid_ts = [self.video_processor.temporal_patch_size / fps] * len(video_grid_thw)
+             elif hasattr(fps, "__len__") and len(fps) == len(video_grid_thw):
+                 second_per_grid_ts = [self.video_processor.temporal_patch_size / tmp for tmp in fps]
+             else:
+                 raise ValueError(
+                     f"The length of fps ({len(fps) if hasattr(fps, '__len__') else fps}) must be equal to the length of video_grid_thw ({len(video_grid_thw)}) or fps should be a single number."
+                 )
+             videos_inputs.update({"second_per_grid_ts": second_per_grid_ts})
+
+         if not isinstance(text, list):
+             text = [text]
+
+         text = text.copy()  # below lines change text in-place
+         if images is not None:
+             merge_length = self.image_processor.merge_size**2
+             index = 0
+             for i in range(len(text)):
+                 while self.image_token in text[i]:
+                     num_image_tokens = image_grid_thw[index].prod() // merge_length
+                     text[i] = text[i].replace(self.image_token, "<|placeholder|>" * num_image_tokens, 1)
+                     index += 1
+                 text[i] = text[i].replace("<|placeholder|>", self.image_token)
+
+         if videos is not None:
+             merge_length = self.video_processor.merge_size**2
+             index = 0
+             for i in range(len(text)):
+                 while self.video_token in text[i]:
+                     num_video_tokens = video_grid_thw[index].prod() // merge_length
+                     text[i] = text[i].replace(self.video_token, "<|placeholder|>" * num_video_tokens, 1)
+                     index += 1
+                 text[i] = text[i].replace("<|placeholder|>", self.video_token)
+
+         return_tensors = output_kwargs["text_kwargs"].pop("return_tensors", None)
+         return_mm_token_type_ids = output_kwargs["text_kwargs"].pop("return_mm_token_type_ids", None)
+         text_inputs = self.tokenizer(text, **output_kwargs["text_kwargs"])
+         self._check_special_mm_tokens(text, text_inputs, modalities=["image", "video"])
+
+         if return_mm_token_type_ids:
+             array_ids = np.array(text_inputs["input_ids"])
+             mm_token_type_ids = np.zeros_like(text_inputs["input_ids"])
+             mm_token_type_ids[array_ids == self.image_token_id] = 1
+             text_inputs["mm_token_type_ids"] = mm_token_type_ids.tolist()
+
+         return BatchFeature(data={**text_inputs, **image_inputs, **videos_inputs}, tensor_type=return_tensors)
+
+     def _get_num_multimodal_tokens(self, image_sizes=None, video_sizes=None, **kwargs):
+         """
+         Computes the number of placeholder tokens needed for multimodal inputs with the given sizes.
+
+         Args:
+             image_sizes (`list[list[int]]`, *optional*):
+                 The input sizes formatted as (height, width) for each image.
+             video_sizes (`list[list[int]]`, *optional*):
+                 The input sizes formatted as (num_frames, height, width) for each video.
+
+         Returns:
+             `MultiModalData`: A `MultiModalData` object holding the number of tokens for each of the provided
+             input modalities, along with other useful data.
+         """
+         vision_data = {}
+         if image_sizes is not None:
+             images_kwargs = Qwen2_5_VLProcessorKwargs._defaults.get("images_kwargs", {})
+             images_kwargs.update(kwargs)
+             merge_size = images_kwargs.get("merge_size", None) or self.image_processor.merge_size
+
+             num_image_patches = [
+                 self.image_processor.get_number_of_image_patches(*image_size, images_kwargs)
+                 for image_size in image_sizes
+             ]
+             num_image_tokens = [(num_patches // merge_size**2) for num_patches in num_image_patches]
+             vision_data.update({"num_image_tokens": num_image_tokens, "num_image_patches": num_image_patches})
+
+         if video_sizes is not None:
+             videos_kwargs = Qwen2_5_VLProcessorKwargs._defaults.get("videos_kwargs", {})
+             videos_kwargs.update(kwargs)
+             # mirror the image branch so `merge_size` is defined even when no image sizes are passed
+             merge_size = videos_kwargs.get("merge_size", None) or self.video_processor.merge_size
+             num_video_patches = [
+                 self.video_processor.get_number_of_video_patches(*video_size, videos_kwargs)
+                 for video_size in video_sizes
+             ]
+             num_video_tokens = [(num_patches // merge_size**2) for num_patches in num_video_patches]
+             vision_data["num_video_tokens"] = num_video_tokens
+
+         return MultiModalData(**vision_data)
+
+     def post_process_image_text_to_text(
+         self, generated_outputs, skip_special_tokens=True, clean_up_tokenization_spaces=False, **kwargs
+     ):
+         """
+         Post-process the output of the model to decode the text.
+
+         Args:
+             generated_outputs (`torch.Tensor` or `np.ndarray`):
+                 The output of the model `generate` function. The output is expected to be a tensor of shape `(batch_size, sequence_length)`
+                 or `(sequence_length,)`.
+             skip_special_tokens (`bool`, *optional*, defaults to `True`):
+                 Whether or not to remove special tokens in the output. Argument passed to the tokenizer's `batch_decode` method.
+             clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
+                 Whether or not to clean up the tokenization spaces. Argument passed to the tokenizer's `batch_decode` method.
+             **kwargs:
+                 Additional arguments to be passed to the tokenizer's `batch_decode` method.
+
+         Returns:
+             `list[str]`: The decoded text.
+         """
+         return self.tokenizer.batch_decode(
+             generated_outputs,
+             skip_special_tokens=skip_special_tokens,
+             clean_up_tokenization_spaces=clean_up_tokenization_spaces,
+             **kwargs,
+         )
+
+     @property
+     def model_input_names(self):
+         tokenizer_input_names = self.tokenizer.model_input_names
+         image_processor_input_names = self.image_processor.model_input_names
+         names_from_processor = list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
+         return names_from_processor + ["second_per_grid_ts"]
+
+
+ AutoProcessor.register("QTSplusQwen2_5_VLProcessor", QTSplusQwen2_5_VLProcessor)
+ __all__ = ["QTSplusQwen2_5_VLProcessor"]
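
The processor's `__call__` expands each `<|image_pad|>` token into `t * h * w // merge_size**2` copies before tokenization. A minimal standalone sketch of that placeholder expansion, with an illustrative helper name (`expand_image_tokens` is not part of the shipped API):

```python
# Hedged sketch of the processor's placeholder expansion: each <|image_pad|>
# is replaced by grid_t * grid_h * grid_w // merge_size**2 copies of itself.
def expand_image_tokens(text, image_grid_thw, merge_size=2, image_token="<|image_pad|>"):
    merge_length = merge_size ** 2
    index = 0
    # Replace each occurrence once with a run of temporary placeholders, so
    # later grids are not re-matched by the loop condition.
    while image_token in text:
        t, h, w = image_grid_thw[index]
        num_image_tokens = (t * h * w) // merge_length
        text = text.replace(image_token, "<|placeholder|>" * num_image_tokens, 1)
        index += 1
    return text.replace("<|placeholder|>", image_token)

# A 1x16x16 patch grid with merge_size 2 yields 256 // 4 = 64 visual tokens.
expanded = expand_image_tokens("<|vision_start|><|image_pad|><|vision_end|>", [(1, 16, 16)])
print(expanded.count("<|image_pad|>"))  # 64
```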
processor_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "auto_map": {
+     "AutoProcessor": "processing_qts_plus_qwen2_5_vl.QTSplusQwen2_5_VLProcessor"
+   },
+   "image_processor_type": "Qwen2VLImageProcessorFast",
+   "processor_class": "QTSplusQwen2_5_VLProcessor",
+   "tokenizer_class": "Qwen2Tokenizer",
+   "video_processor_type": "Qwen2VLVideoProcessor"
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|im_end|>",
+   "pad_token": "<|endoftext|>"
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04c5a433a454dcf945e826dea381181827c01c6a9f99c5d1eb969b77c364d6da
+ size 11422174
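
The `tokenizer.json` entry above is stored as a Git LFS pointer rather than as file contents. A minimal sketch of reading such a pointer into its fields (`parse_lfs_pointer` is an illustrative helper, not part of this repo):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:04c5a433a454dcf945e826dea381181827c01c6a9f99c5d1eb969b77c364d6da
size 11422174
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 11422174
```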
tokenizer_config.json ADDED
@@ -0,0 +1,208 @@
+ {
+   "add_bos_token": false,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151646": {
+       "content": "<|object_ref_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151647": {
+       "content": "<|object_ref_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151648": {
+       "content": "<|box_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151649": {
+       "content": "<|box_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151650": {
+       "content": "<|quad_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151651": {
+       "content": "<|quad_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151652": {
+       "content": "<|vision_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151653": {
+       "content": "<|vision_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151654": {
+       "content": "<|vision_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151655": {
+       "content": "<|image_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151656": {
+       "content": "<|video_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151657": {
+       "content": "<tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151658": {
+       "content": "</tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151659": {
+       "content": "<|fim_prefix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151660": {
+       "content": "<|fim_middle|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151661": {
+       "content": "<|fim_suffix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151662": {
+       "content": "<|fim_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151663": {
+       "content": "<|repo_name|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151664": {
+       "content": "<|file_sep|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "model_max_length": 131072,
+   "pad_token": "<|endoftext|>",
+   "processor_class": "Qwen2_5_VLVisionProcessor",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
video_preprocessor_config.json ADDED
@@ -0,0 +1,43 @@
+ {
+   "crop_size": null,
+   "data_format": "channels_first",
+   "default_to_square": true,
+   "device": null,
+   "do_center_crop": null,
+   "do_convert_rgb": true,
+   "do_normalize": true,
+   "do_rescale": true,
+   "do_resize": true,
+   "do_sample_frames": false,
+   "fps": null,
+   "image_mean": [
+     0.48145466,
+     0.4578275,
+     0.40821073
+   ],
+   "image_std": [
+     0.26862954,
+     0.26130258,
+     0.27577711
+   ],
+   "input_data_format": null,
+   "max_frames": 768,
+   "max_pixels": 12845056,
+   "merge_size": 2,
+   "min_frames": 4,
+   "min_pixels": 3136,
+   "num_frames": null,
+   "pad_size": null,
+   "patch_size": 14,
+   "processor_class": "Qwen2_5_VLVisionProcessor",
+   "resample": 3,
+   "rescale_factor": 0.00392156862745098,
+   "return_metadata": false,
+   "size": {
+     "longest_edge": 12845056,
+     "shortest_edge": 3136
+   },
+   "temporal_patch_size": 2,
+   "video_metadata": null,
+   "video_processor_type": "Qwen2VLVideoProcessor"
+ }
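
The config values `patch_size: 14`, `merge_size: 2`, and `temporal_patch_size: 2` determine how many visual tokens a video contributes. A hedged sketch of that arithmetic (`video_token_count` is an illustrative helper, not the repo's API, and assumes dimensions already divisible by the patch sizes):

```python
# Token count implied by the video preprocessor config: frames are grouped in
# pairs (temporal_patch_size=2), each frame is cut into 14x14 patches, and
# 2x2 patch groups are merged into one LLM token (merge_size=2).
def video_token_count(num_frames, height, width, patch_size=14, merge_size=2, temporal_patch_size=2):
    grid_t = num_frames // temporal_patch_size
    grid_h = height // patch_size
    grid_w = width // patch_size
    return (grid_t * grid_h * grid_w) // (merge_size ** 2)

# 16 frames at 448x448: 8 * 32 * 32 // 4 = 2048 visual tokens.
print(video_token_count(16, 448, 448))  # 2048
```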
vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
zero_to_fp32.py ADDED
@@ -0,0 +1,760 @@
+ #!/usr/bin/env python
+
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0
+
+ # DeepSpeed Team
+
+ # This script extracts fp32 consolidated weights from ZeRO 1, 2 and 3 DeepSpeed checkpoints. It gets
+ # copied into the top level checkpoint dir, so the user can easily do the conversion at any point in
+ # the future. Once extracted, the weights don't require DeepSpeed and can be used in any
+ # application.
+ #
+ # example:
+ # python zero_to_fp32.py . output_dir/
+ # or
+ # python zero_to_fp32.py . output_dir/ --safe_serialization
+
+ import argparse
+ import torch
+ import glob
+ import math
+ import os
+ import re
+ import gc
+ import json
+ import numpy as np
+ from tqdm import tqdm
+ from collections import OrderedDict
+ from dataclasses import dataclass
+
+ # while this script doesn't use deepspeed to recover data, since the checkpoints are pickled with
+ # DeepSpeed data structures it has to be available in the current python environment.
+ from deepspeed.utils import logger
+ from deepspeed.checkpoint.constants import (DS_VERSION, OPTIMIZER_STATE_DICT, SINGLE_PARTITION_OF_FP32_GROUPS,
+                                             FP32_FLAT_GROUPS, ZERO_STAGE, PARTITION_COUNT, PARAM_SHAPES, BUFFER_NAMES,
+                                             FROZEN_PARAM_SHAPES, FROZEN_PARAM_FRAGMENTS)
+
+
+ @dataclass
+ class zero_model_state:
+     buffers: dict()
+     param_shapes: dict()
+     shared_params: list
+     ds_version: int
+     frozen_param_shapes: dict()
+     frozen_param_fragments: dict()
+
+
+ debug = 0
+
+ # load to cpu
+ device = torch.device('cpu')
+
+
+ def atoi(text):
+     return int(text) if text.isdigit() else text
+
+
+ def natural_keys(text):
+     '''
+     alist.sort(key=natural_keys) sorts in human order
+     http://nedbatchelder.com/blog/200712/human_sorting.html
+     (See Toothy's implementation in the comments)
+     '''
+     return [atoi(c) for c in re.split(r'(\d+)', text)]
+
+
+ def get_model_state_file(checkpoint_dir, zero_stage):
+     if not os.path.isdir(checkpoint_dir):
+         raise FileNotFoundError(f"Directory '{checkpoint_dir}' doesn't exist")
+
+     # there should be only one file
+     if zero_stage <= 2:
+         file = os.path.join(checkpoint_dir, "mp_rank_00_model_states.pt")
+     elif zero_stage == 3:
+         file = os.path.join(checkpoint_dir, "zero_pp_rank_0_mp_rank_00_model_states.pt")
+
+     if not os.path.exists(file):
+         raise FileNotFoundError(f"can't find model states file at '{file}'")
+
+     return file
+
+
+ def get_checkpoint_files(checkpoint_dir, glob_pattern):
+     # XXX: need to test that this simple glob rule works for multi-node setup too
+     ckpt_files = sorted(glob.glob(os.path.join(checkpoint_dir, glob_pattern)), key=natural_keys)
+
+     if len(ckpt_files) == 0:
+         raise FileNotFoundError(f"can't find {glob_pattern} files in directory '{checkpoint_dir}'")
+
+     return ckpt_files
+
+
+ def get_optim_files(checkpoint_dir):
+     return get_checkpoint_files(checkpoint_dir, "*_optim_states.pt")
+
+
+ def get_model_state_files(checkpoint_dir):
+     return get_checkpoint_files(checkpoint_dir, "*_model_states.pt")
+
+
+ def parse_model_states(files):
+     zero_model_states = []
+     for file in files:
+         state_dict = torch.load(file, map_location=device, weights_only=False)
+
+         if BUFFER_NAMES not in state_dict:
+             raise ValueError(f"{file} is not a model state checkpoint")
+         buffer_names = state_dict[BUFFER_NAMES]
+         if debug:
+             print("Found buffers:", buffer_names)
+
+         # recover just the buffers while restoring them to fp32 if they were saved in fp16
+         buffers = {k: v.float() for k, v in state_dict["module"].items() if k in buffer_names}
+         param_shapes = state_dict[PARAM_SHAPES]
+
+         # collect parameters that are included in param_shapes
+         param_names = []
+         for s in param_shapes:
+             for name in s.keys():
+                 param_names.append(name)
+
+         # update with frozen parameters
+         frozen_param_shapes = state_dict.get(FROZEN_PARAM_SHAPES, None)
+         if frozen_param_shapes is not None:
+             if debug:
+                 print(f"Found frozen_param_shapes: {frozen_param_shapes}")
+             param_names += list(frozen_param_shapes.keys())
+
+         # handle shared params
+         shared_params = [[k, v] for k, v in state_dict["shared_params"].items()]
+
+         ds_version = state_dict.get(DS_VERSION, None)
+
+         frozen_param_fragments = state_dict.get(FROZEN_PARAM_FRAGMENTS, None)
+
+         z_model_state = zero_model_state(buffers=buffers,
+                                          param_shapes=param_shapes,
+                                          shared_params=shared_params,
+                                          ds_version=ds_version,
+                                          frozen_param_shapes=frozen_param_shapes,
+                                          frozen_param_fragments=frozen_param_fragments)
+         zero_model_states.append(z_model_state)
+
+     return zero_model_states
+
147
+
148
+ def parse_optim_states(files, ds_checkpoint_dir):
149
+ total_files = len(files)
150
+ state_dicts = []
151
+ for f in tqdm(files, desc='Loading checkpoint shards'):
152
+ state_dict = torch.load(f, map_location=device, mmap=True, weights_only=False)
153
+ # immediately discard the potentially huge 2 optimizer states as we only care for fp32 master weights
154
+ # and also handle the case where it was already removed by another helper script
155
+ state_dict["optimizer_state_dict"].pop("optimizer_state_dict", None)
156
+ state_dicts.append(state_dict)
157
+
158
+ if ZERO_STAGE not in state_dicts[0][OPTIMIZER_STATE_DICT]:
159
+ raise ValueError(f"{files[0]} is not a zero checkpoint")
160
+ zero_stage = state_dicts[0][OPTIMIZER_STATE_DICT][ZERO_STAGE]
161
+ world_size = state_dicts[0][OPTIMIZER_STATE_DICT][PARTITION_COUNT]
162
+
163
+ # For ZeRO-2 each param group can have different partition_count as data parallelism for expert
164
+ # parameters can be different from data parallelism for non-expert parameters. So we can just
165
+ # use the max of the partition_count to get the dp world_size.
166
+
167
+ if type(world_size) is list:
168
+ world_size = max(world_size)
169
+
170
+ if world_size != total_files:
171
+ raise ValueError(
172
+ f"Expected {world_size} of '*_optim_states.pt' under '{ds_checkpoint_dir}' but found {total_files} files. "
173
+ "Possibly due to an overwrite of an old checkpoint, or a checkpoint didn't get saved by one or more processes."
174
+ )
175
+
176
+ # the groups are named differently in each stage
177
+ if zero_stage <= 2:
178
+ fp32_groups_key = SINGLE_PARTITION_OF_FP32_GROUPS
179
+ elif zero_stage == 3:
180
+ fp32_groups_key = FP32_FLAT_GROUPS
181
+ else:
182
+ raise ValueError(f"unknown zero stage {zero_stage}")
183
+
184
+ fp32_flat_groups = [state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key] for i in range(len(state_dicts))]
185
+ return zero_stage, world_size, fp32_flat_groups
186
+
187
+
188
+ def _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters):
189
+ """
190
+ Returns fp32 state_dict reconstructed from ds checkpoint
191
+
192
+ Args:
193
+ - ``ds_checkpoint_dir``: path to the deepspeed checkpoint folder (where the optimizer files are)
194
+
195
+ """
196
+ print(f"Processing zero checkpoint '{ds_checkpoint_dir}'")
197
+
198
+ optim_files = get_optim_files(ds_checkpoint_dir)
199
+ zero_stage, world_size, fp32_flat_groups = parse_optim_states(optim_files, ds_checkpoint_dir)
200
+ print(f"Detected checkpoint of type zero stage {zero_stage}, world_size: {world_size}")
201
+
202
+ model_files = get_model_state_files(ds_checkpoint_dir)
203
+
204
+ zero_model_states = parse_model_states(model_files)
205
+ print(f'Parsing checkpoint created by deepspeed=={zero_model_states[0].ds_version}')
206
+
207
+ if zero_stage <= 2:
208
+ return _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states,
209
+ exclude_frozen_parameters)
210
+ elif zero_stage == 3:
211
+ return _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states,
212
+ exclude_frozen_parameters)
213
+
214
+
215
+ def _zero2_merge_frozen_params(state_dict, zero_model_states):
216
+ if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
217
+ return
218
+
219
+ frozen_param_shapes = zero_model_states[0].frozen_param_shapes
220
+ frozen_param_fragments = zero_model_states[0].frozen_param_fragments
221
+
222
+ if debug:
223
+ num_elem = sum(s.numel() for s in frozen_param_shapes.values())
224
+ print(f'rank 0: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
225
+
226
+ wanted_params = len(frozen_param_shapes)
227
+ wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
228
+ avail_numel = sum([p.numel() for p in frozen_param_fragments.values()])
229
+ print(f'Frozen params: Have {avail_numel} numels to process.')
230
+ print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
231
+
232
+ total_params = 0
233
+ total_numel = 0
234
+ for name, shape in frozen_param_shapes.items():
235
+ total_params += 1
236
+ unpartitioned_numel = shape.numel()
237
+ total_numel += unpartitioned_numel
238
+
239
+ state_dict[name] = frozen_param_fragments[name]
240
+
241
+ if debug:
242
+ print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
243
+
244
+ print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
245
+
246
+
247
+ def _has_callable(obj, fn):
248
+ attr = getattr(obj, fn, None)
249
+ return callable(attr)
250
+
251
+
252
+ def _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
253
+ param_shapes = zero_model_states[0].param_shapes
254
+
255
+ # Reconstruction protocol:
256
+ #
257
+ # XXX: document this
258
+ 
+     if debug:
+         for i in range(world_size):
+             for j in range(len(fp32_flat_groups[0])):
+                 print(f"{FP32_FLAT_GROUPS}[{i}][{j}].shape={fp32_flat_groups[i][j].shape}")
+ 
+     # XXX: memory usage doubles here (zero2)
+     num_param_groups = len(fp32_flat_groups[0])
+     merged_single_partition_of_fp32_groups = []
+     for i in range(num_param_groups):
+         merged_partitions = [sd[i] for sd in fp32_flat_groups]
+         full_single_fp32_vector = torch.cat(merged_partitions, 0)
+         merged_single_partition_of_fp32_groups.append(full_single_fp32_vector)
+     avail_numel = sum(
+         [full_single_fp32_vector.numel() for full_single_fp32_vector in merged_single_partition_of_fp32_groups])
+ 
+     if debug:
+         wanted_params = sum([len(shapes) for shapes in param_shapes])
+         wanted_numel = sum([sum(shape.numel() for shape in shapes.values()) for shapes in param_shapes])
+         # not asserting if there is a mismatch due to possible padding
+         print(f"Have {avail_numel} numels to process.")
+         print(f"Need {wanted_numel} numels in {wanted_params} params.")
+ 
+     # params
+     # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
+     # out-of-core computing solution
+     total_numel = 0
+     total_params = 0
+     for shapes, full_single_fp32_vector in zip(param_shapes, merged_single_partition_of_fp32_groups):
+         offset = 0
+         avail_numel = full_single_fp32_vector.numel()
+         for name, shape in shapes.items():
+ 
+             unpartitioned_numel = shape.numel() if _has_callable(shape, 'numel') else math.prod(shape)
+             total_numel += unpartitioned_numel
+             total_params += 1
+ 
+             if debug:
+                 print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
+             state_dict[name] = full_single_fp32_vector.narrow(0, offset, unpartitioned_numel).view(shape)
+             offset += unpartitioned_numel
+ 
+         # Z2 started to align to 2*world_size to improve nccl performance. Therefore both offset and
+         # avail_numel can differ by anywhere between 0..2*world_size. Due to two unrelated complex
+         # paddings performed in the code it's almost impossible to predict the exact numbers w/o the
+         # live optimizer object, so we are checking that the numbers are within the right range
+         align_to = 2 * world_size
+ 
+         def zero2_align(x):
+             return align_to * math.ceil(x / align_to)
+ 
+         if debug:
+             print(f"original offset={offset}, avail_numel={avail_numel}")
+ 
+         offset = zero2_align(offset)
+         avail_numel = zero2_align(avail_numel)
+ 
+         if debug:
+             print(f"aligned offset={offset}, avail_numel={avail_numel}")
+ 
+         # Sanity check
+         if offset != avail_numel:
+             raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
+ 
+     print(f"Reconstructed fp32 state dict with {total_params} params {total_numel} elements")
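As an editorial aside (not part of the upstream script): the `2 * world_size` alignment described in the comment above can be sketched in isolation. The `world_size` value here is a hypothetical example:

```python
import math

# Hypothetical world_size for illustration. ZeRO-2 pads each rank's flat
# partition so its length is a multiple of 2 * world_size (nccl-friendly),
# which is why the sanity check compares aligned counts rather than raw ones.
world_size = 4
align_to = 2 * world_size

def zero2_align(x):
    # Round x up to the next multiple of align_to.
    return align_to * math.ceil(x / align_to)

print(zero2_align(10))  # 16
print(zero2_align(16))  # 16
print(zero2_align(17))  # 24
```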
+ 
+ 
+ def _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states,
+                                                exclude_frozen_parameters):
+     state_dict = OrderedDict()
+ 
+     # buffers
+     buffers = zero_model_states[0].buffers
+     state_dict.update(buffers)
+     if debug:
+         print(f"added {len(buffers)} buffers")
+ 
+     if not exclude_frozen_parameters:
+         _zero2_merge_frozen_params(state_dict, zero_model_states)
+ 
+     _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
+ 
+     # recover shared parameters
+     for pair in zero_model_states[0].shared_params:
+         if pair[1] in state_dict:
+             state_dict[pair[0]] = state_dict[pair[1]]
+ 
+     return state_dict
+ 
+ 
+ def zero3_partitioned_param_info(unpartitioned_numel, world_size):
+     remainder = unpartitioned_numel % world_size
+     padding_numel = (world_size - remainder) if remainder else 0
+     partitioned_numel = math.ceil(unpartitioned_numel / world_size)
+     return partitioned_numel, padding_numel
+ 
+ 
+ def _zero3_merge_frozen_params(state_dict, world_size, zero_model_states):
+     if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
+         return
+ 
+     if debug:
+         for i in range(world_size):
+             num_elem = sum(s.numel() for s in zero_model_states[i].frozen_param_fragments.values())
+             print(f'rank {i}: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
+ 
+         frozen_param_shapes = zero_model_states[0].frozen_param_shapes
+         wanted_params = len(frozen_param_shapes)
+         wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
+         avail_numel = sum([p.numel() for p in zero_model_states[0].frozen_param_fragments.values()]) * world_size
+         print(f'Frozen params: Have {avail_numel} numels to process.')
+         print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
+ 
+     total_params = 0
+     total_numel = 0
+     for name, shape in zero_model_states[0].frozen_param_shapes.items():
+         total_params += 1
+         unpartitioned_numel = shape.numel()
+         total_numel += unpartitioned_numel
+ 
+         param_frags = tuple(model_state.frozen_param_fragments[name] for model_state in zero_model_states)
+         state_dict[name] = torch.cat(param_frags, 0).narrow(0, 0, unpartitioned_numel).view(shape)
+ 
+         partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
+ 
+         if debug:
+             print(
+                 f"Frozen params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
+             )
+ 
+     print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
+ 
+ 
+ class GatheredTensor:
+     """
+     A pseudo tensor that collects partitioned weights.
+     It is more memory efficient when there are multiple groups.
+     """
+ 
+     def __init__(self, flat_groups, flat_groups_offset, offset, partitioned_numel, shape):
+         self.flat_groups = flat_groups
+         self.flat_groups_offset = flat_groups_offset
+         self.offset = offset
+         self.partitioned_numel = partitioned_numel
+         self.shape = shape
+         self.dtype = self.flat_groups[0][0].dtype
+ 
+     def contiguous(self):
+         """
+         Merge partitioned weights from flat_groups into a single tensor.
+         """
+         end_idx = self.offset + self.partitioned_numel
+         world_size = len(self.flat_groups)
+         pad_flat_param_chunks = []
+ 
+         for rank_i in range(world_size):
+             # for each rank, we need to collect weights from related group/groups
+             flat_groups_at_rank_i = self.flat_groups[rank_i]
+             start_group_id = None
+             end_group_id = None
+             for group_id in range(len(self.flat_groups_offset)):
+                 if self.flat_groups_offset[group_id] <= self.offset < self.flat_groups_offset[group_id + 1]:
+                     start_group_id = group_id
+                 if self.flat_groups_offset[group_id] < end_idx <= self.flat_groups_offset[group_id + 1]:
+                     end_group_id = group_id
+                     break
+             # collect weights from related group/groups
+             for group_id in range(start_group_id, end_group_id + 1):
+                 flat_tensor = flat_groups_at_rank_i[group_id]
+                 start_offset = self.offset - self.flat_groups_offset[group_id]
+                 end_offset = min(end_idx, self.flat_groups_offset[group_id + 1]) - self.flat_groups_offset[group_id]
+                 pad_flat_param_chunks.append(flat_tensor[start_offset:end_offset])
+ 
+         # collect weights from all ranks
+         pad_flat_param = torch.cat(pad_flat_param_chunks, dim=0)
+         param = pad_flat_param[:self.shape.numel()].view(self.shape).contiguous()
+         return param
+ 
+ 
+ def _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
+     param_shapes = zero_model_states[0].param_shapes
+     avail_numel = sum([flat_group.numel() for flat_group in fp32_flat_groups[0]]) * world_size
+ 
+     # Reconstruction protocol: For zero3 we need to zip the partitions together at boundary of each
+     # param, re-consolidating each param, while dealing with padding if any
+ 
+     # merge list of dicts, preserving order
+     param_shapes = {k: v for d in param_shapes for k, v in d.items()}
+ 
+     if debug:
+         for i in range(world_size):
+             print(f"{FP32_FLAT_GROUPS}[{i}].shape={fp32_flat_groups[i].shape}")
+ 
+         wanted_params = len(param_shapes)
+         wanted_numel = sum(shape.numel() for shape in param_shapes.values())
+         # not asserting if there is a mismatch due to possible padding
+         avail_numel = fp32_flat_groups[0].numel() * world_size
+         print(f"Trainable params: Have {avail_numel} numels to process.")
+         print(f"Trainable params: Need {wanted_numel} numels in {wanted_params} params.")
+ 
+     # params
+     # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
+     # out-of-core computing solution
+     offset = 0
+     total_numel = 0
+     total_params = 0
+     flat_groups_offset = [0] + list(np.cumsum([flat_tensor.numel() for flat_tensor in fp32_flat_groups[0]]))
+     for name, shape in tqdm(param_shapes.items(), desc='Gathering sharded weights'):
+         unpartitioned_numel = shape.numel()
+         total_numel += unpartitioned_numel
+         total_params += 1
+         partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
+ 
+         if debug:
+             print(
+                 f"Trainable params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
+             )
+ 
+         # memory efficient tensor
+         tensor = GatheredTensor(fp32_flat_groups, flat_groups_offset, offset, partitioned_numel, shape)
+         state_dict[name] = tensor
+         offset += partitioned_numel
+ 
+     offset *= world_size
+ 
+     # Sanity check
+     if offset != avail_numel:
+         raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
+ 
+     print(f"Reconstructed Trainable fp32 state dict with {total_params} params {total_numel} elements")
+ 
+ 
+ def _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states,
+                                                exclude_frozen_parameters):
+     state_dict = OrderedDict()
+ 
+     # buffers
+     buffers = zero_model_states[0].buffers
+     state_dict.update(buffers)
+     if debug:
+         print(f"added {len(buffers)} buffers")
+ 
+     if not exclude_frozen_parameters:
+         _zero3_merge_frozen_params(state_dict, world_size, zero_model_states)
+ 
+     _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
+ 
+     # recover shared parameters
+     for pair in zero_model_states[0].shared_params:
+         if pair[1] in state_dict:
+             state_dict[pair[0]] = state_dict[pair[1]]
+ 
+     return state_dict
+ 
+ 
+ def to_torch_tensor(state_dict, return_empty_tensor=False):
+     """
+     Convert state_dict of GatheredTensor to torch tensor
+     """
+     torch_state_dict = {}
+     converted_tensors = {}
+     for name, tensor in state_dict.items():
+         tensor_id = id(tensor)
+         if tensor_id in converted_tensors:  # shared tensors
+             shared_tensor = torch_state_dict[converted_tensors[tensor_id]]
+             torch_state_dict[name] = shared_tensor
+         else:
+             converted_tensors[tensor_id] = name
+             if return_empty_tensor:
+                 torch_state_dict[name] = torch.empty(tensor.shape, dtype=tensor.dtype)
+             else:
+                 torch_state_dict[name] = tensor.contiguous()
+     return torch_state_dict
+ 
+ 
+ def get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir,
+                                              tag=None,
+                                              exclude_frozen_parameters=False,
+                                              lazy_mode=False):
+     """
+     Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated state_dict that can be loaded with
+     ``load_state_dict()`` and used for training without DeepSpeed or shared with others, for example
+     via a model hub.
+ 
+     Args:
+         - ``checkpoint_dir``: path to the desired checkpoint folder
+         - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in 'latest' file. e.g., ``global_step14``
+         - ``exclude_frozen_parameters``: exclude frozen parameters
+         - ``lazy_mode``: get state_dict in lazy mode. It returns a dict of pseudo tensors instead of torch tensors, which is more memory efficient.
+           Convert a pseudo tensor to a torch tensor by calling ``.contiguous()``
+ 
+     Returns:
+         - pytorch ``state_dict``
+ 
+     A typical usage might be ::
+ 
+         from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
+         # do the training and checkpoint saving
+         state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
+         model = model.cpu() # move to cpu
+         model.load_state_dict(state_dict)
+         # submit to model hub or save the model to share with others
+ 
+     In this example the ``model`` will no longer be usable in the deepspeed context of the same
+     application. i.e. you will need to re-initialize the deepspeed engine, since
+     ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
+ 
+     If you want it all done for you, use ``load_state_dict_from_zero_checkpoint`` instead.
+ 
+     Note: the above usage may not work if your application doesn't have sufficient free CPU memory.
+     You may need to use the offline approach using the ``zero_to_fp32.py`` script that is saved with
+     the checkpoint. Or you can load the state_dict in lazy mode ::
+ 
+         from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
+         state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, lazy_mode=True) # not on cpu
+         for name, lazy_tensor in state_dict.items():
+             tensor = lazy_tensor.contiguous()  # to cpu
+             print(name, tensor)
+             # del tensor to release memory if it is no longer in use
+     """
+     if tag is None:
+         latest_path = os.path.join(checkpoint_dir, 'latest')
+         if os.path.isfile(latest_path):
+             with open(latest_path, 'r') as fd:
+                 tag = fd.read().strip()
+         else:
+             raise ValueError(f"Unable to find 'latest' file at {latest_path}")
+ 
+     ds_checkpoint_dir = os.path.join(checkpoint_dir, tag)
+ 
+     if not os.path.isdir(ds_checkpoint_dir):
+         raise FileNotFoundError(f"Directory '{ds_checkpoint_dir}' doesn't exist")
+ 
+     state_dict = _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters)
+     if lazy_mode:
+         return state_dict
+     else:
+         return to_torch_tensor(state_dict)
+ 
+ 
+ def convert_zero_checkpoint_to_fp32_state_dict(checkpoint_dir,
+                                                output_dir,
+                                                max_shard_size="5GB",
+                                                safe_serialization=False,
+                                                tag=None,
+                                                exclude_frozen_parameters=False):
+     """
+     Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict`` file that can be
+     loaded with ``torch.load(file)`` + ``load_state_dict()`` and used for training without DeepSpeed.
+ 
+     Args:
+         - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
+         - ``output_dir``: directory to the pytorch fp32 state_dict output files
+         - ``max_shard_size``: the maximum size for a checkpoint before being sharded, default value is 5GB
+         - ``safe_serialization``: whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
+         - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
+         - ``exclude_frozen_parameters``: exclude frozen parameters
+     """
+ 
+     # Dependency pre-check
+     if safe_serialization:
+         try:
+             from safetensors.torch import save_file
+         except ImportError:
+             print('If you want to use `safe_serialization`, please `pip install safetensors`')
+             raise
+     if max_shard_size is not None:
+         try:
+             from huggingface_hub import split_torch_state_dict_into_shards
+         except ImportError:
+             print('If you want to use `max_shard_size`, please `pip install huggingface_hub`')
+             raise
+ 
+     # Convert zero checkpoint to state_dict
+     state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir,
+                                                           tag,
+                                                           exclude_frozen_parameters,
+                                                           lazy_mode=True)
+ 
+     # Shard the model if it is too big.
+     weights_name = "model.safetensors" if safe_serialization else "pytorch_model.bin"
+     if max_shard_size is not None:
+         filename_pattern = weights_name.replace(".bin", "{suffix}.bin").replace(".safetensors", "{suffix}.safetensors")
+         # a memory-efficient approach for sharding
+         empty_state_dict = to_torch_tensor(state_dict, return_empty_tensor=True)
+         state_dict_split = split_torch_state_dict_into_shards(empty_state_dict,
+                                                               filename_pattern=filename_pattern,
+                                                               max_shard_size=max_shard_size)
+     else:
+         from collections import namedtuple
+         StateDictSplit = namedtuple("StateDictSplit", ["is_sharded", "filename_to_tensors"])
+         state_dict_split = StateDictSplit(is_sharded=False,
+                                           filename_to_tensors={weights_name: list(state_dict.keys())})
+ 
+     # Save the model by shard
+     os.makedirs(output_dir, exist_ok=True)
+     filename_to_tensors = state_dict_split.filename_to_tensors.items()
+     for shard_file, tensors in tqdm(filename_to_tensors, desc="Saving checkpoint shards"):
+         shard_state_dict = {tensor_name: state_dict[tensor_name] for tensor_name in tensors}
+         shard_state_dict = to_torch_tensor(shard_state_dict)
+         output_path = os.path.join(output_dir, shard_file)
+         if safe_serialization:
+             save_file(shard_state_dict, output_path, metadata={"format": "pt"})
+         else:
+             torch.save(shard_state_dict, output_path)
+         # release the memory of the current shard
+         for tensor_name in list(shard_state_dict.keys()):
+             del state_dict[tensor_name]
+             del shard_state_dict[tensor_name]
+         del shard_state_dict
+         gc.collect()
+ 
+     # Save index if sharded
+     if state_dict_split.is_sharded:
+         index = {
+             "metadata": state_dict_split.metadata,
+             "weight_map": state_dict_split.tensor_to_filename,
+         }
+         save_index_file = "model.safetensors.index.json" if safe_serialization else "pytorch_model.bin.index.json"
+         save_index_file = os.path.join(output_dir, save_index_file)
+         with open(save_index_file, "w", encoding="utf-8") as f:
+             content = json.dumps(index, indent=2, sort_keys=True) + "\n"
+             f.write(content)
+ 
+ 
+ def load_state_dict_from_zero_checkpoint(model, checkpoint_dir, tag=None):
+     """
+     1. Put the provided model to cpu
+     2. Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict``
+     3. Load it into the provided model
+ 
+     Args:
+         - ``model``: the model object to update
+         - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
+         - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
+ 
+     Returns:
+         - ``model``: modified model
+ 
+     Make sure you have plenty of CPU memory available before you call this function. If you don't
+     have enough, use the ``zero_to_fp32.py`` utility to do the conversion. You will find it
+     conveniently placed for you in the checkpoint folder.
+ 
+     A typical usage might be ::
+ 
+         from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
+         model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
+         # submit to model hub or save the model to share with others
+ 
+     Note that once this is run, the ``model`` will no longer be usable in the deepspeed context
+     of the same application. i.e. you will need to re-initialize the deepspeed engine, since
+     ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
+ 
+     """
+     logger.info("Extracting fp32 weights")
+     state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
+ 
+     logger.info("Overwriting model with fp32 weights")
+     model = model.cpu()
+     model.load_state_dict(state_dict, strict=False)
+ 
+     return model
+ 
+ 
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser()
+     parser.add_argument("checkpoint_dir",
+                         type=str,
+                         help="path to the desired checkpoint folder, e.g., path/checkpoint-12")
+     parser.add_argument("output_dir",
+                         type=str,
+                         help="directory to the pytorch fp32 state_dict output files "
+                         "(e.g. path/checkpoint-12-output/)")
+     parser.add_argument(
+         "--max_shard_size",
+         type=str,
+         default="5GB",
+         help="The maximum size for a checkpoint before being sharded. Each shard will then be of a size "
+         "lower than this limit. If expressed as a string, it needs to be digits followed by a unit (like `5MB`). "
+         "We default it to 5GB in order for models to be able to run easily on free-tier google colab instances "
+         "without CPU OOM issues.")
+     parser.add_argument(
+         "--safe_serialization",
+         default=False,
+         action='store_true',
+         help="Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).")
+     parser.add_argument("-t",
+                         "--tag",
+                         type=str,
+                         default=None,
+                         help="checkpoint tag used as a unique identifier for checkpoint. e.g., global_step1")
+     parser.add_argument("--exclude_frozen_parameters", action='store_true', help="exclude frozen parameters")
+     parser.add_argument("-d", "--debug", action='store_true', help="enable debug")
+     args = parser.parse_args()
+ 
+     debug = args.debug
+ 
+     convert_zero_checkpoint_to_fp32_state_dict(args.checkpoint_dir,
+                                                args.output_dir,
+                                                max_shard_size=args.max_shard_size,
+                                                safe_serialization=args.safe_serialization,
+                                                tag=args.tag,
+                                                exclude_frozen_parameters=args.exclude_frozen_parameters)
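As an editorial aside (not part of the upstream script): the ZeRO-3 partitioning arithmetic used by `zero3_partitioned_param_info` above can be sanity-checked in isolation. This standalone sketch mirrors that helper rather than importing the script:

```python
import math

def zero3_partitioned_param_info(unpartitioned_numel, world_size):
    # Mirrors the helper in the script above: each rank holds
    # ceil(N / world_size) elements, and the final partition is padded so
    # that all partitions end up equal-sized.
    remainder = unpartitioned_numel % world_size
    padding_numel = (world_size - remainder) if remainder else 0
    partitioned_numel = math.ceil(unpartitioned_numel / world_size)
    return partitioned_numel, padding_numel

# A 10-element param sharded across 4 ranks: 3 elements per rank, 2 of padding.
print(zero3_partitioned_param_info(10, 4))  # (3, 2)
# An exact split needs no padding.
print(zero3_partitioned_param_info(8, 4))   # (2, 0)
```

Note the invariant checked by the sanity check in `_zero3_merge_trainable_params`: `partitioned_numel * world_size == unpartitioned_numel + padding_numel`.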