linhuixiao committed e5ef08b (verified) · Parent: 824c6c8

Update README.md

Files changed (1): README.md (+496, −3)

---
license: apache-2.0
---
[//]: # (<br />)
<p align="center"> <h1 align="center">HiVG: Hierarchical Multimodal Fine-grained Modulation for Visual Grounding</h1>
<p align="center">
  <b> ACM MM, 2024 </b>
  <br />
  <a href="https://scholar.google.com.hk/citations?user=4rTE4ogAAAAJ&hl=zh-CN&oi=sra"><strong>Linhui Xiao</strong></a>
  ·
  <a href="https://yangxs.ac.cn/home"><strong>Xiaoshan Yang</strong></a>
  ·
  <a href="https://scholar.google.com.hk/citations?user=HBZ9plsAAAAJ&hl=zh-CN"><strong>Fang Peng</strong></a>
  ·
  <a href="https://scholar.google.com.hk/citations?user=o_DllmIAAAAJ&hl=zh-CN"><strong>Yaowei Wang</strong></a>
  ·
  <a href="https://scholar.google.com.hk/citations?user=hI9NRDkAAAAJ&hl=zh-CN"><strong>Changsheng Xu</strong></a>
</p>

<p align="center">
  <a href='https://arxiv.org/pdf/2404.13400'>
    <img src='https://img.shields.io/badge/arXiv-PDF-green?style=flat&logo=arXiv&logoColor=green' alt='arXiv PDF'>
  </a>
  <a href='https://openreview.net/forum?id=NMMyGy1kKZ'>
    <img src='https://img.shields.io/badge/ACM MM 2024-purple' alt='ACM MM 2024'>
  </a>
  <a href='docs/ACM_MM_2024_HiVG_poster.pdf'>
    <img src='https://img.shields.io/badge/ACM MM Poster-lightblue' alt='ACM MM Poster'>
  </a>
  <br />

<p align="center"> <img src='docs/model.jpg' align="center" width="55%"> </p>

This repository is the official PyTorch implementation of the paper [**HiVG: Hierarchical Multimodal Fine-grained Modulation for Visual Grounding**](https://arxiv.org/abs/2404.13400), an advanced version of our preliminary work **CLIP-VG** ([GitHub](https://github.com/linhuixiao/CLIP-VG), [publication](https://ieeexplore.ieee.org/abstract/document/10269126), [arXiv](https://arxiv.org/abs/2305.08685)).

If you have any questions, please feel free to open an issue or contact me by email: <[email protected]>. Any kind of discussion is welcome!

<h3 align="left">
Links:
<a href="https://arxiv.org/pdf/2404.13400">arXiv</a>,
<a href="https://openreview.net/forum?id=NMMyGy1kKZ">ACM MM 2024</a>
</h3>

**Please leave a <font color='orange'>STAR ⭐</font> if you like this project!**

## News
- 🔥🔥🔥 **Our grounding survey ([TPAMI](https://doi.org/10.1109/TPAMI.2025.3630635), [arXiv](https://arxiv.org/abs/2412.20206), [Project](https://github.com/linhuixiao/Awesome-Visual-Grounding)) has been accepted by TPAMI on October 30, 2025!**
- :fire: **Update on 2025/01/30: The full code and models of HiVG have been released!**
- :fire: **Update on 2024/12/28: We conducted a survey of visual grounding over the past decade, entitled "Towards Visual Grounding: A Survey" ([Paper](https://arxiv.org/pdf/2412.20206), [Project](https://github.com/linhuixiao/Awesome-Visual-Grounding)). Comments are welcome!**
- :fire: **Update on 2024/10/10: Our advanced one-tower grounding work OneRef ([Paper](https://arxiv.org/abs/2410.08021), [Code](https://github.com/linhuixiao/OneRef)) has been accepted by the top conference NeurIPS 2024!**
- :fire: **Update on 2024/07/16: Our advanced grounding work HiVG ([Paper](https://openreview.net/pdf?id=NMMyGy1kKZ), [Code](https://github.com/linhuixiao/HiVG)) has been accepted by the top conference ACM MM 2024!**
- **Update on 2024/04/20: Released the HiVG project repository.**
- **Update on 2023/9/25: Our preliminary work CLIP-VG ([Paper](https://ieeexplore.ieee.org/abstract/document/10269126), [Code](https://github.com/linhuixiao/CLIP-VG)) has been accepted by the top journal IEEE Transactions on Multimedia (2023)!**

## Citation

If you find our work helpful for your research, please consider citing the following BibTeX entry.

```bibtex
@inproceedings{xiao2024hivg,
  title={HiVG: Hierarchical Multimodal Fine-grained Modulation for Visual Grounding},
  author={Linhui Xiao and Xiaoshan Yang and Fang Peng and Yaowei Wang and Changsheng Xu},
  booktitle={ACM Multimedia 2024},
  year={2024},
  url={https://openreview.net/forum?id=NMMyGy1kKZ}
}
```

## Contents

1. [Introduction](#introduction)
2. [Usage](#usage)
3. [Results](#results)
4. [Contacts](#contacts)
5. [Acknowledgement](#acknowledgement)

## Highlight
- **A concise hierarchical multimodal modulation framework**, which utilizes the hierarchical structure to gradually adapt CLIP to grounding. HiVG achieves fine-grained interaction between multi-level visual representations and language semantics, and significantly alleviates the task gap between CLIP and grounding.
- **The first to propose a hierarchical multimodal low-rank adaptation paradigm.** Hi LoRA is a basic and concise hierarchical adaptation paradigm, and it is task-agnostic.
- **Extensive experiments are conducted to verify the effectiveness of the HiVG approach.** Results show that our method achieves promising performance, surpassing the SOTA methods under the same setting by a significant margin. Besides, our model offers significant computing efficiency advantages.

## TODO
- [x] Release all the checkpoints.
- [x] Release the full model code, training code, and inference code.

## Introduction

Visual grounding, which aims to ground a visual region via natural language, is a task that heavily relies on cross-modal alignment. Existing works utilized uni-modal pre-trained models to transfer visual/linguistic knowledge separately while ignoring the multimodal corresponding information. Motivated by recent advancements in contrastive language-image pre-training and low-rank adaptation (LoRA) methods, we aim to solve the grounding task based on multimodal pre-training. However, there exist significant task gaps between pre-training and grounding. To address these gaps, we propose **a concise and efficient hierarchical multimodal fine-grained modulation framework**, namely **HiVG**. Specifically, HiVG consists of a multi-layer adaptive cross-modal bridge and a hierarchical multimodal low-rank adaptation (Hi LoRA) paradigm. The cross-modal bridge can address the inconsistency between visual features and those required for grounding, and establish a connection between multi-level visual and text features. Hi LoRA prevents the accumulation of perceptual errors by adapting the cross-modal features from shallow to deep layers in a hierarchical manner. Experimental results on five datasets demonstrate the effectiveness of our approach and showcase its significant grounding capabilities as well as promising energy efficiency advantages.

For more details, please refer to [our paper](https://arxiv.org/abs/2404.13400).

## Usage
### Dependencies
- Python 3.9.10
- PyTorch 2.2.2
- transformers==4.30.0
- peft==0.3.0
- Check [requirements.txt](requirements.txt) for other dependencies.
- We recommend running the code in an Anaconda environment. If a library is missing while the code is running, you can simply install it using `pip install <library_name>` or `conda install <library_name>`.

Our model is **easy to deploy** in a variety of environments and **has been successfully tested** on multiple PyTorch versions.

❗❗❗
**(Updated April 15, 2025) Please note that some researchers tested the HiVG models with the latest peft library and found that the CLIP model weights did not match, which reduced the accuracy. To avoid this problem, make sure the peft version is 0.3.0.**
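
As a quick guard against the version mismatch above, you can run a small sanity check before loading any checkpoint. This is an optional sketch (not part of the official codebase) that simply compares the installed versions against the pins in this README:

```python
# check_env.py -- optional sanity check for the pinned dependency versions listed in this README
from importlib.metadata import version, PackageNotFoundError

EXPECTED = {"peft": "0.3.0", "transformers": "4.30.0"}

def check_pinned_versions(expected=EXPECTED):
    for pkg, want in expected.items():
        try:
            got = version(pkg)
        except PackageNotFoundError:
            print(f"[WARN] {pkg} is not installed (expected {want})")
            continue
        status = "OK" if got == want else "MISMATCH"
        print(f"[{status}] {pkg}: installed {got}, expected {want}")

if __name__ == "__main__":
    check_pinned_versions()
```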

### Image Data Preparation
1. You can download the images from the original sources and place them in your disk folder, such as `$/path_to_image_data`:
- [MS COCO 2014](download_mscoco2014.sh) (for the RefCOCO, RefCOCO+, and RefCOCOg datasets, almost 13.0 GB)
- [ReferItGame](https://drive.google.com/drive/folders/1D4shieeoKly6FswpdjSpaOrxJQNKTyTv)
- [Flickr30K Entities](http://shannon.cs.illinois.edu/DenotationGraph/#:~:text=make%20face-,Downloads,-Please%20fill%20in)

We provide a script to download the mscoco2014 dataset; you just need to run it in a terminal with the following command:
```
bash download_mscoco2014.sh
```
Alternatively, you can follow the data preparation of TransVG, which can be found in [GETTING_STARTED.md](https://github.com/djiajunustc/TransVG/blob/main/docs/GETTING_STARTED.md).

Only the image data in these datasets is used, and this image data is easy to find in similar visual grounding repositories, such as [TransVG](https://github.com/linhuixiao/TransVG), etc.
Finally, the `$/path_to_image_data` folder will have the following structure (a small script to verify this layout is sketched below):

```angular2html
|-- image_data
   |-- Flickr30k
      |-- flickr30k-images
   |-- other
      |-- images
         |-- mscoco
            |-- images
               |-- train2014
   |-- referit
      |-- images
```
- ```$/path_to_image_data/image_data/Flickr30k/flickr30k-images/```: Image data for the Flickr30K dataset. Please download it from this [link](http://shannon.cs.illinois.edu/DenotationGraph/#:~:text=make%20face-,Downloads,-Please%20fill%20in); fill in the form and download the images.
- ```$/path_to_image_data/image_data/other/images/```: Image data for RefCOCO/RefCOCO+/RefCOCOg, i.e., mscoco2014.
- ```$/path_to_image_data/image_data/referit/images/```: Image data for ReferItGame.
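
If you want to double-check that the images landed in the right places before training, the following optional sketch walks the expected sub-folders and counts the images it finds. It assumes the directory layout shown above (the nesting of the mscoco folder follows the TransVG-style convention), and the root path is an illustrative placeholder:

```python
# verify_image_data.py -- optional layout check for $/path_to_image_data (illustrative, not part of the repo)
import os

EXPECTED_DIRS = [
    "image_data/Flickr30k/flickr30k-images",             # Flickr30K Entities images
    "image_data/other/images/mscoco/images/train2014",   # RefCOCO/RefCOCO+/RefCOCOg (mscoco2014) images
    "image_data/referit/images",                          # ReferItGame images
]

def verify_layout(root):
    for rel in EXPECTED_DIRS:
        path = os.path.join(root, rel)
        if not os.path.isdir(path):
            print(f"[MISSING] {path}")
            continue
        n_images = sum(1 for f in os.listdir(path) if f.lower().endswith((".jpg", ".jpeg", ".png")))
        print(f"[OK] {path}: {n_images} images")

if __name__ == "__main__":
    verify_layout("/path_to_image_data")  # replace with your own image data root
```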

## Text-Box Annotations
The labels are consistent with previous works such as [TransVG](https://github.com/linhuixiao/TransVG). **However, this paper employs contrastive learning and shuffles the training examples; therefore, you will need to re-download the annotation data from us. Additionally, we also provide the `mixup` dataset for mixup grounding training, which is composed of the five training sets (i.e., RefCOCO/+/g, ReferIt, Flickr30k). Note that the RefCOCOg-g (i.e., gref) training set is excluded from `mixup` because it suffers from test set data leakage. The val and test splits in `mixup` are copied from the RefCOCOg dataset.**

### Text-box annotations download
<table>
    <tr>
        <th style="text-align:center"> Datasets </th>
        <th style="text-align:center"> RefCOCO </th>
        <th style="text-align:center"> RefCOCO+ </th>
        <th style="text-align:center"> RefCOCOg-u </th>
        <th style="text-align:center"> ReferIt </th>
        <th style="text-align:center"> Flickr </th>
        <th style="text-align:center"> Mixup pretraining </th>
    </tr>
    <tr>
        <th style="text-align:center" rowspan="1"> url, size </th>
        <th style="text-align:center" colspan="6"> <a href="https://drive.google.com/file/d/1oaKlHeEECr-KFSDcWUG3X0UNUhqjGugr/view?usp=drive_link">ref_data_shuffled</a>, 267.0 MB </th>
    </tr>
</table>

Download the above annotations to a disk directory such as `$/path_to_split/ref_data_shuffled`; it will then have the following directory structure:

```angular2html
|-- /ref_data_shuffled
    ├── flickr
    │   ├── flickr_test.pth
    │   ├── flickr_train.pth
    │   └── flickr_val.pth
    ├── gref_umd
    │   ├── gref_umd_test.pth
    │   ├── gref_umd_train.pth
    │   └── gref_umd_val.pth
    ├── referit
    │   ├── referit_test.pth
    │   ├── referit_train.pth
    │   └── referit_val.pth
    ├── unc
    │   ├── unc_testA.pth
    │   ├── unc_testB.pth
    │   ├── unc_train.pth
    │   └── unc_val.pth
    ├── unc+
    │   ├── unc+_testA.pth
    │   ├── unc+_testB.pth
    │   ├── unc+_train.pth
    │   └── unc+_val.pth
    └── mixup
        ├── mixup_test.pth
        ├── mixup_train.pth
        └── mixup_val.pth
```
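
The split files are standard PyTorch-serialized objects, so you can peek at one before training. A minimal sketch (the path is a placeholder; the entry layout printed is simply whatever the file contains, handled generically here rather than assumed):

```python
# inspect_split.py -- optional peek at one annotation split (placeholder path)
import torch

def inspect_split(pth_file):
    data = torch.load(pth_file, map_location="cpu")
    print(f"{pth_file}: {type(data).__name__} with {len(data)} entries")
    # Show one entry so you can see the actual field layout used by this repo.
    first = data[0] if isinstance(data, (list, tuple)) else next(iter(data.items()))
    print("first entry:", first)

if __name__ == "__main__":
    inspect_split("/path_to_split/ref_data_shuffled/unc/unc_val.pth")
```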

## Pre-trained Checkpoints

The checkpoints include the Base and Large models under both the single-dataset fine-tuning setting and the dataset-mixed grounding pre-training setting.

### Single-dataset fine-tuning checkpoints download

<table>
    <tr>
        <th style="text-align:center"> Datasets </th>
        <th style="text-align:center"> RefCOCO </th>
        <th style="text-align:center"> RefCOCO+ </th>
        <th style="text-align:center"> RefCOCOg-u </th>
        <th style="text-align:center"> ReferIt </th>
        <th style="text-align:center"> Flickr </th>
    </tr>
    <tr>
        <th style="text-align:center" rowspan="1"> Base model </th>
        <th style="text-align:center" colspan="6"> <a href="https://drive.google.com/file/d/1vM_568M7DwnYmjEiJgXRnrDL5UT65CGJ/view?usp=drive_link">finetuning_base (for all), ~4.0 GB</a> </th>
    </tr>
    <tr>
        <th style="text-align:center" rowspan="1"> Large model </th>
        <th style="text-align:center" colspan="6"> <a href="https://drive.google.com/file/d/1Yw_AVaYnw4amPsemFwKFurXgaKvJ11CB/view?usp=drive_link">finetuning_large (for all), ~8.0 GB</a> </th>
    </tr>
</table>

### Mixup grounding pre-training checkpoints download

<table>
    <tr>
        <th style="text-align:center"> Datasets </th>
        <th style="text-align:center"> Mixup </th>
    </tr>
    <tr>
        <th style="text-align:center" rowspan="1"> Base model </th>
        <th style="text-align:center" colspan="1"> <a href="https://drive.google.com/file/d/1TzDLWjS-lXEr2M9uwaSBlU0MRmaRLSmN/view?usp=sharing">mixup_pretraining_base, ~1.0 GB</a> </th>
    </tr>
    <tr>
        <th style="text-align:center"> Large model </th>
        <th style="text-align:center"> <a href="https://drive.google.com/file/d/1H_tv9QcDK712Ie9flLgSCZmmj0HEcjb8/view?usp=drive_link">mixup_pretraining_large, ~2.0 GB</a> </th>
    </tr>
</table>

After downloading all of these checkpoints, you can organize them in the following directory structure, which lets you train and test on all five datasets at once using a single script:

```angular2html
|-- /finetuning_checkpoints (base or large model)
    ├── flickr
    │   └── best_checkpoint.pth
    ├── gref_umd
    │   └── best_checkpoint.pth
    ├── referit
    │   └── best_checkpoint.pth
    ├── unc
    │   └── best_checkpoint.pth
    └── unc+
        └── best_checkpoint.pth

|-- /mixup_grounding_pretraining (base or large model)
    └── mixup
        └── best_checkpoint.pth
```
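
If a download seems incomplete or you are unsure which file you grabbed, a quick load-and-report sketch like the following can help. The path is a placeholder, and the key names printed are simply whatever the checkpoint stores, not assumptions about its format:

```python
# inspect_checkpoint.py -- optional contents check for a downloaded checkpoint (placeholder path)
import torch

def summarize_checkpoint(path):
    ckpt = torch.load(path, map_location="cpu")
    if isinstance(ckpt, dict):
        for key, value in ckpt.items():
            if isinstance(value, dict):  # e.g. a state_dict-like sub-dictionary
                print(f"{key}: dict with {len(value)} entries")
            else:
                print(f"{key}: {type(value).__name__}")
    else:
        print(f"checkpoint is a {type(ckpt).__name__}")

if __name__ == "__main__":
    summarize_checkpoint("/finetuning_checkpoints/unc/best_checkpoint.pth")
```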

### CLIP domain-generalized checkpoints download

Due to the domain bias of CLIP on the MSCOCO dataset, we follow previous works such as TransVG++ and VG-LAW and pre-train the backbone network on the MSCOCO dataset while excluding the RefCOCO/+/g-related images. For this pre-training, the [Detectron2](https://github.com/facebookresearch/detectron2) framework is used for detection and segmentation training under the vanilla LoRA paradigm. If you want to train HiVG, please download the CLIP model fine-tuned with LoRA on the MSCOCO dataset from the links below.

<table>
    <tr>
        <th style="text-align:center"> Model </th>
        <th style="text-align:center"> Debiased CLIP model using LoRA on the MSCOCO dataset </th>
    </tr>
    <tr>
        <th style="text-align:center" rowspan="1"> Base model (ViT-B/224) </th>
        <th style="text-align:center" colspan="1"> <a href="https://drive.google.com/file/d/1pgso4gjHselrj4ExqJP3PYRbbX754aRq/view?usp=sharing">clip_b_ml_cascade_maskrcnn_model_224, 580 MB</a> </th>
    </tr>
    <tr>
        <th style="text-align:center"> Large model (ViT-L/224) </th>
        <th style="text-align:center"> <a href="https://drive.google.com/file/d/18T4g6P-duKifx5Ksw6gHmL0ttKW39Wa6/view?usp=sharing">clip_l_ml_cascade_maskrcnn_model_224, 1.6 GB</a> </th>
    </tr>
</table>

Alternatively, you can also use the original CLIP Hugging Face model for training, for which we provide download links below. In this case, the performance may be degraded.

<table>
    <tr>
        <th style="text-align:center"> Model </th>
        <th style="text-align:center"> Original CLIP Hugging Face model </th>
    </tr>
    <tr>
        <th style="text-align:center" rowspan="1"> Base model (ViT-B/224) </th>
        <th style="text-align:center" colspan="1"> <a href="https://drive.google.com/file/d/1SgWSK6vOKgPpEaULlHGZBnxotZ241phG/view?usp=drive_link">clip-vit-base-patch16, 375 MB</a> </th>
    </tr>
    <tr>
        <th style="text-align:center"> Large model (ViT-L/224) </th>
        <th style="text-align:center"> <a href="https://huggingface.co/openai/clip-vit-large-patch14/tree/main">clip-vit-large-patch14, 1.6 GB</a> </th>
    </tr>
</table>
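
To confirm that a locally saved CLIP snapshot is usable, you can load it once with the transformers API. This is a minimal sketch assuming the directory contains the full Hugging Face snapshot (config, weights, and processor files); the directory path is a placeholder, and HiVG itself loads CLIP inside [models/HiVG.py](models/HiVG.py):

```python
# load_clip_backbone.py -- optional check that a local CLIP snapshot loads (placeholder path)
from transformers import CLIPModel, CLIPProcessor

def load_clip(local_dir="/path_to_clip/clip-vit-base-patch16"):
    model = CLIPModel.from_pretrained(local_dir)
    processor = CLIPProcessor.from_pretrained(local_dir)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"Loaded CLIP from {local_dir}: {n_params / 1e6:.1f}M parameters")
    return model, processor

if __name__ == "__main__":
    load_clip()
```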

## Training and Evaluation

### Evaluation

1. Download the images and text annotations for the five datasets, as well as the trained HiVG models and the CLIP initialization model.
You need to change ```$/path_to_clip``` in [models/HiVG.py](models/HiVG.py) to your ```original CLIP Hugging Face model``` directory.

2. The evaluation scripts are as follows:
```angular2html
|-- /train_and_eval_script
    ├── eval_single_dataset_finetuning_base.sh
    ├── eval_single_dataset_finetuning_large.sh
    ├── eval_mixup_grounding_pretraining_base.sh
    └── eval_mixup_grounding_pretraining_large.sh
```

3. You just need to change ```$/path_to_split```, ```$/path_to_image_data```, and ```$/path_to_output``` to your own directories to execute the above scripts.
We strongly recommend using these scripts to train or test on the different datasets and splits, which significantly reduces the manual workload. For example:
```
bash train_and_eval_script/eval_single_dataset_finetuning_base.sh
```

4. For a specific dataset, the command looks like the following:
```
CUDA_VISIBLE_DEVICES=1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=7 --master_port 28888 --use_env hivg_eval.py --num_workers 2 --batch_size 60 --dataset unc --vl_hidden_dim 512 --imsize 224 --max_query_len 77 --normalize_before --enable_adaptive_weights --use_mask_loss --save_hilora_clip --hi_lora_stage 3 --data_root /path_to_image_data --split_root /path_to_split/ref_data_shuffled --eval_model /path_to_output/finetuning_base/unc/best_checkpoint.pth --eval_set testA --output_dir /path_to_output/finetuning_base/unc;
```
Please refer to the files in [train_and_eval_script](train_and_eval_script) for evaluation commands on other splits or datasets under different settings.

5. If you need to save the CLIP model for the current stage, use the flag ```--save_hilora_clip```.
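
Evaluation reports the standard visual grounding metric: top-1 accuracy at IoU ≥ 0.5 between the predicted and ground-truth boxes. For reference, here is a minimal, self-contained computation of that metric, independent of the repo's own evaluation code (boxes are assumed to be in `[x1, y1, x2, y2]` format):

```python
# grounding_accuracy.py -- reference computation of the standard Prec@0.5 grounding metric
import torch

def box_iou(pred, gt):
    """IoU between two batches of boxes of shape (N, 4) in [x1, y1, x2, y2] format."""
    lt = torch.max(pred[:, :2], gt[:, :2])   # top-left corner of the intersection
    rb = torch.min(pred[:, 2:], gt[:, 2:])   # bottom-right corner of the intersection
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_pred = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_gt = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
    return inter / (area_pred + area_gt - inter + 1e-6)

def grounding_accuracy(pred_boxes, gt_boxes, thresh=0.5):
    """Fraction of samples whose predicted box overlaps the ground truth with IoU >= thresh."""
    iou = box_iou(pred_boxes, gt_boxes)
    return (iou >= thresh).float().mean().item()

if __name__ == "__main__":
    pred = torch.tensor([[10., 10., 60., 60.], [0., 0., 20., 20.]])
    gt = torch.tensor([[12., 12., 58., 62.], [50., 50., 90., 90.]])
    print(f"Prec@0.5 = {grounding_accuracy(pred, gt):.2f}")  # 0.50 for this toy example
```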

### Training

1. Download the images and text annotations for the five datasets, as well as the trained HiVG models and the CLIP initialization model.
You need to change ```$/path_to_clip``` in [models/HiVG.py](models/HiVG.py) to your ```original CLIP Hugging Face model``` directory.

2. The training scripts are as follows:
```angular2html
|-- /train_and_eval_script
    ├── train_single_dataset_finetuning_base.sh
    ├── train_single_dataset_finetuning_large.sh
    ├── train_mixup_grounding_pretraining_base.sh
    └── train_mixup_grounding_pretraining_large.sh
```

3. You just need to change ```$/path_to_split```, ```$/path_to_image_data```, and ```$/path_to_output``` to your own directories to execute the above scripts.
We strongly recommend using these scripts to train or test on the different datasets and splits, which significantly reduces the manual workload. For example:
```
bash train_and_eval_script/train_single_dataset_finetuning_base.sh
```

4. **Notably, for a specific dataset, if you want to enable HiLoRA, your training may involve four stages: the warmup stage, HiLoRA stage 1, HiLoRA stage 2, and HiLoRA stage 3.**

**In the warm-up phase, MACA is not turned on, only the fusion Transformer encoder is trained, and HiLoRA training is not turned on for the CLIP model. Note that during loading across multiple rounds of HiLoRA training, CLIP needs to be loaded separately. This will cause some parameters to mismatch, which is normal.**

**Note that the essence of the HiLoRA mechanism is a process of decomposing parameter learning, and its effectiveness is influenced by the learning rate and the number of epochs. Therefore, HiLoRA requires different learning rates and numbers of epochs at the various stages for a specific model configuration. If you do not need to enable HiLoRA, simply leave `args.hi_lora_stage=0` as the default. A schematic of the staged adaptation is sketched below.**
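
To make the staged idea concrete, here is a schematic, self-contained sketch of hierarchical low-rank adaptation. It is an illustration of the shallow-to-deep staging described above under simplifying assumptions (plain linear layers, an evenly split three-stage schedule), not the code this repository actually runs:

```python
# hilora_sketch.py -- illustrative sketch of staged (hierarchical) low-rank adaptation.
# NOT the repo's implementation; it only illustrates the shallow-to-deep staging idea,
# using a plain stack of nn.Linear layers as a stand-in for the CLIP encoder layers.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer with a trainable low-rank residual: y = W x + b + B A x."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T

    def merge(self):
        """Fold the learned low-rank update into the frozen weight, then reset the adapter."""
        with torch.no_grad():
            self.base.weight += self.B @ self.A
            self.B.zero_()

def hi_lora_schedule(adapted_layers, num_stages=3):
    """Split the layers into contiguous shallow-to-deep groups and yield the layers adapted
    at each stage (stage s covers groups 1..s); merge after each stage before the next."""
    k = max(1, len(adapted_layers) // num_stages)
    groups = [adapted_layers[i * k:(i + 1) * k] for i in range(num_stages - 1)]
    groups.append(adapted_layers[(num_stages - 1) * k:])
    for stage in range(num_stages):
        yield stage + 1, [layer for g in groups[:stage + 1] for layer in g]

if __name__ == "__main__":
    layers = [LoRALinear(nn.Linear(64, 64)) for _ in range(12)]  # stand-in for 12 encoder blocks
    for stage, active in hi_lora_schedule(layers):
        # ... train only the A/B parameters of `active` here (own learning rate and epochs per stage) ...
        print(f"stage {stage}: adapting {len(active)} layers")
        for layer in active:
            layer.merge()  # consolidate before the next stage so errors do not accumulate
```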

5. **The Large version of the model is somewhat difficult to train and empirically requires one or two warmup stages.**
In the first stage, `args.warmup` needs to be enabled, and the visual adaptation layers must be forced to be empty (`[]`) in order to train the cross-modal fusion encoder, which is equivalent to freezing the CLIP model. Only 5-10 epochs are needed for this phase. In the second stage, `args.warmup` is turned off and normal training is performed; at this point, linguistic information can fine-tune the visual features through the cross-modal bridge.
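
In effect, the warmup stage trains only the non-CLIP parameters. The sketch below shows one generic way to express that freezing in PyTorch; it assumes, purely for illustration, that the CLIP backbone's parameter names contain "clip" (the real attribute names are defined in [models/HiVG.py](models/HiVG.py)):

```python
# warmup_freeze_sketch.py -- illustrative warmup-stage parameter freezing (not the repo's code).
# Assumes, for illustration only, that CLIP backbone parameter names contain "clip".
import torch.nn as nn

def freeze_backbone_for_warmup(model: nn.Module, backbone_keyword: str = "clip"):
    """Freeze every backbone parameter so that only the cross-modal fusion encoder is trained."""
    trainable, frozen = 0, 0
    for name, param in model.named_parameters():
        if backbone_keyword in name:
            param.requires_grad = False
            frozen += param.numel()
        else:
            trainable += param.numel()
    print(f"warmup: {trainable:,} trainable / {frozen:,} frozen parameters")

if __name__ == "__main__":
    dummy = nn.ModuleDict({"clip_visual": nn.Linear(8, 8), "fusion_encoder": nn.Linear(8, 8)})
    freeze_backbone_for_warmup(dummy)
```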

Please refer to the files in [train_and_eval_script](train_and_eval_script) for training commands on other splits or datasets under different settings.


## Results

### 1. RefCOCO, RefCOCO+, RefCOCOg, ReferIt, and Flickr datasets
<details open>
<summary><font size="4">
SOTA Result Table
</font></summary>
<img src="docs/sota.jpg" alt="SOTA results table" width="100%">
</details>

**(1) Compared to the CLIP-based fine-tuning SOTA work**, i.e., Dynamic-MDETR, our approach consistently outperforms it, achieving increases of 3.15% (testB), 3.11% (testA), 4.30% (test), 5.55% (test), and 0.22% (test) on the five datasets.

**(2) Compared to the detector-based fine-tuning SOTA work**, i.e., TransVG++, our approach demonstrates superior performance (improvements of 2.30% (testB), 4.36% (testA), 2.49% (test), 1.22% (test), and 0.62% (test)) across the five datasets. The improvement on the RefCOCO+/g datasets is considerably more significant, indicating that our model exhibits a stronger capacity for semantic comprehension of complex sentences.

**(3) Compared with the dataset-mixed pre-training works**, the base model of our work outperforms Grounding-DINO by 1.24% (testB), 1.81% (testA), and 1.68% (testA) on the RefCOCO/+/g datasets, and it also outperforms OFA by 3.93% (testB), 2.06% (testA), and 4.31% (testA). After dataset-mixed pre-training, our performance improves significantly, further demonstrating the effectiveness of our method.

### 2. Our model also has significant energy efficiency advantages.

<details open>
<summary><font size="4">
Illustration
</font></summary>
<div align=center>
<img src="docs/result_performance.jpg" alt="Efficiency comparison" width="100%"></div>
</details>

**Comparison between HiVG (base) and SOTA models, as well as the ablation study of HiVG on the main modules.** (a) HiVG achieves significant energy efficiency advantages and is **8.2x** faster than TransVG++ while outperforming it on RefCOCO-val. (b) The computational complexity of HiVG is **only 13.0%** of that of TransVG++. (c) HiVG outperforms SOTA models across different expression lengths on RefCOCOg-test. (d) The Hi LoRA method brings significant performance gains to the HiVG model.

## Methods

<p align="center"> <img src='docs/motivation.jpg' align="center" width="60%"> </p>

**Visual attentions and grounding results of CLIP and the proposed HiVG.** The attentions are perceived by the [CLS] token over the vision tokens.

<p align="center"> <img src='docs/hilora.jpg' align="center" width="60%"> </p>

**Hi LoRA and vanilla LoRA.** (a) Vanilla LoRA learns a global low-rank matrix over the entire set of pre-trained weights in a single round. (b) The proposed Hi LoRA employs a hierarchical approach to adapt the pre-trained model in a progressive manner, thereby finely reducing the task gap between pre-training and the transfer task.
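
For reference, panel (a) corresponds to the standard LoRA update, which can be written as below; the stage-wise line for panel (b) is only a schematic reading of this caption, not the paper's exact formulation:

```math
W' = W_0 + \Delta W = W_0 + B A, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k)
```

Hi LoRA instead learns a separate low-rank pair per stage and folds it in progressively, roughly $W^{(s)} = W^{(s-1)} + B^{(s)} A^{(s)}$ for the layer groups adapted at stage $s$, starting from $W^{(0)} = W_0$.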

## Visualization
<p align="center"> <img src='docs/visualization.jpg' align="center" width="70%"> </p>

**Qualitative results of our HiVG framework on the RefCOCOg-val split.** We compare against the CLIP-VG model. We present the prediction box with its IoU (in cyan) and the ground-truth box (in green) in a single image to visually display the grounding accuracy. We also show the [REG] token's attention over the vision tokens from the last grounding block of each framework. The examples are relatively challenging instances for grounding, thereby showcasing HiVG's robust semantic comprehension capabilities.

## Contacts
Email: <[email protected]>.
Any kind of discussion is welcome!

## Acknowledgement

Our model is related to [CLIP](https://github.com/openai/CLIP) and [CLIP-VG](https://github.com/linhuixiao/CLIP-VG). Thanks for their great work!

We also thank the great previous works, including [TransVG++](https://github.com/linhuixiao/TransVG), [DETR](https://github.com/facebookresearch/detr), [QRNet](https://github.com/LukeForeverYoung/QRNet), etc.

Thanks to [OpenAI](https://github.com/openai) for their awesome models.

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=linhuixiao/HiVG&type=Date)](https://star-history.com/#linhuixiao/HiVG&Date)