---
license: cc-by-nc-4.0
task_categories:
- image-segmentation
language:
- en
tags:
- reasoning
- zero-shot
- reinforcement-learning
- multi-modal
- VLM
size_categories:
- n<1K
dataset_info:
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  - name: mask
    sequence:
      sequence: bool
  - name: image_id
    dtype: string
  - name: ann_id
    dtype: string
  - name: img_height
    dtype: int64
  - name: img_width
    dtype: int64
  splits:
  - name: test
    num_bytes: 1666685613.0
    num_examples: 779
  download_size: 1235514015
  dataset_size: 1666685613.0
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

# ReasonSeg Test Dataset

This repository contains the **ReasonSeg Test Dataset**, which serves as an evaluation benchmark for the paper [Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement](https://arxiv.org/abs/2503.06520).

**Code:** [https://github.com/dvlab-research/Seg-Zero](https://github.com/dvlab-research/Seg-Zero)

## Paper Abstract

Traditional methods for reasoning segmentation rely on supervised fine-tuning with categorical labels and simple descriptions, limiting their out-of-domain generalization and lacking explicit reasoning processes. To address these limitations, we propose Seg-Zero, a novel framework that demonstrates remarkable generalizability and derives explicit chain-of-thought reasoning through cognitive reinforcement. Seg-Zero introduces a decoupled architecture consisting of a reasoning model and a segmentation model. The reasoning model interprets user intentions, generates explicit reasoning chains, and produces positional prompts, which are subsequently used by the segmentation model to generate precise pixel-level masks. We design a sophisticated reward mechanism that integrates both format and accuracy rewards to effectively guide optimization directions. Trained exclusively via reinforcement learning with GRPO and without explicit reasoning data, Seg-Zero achieves robust zero-shot generalization and exhibits emergent test-time reasoning capabilities. Experiments show that Seg-Zero-7B achieves a zero-shot performance of 57.5 on the ReasonSeg benchmark, surpassing the prior LISA-7B by 18%. This significant improvement highlights Seg-Zero's ability to generalize across domains while presenting an explicit reasoning process.

## About Seg-Zero

Seg-Zero is a novel framework for reasoning segmentation that utilizes cognitive reinforcement to achieve remarkable generalizability and explicit chain-of-thought reasoning.

<div align=center>
<img width="98%" src="https://github.com/dvlab-research/Seg-Zero/raw/main/assets/overview.png"/>
</div>

Seg-Zero demonstrates the following features:
1. Seg-Zero exhibits emergent test-time reasoning ability. It generates a reasoning chain before producing the final segmentation mask.
2. Seg-Zero is trained exclusively using reinforcement learning, without any explicit supervised reasoning data.
3. Compared to supervised fine-tuning, Seg-Zero achieves superior performance on both in-domain and out-of-domain data.

### Model Pipeline

Seg-Zero employs a decoupled architecture comprising a reasoning model and a segmentation model. A sophisticated reward mechanism integrates both format and accuracy rewards.

<div align=center>
<img width="98%" src="https://github.com/dvlab-research/Seg-Zero/raw/main/assets/pipeline.png"/>
</div>

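To make the reward design concrete, here is a minimal sketch of how a format reward and an accuracy reward might be combined into a single training signal. The tag names, IoU threshold, and equal weighting are illustrative assumptions, not the paper's exact values:

```python
import re

def format_reward(response: str) -> float:
    """1.0 if the response contains both a reasoning block and an answer block.
    The <think>/<answer> tags are illustrative, not necessarily the paper's format."""
    has_think = bool(re.search(r"<think>.*</think>", response, re.S))
    has_answer = bool(re.search(r"<answer>.*</answer>", response, re.S))
    return 1.0 if (has_think and has_answer) else 0.0

def accuracy_reward(pred_iou: float, threshold: float = 0.5) -> float:
    """1.0 if the predicted mask's IoU passes a threshold (threshold is illustrative)."""
    return 1.0 if pred_iou >= threshold else 0.0

def total_reward(response: str, pred_iou: float) -> float:
    """Sum of format and accuracy rewards (equal weighting assumed)."""
    return format_reward(response) + accuracy_reward(pred_iou)

print(total_reward("<think>locate the cup</think><answer>[10,20,50,60]</answer>", 0.8))  # 2.0
```

A well-formatted but inaccurate response (or vice versa) earns a partial reward, so optimization is pushed toward responses that satisfy both criteria.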
### Examples

<div align=center>
<img width="98%" src="https://github.com/dvlab-research/Seg-Zero/raw/main/assets/examples.png"/>
</div>

## Sample Usage: Evaluation

This dataset (`ReasonSeg-Test`) is designed for evaluating the zero-shot performance of models like Seg-Zero on reasoning-based image segmentation tasks.

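Each record follows the feature schema declared in the card header: an image, a referring text, a nested boolean mask, two string ids, and the image dimensions. A minimal sketch of what one record looks like (all field values below are made up for illustration; the real `image` field holds a PIL image):

```python
# Hypothetical record mirroring the card's feature schema; values are fabricated.
record = {
    "image": None,                               # PIL image in the actual dataset
    "text": "the object most likely used for cutting",
    "mask": [[False, True], [True, True]],       # nested bool lists, img_height x img_width
    "image_id": "example_id",
    "ann_id": "0",
    "img_height": 2,
    "img_width": 2,
}

def mask_shape(rec):
    """Return (rows, cols) of the nested boolean mask."""
    m = rec["mask"]
    return (len(m), len(m[0]) if m else 0)

# The mask dimensions should agree with the stored image size.
assert mask_shape(record) == (record["img_height"], record["img_width"])
```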
First, install the necessary dependencies for the Seg-Zero project:

```bash
git clone https://github.com/dvlab-research/Seg-Zero.git
cd Seg-Zero
conda create -n visionreasoner python=3.12
conda activate visionreasoner
pip install torch==2.6.0 torchvision==0.21.0
pip install -e .
```

Then, you can run evaluation using the provided scripts. Make sure to download pretrained models first:

```bash
mkdir pretrained_models
cd pretrained_models
git lfs install
git clone https://huggingface.co/Ricky06662/VisionReasoner-7B
```

With the pretrained models downloaded, you can run the evaluation script for ReasonSeg:

```bash
bash evaluation_scripts/eval_reasonseg_visionreasoner.sh
```

Adjust `--batch_size` in the bash scripts based on your GPU. You will see the gIoU in your command line.

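The reported gIoU is the mean of the per-image intersection-over-union across the test split. A minimal NumPy sketch of that metric, assuming predictions and ground truths are boolean masks of matching shape:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two boolean masks of the same shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union > 0 else 1.0

def giou(preds, gts) -> float:
    """gIoU: average of the per-image IoU over the evaluated split."""
    return sum(iou(p, g) for p, g in zip(preds, gts)) / len(preds)

# Toy check on 2x2 masks: intersection 3, union 4.
p = np.array([[True, False], [True, True]])
g = np.array([[True, True], [True, True]])
print(round(giou([p], [g]), 2))  # 0.75
```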
<div align=center>
<img width="98%" src="https://github.com/dvlab-research/Seg-Zero/raw/main/assets/val_results.png"/>
</div>

## The GRPO Algorithm

Seg-Zero generates several candidate samples, calculates their rewards, and then optimizes toward the samples that achieve higher rewards.

<div align=center>
<img width="48%" src="https://github.com/dvlab-research/Seg-Zero/raw/main/assets/rl_sample.png"/>
</div>

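In GRPO, the per-sample learning signal is a group-relative advantage: each sampled response's reward is normalized by the mean and standard deviation of its group, so no separate value network is needed. A minimal sketch of that normalization step (not the project's actual training code):

```python
def group_advantages(rewards, eps=1e-6):
    """Group-relative advantages in the GRPO style: (r - mean) / (std + eps)."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Four sampled responses for one prompt; above-average samples get a
# positive advantage and are reinforced, below-average ones are suppressed.
adv = group_advantages([2.0, 1.0, 0.0, 1.0])
print([round(a, 2) for a in adv])  # [1.41, 0.0, -1.41, 0.0]
```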
## Citation

If you use this dataset or the Seg-Zero framework, please cite the associated papers:

```bibtex
@article{liu2025segzero,
  title   = {Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement},
  author  = {Liu, Yuqi and Peng, Bohao and Zhong, Zhisheng and Yue, Zihao and Lu, Fanbin and Yu, Bei and Jia, Jiaya},
  journal = {arXiv preprint arXiv:2503.06520},
  year    = {2025}
}

@article{liu2025visionreasoner,
  title   = {VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning},
  author  = {Liu, Yuqi and Qu, Tianyuan and Zhong, Zhisheng and Peng, Bohao and Liu, Shu and Yu, Bei and Jia, Jiaya},
  journal = {arXiv preprint arXiv:2505.12081},
  year    = {2025}
}
```