---
license: apache-2.0
---

<p align="center">
<img src="assets/logo.png" width="65%">
</p>

<p align="center">
<a href="https://vectorspacelab.github.io/EditScore"><img src="https://img.shields.io/badge/Project%20Page-EditScore-yellow" alt="project page"></a>
<a href="https://arxiv.org/abs/2509.23909"><img src="https://img.shields.io/badge/arXiv%20paper-2509.23909-b31b1b.svg" alt="arxiv"></a>
<a href="https://huggingface.co/collections/EditScore/editscore-68d8e27ee676981221db3cfe"><img src="https://img.shields.io/badge/EditScore-🤗-yellow" alt="model"></a>
<a href="https://huggingface.co/datasets/EditScore/EditReward-Bench"><img src="https://img.shields.io/badge/EditReward--Bench-🤗-yellow" alt="dataset"></a>
</p>

<h4 align="center">
<p>
<a href="#-news">News</a> |
<a href="#-quick-start">Quick Start</a> |
<a href="#-benchmark-your-image-editing-reward-model">Benchmark Usage</a> |
<a href="#%EF%B8%8F-citing-us">Citation</a>
</p>
</h4>

**EditScore** is a series of state-of-the-art open-source reward models (7B–72B) designed to evaluate and enhance instruction-guided image editing.

## ✨ Highlights
- **State-of-the-Art Performance**: Effectively matches the performance of leading proprietary VLMs. With a self-ensembling strategy, **our largest model surpasses even GPT-5** on our comprehensive benchmark, **EditReward-Bench**.
- **A Reliable Evaluation Standard**: We introduce **EditReward-Bench**, the first public benchmark specifically designed for evaluating reward models in image editing, featuring 13 subtasks, 11 state-of-the-art editing models (*including proprietary models*), and expert human annotations.
- **Simple and Easy to Use**: Get an accurate quality score for your image edits with just a few lines of code.
- **Versatile Applications**: Ready to use as a best-in-class reranker to improve editing outputs, or as a high-fidelity reward signal for **stable and effective Reinforcement Learning (RL) fine-tuning**.

## 🔥 News
- **2025-09-30**: We release **OmniGen2-EditScore7B**, unlocking online RL for image editing via the high-fidelity EditScore reward. LoRA weights are available on [Hugging Face](https://huggingface.co/OmniGen2/OmniGen2-EditScore7B) and [ModelScope](https://www.modelscope.cn/models/OmniGen2/OmniGen2-EditScore7B).
- **2025-09-30**: We are excited to release **EditScore** and **EditReward-Bench**! Model weights and the benchmark dataset are now publicly available on Hugging Face ([Models Collection](https://huggingface.co/collections/EditScore/editscore-68d8e27ee676981221db3cfe), [Benchmark Dataset](https://huggingface.co/datasets/EditScore/EditReward-Bench)) and on ModelScope ([Models Collection](https://www.modelscope.cn/collections/EditScore-8b0d53aa945d4e), [Benchmark Dataset](https://www.modelscope.cn/datasets/EditScore/EditReward-Bench)).

## 📖 Introduction
While Reinforcement Learning (RL) holds immense potential for instruction-guided image editing, its progress has been severely hindered by the absence of a high-fidelity, efficient reward signal.

To overcome this barrier, we provide a systematic, two-part solution:

- **A Rigorous Evaluation Standard**: We first introduce **EditReward-Bench**, a new public benchmark for the direct and reliable evaluation of reward models. It features 13 diverse subtasks and expert human annotations, establishing a gold standard for measuring reward signal quality.

- **A Powerful & Versatile Tool**: Guided by our benchmark, we developed the **EditScore** model series. Through meticulous data curation and an effective self-ensembling strategy, EditScore sets a new state of the art for open-source reward models, even surpassing the accuracy of leading proprietary VLMs.

<p align="center">
<img src="assets/table_reward_model_results.png" width="95%">
<br>
<em>Benchmark results on EditReward-Bench.</em>
</p>

We demonstrate the practical utility of EditScore through two key applications:

- **As a State-of-the-Art Reranker**: Use EditScore to perform Best-of-*N* selection and instantly improve the output quality of diverse editing models (a minimal sketch follows this list).
- **As a High-Fidelity Reward for RL**: Use EditScore as a robust reward signal to fine-tune models via RL, enabling stable training and unlocking significant performance gains where general-purpose VLMs fail.
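
As a rough illustration of the reranking use case, the sketch below scores several candidate edits for the same input and instruction and keeps the highest-scoring one. It assumes a `scorer` constructed as in the Quick Start section below; the candidate file paths are purely hypothetical.

```python
# Best-of-N reranking sketch (illustrative only).
# Assumes `scorer` is an EditScore instance configured as in the Quick Start below,
# and that an editing model has already produced the candidate images.
from PIL import Image

input_image = Image.open("example_images/input.png")
instruction = "Adjust the background to a glass wall."
candidate_paths = [f"candidates/edit_{i}.png" for i in range(4)]  # hypothetical files

best_path, best_score = None, float("-inf")
for path in candidate_paths:
    candidate = Image.open(path)
    score = scorer.evaluate([input_image, candidate], instruction)["final_score"]
    if score > best_score:
        best_path, best_score = path, score

print(f"Best candidate: {best_path} (score={best_score})")
```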

This repository releases both the **EditScore** models and the **EditReward-Bench** dataset to facilitate future research in reward modeling, policy optimization, and AI-driven model improvement.

<p align="center">
<img src="assets/figure_edit_results.png" width="95%">
<br>
<em>EditScore as a superior reward signal for image editing.</em>
</p>

## 📌 TODO
We are actively working on improving EditScore and expanding its capabilities. Here's what's next:
- [ ] Release RL training code applying EditScore to OmniGen2.
- [ ] Provide Best-of-N inference scripts for OmniGen2, Flux-dev-Kontext, and Qwen-Image-Edit.

## 🚀 Quick Start

### 🛠️ Environment Setup

#### ✅ Recommended Setup

```bash
# 1. Clone the repo
git clone git@github.com:VectorSpaceLab/EditScore.git
cd EditScore

# 2. (Optional) Create a clean Python environment
conda create -n editscore python=3.12
conda activate editscore

# 3. Install dependencies
# 3.1 Install PyTorch (choose the correct CUDA version)
pip install torch==2.7.1 torchvision --extra-index-url https://download.pytorch.org/whl/cu126

# 3.2 Install other required packages
pip install -r requirements.txt

# EditScore runs even without vllm, though we recommend installing it for best performance.
pip install vllm
```

#### 🌏 For users in Mainland China

```bash
# Install PyTorch from a domestic mirror
pip install torch==2.7.1 torchvision --index-url https://mirror.sjtu.edu.cn/pytorch-wheels/cu126

# Install other dependencies from the Tsinghua mirror
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

# EditScore runs even without vllm, though we recommend installing it for best performance.
pip install vllm -i https://pypi.tuna.tsinghua.edu.cn/simple
```
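
Either way, a quick sanity check (a convenience snippet, not part of the official setup) can confirm that PyTorch sees your GPU and whether the optional vllm package is available:

```python
# Optional post-install sanity check (convenience snippet, not from the official setup).
import torch

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())

try:
    import vllm  # optional; enables the faster vLLM-backed backbone
    print("vllm", vllm.__version__)
except ImportError:
    print("vllm not installed; the default (non-vLLM) backbone still works")
```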
108
+
109
+ ---
110
+
111
+ ### 🧪 Usage Example
112
+ Using EditScore is straightforward. The model will be automatically downloaded from the Hugging Face Hub on its first run.
113
+ ```python
114
+ from PIL import Image
115
+ from editscore import EditScore
116
+
117
+ # Load the EditScore model. It will be downloaded automatically.
118
+ # Replace with the specific model version you want to use.
119
+ model_path = "Qwen/Qwen2.5-VL-7B-Instruct"
120
+ lora_path = "EditScore/EditScore-7B"
121
+
122
+ scorer = EditScore(
123
+ backbone="qwen25vl", # set to "qwen25vl_vllm" for faster inference
124
+ model_name_or_path=model_path,
125
+ enable_lora=True,
126
+ lora_path=lora_path,
127
+ score_range=25,
128
+ num_pass=1, # Increase for better performance via self-ensembling
129
+ )
130
+
131
+ input_image = Image.open("example_images/input.png")
132
+ output_image = Image.open("example_images/output.png")
133
+ instruction = "Adjust the background to a glass wall."
134
+
135
+ result = scorer.evaluate([input_image, output_image], instruction)
136
+ print(f"Edit Score: {result['final_score']}")
137
+ # Expected output: A dictionary containing the final score and other details.
138
+ ```
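
The two inline comments above hint at the main knobs: switching the backbone to `"qwen25vl_vllm"` (when vllm is installed) speeds up inference, and raising `num_pass` enables the self-ensembling mentioned in the highlights. A hypothetical variant of the same configuration, where `num_pass=4` is purely illustrative rather than a tuned recommendation:

```python
# Hypothetical variant of the scorer above: vLLM backbone + self-ensembling.
# All parameter names come from the example above; num_pass=4 is illustrative only.
scorer = EditScore(
    backbone="qwen25vl_vllm",  # requires `pip install vllm`
    model_name_or_path="Qwen/Qwen2.5-VL-7B-Instruct",
    enable_lora=True,
    lora_path="EditScore/EditScore-7B",
    score_range=25,
    num_pass=4,  # multiple scoring passes (self-ensembling)
)
```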

---

## 📊 Benchmark Your Image-Editing Reward Model
We provide an evaluation script to benchmark reward models on **EditReward-Bench**. To evaluate your own custom reward model, simply create a scorer class with a similar interface and update the script.
```bash
# This script will evaluate the default EditScore model on the benchmark
bash evaluate.sh

# Or speed up inference with vLLM
bash evaluate_vllm.sh
```
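
For a custom reward model, the script only needs a scorer exposing the same call pattern as `EditScore` in the usage example above. A minimal sketch, with a hypothetical class name and placeholder internals (you still need to plug it into the evaluation script):

```python
# Minimal sketch of a custom scorer for EditReward-Bench evaluation.
# The class name and internals are hypothetical; only the call pattern
# (evaluate([input_image, output_image], instruction) -> {"final_score": ...})
# mirrors the EditScore usage example above.
from PIL import Image


class MyRewardScorer:
    def __init__(self, **kwargs):
        # Load your own reward model / VLM here.
        ...

    def evaluate(self, images: list[Image.Image], instruction: str) -> dict:
        input_image, output_image = images
        score = 0.0  # replace with your model's quality score for this edit
        return {"final_score": score}
```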

## ❤️ Citing Us
If you find this repository or our work useful, please consider giving it a star ⭐ and a citation 🦖; it would be greatly appreciated:

```bibtex
@article{luo2025editscore,
  title={EditScore: Unlocking Online RL for Image Editing via High-Fidelity Reward Modeling},
  author={Xin Luo and Jiahao Wang and Chenyuan Wu and Shitao Xiao and Xiyan Jiang and Defu Lian and Jiajun Zhang and Dong Liu and Zheng Liu},
  journal={arXiv preprint arXiv:2509.23909},
  year={2025}
}
```