Improve model card: Add comprehensive metadata, paper link, and GitHub details
#1
by nielsr (HF Staff), opened

README.md (CHANGED)
@@ -1,8 +1,41 @@

(The previous card contained only the `license: mit` front matter and the heading "### GT-GRPO: Qwen3-4B-Base trained on OpenRS"; the updated card follows.)
---
license: mit
pipeline_tag: text-generation
library_name: transformers
tags:
- qwen
- reasoning
- self-supervised-learning
- reinforcement-learning
datasets:
- TMLR-Group-HF/Co-rewarding-RephrasedOpenRS
---

# Co-rewarding: GT-GRPO Qwen3-4B-Base trained on OpenRS

This model is a checkpoint of `Qwen3-4B-Base` trained with **GT-GRPO** (Group Relative Policy Optimization with ground-truth verifiable rewards) on the **OpenRS** training set. It is part of the work presented in the paper [**Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models**](https://huggingface.co/papers/2508.00410).
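For context, GT-GRPO denotes GRPO driven by a verifiable ground-truth reward (the RLVR setting the paper compares against). The sketch below illustrates the group-relative advantage computation at the heart of GRPO; the binary reward function, sample completions, and group size are illustrative assumptions, not the repository's actual code.

```python
# Sketch of GRPO's group-relative advantage with a verifiable ground-truth
# reward. The reward check and completions below are illustrative placeholders.

def gt_reward(completion: str, ground_truth: str) -> float:
    """Binary verifiable reward: 1.0 if the completion ends with the ground-truth answer."""
    return 1.0 if completion.strip().endswith(ground_truth) else 0.0

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO normalizes each reward against the mean/std of its sample group."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    if std == 0.0:  # all samples equally good/bad: no learning signal
        return [0.0] * n
    return [(r - mean) / std for r in rewards]

# Example: four sampled completions for one prompt, ground-truth answer "42"
completions = ["... so the answer is 42", "... the answer is 41",
               "... therefore 42", "... I get 40"]
rewards = [gt_reward(c, "42") for c in completions]
advantages = group_relative_advantages(rewards)
```

Correct completions receive positive advantages and incorrect ones negative, so the policy gradient pushes probability mass toward verified answers within each group.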

## Paper Abstract Summary

The paper introduces **Co-rewarding**, a self-supervised reinforcement learning (RL) framework designed to enhance the reasoning abilities of large language models (LLMs). It achieves training stability by leveraging complementary supervision from multiple views, and is instantiated in two ways:

1. **Co-rewarding-I**: a data-side approach that derives reward signals from contrastive agreement across semantically analogous questions.
2. **Co-rewarding-II**: a model-side approach that maintains a slowly-updated reference teacher whose pseudo labels supervise the model via self-distillation.

Both instantiations introduce a discrepancy between the reward source and the training target, which discourages collapse onto trivial reasoning solutions. Empirically, Co-rewarding trains stably and outperforms other self-rewarding baselines on various mathematical reasoning benchmarks, in some cases surpassing ground-truth RLVR.
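A toy sketch of the two instantiations above; the majority-vote pseudo-labeling, binary agreement reward, and EMA coefficient are illustrative assumptions, not the paper's exact formulation:

```python
from collections import Counter

def majority_pseudo_label(answers: list[str]) -> str:
    """Majority vote over sampled final answers from one view of a question."""
    return Counter(answers).most_common(1)[0][0]

def cross_view_rewards(answers_a: list[str], answers_b: list[str]) -> list[float]:
    """Co-rewarding-I idea (data side): reward view-A samples by agreement with
    the pseudo-label voted from a semantically rephrased view B (illustrative)."""
    label_from_b = majority_pseudo_label(answers_b)
    return [1.0 if a == label_from_b else 0.0 for a in answers_a]

def ema_update(teacher: list[float], student: list[float], tau: float = 0.99) -> list[float]:
    """Co-rewarding-II idea (model side): the reference teacher is a slowly-updated
    exponential moving average of the student's weights (illustrative)."""
    return [tau * t + (1.0 - tau) * s for t, s in zip(teacher, student)]

# Sampled answers for an original question and for its rephrasing
orig = ["42", "42", "17", "42"]
reph = ["42", "42", "42", "9"]
rewards = cross_view_rewards(orig, reph)  # pseudo-label voted from the rephrased view
```

Because the reward for one view comes from the other view (or from a lagged teacher), a degenerate policy cannot trivially reward its own collapsed outputs.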

## GitHub Repository

For comprehensive details on the Co-rewarding framework, installation instructions, training scripts, and additional checkpoints, please visit the official GitHub repository:
[https://github.com/tmlr-group/Co-rewarding](https://github.com/tmlr-group/Co-rewarding)

## Citation

If you use this model or any resources from the Co-rewarding project, please cite the following paper:

```bibtex
@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}
```