ldwang committed · Commit 3d8a174 · verified · 1 Parent(s): 1448e50

Create README.md
Files changed (1): README.md (+65, -0)
---
license: mit
library_name: transformers
datasets:
- AI-MO/NuminaMath-CoT
- KbsdJames/Omni-MATH
- RUC-AIBOX/STILL-3-Preview-RL-Data
- hendrycks/competition_math
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
pipeline_tag: text-generation
---

<span style="font-family: default; font-size: 1.5em;">DeepScaleR-1.5B-Preview-Reproduce</span>

## Overview
This model is a reproduction of the [agentica-project/deepscaler](https://github.com/agentica-project/deepscaler) project.
We reproduced the results from that repository on a single **8x80G** node, achieving an average score of **TBU** on the benchmarks in the Evaluation section below.

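## Usage
The card declares `library_name: transformers` and `pipeline_tag: text-generation`, so the checkpoint should load through the standard `transformers` text-generation flow. The sketch below is illustrative only: the model id points at the base model so the snippet runs as-is and should be swapped for this repository's Hub id, and the sampling settings are assumptions rather than values stated in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id: replace with this repository's Hub id; the base model is used
# here only so the snippet is runnable as written.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Assumed sampling settings (not specified in this card).
output = model.generate(
    input_ids, max_new_tokens=2048, do_sample=True, temperature=0.6, top_p=0.95
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
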
## Training
```bash
export VLLM_ATTENTION_BACKEND=XFORMERS

# Run 8K context length training, 580 steps
export MODEL_PATH="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
nohup bash run_deepscaler_1.5b_8k.sh --model $MODEL_PATH > stage1.log 2>&1 &

# Run 16K context length training, 430 steps
export MODEL_PATH="./checkpoints/deepscaler/deepscaler-1.5b-8k/actor/global_step_580"
nohup bash run_deepscaler_1.5b_16k.sh --model $MODEL_PATH > stage2.log 2>&1 &

# Run 24K context length training, 430 steps
export MODEL_PATH="./checkpoints/deepscaler/deepscaler-1.5b-16k/actor/global_step_430"
nohup bash run_deepscaler_1.5b_24k.sh --model $MODEL_PATH > stage3.log 2>&1 &
```

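The stages above set `VLLM_ATTENTION_BACKEND` because the deepscaler training pipeline uses vLLM for rollouts; the final 24K-stage checkpoint can likewise be sampled with vLLM at long context. Below is a minimal sketch, assuming vLLM is installed; the model path is a placeholder (swap in the stage-3 checkpoint exported to Hugging Face format, or this repository's Hub id), and the sampling parameters are assumptions.

```python
from vllm import LLM, SamplingParams

# Placeholder: replace with the final (24K-stage) checkpoint directory or this
# repository's Hub id; the base model id is used so the sketch runs as written.
MODEL_PATH = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

# max_model_len matches the 24K context length of the final training stage.
llm = LLM(model=MODEL_PATH, dtype="bfloat16", max_model_len=24576)
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192)

prompts = ["Prove that the product of two odd integers is odd. Put the final answer in \\boxed{}."]
for request_output in llm.generate(prompts, params):
    print(request_output.outputs[0].text)
```
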
## Evaluation
| Model | AIME 2024 | MATH 500 | AMC 2023 | Minerva Math | OlympiadBench | Avg. |
|-------|-----------|----------|----------|--------------|---------------|------|
| Qwen-2.5-7B-Instruct | 13.3 | 79.8 | 50.6 | 34.6 | 40.7 | 43.8 |
| rStar-Math-7B | 26.7 | 78.4 | 47.5 | - | 47.1 | - |
| Eurus-2-7B-PRIME | 26.7 | 79.2 | 57.8 | 38.6 | 42.1 | 48.9 |
| Qwen2.5-7B-SimpleRL | 26.7 | 82.4 | 62.5 | 39.7 | 43.3 | 50.9 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.8 | 82.8 | 62.9 | 26.5 | 43.3 | 48.9 |
| Still-1.5B | 32.5 | 84.4 | 66.7 | 29.0 | 45.4 | 51.6 |
| DeepScaleR-1.5B-Preview | 43.1 | 87.8 | 73.6 | 30.2 | 50.0 | 57.0 |
| [DeepScaleR-1.5B-Preview-Reproduce](https://huggingface.co/junnyu/DeepScaleR-1.5B-Preview-Reproduce) | 40.4 | 87.9 | 72.0 | 31.5 | 50.2 | 56.4 |
| <strong>🎉 DeepScaleR-1.5B-Preview-Reproduce</strong> | 42.3 | - | - | - | - | - |
| O1-Preview | 40.0 | 81.4 | - | - | - | - |

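As a quick arithmetic check, the Avg. column appears to be the unweighted mean of the five benchmark scores; the short sketch below recomputes it for the two fully evaluated DeepScaleR rows, using the numbers copied from the table above.

```python
# Recompute the Avg. column as the unweighted mean of the five benchmark scores
# (AIME 2024, MATH 500, AMC 2023, Minerva Math, OlympiadBench).
rows = {
    "DeepScaleR-1.5B-Preview":           [43.1, 87.8, 73.6, 30.2, 50.0],
    "DeepScaleR-1.5B-Preview-Reproduce": [40.4, 87.9, 72.0, 31.5, 50.2],
}
for name, scores in rows.items():
    print(f"{name}: {sum(scores) / len(scores):.1f}")
# Prints 56.9 and 56.4; the 57.0 reported for the original preview was presumably
# averaged from unrounded per-benchmark scores.
```
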
## Citation
```bibtex
@misc{deepscaler2025,
  title={DeepScaleR: Surpassing O1-Preview with a 1.5B Model by Scaling RL},
  author={Michael Luo and Sijun Tan and Justin Wong and Xiaoxiang Shi and William Y. Tang and Manan Roongta and Colin Cai and Jeffrey Luo and Tianjun Zhang and Li Erran Li and Raluca Ada Popa and Ion Stoica},
  year={2025},
  howpublished={\url{https://pretty-radio-b75.notion.site/DeepScaleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-19681902c1468005bed8ca303013a4e2}},
  note={Notion Blog}
}
```