Xenova (HF Staff) committed · verified
Commit f449d73 · Parent(s): 161c5e4

Create README.md

Files changed (1): README.md added (+186 lines)
---
library_name: transformers
license: apache-2.0
language:
- en
- fr
- es
- it
- pt
- zh
- ar
- ru
base_model:
- HuggingFaceTB/SmolLM3-3B
---

# SmolLM3

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/zy0dqTCCt5IHmuzwoqtJ9.png)

## Table of Contents

1. [Model Summary](#model-summary)
2. [How to use](#how-to-use)
3. [Evaluation](#evaluation)
4. [Training](#training)
5. [Limitations](#limitations)
6. [License](#license)

## Model Summary

SmolLM3 is a 3B-parameter language model designed to push the boundaries of small models. It supports 6 languages, advanced reasoning, and long context. SmolLM3 is a fully open model that offers strong performance at the 3B–4B scale.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6200d0a443eb0913fa2df7cc/db3az7eGzs-Sb-8yUj-ff.png)

The model is a decoder-only transformer using GQA and NoPE (with a 3:1 ratio). It was pretrained on 11.2T tokens with a staged curriculum of web, code, math, and reasoning data. Post-training included mid-training on 140B reasoning tokens, followed by supervised fine-tuning and alignment via Anchored Preference Optimization (APO).

### Key features
- Instruct model optimized for **hybrid reasoning**
- **Fully open model**: open weights + full training details, including the public data mixture and training configs
- **Long context**: trained on 64k context and supports up to **128k tokens** using YaRN extrapolation (a sketch is included under [How to use](#how-to-use))
- **Multilingual**: 6 natively supported languages (English, French, Spanish, German, Italian, and Portuguese)

For more details refer to our blog post: TODO

## How to use

[TODO]
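
Pending the full usage instructions, here is a minimal generation sketch with the `transformers` library (the library named in the metadata above). The checkpoint id, generation settings, and the `enable_thinking` flag are assumptions for illustration rather than confirmed instructions; check the tokenizer's chat template for the exact reasoning toggle.

```python
# Minimal sketch, assuming the checkpoint loads as a standard causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM3-3B"  # assumed checkpoint id (matches base_model above)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Give a short explanation of gravity."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    # enable_thinking=False,  # assumed template flag for toggling extended thinking
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```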
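
The model summary above notes YaRN extrapolation from the 64k training context up to 128k tokens. Below is a hypothetical sketch of how this could be configured, assuming the checkpoint uses the standard `rope_scaling` mechanism in `transformers`; the key names and scaling factor are assumptions, not confirmed settings.

```python
# Hypothetical long-context sketch: extend the context window with YaRN.
# A factor of 2.0 over a 64k training length targets roughly 128k tokens.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM3-3B",  # assumed checkpoint id
    torch_dtype="auto",
    rope_scaling={
        "rope_type": "yarn",
        "factor": 2.0,
        "original_max_position_embeddings": 65536,
    },
)
```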

## Evaluation

In this section, we report the evaluation results of the SmolLM3 model. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.

We highlight the best score in bold and underline the second-best score.

### Instruction Model

#### No Extended Thinking
Evaluation results of non-reasoning models and of reasoning models run in no-thinking mode.

| Category | Metric | SmolLM3-3B | Qwen2.5-3B | Llama-3.2-3B | Qwen3-1.7B | Qwen3-4B |
|---------|--------|------------|------------|--------------|------------|----------|
| High school math competition | AIME 2025 | <u>9.3</u> | 2.9 | 0.3 | 8.0 | **17.1** |
| Math problem-solving | GSM-Plus | 72.8 | <u>74.1</u> | 59.2 | 68.3 | **82.1** |
| Competitive programming | LiveCodeBench v4 | <u>15.2</u> | 10.5 | 3.4 | 15.0 | **24.9** |
| Graduate-level reasoning | GPQA Diamond | <u>35.7</u> | 32.2 | 29.4 | 31.8 | **44.4** |
| Instruction following | IFEval | **76.7** | 65.6 | 71.6 | <u>74.0</u> | 68.9 |
| Alignment | MixEval Hard | 26.9 | <u>27.6</u> | 24.9 | 24.3 | **31.6** |
| Tool Calling | BFCL | <u>92.3</u> | - | <u>92.3</u>* | 89.5 | **95.0** |
| Multilingual Q&A | Global MMLU | <u>53.5</u> | 50.54 | 46.8 | 49.5 | **65.1** |

(*) This score comes from a tool-calling fine-tune.

#### Extended Thinking
Evaluation results in reasoning mode for SmolLM3 and Qwen3 models:

| Category | Metric | SmolLM3-3B | Qwen3-1.7B | Qwen3-4B |
|---------|--------|------------|------------|----------|
| High school math competition | AIME 2025 | <u>36.7</u> | 30.7 | **58.8** |
| Math problem-solving | GSM-Plus | <u>83.4</u> | 79.4 | **88.2** |
| Competitive programming | LiveCodeBench v4 | 30.0 | <u>34.4</u> | **52.9** |
| Graduate-level reasoning | GPQA Diamond | <u>41.7</u> | 39.9 | **55.3** |
| Instruction following | IFEval | 71.2 | <u>74.2</u> | **85.4** |
| Alignment | MixEval Hard | 30.8 | <u>33.9</u> | **38.0** |
| Tool Calling | BFCL | <u>88.8</u> | <u>88.8</u> | **95.5** |
| Multilingual Q&A | Global MMLU | <u>64.1</u> | 62.3 | **73.3** |

### Base Pre-Trained Model

#### English benchmarks
Note: All evaluations are zero-shot unless stated otherwise. For the Ruler 64k evaluation, we apply YaRN to the Qwen models with 32k context to extrapolate the context length.

| Category | Metric | SmolLM3-3B | Qwen2.5-3B | Llama-3.2-3B | Qwen3-1.7B-Base | Qwen3-4B-Base |
|---------|--------|------------|------------|--------------|-----------------|---------------|
| Reasoning & Commonsense | HellaSwag | **76.15** | 74.19 | <u>75.52</u> | 60.52 | 74.37 |
| | ARC-CF (Average) | **65.61** | 59.81 | 58.58 | 55.88 | <u>62.11</u> |
| | Winogrande | 58.88 | **61.41** | 58.72 | 57.06 | <u>59.59</u> |
| | CommonsenseQA | <u>55.28</u> | 49.14 | **60.60** | 48.98 | 52.99 |
| Knowledge & Understanding | MMLU-CF (Average) | <u>44.13</u> | 42.93 | 41.32 | 39.11 | **47.65** |
| | MMLU Pro CF | <u>19.61</u> | 16.66 | 16.42 | 18.04 | **24.92** |
| | MMLU Pro MCF | <u>32.70</u> | 31.32 | 25.07 | 30.39 | **41.07** |
| | PIQA | **78.89** | 78.35 | <u>78.51</u> | 75.35 | 77.58 |
| | OpenBookQA | 40.60 | 40.20 | <u>42.00</u> | 36.40 | **42.40** |
| | BoolQ | **78.99** | 73.61 | <u>75.33</u> | 74.46 | 74.28 |
| **Math & Code** | | | | | | |
| Coding & math | HumanEval+ | 30.48 | 34.14 | 25.00 | <u>43.29</u> | **54.87** |
| | MBPP+ | 52.91 | 52.11 | 38.88 | <u>59.25</u> | **63.75** |
| | MATH (4-shot) | <u>46.10</u> | 40.10 | 7.44 | 41.64 | **51.20** |
| | GSM8k (5-shot) | 67.63 | <u>70.13</u> | 25.92 | 65.88 | **74.14** |
| **Long context** | | | | | | |
| | Ruler 32k | 76.35 | 75.93 | <u>77.58</u> | 70.63 | **83.98** |
| | Ruler 64k | <u>67.85</u> | 64.90 | **72.93** | 57.18 | 60.29 |
| | Ruler 128k | 61.03 | <u>62.23</u> | **71.30** | 43.03 | 47.23 |

#### Multilingual benchmarks

| Category | Metric | SmolLM3-3B Base | Qwen2.5-3B | Llama-3.2-3B | Qwen3-1.7B Base | Qwen3-4B Base |
|---------|--------|-----------------|------------|--------------|-----------------|---------------|
| Main supported languages | | | | | | |
| French | MLMM Hellaswag | **63.94** | 57.47 | 57.66 | 51.26 | <u>61.00</u> |
| | Belebele | 51.00 | <u>51.55</u> | 49.22 | 49.44 | **55.00** |
| | Global MMLU (CF) | <u>38.37</u> | 34.22 | 33.71 | 34.94 | **41.80** |
| | Flores-200 (5-shot) | 62.85 | 61.38 | <u>62.89</u> | 58.68 | **65.76** |
| Spanish | MLMM Hellaswag | **65.85** | 58.25 | 59.39 | 52.40 | <u>61.85</u> |
| | Belebele | 47.00 | <u>48.88</u> | 47.00 | 47.56 | **50.33** |
| | Global MMLU (CF) | <u>38.51</u> | 35.84 | 35.60 | 34.79 | **41.22** |
| | Flores-200 (5-shot) | <u>48.25</u> | 50.00 | 44.45 | 46.93 | **50.16** |
| German | MLMM Hellaswag | **59.56** | 49.99 | 53.19 | 46.10 | <u>56.43</u> |
| | Belebele | <u>48.44</u> | 47.88 | 46.22 | 48.00 | **53.44** |
| | Global MMLU (CF) | <u>35.10</u> | 33.19 | 32.60 | 32.73 | **38.70** |
| | Flores-200 (5-shot) | **56.60** | 50.63 | <u>54.95</u> | 52.58 | 50.48 |
| Italian | MLMM Hellaswag | **62.49** | 53.21 | 54.96 | 48.72 | <u>58.76</u> |
| | Belebele | <u>46.44</u> | 44.77 | 43.88 | 44.00 | **48.78** |
| | Global MMLU (CF) | <u>36.99</u> | 33.91 | 32.79 | 35.37 | **39.26** |
| | Flores-200 (5-shot) | <u>52.65</u> | **54.87** | 48.83 | 48.37 | 49.11 |
| Portuguese | MLMM Hellaswag | **63.22** | 57.38 | 56.84 | 50.73 | <u>59.89</u> |
| | Belebele | 47.67 | **49.22** | 45.00 | 44.00 | <u>49.00</u> |
| | Global MMLU (CF) | <u>36.88</u> | 34.72 | 33.05 | 35.26 | **40.66** |
| | Flores-200 (5-shot) | <u>60.93</u> | 57.68 | 54.28 | 56.58 | **63.43** |

The model has also been trained on Arabic (standard), Chinese, and Russian data, but has seen fewer tokens in these languages compared to the 6 above. We report the performance on these languages for information.

| Category | Metric | SmolLM3-3B Base | Qwen2.5-3B | Llama-3.2-3B | Qwen3-1.7B Base | Qwen3-4B Base |
|---------|--------|-----------------|------------|--------------|-----------------|---------------|
| Other supported languages | | | | | | |
| Arabic | Belebele | 40.22 | 44.22 | <u>45.33</u> | 42.33 | **51.78** |
| | Global MMLU (CF) | 28.57 | 28.81 | 27.67 | <u>29.37</u> | **31.85** |
| | Flores-200 (5-shot) | <u>40.22</u> | 39.44 | **44.43** | 35.82 | 39.76 |
| Chinese | Belebele | 43.78 | 44.56 | <u>49.56</u> | 48.78 | **53.22** |
| | Global MMLU (CF) | 36.16 | 33.79 | <u>39.57</u> | 38.56 | **44.55** |
| | Flores-200 (5-shot) | 29.17 | **33.21** | 31.89 | 25.70 | <u>32.50</u> |
| Russian | Belebele | <u>47.44</u> | 45.89 | <u>47.44</u> | 45.22 | **51.44** |
| | Global MMLU (CF) | <u>36.51</u> | 32.47 | 34.52 | 34.83 | **38.80** |
| | Flores-200 (5-shot) | 47.13 | 48.74 | 50.74 | <u>54.70</u> | **60.53** |

## Training

### Model

- **Architecture:** Transformer decoder
- **Pretraining tokens:** 11T
- **Precision:** bfloat16

### Software & hardware

- **GPUs:** 384 H100
- **Training framework:** [nanotron](https://github.com/huggingface/nanotron/tree/smollm3)
- **Data processing framework:** [datatrove](https://github.com/huggingface/datatrove)
- **Evaluation framework:** [lighteval](https://github.com/huggingface/lighteval)
- **Post-training framework:** [TRL](https://github.com/huggingface/trl)

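As a rough illustration of the post-training stack named above (preference alignment with APO via TRL), here is a hypothetical sketch using TRL's `DPOTrainer` with an APO-style loss. The dataset name, loss variant, and hyperparameters are placeholders rather than the actual recipe; the real configs live in the repository linked under Open resources.

```python
# Hypothetical sketch of APO-style preference alignment with TRL.
# Dataset name, loss variant ("apo_zero"), and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "HuggingFaceTB/SmolLM3-3B"  # assumed SFT checkpoint to align
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Expects a preference dataset with "prompt", "chosen" and "rejected" columns.
dataset = load_dataset("your-org/your-preference-dataset", split="train")  # placeholder

args = DPOConfig(
    output_dir="smollm3-apo",
    loss_type="apo_zero",  # APO loss variant exposed by recent TRL releases
    beta=0.1,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
)

trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```
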
### Open resources
Here is an infographic with all the training details [TODO].
- The datasets used for pretraining can be found in this [collection](https://huggingface.co/collections/HuggingFaceTB/smollm3-pretraining-datasets-685a7353fdc01aecde51b1d9), and those used in mid-training and post-training can be found here [TODO]
- The training and evaluation configs and code can be found in the [huggingface/smollm](https://github.com/huggingface/smollm) repository.

## Limitations

SmolLM3 can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.

## License

[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)