Update README.md

README.md CHANGED

@@ -27,7 +27,7 @@ pipeline_tag: text-generation
<br><br>
Trained with <a href="https://github.com/PrimeIntellect-ai/prime-rl">prime-rl</a> infra and <a href="https://github.com/PrimeIntellect-ai/verifiers">verifiers</a> environments
<br>
-Environments on <a href="https://app.primeintellect.ai/dashboard/environments">Environments Hub</a> | Read the <a href="https://primeintellect.ai/blog/intellect-3">Technical Report</a>
+Environments on <a href="https://app.primeintellect.ai/dashboard/environments">Environments Hub</a> | Read the <a href="https://primeintellect.ai/blog/intellect-3">Blog</a> & <a href="https://huggingface.co/PrimeIntellect/INTELLECT-3">Technical Report</a>
<br>
Follow us on <a href="https://discord.gg/RC5GvMbfDf">X</a> | Join our <a href="https://discord.gg/RC5GvMbfDf">Discord</a> | Infra for training frontier AI at <a href="https://app.primeintellect.ai/dashboard/create-cluster">Prime Compute Platform</a>
</p>

@@ -48,12 +48,14 @@ For more details, see the [technical report](PAPER_LINK_PLACEHOLDER).

INTELLECT-3 achieves best-in-class performance on math, coding, and reasoning benchmarks:

-| Benchmark |
-
-
-
-
-
+| Benchmark | MATH-500 | AIME24 | AIME25 | LCB | GPQA | HLE | MMLU-Pro |
+|-----------|----------|---------|---------|--------|------|-----|----------|
+| INTELLECT-3 | **98.1** | **90.8** | **88.0** | 69.3 | 74.4 | 14.6 | 81.9 |
+| GLM-4.5-Air | 97.8 | 84.6 | 82.0 | 61.5 | 73.3 | 13.3 | 73.9 |
+| GLM-4.5 | 97.0 | 85.8 | 83.3 | 64.5 | 77.0 | X | X |
+| DeepSeek R1 0528 | 87.3 | 83.2 | 73.4 | 62.5 | 77.5 | 15.9 | 75.3 |
+| DeepSeek v3.2 | 96.8 | 88.1 | 84.7 | **71.6** | **81.4** | **17.9** | **84.6** |
+| GPT-OSS 120B | 96.0 | 75.8 | 77.7 | 69.9 | 70.0 | 10.6 | 67.1 |

## Model Variants
