NVIDIA-Nemotron-3-Super-120B-A12B-Base

Model Overview

Model Developer: NVIDIA Corporation

Model Dates:

December 2025 - January 2026

Data Freshness:

  • The post-training data has a cutoff date of February 2026.
  • The pre-training data has a cutoff date of December 2025.

Description

Nemotron-3-Super-120B-A12B-Base is a base large language model (LLM) trained from scratch by NVIDIA with a next-token prediction loss. It provides a strong starting point for further training, including instruction following and coding.

The model employs a hybrid Latent Mixture-of-Experts (LatentMoE) architecture, utilizing interleaved Mamba-2 and MoE layers, along with select Attention layers. Distinct from the Nano model, the Super model incorporates Multi-Token Prediction (MTP) layers for faster text generation and improved quality, and it is trained using NVFP4 quantization to maximize compute efficiency. The model has 12B active parameters and 120B parameters in total.

The supported languages include: English, Spanish, French, German, Japanese, Italian, Chinese, Arabic, Hebrew, Hindi, Korean, Czech, Danish, Dutch, Finnish, Polish, Portuguese, Thai, Swedish, and Russian.

This model is ready for commercial use.

What is Nemotron?

NVIDIA Nemotron™ is a family of open models with open weights, training data, and recipes, delivering leading efficiency and accuracy for building specialized AI agents.

To get started, you can use our quickstart guide below.
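As a quick illustration of local text generation with the base model, a minimal sketch using Hugging Face transformers is shown below. The repository id, the use of trust_remote_code, and the multi-GPU loading options are assumptions made for illustration; the official quickstart guide remains the authoritative path.

```python
# Minimal generation sketch (not the official quickstart): repo id, remote-code
# loading, and device mapping are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-Base"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # BF16 weights; a 120B model requires multiple GPUs
    device_map="auto",
    trust_remote_code=True,
)

prompt = "The key ideas behind mixture-of-experts language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```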

License/Terms of Use

Use of these model weights is governed by the NVIDIA Nemotron Open Model License.

Base Benchmarks

Comparison of Ling-flash-base-2.0, GLM-4.5-Air-Base, and Nemotron Super 120B-A12B Base. Best results per row in bold.

Task Metric N-3-Super 120B-A12B-Base Ling-flash base-2.0 GLM-4.5 Air-Base
General Knowledge
MMLU 5-shot, acc 86.01 81.00 81.00
MMLU-Pro 5-shot, CoT EM 75.65 62.10 58.20
AGIEval-En 3/5-shot, CoT EM 77.92 61.70 62.40
GPQA-Diamond 5-shot, CoT EM 60.00 36.00 23.20
Math
GSM8K 8-shot, EM 90.67 90.75 82.60
MATH 4-shot, EM 84.84 63.80 50.36
MATH Level 5 4-shot, EM 70.00 39.80 26.30
AIME 2024 pass@32 53.33 30.00 20.00
Code
HumanEval 0-shot, pass@1 n=32 79.40 70.10 76.30
MBPP-Sanitized 3-shot, pass@1 n=32 78.38 77.30 77.50
Commonsense Understanding
ARC-Challenge 25-shot, acc_norm 96.08 94.80 93.90
HellaSwag 10-shot, acc_norm 88.97 84.69 87.70
OpenBookQA 0-shot, acc_norm 50.20 47.00 48.60
PIQA 0-shot, acc_norm 85.47 84.00 84.22
WinoGrande 5-shot, acc 78.93 78.37 83.82
Reading Comprehension
RACE 0-shot, acc 91.00 90.10 89.50
Multilingual
MMLU Global Lite 5-shot, avg 85.72 74.94 79.25
MGSM 8-shot, avg 87.47 82.73 80.33
Long Context
RULER 64K 0-shot 93.17 81.58
RULER 128K 0-shot 89.00 57.56 63.62
RULER 256K 0-shot 86.18
RULER 512K 0-shot 82.16
RULER 1M 0-shot 74.39

All evaluation results were collected via the NeMo Evaluator SDK and NVIDIA's open-source container of the LM Evaluation Harness, unless otherwise stated. For reproducibility, more details on the evaluation settings can be found in the NeMo Evaluator SDK examples folder and the reproducibility tutorial for Nemotron 3 Super. The open-source LM Evaluation Harness container packaged via NVIDIA's NeMo Evaluator SDK and used for these evaluations can be found here.
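As a rough sketch of how an individual benchmark row can be reproduced with the open-source LM Evaluation Harness, the snippet below runs 5-shot MMLU through its Python API. The task name, few-shot setting, and result keys are assumptions here; the NeMo Evaluator SDK examples and the reproducibility tutorial define the exact configurations used for the table above.

```python
# Illustrative reproduction sketch with the LM Evaluation Harness Python API.
# The exact task configuration used for the table above lives in the NeMo
# Evaluator SDK examples; the settings below are assumptions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-Base,"
        "dtype=bfloat16,trust_remote_code=True"
    ),
    tasks=["mmlu"],   # e.g. the 5-shot MMLU row
    num_fewshot=5,
    batch_size=8,
)
print(results["results"].get("mmlu"))  # aggregate metrics for the task group
```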

Deployment Geography: Global

Use Case

This model is intended for developers and researchers building LLMs.

Release Date

  • NGC: 03/04/2026 via NGC
  • Hugging Face: 03/04/2026 via Hugging Face

Reference(s)

Model Architecture

  • Architecture Type: Mamba2-Transformer Hybrid Latent Mixture of Experts (LatentMoE) with Multi-Token Prediction (MTP)
  • Network Architecture: Nemotron Hybrid LatentMoE
  • Number of model parameters: 120B Total / 12B Active

Model Design

The model utilizes the LatentMoE architecture, where tokens are projected into a smaller latent dimension for expert routing and computation, improving accuracy per byte. The Super model is trained using NVFP4 (weight, activation, and gradient tensors are quantized to NVFP4) to maximize throughput on supported hardware. The model includes Multi-Token Prediction (MTP) layers, which predict multiple future tokens to provide richer training signals and enable faster inference via speculative decoding.
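To make the latent-routing idea concrete, here is a hypothetical PyTorch sketch of a LatentMoE-style layer: tokens are projected down to a latent dimension, routed to a small set of experts that operate in that latent space, and projected back before the residual connection. Layer sizes, expert counts, and the top-k routing shown are illustrative placeholders rather than the released architecture, and the MTP and NVFP4 aspects are omitted.

```python
# Hypothetical LatentMoE-style layer: dimensions, expert count, and routing
# details are illustrative, not the released Nemotron architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentMoELayer(nn.Module):
    def __init__(self, d_model=4096, d_latent=1024, n_experts=64, d_ff=2048, top_k=4):
        super().__init__()
        self.down = nn.Linear(d_model, d_latent, bias=False)   # project into latent space
        self.up = nn.Linear(d_latent, d_model, bias=False)     # project back to model width
        self.router = nn.Linear(d_latent, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_latent, d_ff), nn.SiLU(), nn.Linear(d_ff, d_latent))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                                       # x: [batch, seq, d_model]
        z = self.down(x)                                        # routing happens in the latent dim
        weights, idx = torch.topk(F.softmax(self.router(z), dim=-1), self.top_k, dim=-1)
        out = torch.zeros_like(z)
        for k in range(self.top_k):                             # dense loop; real kernels are fused
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(z[mask])
        return x + self.up(out)                                 # residual in the model dimension

x = torch.randn(2, 16, 4096)
print(LatentMoELayer()(x).shape)                                # torch.Size([2, 16, 4096])
```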

Training Methodology

Stage 1: Pre-Training

The NVIDIA-Nemotron-3-Super-120B-A12B-Base model is the result of this pre-training work.

The end-to-end training recipe is available in the NVIDIA Nemotron Developer Repository. Evaluation results can be replicated using the NeMo Evaluator SDK. Data Designer is one of the libraries used to prepare the pre- and post-training datasets. More details on the datasets and synthetic data generation methods can be found in the NVIDIA Nemotron 3 Super Technical Report.

Input

  • Input Type(s): Text
  • Input Format(s): String
  • Input Parameters: One-Dimensional (1D): Sequences
  • Other Properties Related to Input: Maximum context length up to 1M tokens. Supported languages include: English, Spanish, French, German, Japanese, Italian, Chinese, Arabic, Hebrew, Hindi, Korean, Czech, Danish, Dutch, Finnish, Polish, Portuguese, Thai, Swedish, and Russian.

Output

  • Output Type(s): Text
  • Output Format: String
  • Output Parameters: One-Dimensional (1D): Sequences
  • Other Properties Related to Output: Maximum context length up to 1M tokens

Our AI models are designed and optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration

  • Runtime Engine(s): NeMo 25.11.01
  • Supported Hardware Microarchitecture Compatibility: NVIDIA Ampere - A100; NVIDIA Blackwell; NVIDIA Hopper - H100-80GB
  • Operating System(s): Linux

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Model Version(s)

  • v1.0 - Base

Training and Evaluation Datasets:

Training

  • Data Modality: Text
  • Total size: 15,087,602,908,990
  • Total number of datasets: 153
  • Dataset partition: Training [100%], testing [0%], validation [0%]
  • Time period for training data collection: 2013 to February 24, 2026
  • Time period for testing data collection: 2013 to February 24, 2026
  • Time period for validation data collection: 2013 to February 24, 2026
  • Data Collection Method by dataset: Hybrid: Automated, Human, Synthetic
  • Labeling Method by dataset: Hybrid: Automated, Human, Synthetic

NVIDIA-Nemotron-3-Super-120B-A12B-Base is pre-trained on a large corpus of high-quality curated and synthetically-generated data. It is trained in the English language, as well as 19 other languages and 43 programming languages. Our sources cover a variety of document types such as: webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. We also include a small portion of question-answering, and alignment style data to improve model accuracy. The model was trained for approximately 25T tokens.

More details on the datasets and synthetic data generation methods can be found in the NVIDIA Nemotron 3 Super Technical Report.

Full dataset catalogue used for training

Base Pre-Training Corpus (Nemotron 3 Foundation)

The foundation of the model is trained on the Nemotron-3-Nano corpus, comprising the following collections:

Dataset Collection Token Counts Description
Nemotron-CC-v2 & v2.1 9.13T A massive collection of English web data filtered from Common Crawl, including 2.5T+ tokens of new organic, translated, and synthetically rephrased content.
Nemotron-CC-Code-v1 427.9B High-quality code tokens extracted from Common Crawl using the Lynx + LLM pipeline to preserve structure and equations.
Nemotron-Pretraining-Code-v1 & v2 1.09T Curated GitHub code references with multi-stage filtering, deduplication, and large-scale synthetic code data.
Nemotron-CC-Math-v1 133.3B High-quality math pre-training dataset preserving LaTeX formatting and mathematical structures.
Nemotron-Pretraining-Specialized-v1 336.4B Synthetic datasets targeting specialized domains such as STEM reasoning and scientific coding.

Public Datasets

Dataset Collection Period
GSM8K 4/23/2025
CC-NEWS 4/23/2025
Common Crawl 4/23/2025
Wikimedia 4/23/2025
Bespoke-Stratos-17k 4/23/2025
tigerbot-kaggle-leetcodesolutions-en-2k 4/23/2025
glaive-function-calling-v2 4/23/2025
APIGen Function-Calling 4/23/2025
LMSYS-Chat-1M 4/23/2025
Open Textbook Library - CC BY-SA & GNU subset and OpenStax - CC BY-SA subset 4/23/2025
Advanced Reasoning Benchmark, tigerbot-kaggle-leetcodesolutions-en-2k, PRM800K, and SciBench 4/23/2025
FineWeb-2 4/23/2025
Court Listener Legacy Download
peS2o Legacy Download
OpenWebMath Legacy Download
BioRxiv Legacy Download
PMC Open Access Subset Legacy Download
OpenWebText2 Legacy Download
Stack Exchange Data Dump Legacy Download
PubMed Abstracts Legacy Download
NIH ExPorter Legacy Download
arXiv Legacy Download
BigScience Workshop Datasets Legacy Download
Reddit Dataset Legacy Download
SEC's Electronic Data Gathering, Analysis, and Retrieval (EDGAR) Legacy Download
Advanced Mathematical Problem Solving Legacy Download
MathPile Legacy Download
NuminaMath CoT Legacy Download
PMC Article Legacy Download
FLAN Legacy Download
Advanced Reasoning Benchmark Legacy Download
SciBench Legacy Download
WikiTableQuestions Legacy Download
FinQA Legacy Download
Riddles Legacy Download
Problems in Elementary Mathematics for Home Study Legacy Download
MedMCQA Legacy Download
Cosmos QA Legacy Download
MCTest Legacy Download
AI2's Reasoning Challenge Legacy Download
OpenBookQA Legacy Download
MMLU Auxiliary Train Legacy Download
social-chemestry-101 Legacy Download
Moral Stories Legacy Download
The Common Pile v0.1 Legacy Download
FineMath Legacy Download
MegaMath Legacy Download
MultiverseMathHard 10/2/2025
News Commentary 10/2/2025
Essential-Web 10/2/2025
finepdfs 10/2/2025
HotpotQA 10/2/2025
SQuAD2.0 10/2/2025
NLTK Words Lists 10/2/2025

Crawled and Scraped from Online Sources by NVIDIA

The English Common Crawl data was downloaded from the Common Crawl Foundation (see their FAQ for details on their crawling) and includes the snapshots CC-MAIN-2013-20 through CC-MAIN-2025-13. The data was subsequently deduplicated and filtered in various ways described in the Nemotron-CC paper. Additionally, we extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, CC-MAIN-2025-18. The fifteen languages were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied only heuristic filtering, similar to what we did for lower-quality English data in the Nemotron-CC pipeline, but selectively removed some filters for languages where they did not work well. Deduplication was done in the same way as for Nemotron-CC.
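For a sense of what heuristic filtering means in this context, the sketch below applies a few document-level checks of the kind commonly used on web text. The specific rules and thresholds are placeholders; the actual per-language filter set follows the Nemotron-CC pipeline described above.

```python
# Illustrative heuristic document filters; the real filter set and thresholds
# are per-language and follow the Nemotron-CC pipeline, not these placeholders.
def passes_heuristics(text: str, min_words: int = 50, max_symbol_ratio: float = 0.1) -> bool:
    words = text.split()
    if len(words) < min_words:                            # drop very short documents
        return False
    symbols = sum(ch in "{}[]<>|#" for ch in text)
    if symbols / max(len(text), 1) > max_symbol_ratio:    # drop markup-heavy pages
        return False
    mean_word_len = sum(len(w) for w in words) / len(words)
    return 2.0 <= mean_word_len <= 15.0                   # reject gibberish or run-on tokens
```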

The GitHub Crawl was collected using the GitHub REST API and the Amazon S3 API. Each crawl was operated in accordance with the rate limits set by its respective source, either GitHub or S3. We collect raw source code and subsequently remove any whose license is not in our permissive-license set (for additional details, refer to the technical report).
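A minimal sketch of the license screening step is shown below, using the public GitHub REST API. The permissive-license set, authentication handling, and back-off policy here are illustrative assumptions rather than the exact crawl pipeline.

```python
# Hypothetical license screen for a GitHub crawl: the permissive set and the
# back-off policy are illustrative; only the REST endpoint and rate-limit
# headers are the documented GitHub API.
import time
import requests

PERMISSIVE_LICENSES = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause"}  # illustrative set

def keep_repo(owner: str, repo: str, token: str) -> bool:
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    # Respect the advertised rate limit before issuing further requests.
    if resp.headers.get("X-RateLimit-Remaining") == "0":
        reset = int(resp.headers.get("X-RateLimit-Reset", time.time() + 60))
        time.sleep(max(0.0, reset - time.time()))
    resp.raise_for_status()
    license_info = resp.json().get("license") or {}
    return (license_info.get("spdx_id") or "").lower() in PERMISSIVE_LICENSES
```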

Dataset Modality Dataset Size Collection Period Collecting Organisation
English Common Crawl Text 3.36T 4/8/2025 NVIDIA Advanced Deep Learning Research
English Common Crawl 1.1 Text Not disclosed 10/2/2025 NVIDIA Advanced Deep Learning Research
Multilingual Common Crawl Text 812.7B 5/1/2025 NVIDIA Advanced Deep Learning Research
GitHub Crawl Text 747.4B 4/29/2025 NVIDIA Advanced Deep Learning Research

Private Non-publicly Accessible Datasets of Third Parties

Dataset Model(s) used
Global Regulation Unknown
TAUS Translation Memory Unknown
Scale HLE Unknown
HackerRank Coding Unknown
RL data for Search Gemini 3; GPT-5

Private Non-publicly Accessible Datasets by NVIDIA

Dataset Model(s) used
Simple Minesweeper -
Simple Sudoku -
Multitool Typewriter Hard -
Machine Translation of News Commentary and TAUS Translation Memory -
Machine Translation of STEM - Qwen2.5-14B-Instruct
Competitive Coding RL data from Nemotron Cascade -
Long context RL -
Single-step SWE RL for patch generation -
OpenHands SWE -

NVIDIA-Sourced Synthetic Datasets

Dataset Modality Dataset Size Seed Dataset Model(s) used for generation
Nemotron-Pretraining-Formal-Logic Text 128M Nemotron Personas Qwen3-235B-A22B-Thinking-2507
Nemotron-Pretraining-Economics Text 73.4M - Qwen3-235B-A22B-Thinking-2507
Nemotron-Pretraining-Multiple-Choice Text 1.6B MMLU Auxiliary Train DeepSeek-V3; Qwen3-235B-A22B
Nemotron-Pretraining-Code-Concepts Text 7.3B - gpt-oss-20b; gpt-oss-120b
Nemotron-Pretraining-Unconditional-Algorithmic Text 196.5M - gpt-oss-120b; Qwen3-235B-A22B
Synthetic Tasks from DeepSeek-V3 and Qwen3-235B-A22B Text 6.7B train splits of Into the Unknown; AI2 ARC (AI2 Reasoning Challenge); BLiMP (Benchmark of Linguistic Minimal Pairs); CommonSenseQA; GLUE; HeadQA; Hendrycks Ethics; Memo Trap; modus-tollens; NeQA; pattern-matching-suppression; mastermind_24_mcq_random; mastermind_24_mcq_close; quote-repetition; redefine-math; Repetitive Algebra; sig-figs; MMLU-Pro; MC-TACO; MedConceptsQA; MMLU_dataset; OpenbooksQA; PIQA (Physical Interaction Question Answering); SocialIQA; SuperGLUE; tinyAI2_arc; tinyMMLU; tinyWinogrande; TruthfulQA; WebQuestions; Winogrande; GPQA; MBPP DeepSeek v3; Qwen3-235B-A22B
Synthetic Art of Problem Solving from DeepSeek-R1 Text 40B Art of Problem Solving; American Mathematics Competitions 8; American Mathematics Competitions 10 DeepSeek-R1
Synthetic Moral Stories and Social Chemistry from Mixtral-8x22B-v0.1 Text 327M social-chemestry-101; Moral Stories Mixtral-8x22B-v0.1
Synthetic Social Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B Text 83.6M OpenStax - CC BY-SA subset DeepSeek-V3; Mixtral-8x22B-v0.1; Qwen2.5-72B
Synthetic Health Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B Text 9.7M OpenStax - CC BY-SA subset DeepSeek-V3; Mixtral-8x22B-v0.1; Qwen2.5-72B
Synthetic STEM seeded with OpenStax, Open Textbook Library, and GSM8K from DeepSeek-R1, DeepSeek-V3, DeepSeek-V3-0324, and Qwen2.5-72B Text 175M OpenStax - CC BY-SA subset; GSM8K; Open Textbook Library - CC BY-SA & GNU subset DeepSeek-R1, DeepSeek-V3; DeepSeek-V3-0324; Qwen2.5-72B
Nemotron-PrismMath Text 4.6B Big-Math-RL-Verified; OpenR1-Math-220k Qwen2.5-0.5B-instruct, Qwen2.5-72B-Instruct; DeepSeek-R1-Distill-Qwen-32B
Synthetic Question Answering Data from Papers and Permissible Books from Qwen2.5-72B-Instruct Text 350M arXiv; National Institutes of Health ExPorter; BioRxiv; PMC Article; USPTO Backgrounds; peS2o; Global Regulation; CORE; PG-19; DOAB CC BY & CC BY-SA subset; NDLTD Qwen2.5-72B-Instruct
Synthetic Rephrased Math Data from Common Crawl from phi-4 Text 73B Common Crawl phi-4
Synthetic Math Data from Common Crawl 4plus Text 52.3B Common Crawl phi-4
Synthetic Math Data from Common Crawl 3 Text 80.9B Common Crawl phi-4
Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from DeepSeek-V3 and DeepSeek-V3-0324 Text 4.0B AQUA-RAT; LogiQA; AR-LSAT DeepSeek-V3; DeepSeek-V3-0324
Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from Qwen3-30B-A3B Text 4.2B AQUA-RAT; LogiQA; AR-LSAT Qwen3-30B-A3B
Synthetic Art of Problem Solving from Qwen2.5-32B-Instruct, Qwen2.5-Math-72B, Qwen2.5-Math-7B, and Qwen2.5-72B-Instruct Text Art of Problem Solving; American Mathematics Competitions 8; American Mathematics Competitions 10; GSM8K; PRM800K Qwen2.5-32B-Instruct; Qwen2.5-Math-72B; Qwen2.5-Math-7B; Qwen2.5-72B-Instruct
Synthetic MMLU Auxiliary Train from DeepSeek-R1 Text 0.5B MMLU Auxiliary Train DeepSeek-R1
Synthetic Long Context Continued Post-Training Data from Papers and Permissible Books from Qwen2.5-72B-Instruct Text arXiv; National Institutes of Health ExPorter; BioRxiv; PMC Article; USPTO Backgrounds; peS2o; Global Regulation; CORE; PG-19; DOAB CC BY & CC BY-SA subset; NDLTD Qwen2.5-72B-Instruct
Synthetic Common Crawl from Qwen3-30B-A3B and Mistral-Nemo-12B-Instruct Text 415.8B Common Crawl Qwen3-30B-A3B; Mistral-NeMo-12B-Instruct
Synthetic Multilingual Data from Common Crawl from Qwen3-30B-A3B Text Common Crawl Qwen3-30B-A3B
Synthetic Multilingual Data from Wikimedia from Qwen3-30B-A3B Text Wikimedia Qwen3-30B-A3B
Synthetic Math Data from Wikimedia from Nemotron-4-340B-Instruct Text - Nemotron-4-340B-Instruct
Synthetic Common Crawl Code from phi-4 Text 427.9B Common Crawl phi-4
Synthetic Scientific Coding from Qwen3-235B-A22B Text 1.2B Wikimedia Qwen3-235B-A22B
Tool Calling Data Text 26.2B Qwen3-235B-A22B-2507; gpt-oss-120b
Synthetic Essential-Web from QwQ-32B Text 28.1B Essential-Web QwQ-32B
Translated Synthetic Crawl Text 389.9B Common Crawl Qwen3-30B-A3B
Translated Synthetic Wikipedia Text 7.9B Wikimedia Qwen3-30B-A3B
Synthetic Long Context from Qwen3-235B-A22B-Instruct-2507 Text Undisclosed CORE; PG-19; DOAB CC BY & CC BY-SA subset; NDLTD Qwen3-235B-A22B-Instruct-2507
Synthetic Search STEM OPENQ from DeepSeek-R1-0528 Text Undisclosed - DeepSeek-R1-0528
Synthetic MCQ from Qwen2.5-32B-Instruct and DeepSeek-R1-0528 Text Undisclosed - Qwen2.5-32B-Instruct; DeepSeek-R1-0528
Synthetic Offline Search MCQA HLE from DeepSeek-R1-0528 Text Undisclosed - DeepSeek-R1-0528
Synthetic Offline Search MCQA GPQA from Qwen3-235B-A22B and DeepSeek-R1-0528 Text Undisclosed - Qwen3-235B-A22B; DeepSeek-R1-0528
Synthetic Human Preference from QwQ-32B, Qwen3-30B-A3B, Qwen3-235B-A22B, Qwen3-235B-A22B-Instruct-2507, Mistral-Small-3.1-24B-Instruct-2503, Mistral-Small-3.2-24B-Instruct-2506, MiniMax-M1-80k, MiniMax-M1-40k, Kimi-K2-Instruct, DeepSeek-V3-0324, DeepSeek-R1-0528 Text Undisclosed - QwQ-32B; Qwen3-30B-A3B; Qwen3-235B-A22B; Qwen3-235B-A22B-Instruct-2507; Mistral-Small-3.1-24B-Instruct-2503; Mistral-Small-3.2-24B-Instruct-2506; MiniMax-M1-80k; MiniMax-M1-40k; Kimi-K2-Instruct; DeepSeek-V3-0324; DeepSeek-R1-0528
Synthetic Code from Qwen3-32B Text Undisclosed English Common Crawl; English Common Crawl 1.1 Qwen3-32B
Synthetic OpenCodeReasoning from DeepSeek-R1 Text Undisclosed OpenCodeReasoning DeepSeek-R1
Synthetic LIMO from DeepSeek-R1-0528 Text Undisclosed LIMO DeepSeek-R1-0528
Synthetic SCP from DeepSeek-R1-0528 Text Undisclosed SCP-116K DeepSeek-R1-0528
Synthetic Stack Exchange from DeepSeek-R1-0528 Text Undisclosed Stack Exchange DeepSeek-R1-0528
Synthetic Common Crawl from Qwen3-30B-A3B Text Undisclosed Common Crawl Qwen3-30B-A3B
Synthetic Wikipedia from Qwen3-30B-A3B Text Undisclosed Wikimedia Qwen3-30B-A3B
Synthetic Essential-Web from Qwen3-30B-A3B and Qwen3-235B-A22B-Thinking-2507 Text Undisclosed Essential-Web Qwen3-30B-A3B; Qwen3-235B-A22B-Thinking-2507
Synthetic Textbook Math from Qwen3-30B-A3B, Qwen3-235B-A22B, phi-4 Text Undisclosed Common Crawl; FineMath Qwen3-30B-A3B; Qwen3-235B-A22B; phi-4
Synthetic Math and Code from DeepSeek-R1 and DeepSeek-R1-0528 Text Undisclosed Magicoder-Evol-Instruct-110K; opc-sft-stage2; TACO; OpenCodeReasoning; OpenMathReasoning; NuminaMath CoT DeepSeek-R1; DeepSeek-R1-0528

Evaluation Dataset

  • Data Collection Method by dataset: Hybrid: Human, Synthetic
  • Labeling Method by dataset: Hybrid: Automated, Human, Synthetic

Inference

  • Acceleration Engine: PyTorch
  • Test Hardware:
    • NVIDIA Hopper:
      • 8x H100
      • 1x H200
    • NVIDIA Grace Blackwell
      • GB200

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

We advise against circumvention of any provided safety guardrails contained in the Model without a substantially similar guardrail appropriate for your use case. For more details: Safety and Explainability Subcards.

For more detailed information on ethical considerations for this model, please see the Model Card++ Bias, and Privacy Subcards.

Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.

Citation

@misc{nvidia_nemotron_3_2025,
  title  = {NVIDIA Nemotron 3: Efficient and Open Intelligence},
  author = {{NVIDIA}},
  year   = {2025},
  url    = {https://arxiv.org/abs/2512.20856},
  note   = {White Paper}
}