
Apriel-1.6-15B-Thinker: Cost-efficient Frontier Multimodal Performance

Pronunciation: /ˈɑː.pri.əl/


Table of Contents

  1. Summary
  2. Evaluation
  3. Intended Use
  4. How to Use
  5. Training Details
  6. Limitations
  7. Security and Responsible Use
  8. License
  9. Citation

Summary

Apriel-1.6-15B-Thinker is an updated multimodal reasoning model in ServiceNow’s Apriel SLM series, building on Apriel-1.5-15B-Thinker. With significantly improved text and image reasoning capabilities, Apriel-1.6 performs competitively against models up to 10x its size. Like its predecessor, it benefits from extensive continual pre-training across both text and image domains, followed by post-training focused on supervised fine-tuning (SFT) and reinforcement learning (RL). Apriel-1.6 obtains frontier performance without sacrificing reasoning token efficiency: it matches or improves on Apriel-1.5-15B-Thinker across tasks while reducing reasoning token usage by more than 30%.

Highlights

  • Achieves a score of 57 on the Artificial Analysis Index, outperforming models such as Gemini 2.5 Flash, Claude Haiku 4.5, and GPT OSS 20B, and scoring on par with Qwen3 235B A22B while being significantly more efficient.
  • Reduces reasoning token usage by more than 30%, delivering significantly better efficiency than Apriel-1.5-15B-Thinker.
  • Scores 69 on Tau2 Bench Telecom and 69 on IFBench, two key benchmarks for the enterprise domain.
  • At 15B parameters, the model fits on a single GPU, making it highly memory-efficient.
  • Based on community feedback on Apriel-1.5-15B-Thinker, we simplified the chat template by removing redundant tags and added four special tokens to the tokenizer (<tool_calls>, </tool_calls>, [BEGIN FINAL RESPONSE], <|end|>) for easier output parsing; see the parsing sketch after this list.
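
For illustration, here is a minimal parsing sketch using these tokens. The raw completion string and the tool-call payload format below are illustrative, not a specification of the model's exact output:

import re

# Illustrative raw completion: reasoning first, then a tool call wrapped in
# <tool_calls>...</tool_calls>, then the final answer between
# [BEGIN FINAL RESPONSE] and <|end|>.
raw = (
    "Here are my reasoning steps:\nI should check the weather first.\n"
    '<tool_calls>[{"name": "get_weather", "arguments": {"city": "Paris"}}]</tool_calls>\n'
    "[BEGIN FINAL RESPONSE]\nLet me check the weather in Paris.\n<|end|>"
)

tool_calls = re.findall(r"<tool_calls>(.*?)</tool_calls>", raw, re.DOTALL)
final = re.search(r"\[BEGIN FINAL RESPONSE\](.*?)<\|end\|>", raw, re.DOTALL)
print(tool_calls)                                # raw tool-call payload strings
print(final.group(1).strip() if final else raw)  # final answer text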

Please see our blog post for more details.


Evaluation

  • Text benchmarks included in the Artificial Analysis Index v3.0 use scores reported by Artificial Analysis. All other benchmarks were evaluated internally.

    | Category | Benchmark | Apriel-1.6-15B-Thinker | Apriel-1.5-15B-Thinker | GPT OSS 120B | DeepSeek R1 0528 | Gemini 2.5 Flash (Sep) | GPT 5 mini (high) | Claude 4.5 Sonnet (thinking) | o3-mini (high) |
    |---|---|---|---|---|---|---|---|---|---|
    | Function Calling | BFCL v3 only | 63.50 | 51.88 | 50.62 | 39.75 | 39.75 | 17.62 | - | 50 |
    | | Tau2 bench Telecom | 69 | 57.8 | 66 | 37 | 32 | 68 | 50.8 | 31 |
    | | Tau2 bench Retail | 66.67 | 46.78 | 61.4 | 59.94 | 61.69 | 73.39 | 69.8 | 75.73 |
    | | Tau2 bench Airline | 58 | 52 | 45.3 | 47.33 | 56.66 | 59.33 | 58 | 61.33 |
    | | ComplexFuncBench | 33.2 | 19 | 24.6 | 24.2 | 26.3 | 37.5 | 24.6 | 18.9 |
    | Instruction Following | Agent IF | 57.2 | 55 | 54.20 | 52.20 | 49.70 | 57.60 | 54.50 | 54.90 |
    | | Multi IF | 83.34 | 76.91 | 82.95 | 73.76 | 82.49 | 85.37 | 84.32 | 87.28 |
    | | Multi-Challenge | 46.15 | 41.39 | 46.90 | 44.50 | 49.08 | 57.90 | 42.49 | 38.46 |
    | | IF Bench | 69 | 62 | 69 | 40 | 50 | 75 | 57 | 70.07 |
    | Math | AIME 25 | 88 | 88 | 93 | 76 | 73 | 91 | 88 | 86.67 |
    | Coding | Struct Eval | 79 | 48.50 | 71 | 73 | 70 | 69.92 | 76 | 73 |
    | | LCB | 81 | 73 | 65 | 77 | 70 | 84 | 71 | 73 |
    | | SciCode | 37 | 35 | 36 | 40 | 41 | 39 | 45 | 40 |
    | Agentic | DeepresearchBench | 36.47 | 32.73 | 36.30 | 34.19 | 38.15 | - | - | 33.40 |
    | | GAIA | 40 | 30.91 | 21.21 | 32.12 | 47.88 | 65.45 | 69.09 | 23.03 |
    | | Work-Arena L1 | 58 | 51.5 | 50.9 | 63.9 | 51.8 | 65.5 | 62.7 | 52.4 |
    | | OS World Small | 16.70 | 13.90 | 16.70 | 25 | 19.40 | 22.20 | 30.60 | 19.40 |
    | | SWE Bench Verified | 23 | 16 | 31 | 29.60 | 34.20 | 61 | 64.2 | 22.60 |
    | | Terminal Bench | 14 | 10 | 22 | 15 | 13 | 31 | 33 | 5.67 |
    | | Aider Polyglot | 37.68 | 26.37 | 42 | 71.40 | 40 | 71.60 | 78 | 60.40 |
    | Knowledge | MMLU Pro | 79 | 77 | 85 | 85 | 83 | 84 | 88 | 80 |
    | Creative Writing | Creative Writing v3 / EQ Bench | 59.73 | 60.24 | 53.70 | 79.40 | 74.25 | 75.25 | 80.70 | 30.40 |
    | Others | GPQA Diamond | 73 | 71 | 78 | 81 | 79 | 83 | 83 | 77 |
    | | HLE | 10 | 12 | 18.5 | 14.9 | 11.1 | 19.7 | 17.3 | 12.3 |
    | Long Context | AA LCR | 50* | 20 | 51 | 55 | 62 | 68 | 66 | - |

* The AA LCR score in the table is with DCA enabled. With the default config, the model scores 36 on AA LCR.


| Benchmark | Apriel-1.6-15B-Thinker | Apriel-1.5-15B-Thinker | GPT-5 (high) | GLM-4.5V (Thinking) | Gemini 2.5 Flash (high) | Claude Sonnet 3.7 (Thinking) | GPT-5 (Minimal) | Grok 4 Fast (Thinking) |
|---|---|---|---|---|---|---|---|---|
| MMMU (validation) | 72 | 70.22 | 81.33 | 74.33 | 70.66 | 73.66 | 66.66 | 70.11 |
| MMMU-PRO (10 choice) | 60.28 | 55.38 | 74.73 | 64.16 | 67.86 | 64.50 | 66.06 | 61.61 |
| MMMU-PRO (Vision Only) | 52.89 | 48.21 | 66.93 | 61.50 | 56.76 | 60.11 | 57.68 | 22.94 |
| LogicVista | 58.61 | 58.39 | 69.35 | 63.53 | 63.75 | 69.12 | 44.51 | 47.42 |
| MathVision | 60.85 | 50.99 | 67.10 | 59.53 | 59.21 | 50.32 | 35.52 | 48.35 |
| MathVista | 79.90 | 75.50 | 83.30 | 83.60 | 78.50 | 74.60 | 61.20 | 68.20 |
| MathVerse (Vision Dominant) | 66.75 | 58.38 | 79.82 | 68.65 | 70.68 | 56.09 | 39.84 | 54.69 |
| MathVerse (Text Dominant) | 79.06 | 76.40 | 84.64 | 77.41 | 78.80 | 69.28 | 43.78 | 72.20 |
| MMStar | 70.66 | 67.73 | 77.74 | 74.46 | 73.86 | 70 | 63.60 | 64.80 |
| CharXiv (descriptive) | 89.85 | 88.20 | 91.25 | 90.80 | 83.60 | 93.27 | 82.45 | 68.15 |
| CharXiv (reasoning) | 56.00 | 50.10 | 71.50 | 63.00 | 56.50 | 70.90 | 52.80 | 33.50 |
| AI2D Test | 86.04 | 82.87 | 90.05 | 87.75 | 82.09 | 84.19 | 85.16 | 81.86 |
| BLINK | 63.96 | 58.71 | 70.22 | 66.59 | 65.64 | 64.49 | 64.59 | 54.39 |

Intended Use

The Apriel family of models is designed for a variety of general-purpose instruction tasks, including:

  • Code assistance and generation
  • Logical reasoning and multi-step tasks
  • Question answering and information retrieval
  • Function calling, complex instruction following and agent use cases

They are not intended for use in safety-critical applications without human oversight or in scenarios requiring guaranteed factual accuracy.


How to Use

pip install transformers

Running the Reasoning model

Here is a code snippet demonstrating the model's usage with the transformers library's generate function:

# Tested with transformers==4.48

import re
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

# Load model
model_id = "ServiceNow-AI/Apriel-1.6-15b-Thinker"
model = AutoModelForImageTextToText.from_pretrained(
    model_id, 
    torch_dtype=torch.bfloat16, 
    device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Example 1: Text-only prompt
chat = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is the capital for France?"},
        ],
    }
]

inputs = processor.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt")
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
inputs.pop("token_type_ids", None)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)

generated_ids = output_ids[:, inputs['input_ids'].shape[1]:]
output = processor.decode(generated_ids[0], skip_special_tokens=True)
response = re.findall(r"\[BEGIN FINAL RESPONSE\](.*?)(?:<\|end\|>)", output, re.DOTALL)[0].strip()

print("Text-only Response:", response)

# Example 2: Image understanding
url = "https://picsum.photos/id/237/200/300"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

chat = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Which animal is this?"},
            {"type": "image"},
        ],
    }
]

prompt = processor.apply_chat_template(chat, add_generation_prompt=True, tokenize=False)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)

generated_ids = output_ids[:, inputs['input_ids'].shape[1]:]
output = processor.decode(generated_ids[0], skip_special_tokens=True)
response = re.findall(r"\[BEGIN FINAL RESPONSE\](.*?)(?:<\|end\|>)", output, re.DOTALL)[0].strip()

print("Image Response:", response)

Usage Guidelines

  1. Use the model’s default chat template, which already includes a system prompt.
  2. We recommend setting temperature to 0.6.
  3. We ensure the model starts with "Here are my reasoning steps:\n" during all our evaluations; this is implemented in the default chat template.
  4. For multi-turn conversations, intermediate turns (historical model outputs) are expected to contain only the final response, without reasoning steps, as shown in the sketch after this list.
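
A minimal sketch of guideline 4: strip the reasoning from a previous assistant turn and keep only its final response before appending it to the history (the helper name and sample strings here are our own, for illustration):

import re

def final_response_only(raw_output: str) -> str:
    # Keep only the text between [BEGIN FINAL RESPONSE] and <|end|>,
    # dropping the reasoning steps from the historical turn.
    match = re.search(r"\[BEGIN FINAL RESPONSE\](.*?)<\|end\|>", raw_output, re.DOTALL)
    return match.group(1).strip() if match else raw_output.strip()

previous_output = "Here are my reasoning steps:\n...\n[BEGIN FINAL RESPONSE]\nThe capital of France is Paris.\n<|end|>"

chat = [
    {"role": "user", "content": [{"type": "text", "text": "What is the capital of France?"}]},
    # Historical model output: final response only, no reasoning steps
    {"role": "assistant", "content": [{"type": "text", "text": final_response_only(previous_output)}]},
    {"role": "user", "content": [{"type": "text", "text": "And how many people live there?"}]},
]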

Chat Template

<|begin_system|>
You are a thoughtful, systematic AI assistant from ServiceNow Language Models (SLAM) lab. Analyze each question carefully, present your reasoning step-by-step, then provide the final response after the marker [BEGIN FINAL RESPONSE].
<|begin_user|>
# user message here
<|begin_assistant|>
Here are my reasoning steps:
# thoughts here
[BEGIN FINAL RESPONSE]
# assistant response here
<|end|>

The model will first generate its thinking process and then generate its final response, starting with [BEGIN FINAL RESPONSE]. Here is a code snippet demonstrating the application of the chat template:

from transformers import AutoTokenizer
model_name = "ServiceNow-AI/Apriel-1.6-15b-Thinker"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# prepare the model input
custom_system_prompt = "Answer like a pirate."
prompt = "You are an expert assistant in the implementation of customer experience management aspect of retail applications \n \nYou will be using Python as the programming language. \n \nYou will utilize a factory design pattern for the implementation and following the dependency inversion principle \n \nYou will modify the implementation based on user requirements. \n \nUpon user request, you will add, update, and remove the features & enhancements in the implementation provided by you. \n \nYou will ask whether the user wants to refactor the provided code or needs a sample implementation for reference. Upon user confirmation, I will proceed accordingly. \n \n**Guidelines:** \n 1. **User Requirements:** \n - You have to ask users about their requirements, clarify the user expectations, and suggest the best possible solution by providing examples of Python code snippets. \n - Ask users about which type of reports they need to assess the AI model's performance, accuracy, and reliability. \n - After providing the solution, you have to ask the user about the trial of the solution and modify the solution based on the user feedback. \n \n 2. **Libraries/Frameworks:** \n - You will be utilizing Python as a programming language. \n - You will be using Flask framework for REST APIS implementation \n \n 3. **Communication Gesture:** \n - Your conversation with the user should be interactive, supportive, courageous, and professional. \n - You have to break down the complex concepts into sub-concepts and try to explain them to the user. \n - You have to ask the user for the required parameters. If the user refuses to provide in 2 attempts, politely exit the conversation. \n - You have to provide your supported parameters to the user, if the user refuses to accept them then you have to put an apology note and exit the conversation. \n - You have to track the conversation about unasked questions by the user. If some/one of the questions remain then you have to remind the user about these questions and proceed to answer them based on the user's confirmation \n \n 4. **Implementation:** \n - Your code/implementations should be reliable, scaleable, modular, and reusable. \n - You will be providing unit tests for the implementation upon user request. \n - You will be following MVC architecture for the applications \n - Your implementations must be well-commented and readable \n \n \n- Today's date is 23rd August 2024. \n- The default sender email is [email protected].\nHi, I am conducting research on retail customer feedback systems and I need assistance with designing and implementing them. Could you kindly provide me with a list of general customer feedback system modules?"
messages = [
    {"role": "user", "content": custom_system_prompt + "\n\n" + prompt}
]
# example tools
tools = [{"type": "function", "function": {"name": "getRetailFeedbackModules", "description": "Returns the list of modules usually present in the retail industry", "parameters": {"type": "object", "properties": {"page": {"type": "integer", "description": "The current page number.", "default": 1}, "page_size": {"type": "integer", "description": "The number of items per page.", "default": 3}}}}}, {"type": "function", "function": {"name": "verifyImplementation", "description": "Returns the list of modules usually present in the retail industry", "parameters": {"type": "object", "properties": {"coding_language": {"type": "string", "description": "The supported languages for verification of implementation.", "default": "python", "enum": ["python", "java", "php"]}, "code": {"type": "string", "description": "The code which needs verification"}, "design_pattern": {"type": "string", "description": "The design pattern to verify in the implementation", "enum": ["factory", "strategy", "singleton"]}, "verify_best_practices": {"type": "boolean", "description": "The verification of the coding style based on the language selected", "default": true}}}}}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    tools=tools
)
model_inputs = tokenizer([text], return_tensors="pt")
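
From here, model_inputs can be passed to generation exactly as in the earlier examples. A minimal continuation sketch, assuming the model has already been loaded as shown above:

# Continuation sketch: assumes `model` was loaded earlier, e.g. via
# AutoModelForImageTextToText.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model_inputs = model_inputs.to(model.device)
output_ids = model.generate(**model_inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(tokenizer.decode(output_ids[0][model_inputs["input_ids"].shape[1]:], skip_special_tokens=True))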

Running with vLLM

As the upstream PR is not yet merged, you can use this custom image as an alternative way to run the model with the tool and reasoning parsers enabled.

Docker Image

docker.io/amant555/vllm_apriel:latest

Start Command

python3 -m vllm.entrypoints.openai.api_server \
  --model ServiceNow-AI/Apriel-1.6-15b-Thinker \
  --served-model-name Apriel-1p6-15B-Thinker \
  --trust_remote_code \
  --max-model-len 131072 \
  --enable-auto-tool-choice \
  --tool-call-parser apriel \
  --reasoning-parser apriel
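
Once the server is running, it exposes an OpenAI-compatible API. A minimal client sketch, assuming the default vLLM host and port (localhost:8000):

from openai import OpenAI

# Point the OpenAI client at the local vLLM server (no real key needed).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Apriel-1p6-15B-Thinker",  # must match --served-model-name above
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    temperature=0.6,  # recommended setting from the usage guidelines
    max_tokens=1024,
)
print(response.choices[0].message.content)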

Training Details

Training stack: Fast-LLM, VERL

Continual Pre-training: Billions of tokens covering math, code, science, logical reasoning, and multimodal image-text data

SFT: 2.4M samples spanning math, code, instruction-following, function calling, and conversation, followed by an incremental lightweight multimodal SFT.

RL: Multi-stage RL with verifiable rewards and GSPO on text and vision tasks. Our RL stage optimizes reasoning efficiency: using fewer tokens by discouraging unnecessary intermediate steps, stopping earlier when confident, and giving direct answers on simple queries.

For more details on our training methodology, see our blog post.


Limitations

  • Factual accuracy: May produce incorrect, misleading, or outdated content. Outputs should be verified before use in critical contexts.
  • Bias: May reflect societal, cultural, or systemic biases present in training data.
  • Ethics: Do not use the model to produce harmful, unlawful, or unethical content.
  • Language: Strongest performance is in English. Output quality may degrade in underrepresented languages.
  • Critical use: Not suitable for medical, legal, financial, or other high-risk applications without safeguards.

Security and Responsible Use

Security Responsibilities:
Deployers and users are strongly encouraged to align their security practices with established frameworks and regulatory guidelines such as the EU AI Act and the NIST AI Risk Management Framework (RMF).

Guidelines for Deployers
  • Regularly conduct robustness assessments to identify and mitigate adversarial inputs.
  • Implement validation and filtering processes to prevent harmful or biased outputs.
  • Continuously perform data privacy checks to guard against unintended data leaks.
  • Document and communicate the model's limitations, intended usage, and known security risks to all end-users.
  • Schedule periodic security reviews and updates to address emerging threats and vulnerabilities.
Guidelines for Users
  • Follow established security policies and usage guidelines provided by deployers.
  • Protect and manage sensitive information when interacting with the model.
  • Report anomalies, suspicious behavior, or unsafe outputs to deployers or developers.
  • Maintain human oversight and apply judgment to mitigate potential security or ethical risks during interactions.

Disclaimer:
Users accept responsibility for securely deploying, managing, and using this open-source LLM. The model is provided "as-is," without explicit or implied warranty regarding security or fitness for any specific application or environment.


License

MIT


Citation

@misc{radhakrishna2025apriel1515bthinker,
      title={Apriel-1.5-15b-Thinker}, 
      author={Shruthan Radhakrishna and Aman Tiwari and Aanjaneya Shukla and Masoud Hashemi and Rishabh Maheshwary and Shiva Krishna Reddy Malay and Jash Mehta and Pulkit Pattnaik and Saloni Mittal and Khalil Slimi and Kelechi Ogueji and Akintunde Oladipo and Soham Parikh and Oluwanifemi Bamgbose and Toby Liang and Ahmed Masry and Khyati Mahajan and Sai Rajeswar Mudumba and Vikas Yadav and Sathwik Tejaswi Madhusudhan and Torsten Scholak and Sagar Davasam and Srinivas Sunkara and Nicholas Chapados},
      year={2025},
      eprint={2510.01141},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2510.01141}, 
}