Feb 16, 2026: Upgraded the Jinja template with direct thinking logic to improve thinking activation.
Gemma-3-27b-it-Gemini-Deep-Reasoning
Gemma 3 27B Instruct, tuned via Unsloth on the "gemini-3-pro-preview-high-reasoning" dataset to add compact thinking/reasoning to this model.
If the model knows the answer -> Instruct -> No thinking.
If the model needs to "think" -> Deep Reasoning happens automatically.
(Thought blocks are 4-6 paragraphs on average; no 2000+ token "lost in the woods" rambling.)
Regardless of "instruct" or "reasoning" mode -> vast improvement in model output quality.
Works with images too...
Thinking block is generated automatically, no system prompt needed.
To force "thinking" to activate (if it does not activate on its own), prefix your prompt like so:
Think deeply: - prompt here -
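As a sketch of how a client might separate the thought block from the final answer (the `<think>`/`</think>` tags follow the system prompts shown later in this card; the helper itself is illustrative, not part of the model's tooling):

```python
import re

def split_thinking(response: str):
    """Return (thoughts, answer); thoughts is None if no think block is present."""
    m = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    if m is None:
        return None, response.strip()
    thoughts = m.group(1).strip()
    # The final answer is everything outside the think block.
    answer = (response[:m.start()] + response[m.end():]).strip()
    return thoughts, answer

# Forcing thinking via the prompt prefix described above:
prompt = "Think deeply: Why is the sky blue?"
```
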
NOTE:
This is a very limited fine tune to add "Gemini" type reasoning to Gemma 3 27B.
It adds thinking/deep reasoning.
Even when it is in "instruct" mode, this tuning will help model output quality.
This tuning does not update the knowledge / skills of the model beyond this - that requires a much larger, in-depth tuning.
Image processing is intact and fully functional.
Reasoning affects:
- Image "intelligence"
- Output generation
Model Features:
- 128k context
- Temperature range: 0.1 to 2.5.
- Reasoning is stable across temperatures.
- You can activate using "think deeply: prompt" (not required in most cases)
- System prompt will affect image, reasoning and output generation.
- System prompt / template NOT required for reasoning generation in most cases.
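The feature list above can be collected into a generation-settings sketch. Parameter names here follow common llama.cpp-style APIs and are illustrative assumptions; adjust them for whatever runtime you use:

```python
# Illustrative generation settings drawn from the feature list above.
# Parameter names are assumptions (llama.cpp-style), not from the model card.
GEN_SETTINGS = {
    "n_ctx": 131072,        # 128k context window
    "temperature": 0.7,     # model card allows 0.1 to 2.5
    "repeat_penalty": 1.1,  # optional; see "OTHER OPTIONS" in this card
}

# Sanity-check against the documented temperature range.
assert 0.1 <= GEN_SETTINGS["temperature"] <= 2.5
```
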
Benchmarks are off the scale.
Enjoy the freedom!
SPECIAL THANKS TO:
- Team "TeichAI" for the excellent dataset.
- Team "Nightmedia" for the benchmarking and colab'ing.
- Team "Unsloth" for making the training painless.
BENCHMARKS:
| Benchmark | This model | Heretic (uncensored base) |
|---|---|---|
| arc_challenge | 0.590 | 0.557 |
| arc_easy | 0.742 | 0.711 |
| boolq | 0.883 | 0.868 |
| hellaswag | 0.781 | 0.533 |
| openbookqa | 0.458 | 0.452 |
| piqa | 0.822 | 0.706 |
| winogrande | 0.751 | 0.695 |
OPTIONAL: System prompts
This will enhance thinking and output generation.
In most cases you do not need to use these.
One is "all business"; the other is for "fun".
Think deeply and carefully about the user's request. Compose your thoughts about the user's prompt between <think> and </think> tags, then output the final answer based on your thoughts.
You are the JOKER from Batman. You think (put your thoughts between <think> and </think> tags), act and talk like the joker. Be Evil.
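As a sketch, either system prompt above can be dropped into a standard chat "messages" list (the helper and any endpoint details are illustrative assumptions, not part of this model card):

```python
# Sketch: wire the "all business" system prompt above into a chat request.
SYSTEM_PROMPT = (
    "Think deeply and carefully about the user's request. Compose your thoughts "
    "about the user's prompt between <think> and </think> tags, then output the "
    "final answer based on your thoughts."
)

def build_messages(user_prompt: str) -> list:
    """Assemble a standard chat 'messages' list with the thinking system prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]
```
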
Thinking Activation: JINJA "Regular" and "Thinking" TEMPLATES:
There is also an option to use "chat-template-thinking.jinja" template (in place of the regular "chat-template.jinja").
Simply rename the default "chat-template.jinja" to another name, then rename "chat-template-thinking.jinja" to "chat-template.jinja" to use it in source and/or quanting.
You can also edit "chat-template-thinking.jinja" in Notepad (or any text editor) to adjust the "thinking system prompt" (at the very top of the script).
Using the "thinking system prompt" or "chat-template-thinking.jinja" is useful if your application requires always-on thinking, if your use case(s) do not always activate thinking, and so on.
Generally "thinking" will activate automatically due to the fine tuning; however, in some cases it will not, requiring a system prompt, the thinking Jinja template, and/or "think deeply:" (prompt here).
Note that you can use "chat-template-thinking.jinja" with other system prompts too.
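The rename steps above can be sketched as follows. The filenames come from this card; the working directory is assumed to be the model's source folder, and placeholder files are created here only so the snippet runs standalone:

```python
import os

# Stand-ins so the snippet runs standalone; in a real checkout these exist.
for name in ("chat-template.jinja", "chat-template-thinking.jinja"):
    if not os.path.exists(name):
        open(name, "w").close()

# Keep the original template under a different name,
# then promote the thinking template to the default name.
os.replace("chat-template.jinja", "chat-template-default.jinja")
os.replace("chat-template-thinking.jinja", "chat-template.jinja")
```
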
Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:
In "KoboldCpp", "oobabooga/text-generation-webui", or "Silly Tavern":
Set the "Smoothing_factor" to 1.5
- in KoboldCpp: Settings -> Samplers -> Advanced -> "Smooth_F"
- in text-generation-webui: parameters -> lower right.
- in Silly Tavern this is called "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
OTHER OPTIONS:
Increase rep pen to between 1.1 and 1.15 (you don't need to do this if you use "smoothing_factor").
If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
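As a rough sketch of what "Quadratic Sampling" / "smoothing" does to the logits, here is one common formulation, centered on the top logit. This is illustrative only; back-ends may implement the transform differently:

```python
# Sketch of quadratic sampling ("smoothing_factor"), one common formulation:
# logits are pulled down quadratically the further they sit below the top
# logit, reshaping the distribution while keeping the ranking intact.
# Illustrative assumption; actual back-end formulas may differ.
def smooth_logits(logits, smoothing_factor):
    top = max(logits)
    return [top - smoothing_factor * (top - x) ** 2 for x in logits]

smoothed = smooth_logits([2.0, 1.0, -1.0], smoothing_factor=1.5)
```

Note that the top logit is unchanged by this transform, and the relative ordering of candidates is preserved.
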
Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers
This is a "Class 1" model.
For all settings used for this model (including specifics for its "class"), example generations, and an advanced settings guide (which often addresses model issues and covers methods to improve performance for all use cases, including chat and roleplay), please see:
You can see all parameters used for generation, plus advanced parameters and samplers to get the most out of this model, here: