# Ameena-Qwen3-8B-Tajik-v3
A fully fine-tuned Qwen3-8B model trained for 3 epochs on 10B+ tokens of native Tajik, Uzbek, and Russian educational content. It powers the core AI tutor on Ameena.tj, Central Asia's first sovereign learning platform.
## Model Details
- Developed by: Saidzoda AI Research Lab (IT Park Tajikistan)
- Base model: Qwen/Qwen3-8B
- Training data: 10B+ tokens from textbooks, exams, literature, and technical manuals
- Languages: Tajik (primary), Uzbek, Russian, English
- Epochs: 3
- License: Apache 2.0
## Use Cases
- AI-powered course generation
- Step-by-step tutoring (explains, doesn’t solve)
- Homework analysis & certification
- Offline inference on consumer devices (a quantized-loading sketch follows this list)
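
Running an 8B model on consumer hardware usually means quantizing it. Below is a minimal sketch of 4-bit loading with bitsandbytes; the quantization settings are illustrative assumptions, not a deployment recipe published with this model.

```python
# Minimal 4-bit loading sketch for consumer GPUs (requires bitsandbytes).
# The quantization settings below are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "SaidzodaEng/Ameena_Qwen3-8B_e3"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # roughly 5-6 GB of VRAM for an 8B model
    bnb_4bit_quant_type="nf4",              # NormalFloat4 weight quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in BF16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```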
## Training Infrastructure
- Hardware: NVIDIA H200
- Framework: Hugging Face Transformers + Accelerate
- Precision: BF16 mixed precision (a configuration sketch follows this list)
- Duration: ~220 hours
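
For orientation, here is a minimal sketch of how a BF16 full fine-tune might be configured with Transformers' `Trainer` (which runs on Accelerate under the hood). Only `bf16=True` and the 3 epochs come from this card; every other hyperparameter, and the dataset preparation, is an assumption for illustration.

```python
# Illustrative BF16 full fine-tuning setup. Hyperparameters are assumptions;
# only bf16=True and num_train_epochs=3 reflect this card.
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")

args = TrainingArguments(
    output_dir="ameena-qwen3-8b",
    num_train_epochs=3,              # from this card
    bf16=True,                       # BF16 mixed precision, from this card
    per_device_train_batch_size=4,   # assumption
    gradient_accumulation_steps=16,  # assumption
    learning_rate=2e-5,              # assumption
    gradient_checkpointing=True,     # assumption; trades compute for memory
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # placeholder: your tokenized multilingual corpus
)
trainer.train()
```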
## Environmental Impact
- Cloud Provider: Runpod
- Region: europe-west4
- Carbon Emitted: ~2.2 kg CO₂eq, estimated via the ML CO2 Impact Calculator (a back-of-the-envelope breakdown follows)
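
The calculator's estimate is essentially power draw × runtime × regional carbon intensity. The snippet below reproduces that arithmetic; the H200 power draw and the europe-west4 grid intensity are assumed values for illustration, not the exact calculator inputs.

```python
# Back-of-the-envelope carbon estimate:
# emissions (kg CO2eq) = GPU power (kW) * hours * grid intensity (kg CO2eq/kWh)
gpu_power_kw = 0.7        # assumed H200 board power, ~700 W
hours = 220               # training duration from this card
grid_intensity = 0.014    # assumed kg CO2eq/kWh for a low-carbon region

emissions_kg = gpu_power_kw * hours * grid_intensity
print(f"~{emissions_kg:.1f} kg CO2eq")  # ~2.2 kg CO2eq
```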
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "SaidzodaEng/Ameena_Qwen3-8B_e3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # spread weights across available devices
    torch_dtype="auto",   # load in the checkpoint's native precision
)
```
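
To generate a response, the standard chat-template flow for Qwen3-style models should work; the tutoring system prompt below is only an illustration, not the prompt used in production on Ameena.tj.

```python
# Illustrative chat-style generation, continuing from the snippet above.
messages = [
    {"role": "system", "content": "You are a patient tutor. Explain step by step; do not just hand over the answer."},
    {"role": "user", "content": "Explain how to solve 3x + 5 = 20."},  # prompts may also be in Tajik, Uzbek, or Russian
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```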
## Out-of-Scope Use
- Not for medical, legal, or financial advice
- Not intended for high-stakes decision-making
## Citation
APA:

Saidzoda AI Research Lab. (2025). *Ameena_Qwen3-8B_e3* [Large language model]. Hugging Face. https://huggingface.co/SaidzodaEng/Ameena_Qwen3-8B_e3
BibTeX:

```bibtex
@misc{saidzoda_ameena_qwen3_2025,
  author       = {{Saidzoda AI Research Lab}},
  title        = {Ameena-Qwen3-8B-Tajik-v3},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/SaidzodaEng/Ameena_Qwen3-8B_e3}}
}
```