---
license: apache-2.0
datasets:
- open-r1/Mixture-of-Thoughts
language:
- en
base_model:
- prithivMLmods/Qwen3-0.6B-ft-bf16
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- code
- science
- math
- moe
---

![12.png](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F65bb837dbfb878f46c77de4c%2FFAnKrTFpUHZbu9J0fLKCL.png)

# **Eta-Aurigae-0.6B-Echelon1**

> **Eta-Aurigae-0.6B-Echelon1** is a compact, efficient model specialized in **science, factual accuracy**, and **structured reasoning**. Fine-tuned from **Qwen3-0.6B** on the **MoT (Mixture of Thoughts)** dataset, which targets scientific understanding and expert factual domains, it delivers high-precision outputs for STEM education, tutoring, and analytical thinking in resource-constrained environments.

> \[!note]
> GGUF: [https://huggingface.co/prithivMLmods/Eta-Aurigae-0.6B-Echelon1-GGUF](https://huggingface.co/prithivMLmods/Eta-Aurigae-0.6B-Echelon1-GGUF)

---

## **Key Features**

1. **MoT Fine-Tuning for Science & Facts**
   Trained on a **Mixture of Thoughts** dataset emphasizing scientific accuracy, explanatory depth, and structured reasoning across biology, physics, chemistry, and factual domains.

2. **Scientific Precision in a Small Footprint**
   Delivers clear, step-by-step reasoning for scientific problems, making it well suited to students, educators, and lightweight educational tools.

3. **Factually Consistent Output Generation**
   Optimized for **high factual alignment** and structured explanations, making it reliable for knowledge recall, concept breakdowns, and factual analysis.

4. **Supports Markdown, LaTeX, and JSON**
   Outputs clean, structured formats such as **Markdown**, **LaTeX**, and **JSON**, useful for technical documentation and educational content.

5. **Multilingual Science-Aware Responses**
   Handles factual content in 20+ languages, especially in academic and technical contexts.

6. **Lightweight and Inference-Ready**
   Runs efficiently on **CPUs**, **low-VRAM GPUs**, and **offline edge deployments** without sacrificing factual clarity.

---

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Eta-Aurigae-0.6B-Echelon1"

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What causes the northern lights (Aurora Borealis)? Explain in simple terms."

messages = [
    {"role": "system", "content": "You are a science tutor that explains complex concepts clearly."},
    {"role": "user", "content": prompt}
]

# Build the chat-formatted prompt and tokenize it
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

---
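## **Quickstart with llama.cpp (GGUF)**

For CPU-only or edge deployments, the GGUF build linked above can be run through llama.cpp bindings such as `llama-cpp-python`. The snippet below is a minimal sketch rather than an official recipe: the quantization filename and the context and sampling settings are illustrative assumptions, so check the GGUF repository for the actual file names.

```python
# Minimal llama-cpp-python sketch (pip install llama-cpp-python).
# Assumption: a quantized GGUF file has already been downloaded locally; the
# filename below is illustrative and may differ from the files in the GGUF repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Eta-Aurigae-0.6B-Echelon1.Q8_0.gguf",  # hypothetical local filename
    n_ctx=4096,      # context window for this session
    n_gpu_layers=0,  # 0 = CPU-only; raise if a GPU is available
)

messages = [
    {"role": "system", "content": "You are a science tutor that explains complex concepts clearly."},
    {"role": "user", "content": "Why does ice float on water?"},
]

# create_chat_completion formats the messages with a chat template (taken from
# the GGUF metadata when available) and returns an OpenAI-style response dict.
out = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.7)
print(out["choices"][0]["message"]["content"])
```

The same file can also be served with llama.cpp's `llama-cli` or `llama-server` binaries if Python is not available on the target device.

---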
## **Intended Use**

* Science education and fact-based tutoring
* Concept explanations in physics, biology, and chemistry
* Structured technical content generation (e.g., LaTeX, Markdown)
* Deployment in low-resource, educational, or mobile scenarios
* Lightweight inference with high factual fidelity

---

## **Limitations**

* Not optimized for general conversation or creative writing
* Short context limits multi-document scientific reasoning
* Performance dips in abstract reasoning outside the scientific scope
* Not tuned for code or free-form generation

---

## **References**

1. [Qwen2.5 Technical Report (2024)](https://arxiv.org/pdf/2412.15115)
2. [Mixture of Thoughts Dataset](https://huggingface.co/datasets/open-r1/Mixture-of-Thoughts)
3. [YaRN: Efficient Context Extension for LLMs](https://arxiv.org/pdf/2309.00071)