THAU 7B - Cognitive AI Assistant

Thinking Human-like Artificial Understanding

THAU 7B is a fine-tuned version of Qwen2.5-7B-Instruct, specialized in cognitive reasoning, code generation, and autonomous agent capabilities.

Model Details

  • Base Model: Qwen/Qwen2.5-7B-Instruct
  • Training Method: LoRA (r=16, alpha=32); see the configuration sketch below
  • Parameters: 7.6B
  • Context Length: 4096 tokens
  • Languages: English, Spanish
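
For reference, the training setup above could be expressed as a peft LoraConfig along these lines. This is a sketch based only on the reported hyperparameters (r=16, alpha=32); the target_modules and lora_dropout values are assumptions (typical choices for Qwen2-style attention layers), not taken from the card.

from peft import LoraConfig

# Sketch matching the reported LoRA hyperparameters (r=16, alpha=32).
# target_modules and lora_dropout are assumed, not stated on the card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    lora_dropout=0.05,  # assumed
    task_type="CAUSAL_LM",
)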

Capabilities

  • Code Generation: Full
  • Chain of Thought: Full
  • Tool Calling (MCP): Full
  • SVG Generation: Full
  • Accounting/Finance: Full
  • Multi-language: Spanish/English

Training Data

  • 677 unique training examples across 8 categories, including:
  • Programming: Python, JavaScript, Java, Rust, Go, SQL
  • Reasoning: Step-by-step problem solving
  • DevOps: CI/CD, Docker, Kubernetes
  • Accounting: Double-entry bookkeeping, IFRS

Usage

With Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "luepow/thau-7b",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("luepow/thau-7b")

messages = [
    {"role": "system", "content": "You are THAU, a cognitive AI assistant."},
    {"role": "user", "content": "Explain Python decorators with examples."}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
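
Note that temperature only takes effect when sampling is enabled (do_sample=True); with the default greedy decoding it is silently ignored.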

With Ollama

ollama run luepow/thau-7b
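
If you prefer to stay in Python, the official ollama client can talk to the same model. A minimal sketch, assuming the model has already been pulled locally:

import ollama

# Chat with the locally served model via the ollama Python client.
response = ollama.chat(
    model="luepow/thau-7b",
    messages=[{"role": "user", "content": "Explain Python decorators with examples."}],
)
print(response["message"]["content"])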

Tool Calling

THAU emits JSON-based tool invocations wrapped in <tool_call> tags:

<tool_call>{"name": "execute_python", "arguments": {"code": "print(2+2)"}}</tool_call>
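
On the application side, these tags can be extracted and dispatched with a small parser. The sketch below is illustrative, not part of the model: execute_python is a hypothetical handler, and exec-ing model-generated code should only ever happen inside a sandbox.

import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>(.*?)</tool_call>", re.DOTALL)

def execute_python(code: str) -> str:
    # Hypothetical handler. NEVER exec untrusted model output outside a sandbox.
    import contextlib, io
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code)
    return buf.getvalue()

TOOLS = {"execute_python": execute_python}

def dispatch_tool_calls(model_output: str) -> list[str]:
    # Find every <tool_call> tag, decode its JSON payload, and run the named tool.
    results = []
    for raw in TOOL_CALL_RE.findall(model_output):
        call = json.loads(raw)
        results.append(TOOLS[call["name"]](**call["arguments"]))
    return results

print(dispatch_tool_calls('<tool_call>{"name": "execute_python", "arguments": {"code": "print(2+2)"}}</tool_call>'))
# ['4\n']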

Limitations

  • No vision/multimodal capabilities
  • No native thinking tokens; chain-of-thought is elicited through prompting (see the prompt sketch below)
  • Output quality on complex tasks depends on careful prompt engineering
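
Since there are no native thinking tokens, chain-of-thought has to be requested explicitly in the prompt. The exact wording below is only illustrative:

messages = [
    {"role": "system", "content": "You are THAU. Reason step by step before giving a final answer."},
    {"role": "user", "content": "A train covers 120 km in 1.5 hours. What is its average speed?"},
]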

License

Apache 2.0

Citation

@misc{thau-7b,
  author = {Luis Perez},
  title = {THAU 7B: Cognitive AI Assistant},
  year = {2024},
  publisher = {HuggingFace},
  url = {https://huggingface.co/luepow/thau-7b}
}

Acknowledgments

  • Qwen Team for the excellent base model
  • Anthropic's Claude for AI pair programming assistance
  • TinyLlama Team for inspiration