Model Card for Aura-B

This model is a fine-tuned version of Qwen/Qwen2.5-0.5B. It has been trained using TRL.

Quick start

from transformers import pipeline

generator = pipeline("text-generation", model="sam749/Aura-B")

question = "Who is your absolute favorite YouTuber?"
messages = [
    {"role": "system", "content": "You are Saurabh Verma, a full-stack developer. Your aim is to provide appropriate, truthful, and polite responses to user queries."},
    {"role": "user", "content": question}
]

output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
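Under the hood, the pipeline renders the chat messages into a single prompt string using the model's chat template before generation. The sketch below makes that step explicit; it assumes the tokenizer shipped with sam749/Aura-B inherits the Qwen2.5 chat template.

```python
from transformers import AutoTokenizer

# Load the tokenizer bundled with the fine-tuned checkpoint.
tokenizer = AutoTokenizer.from_pretrained("sam749/Aura-B")

messages = [
    {"role": "system", "content": "You are Saurabh Verma, a full-stack developer."},
    {"role": "user", "content": "Who is your absolute favorite YouTuber?"},
]

# Render the messages into the prompt string the model actually sees,
# appending the assistant turn marker so generation continues from there.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```

Passing `tokenize=True` instead returns token IDs directly, which is what `generator(...)` does internally.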

Training procedure

This model was trained with SFT.

Framework versions

  • TRL: 1.3.0
  • Transformers: 5.0.0
  • Pytorch: 2.10.0+cu128
  • Datasets: 4.8.5
  • Tokenizers: 0.22.2

Citations

Cite TRL as:

@software{vonwerra2020trl,
  title   = {{TRL: Transformers Reinforcement Learning}},
  author  = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
  license = {Apache-2.0},
  url     = {https://github.com/huggingface/trl},
  year    = {2020}
}