Model Card for qwen25-dpo

This model is a fine-tuned version of Qwen/Qwen2.5-1.5B-Instruct. It has been trained using TRL.

Quick start

from transformers import pipeline

# Load the fine-tuned model as a chat-capable text-generation pipeline.
question = "Explain diabetes simply"
generator = pipeline("text-generation", model="azherali/Qwen2.5-1.5B-Instruct-dpo", device="cuda")

# Pass the prompt as a chat message; return_full_text=False keeps only the new tokens.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
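Equivalently, the model can be loaded directly with transformers and the tokenizer's chat template. A minimal sketch; the generation settings here are illustrative, not a recommendation from this card:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "azherali/Qwen2.5-1.5B-Instruct-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the chat-formatted prompt and move it to the model's device.
messages = [{"role": "user", "content": "Explain diabetes simply"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Generate and decode only the newly produced tokens.
output_ids = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))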

Training procedure

This model was trained with DPO, a method introduced in "Direct Preference Optimization: Your Language Model is Secretly a Reward Model" (Rafailov et al., 2023).
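For reference, DPO optimizes the following objective over preference triples (prompt x, chosen response y_w, rejected response y_l), with policy π_θ, a frozen reference policy π_ref, and temperature β:

$$\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]$$

A minimal training sketch with TRL's DPOTrainer follows. The dataset and hyperparameters are placeholders for illustration, since the card does not state which were actually used:

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "Qwen/Qwen2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference dataset with "prompt"/"chosen"/"rejected" columns;
# the dataset actually used for this model is not stated in the card.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

# beta is the DPO temperature above; 0.1 is TRL's default.
training_args = DPOConfig(output_dir="qwen25-dpo", beta=0.1)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()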

Framework versions

  • TRL: 0.26.2
  • Transformers: 4.57.3
  • PyTorch: 2.8.0+cu126
  • Datasets: 4.4.1
  • Tokenizers: 0.22.1
