
πŸ¦™ Mistral-7B Instruct – Fine-Tuned on Databricks Dolly 15k

This model is a fine-tuned version of Mistral-7B-Instruct trained using a subset of the Databricks Dolly 15k instruction dataset.
Training was performed with LoRA using the trl library’s SFTTrainer.
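Because training uses LoRA, only a small fraction of parameters are updated. A back-of-the-envelope sketch of why (illustrative only: the hidden size 4096 matches Mistral-7B, but the rank below is an assumed example, not the value used in this run):

```python
# LoRA replaces a full weight update (d_out x d_in) with two low-rank
# factors A (r x d_in) and B (d_out x r), so only r * (d_in + d_out)
# parameters are trained per adapted matrix.

def lora_trainable_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters for one LoRA-adapted linear layer."""
    return r * (d_in + d_out)

def full_update_params(d_in: int, d_out: int) -> int:
    """Parameters touched by fully fine-tuning the same layer."""
    return d_in * d_out

d = 4096  # Mistral-7B hidden size
r = 16    # assumed rank for illustration; not this run's actual setting
lora = lora_trainable_params(d, d, r)
full = full_update_params(d, d)
print(f"LoRA: {lora:,} vs full: {full:,} ({100 * lora / full:.2f}%)")
```

For a square 4096x4096 projection at rank 16, the adapter trains well under 1% of the parameters a full fine-tune would touch.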


πŸš€ Model Purpose

The goals of this fine-tuning run are to:

  • Improve instruction-following abilities
  • Enhance short-form question-answering
  • Produce more concise and helpful outputs
  • Evaluate ROUGE improvements on Dolly-style tasks

This model is not intended to surpass top-tier instruct models; rather, it provides a lightweight, reproducible fine-tune for research and experimentation.
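Since one stated goal is measuring ROUGE improvements, here is a minimal hand-rolled sketch of ROUGE-1 (unigram-overlap F1) to make the metric concrete. This is an illustration only, not the scorer used for any reported evaluation; libraries such as `evaluate` or `rouge-score` are the usual choice in practice:

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """ROUGE-1 F1: unigram overlap between a prediction and a reference."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    # Clipped overlap: each reference unigram can be matched at most
    # as many times as it appears in the reference.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat", "the cat sat on the mat"))  # ≈ 0.667
```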


🧠 Base Model

Mistral-7B-Instruct

  • 7B parameters
  • Optimized for instruction following
  • Strong general capabilities
  • License: Apache 2.0

πŸ“š Training Dataset

Dataset: databricks/databricks-dolly-15k
Split: 95% train / 5% test
Seed: 4016
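With a fixed seed, the 95/5 split is deterministic and reproducible. The actual run presumably used `datasets.Dataset.train_test_split(test_size=0.05, seed=4016)`; the helper below just illustrates the idea with a seeded shuffle of indices (15,011 is Dolly's row count):

```python
import random

def seeded_split(n_examples: int, test_frac: float, seed: int):
    """Deterministically split example indices into train/test sets."""
    indices = list(range(n_examples))
    random.Random(seed).shuffle(indices)  # same seed -> same permutation
    n_test = int(n_examples * test_frac)
    return indices[n_test:], indices[:n_test]

train_idx, test_idx = seeded_split(15011, 0.05, seed=4016)
print(len(train_idx), len(test_idx))  # 14261 750
```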

Dolly is a small, mixed-quality instruction dataset containing multiple task types:

  • open-QA
  • summarization
  • classification
  • brainstorming
  • hypothetical reasoning

πŸ‹οΈ Training Details

Hyperparameters
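The hyperparameters for this run are not recorded in the card. For orientation, a typical LoRA SFT setup with `peft` and `trl` looks like the sketch below; every value here is a hypothetical placeholder, not what this model was actually trained with:

```python
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# All values below are hypothetical placeholders, NOT this run's settings.
peft_config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # projections receiving adapters
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    output_dir="dolly-mistral-lora",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    logging_steps=10,
)

# trainer = SFTTrainer(
#     model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed base checkpoint
#     args=training_args,
#     train_dataset=train_ds,  # the 95% Dolly split
#     peft_config=peft_config,
# )
# trainer.train()
```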

Model repository: Alpha7a/Databricks-Dolly15k-Mistral-Finetuning (fine-tuned from Mistral-7B-Instruct)
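A minimal usage sketch. The prompt template below is an assumption about the Dolly-style format this model saw during training; verify it against the actual training format before relying on it:

```python
def format_dolly_prompt(instruction: str, context: str = "") -> str:
    """Assumed Dolly-style prompt template (verify against training format)."""
    if context:
        return (f"### Instruction:\n{instruction}\n\n"
                f"### Context:\n{context}\n\n### Response:\n")
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

def generate(instruction: str, context: str = "") -> str:
    # Heavy import kept local so the prompt helper works without transformers.
    from transformers import pipeline
    pipe = pipeline(
        "text-generation",
        model="Alpha7a/Databricks-Dolly15k-Mistral-Finetuning",
    )
    out = pipe(format_dolly_prompt(instruction, context), max_new_tokens=128)
    return out[0]["generated_text"]

print(format_dolly_prompt("List three uses of LoRA."))
```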