# HACK_DOC (Medical Assistant)
This is a Llama-3-based medical assistant model. It was fine-tuned on the ChatDoctor-HealthCareMagic-100k dataset to provide empathetic, doctor-style responses to medical queries.
## Model Details
- Base Model: unsloth/llama-3-8b-instruct-bnb-4bit
- Adapter Type: LoRA (Rank 64)
- Training Framework: Unsloth / PyTorch
- Trigger Phrase: The model is trained to start its responses with: "THANKS FOR ASKING HACK_DOC. Here is my answer:" (see the sketch below for stripping this prefix from raw output).
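Since every reply is expected to open with this fixed trigger phrase, a caller may want to remove it before showing the answer to a user. The helper below is a minimal sketch and not part of the original card; the function name is illustrative.

```python
# Hypothetical helper (not from the model card): drop the fixed trigger phrase
# from a generated reply before displaying it.
TRIGGER = "THANKS FOR ASKING HACK_DOC. Here is my answer:"

def strip_trigger(reply: str) -> str:
    reply = reply.strip()
    if reply.startswith(TRIGGER):
        reply = reply[len(TRIGGER):].lstrip()
    return reply
```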
## How to use
This repository contains only the LoRA adapter, so you must load it on top of the base Llama-3 model with `peft` and `transformers`:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# 1. Load the base model (the checkpoint is already quantized to 4-bit with bitsandbytes)
base_model_name = "unsloth/llama-3-8b-instruct-bnb-4bit"
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    load_in_4bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# 2. Load the HACK_DOC LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "shri171981/genai_hack_doc")

# 3. Run inference
inputs = tokenizer("I have a severe headache.", return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
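The snippet above passes the question to the tokenizer as raw text. Because the base checkpoint is an instruct model, formatting the prompt with the Llama-3 chat template may track the expected input format more closely. The sketch below is an optional variation under that assumption, not part of the original card: it reuses `model`, `tokenizer`, and `base_model` from the code above, and the question string is only an example.

```python
# Sketch (assumes `model`, `tokenizer`, and `base_model` from the snippet above).
# Wrap the question in the Llama-3 chat template before generating.
messages = [{"role": "user", "content": "I have a severe headache."}]

prompt_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model answers next
    return_tensors="pt",
).to(base_model.device)

outputs = model.generate(input_ids=prompt_ids, max_new_tokens=128)

# Decode only the newly generated tokens, dropping the prompt portion.
print(tokenizer.decode(outputs[0][prompt_ids.shape[-1]:], skip_special_tokens=True))
```

If the reply begins with the trigger phrase, the helper sketched in the Model Details section can strip it before display.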