---
language: en
license: apache-2.0
base_model: unsloth/Qwen3-4B-Thinking-2507
tags:
  - medical
  - llm
  - qwen3
  - thinking-model
  - entity-extraction
  - relation-extraction
  - lora
  - peft
library_name: transformers
pipeline_tag: text-generation
---

# Medical-NER-Qwen-4B-Thinking

## Model Description

This model is a LoRA fine-tune of Qwen3-4B-Thinking, specialized for extracting medical entities and relationships from medical literature.

## Model Details

- **Base Model:** unsloth/Qwen3-4B-Thinking-2507
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation); a merging sketch follows this list
- **Domain:** Medical Literature Analysis
- **Tasks:** Entity Recognition, Relationship Extraction
- **Language:** English
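
Since the published weights are a LoRA adapter rather than a full checkpoint, the adapter can optionally be folded into the base model to remove the per-forward adapter overhead. Below is a minimal sketch using the standard PEFT `merge_and_unload` API; the output directory name is a hypothetical example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base checkpoint the adapter was trained against
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-4B-Thinking-2507",
    torch_dtype="auto",
    device_map="auto",
)

# Attach the LoRA adapter, then fold its low-rank updates into the base
# weights; merge_and_unload() returns a plain transformers model
model = PeftModel.from_pretrained(base, "xingqiang/Medical-NER-Qwen-4B-Thinking")
merged = model.merge_and_unload()

# Hypothetical local path for the standalone merged checkpoint
merged.save_pretrained("./medical-ner-qwen-4b-merged")
AutoTokenizer.from_pretrained(
    "xingqiang/Medical-NER-Qwen-4B-Thinking"
).save_pretrained("./medical-ner-qwen-4b-merged")
```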

## Performance Metrics

| Metric    | Entity Extraction | Relationship Extraction |
|-----------|-------------------|--------------------------|
| Precision | 0.000             | 0.000                    |
| Recall    | 0.000             | 0.000                    |
| F1-Score  | 0.000             | 0.000                    |
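
For reference, metrics of this kind are typically computed as exact-match micro precision, recall, and F1 over predicted versus gold items. The sketch below illustrates that standard scheme; it is an assumption for illustration, not necessarily the exact protocol behind the table above:

```python
def prf1(predicted: set, gold: set) -> tuple[float, float, float]:
    """Micro precision/recall/F1 over exact-match items, e.g. (span, type) pairs."""
    tp = len(predicted & gold)  # true positives: items in both sets
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example with hypothetical extractions
pred = {("hepatitis C virus", "Pathogen"), ("liver", "Anatomy")}
gold = {("hepatitis C virus", "Pathogen"), ("chronic liver infection", "Disease")}
print(prf1(pred, gold))  # (0.5, 0.5, 0.5)
```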

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model (must match the adapter's base checkpoint)
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-4B-Thinking-2507",
    torch_dtype="auto",
    device_map="auto"
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "xingqiang/Medical-NER-Qwen-4B-Thinking")
tokenizer = AutoTokenizer.from_pretrained("xingqiang/Medical-NER-Qwen-4B-Thinking")

# Generate medical analysis
text = "Hepatitis C virus causes chronic liver infection."
messages = [
    {"role": "user", "content": f"Extract medical entities and relationships from: {text}"}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
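
Because the base model is a thinking variant, `result` will normally contain a reasoning trace wrapped in `<think>...</think>` ahead of the final extraction. A minimal post-processing sketch, assuming the think tags survive decoding (the usual Qwen3 behavior):

```python
# Keep only the text after the reasoning trace; if no closing tag is
# present, split() returns the full string unchanged
answer = result.split("</think>")[-1].strip()
print(answer)
```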