Instructions for using MBZUAI/MedMO-8B with libraries, inference providers, and local apps.
How to use MBZUAI/MedMO-8B with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="MBZUAI/MedMO-8B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("MBZUAI/MedMO-8B")
model = AutoModelForImageTextToText.from_pretrained("MBZUAI/MedMO-8B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
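The snippets above use the stock demo image; the same chat format works with medical inputs. A minimal sketch with a hypothetical local file (transformers' image loader accepts local paths as well as URLs in the "url" field, but verify this against your installed version):

# Hypothetical local file; substitute any medical image you have on disk.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "chest_xray.png"},
            {"type": "text", "text": "What abnormalities are present in this chest X-ray?"},
        ],
    },
]
print(pipe(text=messages))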
How to use MBZUAI/MedMO-8B with vLLM:
Install from pip and serve the model:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "MBZUAI/MedMO-8B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MBZUAI/MedMO-8B",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
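Because the vLLM server exposes an OpenAI-compatible API, you can also call it from Python with the official openai client instead of curl. A minimal sketch (the api_key value is arbitrary for a local server):

from openai import OpenAI  # pip install openai

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="MBZUAI/MedMO-8B",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)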
How to use MBZUAI/MedMO-8B with SGLang:
Install from pip and serve the model:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "MBZUAI/MedMO-8B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MBZUAI/MedMO-8B",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'

Use Docker images:
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "MBZUAI/MedMO-8B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MBZUAI/MedMO-8B",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'

- Docker Model Runner
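The SGLang endpoint is OpenAI-compatible as well, so the same request works from Python. A minimal sketch using requests against the port 30000 server started above:

import requests

payload = {
    "model": "MBZUAI/MedMO-8B",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
            ],
        }
    ],
}
response = requests.post(
    "http://localhost:30000/v1/chat/completions", json=payload, timeout=300
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])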
How to use MBZUAI/MedMO-8B with Docker Model Runner:
docker model run hf.co/MBZUAI/MedMO-8B
MedMO-8B-Next: Grounding and Understanding Multimodal Large Language Model for Medical Images
MedMO-8B-Next is the latest and most powerful iteration of the MedMO family — an open-source multimodal foundation model purpose-built for comprehensive medical image understanding and grounding. Trained on 26M+ diverse medical samples across 45 datasets, MedMO-8B-Next achieves state-of-the-art performance across all major medical imaging benchmarks, outperforming both open-source and closed-source competitors on VQA, Text QA, grounding, and report generation tasks.
🏆 Benchmark Performance
VQA & Text QA Results
MedMO-8B-Next sets a new state-of-the-art across the board, achieving the highest average scores on both medical VQA and Text QA benchmarks — surpassing strong baselines including Lingshu-7B and Fleming-VL-8B.
OMIVQA = OmniMedVQA · MedXQA = MedXpertQA · Medbullets reported as op4/op5
Medical VQA Benchmarks
| Model | MMMU-Med | VQA-RAD (closed/all) | SLAKE (closed/all) | PathVQA | PMC-VQA | OmniMedVQA | MedXpertQA | Avg. |
|---|---|---|---|---|---|---|---|---|
| Lingshu-7B | 54.0 | 77.2 / 43.0 | 82.4 / 33.2 | 41.9 | 54.2 | 82.9 | 26.9 | 55.1 |
| Fleming-VL-8B | 63.3 | 78.4 / 56.4 | 86.9 / 80.0 | 56.5 | 64.3 | 88.2 | 21.6 | 66.1 |
| MediX-R1-8B | 63.3 | 75.2 / 51.6 | 70.3 / 54.4 | 41.0 | 55.3 | 73.8 | 24.9 | 57.1 |
| MedMO-4B | 54.6 | 50.9 / 35.0 | 41.0 / 30.0 | 42.4 | 50.6 | 79.7 | 24.8 | 45.4 |
| MedMO-8B | 64.6 | 72.3 / 64.7 | 70.6 / 70.0 | 56.3 | 59.4 | 84.8 | 26.2 | 63.2 |
| MedMO-4B-Next | 58.7 | 79.7 / 59.6 | 78.0 / 74.0 | 73.3 | 75.7 | 90.6 | 27.0 | 68.5 |
| MedMO-8B-Next | 69.3 | 86.4 / 68.0 | 83.0 / 81.6 | 56.3 | 74.1 | 93.3 | 42.9 | 72.7 |
Medical Text QA Benchmarks
| Model | MMLU-Med | PubMedQA | MedMCQA | MedQA | Medbullets (op4/op5) | MedXpertQA | SGPQA | Avg. |
|---|---|---|---|---|---|---|---|---|
| Lingshu-7B | 69.6 | 75.8 | 56.3 | 63.5 | 62.0 / 53.8 | 16.4 | 27.5 | 53.1 |
| Fleming-VL-8B | 71.8 | 74.0 | 51.8 | 53.7 | 40.5 / 37.3 | 12.1 | 24.9 | 45.7 |
| MediX-R1-8B | 79.0 | 73.4 | 60.1 | 85.8 | 55.1 / 47.0 | 14.4 | 34.3 | 56.1 |
| MedMO-4B | 75.7 | 78.0 | 58.0 | 78.5 | 57.5 / 47.7 | 16.4 | 29.4 | 55.1 |
| MedMO-8B | 81.0 | 77.6 | 65.0 | 84.3 | 66.5 / 60.2 | 19.9 | 36.0 | 61.3 |
| MedMO-4B-Next | 74.8 | 78.2 | 58.1 | 78.3 | 57.4 / 47.6 | 16.5 | 29.5 | 55.0 |
| MedMO-8B-Next | 80.2 | 75.6 | 62.0 | 83.8 | 65.2 / 57.8 | 20.9 | 35.5 | 60.1 |
Bold = best result, underline = second-best result.
- Benchmarked on an AMD MI210 GPU.
Supported Imaging Modalities
| Domain | Modalities |
|---|---|
| Radiology | X-ray, CT, MRI, Ultrasound |
| Pathology | Whole-slide imaging, Microscopy |
| Ophthalmology | Fundus photography, OCT |
| Dermatology | Clinical skin images |
| Nuclear Medicine | PET, SPECT |
🚀 Quick Start
Installation
pip install transformers torch qwen-vl-utils

Note: device_map="auto" in the example below additionally requires accelerate, and attn_implementation="flash_attention_2" requires the flash-attn package; drop that argument to fall back to the default attention implementation.
Basic Usage
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
# Load model
model = Qwen3VLForConditionalGeneration.from_pretrained(
"MBZUAI/MedMO-8B-Next",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map="auto",
)
processor = AutoProcessor.from_pretrained("MBZUAI/MedMO-8B-Next")
# Prepare input
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "path/to/medical/image.png",
},
{"type": "text", "text": "What abnormalities are present in this chest X-ray?"},
],
}
]
# Process and generate
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
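The examples that follow reuse this exact generation loop. For brevity, a small wrapper (a hypothetical helper, not part of the released API; it assumes the model, processor, and process_vision_info objects loaded above) can encapsulate it:

def ask(image_path: str, prompt: str, max_new_tokens: int = 512) -> str:
    """Run one image + text query through the model loaded above."""
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},
                {"type": "text", "text": prompt},
            ],
        }
    ]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=True,
        return_tensors="pt",
    ).to(model.device)
    generated_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens, keeping only the newly generated answer.
    trimmed = generated_ids[0][inputs.input_ids.shape[-1]:]
    return processor.decode(trimmed, skip_special_tokens=True)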
Example: Disease Localization with Bounding Boxes
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "chest_xray.png"},
{"type": "text", "text": "Detect and localize all abnormalities in this image."},
],
}
]
# Example output:
# "Fractures <box>[[156, 516, 231, 607], [240, 529, 296, 581]]</box>"
Example: Radiology Report Generation
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "ct_scan.png"},
{"type": "text", "text": "Generate a detailed radiology report for this CT scan."},
],
}
]
# MedMO-8B-Next generates comprehensive clinical reports with findings and impressions
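With the hypothetical ask() helper sketched under Basic Usage, the same request becomes a one-liner:

report = ask("ct_scan.png", "Generate a detailed radiology report for this CT scan.")
print(report)  # Typically includes findings and an impression section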
📦 Model Family
| Model | Parameters | Best For |
|---|---|---|
| MedMO-8B-Next | 8B | State-of-the-art accuracy across all tasks (recommended) |
| MedMO-4B-Next | 4B | Second-best accuracy; suited to resource-constrained environments |
| MedMO-8B | 8B | Previous generation |
| MedMO-4B | 4B | Resource-constrained environments |
📄 Citation
If you use MedMO in your research, please cite our paper:
@article{deria2026medmo,
title={MedMO: Grounding and Understanding Multimodal Large Language Model for Medical Images},
author={Deria, Ankan and Kumar, Komal and Dukre, Adinath Madhavrao and Segal, Eran and Khan, Salman and Razzak, Imran},
journal={arXiv preprint arXiv:2602.06965},
year={2026}
}
📜 License
This project is licensed under the Apache License 2.0 — see the LICENSE file for details.