# ShieldGemma-2-4B-IT GGUF
This repository contains GGUF format conversions of google/shieldgemma-2-4b-it for use with llama.cpp and compatible inference engines.
## Model Description
ShieldGemma 2 is a 4 billion parameter multimodal model built on Gemma 3 for image safety classification. It evaluates images against safety policies to detect:
- Sexually Explicit content
- Dangerous Content
- Violence & Gore
This model takes images and text prompts as input and outputs safety classifications.
## Files
- `shieldgemma-2-4b-it.gguf` (7.3 GB) - F16 text/language model
- `mmproj-shieldgemma-2-4b-it-F16.gguf` (812 MB) - F16 vision encoder (multimodal projector)
Both files are required for inference.
## Usage

### With llama.cpp
```bash
# Build llama.cpp with multimodal support
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --target llama-mtmd-cli

# Run inference
./build/bin/llama-mtmd-cli \
  -m shieldgemma-2-4b-it.gguf \
  --mmproj mmproj-shieldgemma-2-4b-it-F16.gguf \
  --image <your_image.jpg> \
  -p "You are a policy expert trying to help determine whether a generated or real image is in violation of the safety policy below. Your decision must very clearly follow the specific language used in the safety policy and should not include any interpretation outside of the safety policy.
<BEGIN_SAFETY_POLICY>
No Adult Material
</END_SAFETY_POLICY>
Does the image violate the above policy? Your answer must start with 'Yes' or 'No'." \
  -n 50 \
  --temp 0.1
```
The model will output "Yes" or "No" based on whether the image violates the specified policy.
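The prompt-and-parse flow above can be wrapped in a small helper when driving the model from application code. The following sketch is illustrative only: `build_policy_prompt` and `parse_verdict` are hypothetical names, not part of llama.cpp or the original model card, and the template simply mirrors the prompt shown above.

```python
# Illustrative sketch: fill the ShieldGemma 2 policy template and parse
# the model's leading 'Yes'/'No' verdict. Helper names are hypothetical.

PROMPT_TEMPLATE = (
    "You are a policy expert trying to help determine whether a generated or "
    "real image is in violation of the safety policy below. Your decision must "
    "very clearly follow the specific language used in the safety policy and "
    "should not include any interpretation outside of the safety policy.\n"
    "<BEGIN_SAFETY_POLICY>\n{policy}\n</END_SAFETY_POLICY>\n"
    "Does the image violate the above policy? Your answer must start with "
    "'Yes' or 'No'."
)


def build_policy_prompt(policy: str) -> str:
    """Fill the template with one specific safety policy."""
    return PROMPT_TEMPLATE.format(policy=policy.strip())


def parse_verdict(output: str) -> bool:
    """Return True if the model flagged a violation ('Yes'), False for 'No'."""
    answer = output.strip().lower()
    if answer.startswith("yes"):
        return True
    if answer.startswith("no"):
        return False
    raise ValueError(f"Unexpected model output: {output!r}")
```

In practice the string returned by `build_policy_prompt` would be passed as `-p` to `llama-mtmd-cli`, and the generated text fed to `parse_verdict`.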
## Quantization Details
These are F16 (16-bit floating point) conversions of the original model:
- Maintains high accuracy
- Suitable for systems with sufficient RAM/VRAM
- Can be further quantized (e.g. to Q4_K_M or Q8_0) with llama.cpp's `llama-quantize` tool
## Intended Use
ShieldGemma 2 is intended to be used as a safety content moderator for:
- Input filtering for vision-language models
- Output filtering for image generation systems
- Content moderation in user-facing applications
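A moderation pipeline typically checks an image against each policy separately and blocks it if any policy is violated. The sketch below assumes a caller-supplied `classify` function that invokes the GGUF model (e.g. via `llama-mtmd-cli` or a server endpoint); that function, and all names here, are assumptions for illustration, not an API shipped with this repository.

```python
# Illustrative moderation gate: one classifier call per policy; an image
# passes only if no policy is violated. `classify` is a stand-in for a
# call into the model and must be provided by the caller.
from typing import Callable, Dict, List

POLICIES: List[str] = [
    "No Sexually Explicit content",
    "No Dangerous Content",
    "No Violence & Gore",
]


def moderate(image_path: str,
             classify: Callable[[str, str], bool],
             policies: List[str] = POLICIES) -> Dict[str, bool]:
    """Return a per-policy violation map for one image."""
    return {policy: classify(image_path, policy) for policy in policies}


def is_allowed(verdicts: Dict[str, bool]) -> bool:
    """An image is allowed only if no policy was violated."""
    return not any(verdicts.values())
```

Running the three policies independently keeps each prompt short and makes the per-policy verdicts auditable.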
## License and Terms

Gemma is provided under and subject to the Gemma Terms of Use found at [ai.google.dev/gemma/terms](https://ai.google.dev/gemma/terms).

This model is a derivative work (a GGUF-format conversion) of the original ShieldGemma 2 model and is subject to the same terms.
### Key License Requirements
When using or redistributing this model:
- You must comply with the Gemma Terms of Use
- You must not use the model for prohibited uses as defined in the policy
- You must include this license notice in any redistribution
- You must comply with applicable laws and regulations
### Prohibited Uses
This model must not be used for:
- Violating intellectual property rights
- Child exploitation or illegal activities
- Generating harmful, hateful, or harassing content
- Creating misinformation or deceptive content
- Generating sexually explicit content for pornography
- Circumventing safety filters
See the full Prohibited Use Policy for details.
## Citation

Original model:
```bibtex
@misc{zeng2025shieldgemma2robusttractable,
  title={ShieldGemma 2: Robust and Tractable Image Content Moderation},
  author={Wenjun Zeng and Dana Kurniawan and Ryan Mullins and Yuchi Liu and Tamoghna Saha and Dirichi Ike-Njoku and Jindong Gu and Yiwen Song and Cai Xu and Jingjing Zhou and Aparna Joshi and Shravan Dheep and Mani Malek and Hamid Palangi and Joon Baek and Rick Pereira and Karthik Narasimhan},
  year={2025},
  eprint={2504.01081},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2504.01081},
}
```
## Model Card Contact

For questions about the original model, see [google/shieldgemma-2-4b-it](https://huggingface.co/google/shieldgemma-2-4b-it).
For questions about this GGUF conversion, please open an issue in this repository.
## Acknowledgements
- Original model by Google DeepMind
- GGUF conversion using llama.cpp
- Conversion enabled by modifications to llama.cpp's `convert_hf_to_gguf.py` to support the `ShieldGemma2ForImageClassification` architecture