ShieldGemma-2-4B-IT GGUF

This repository contains GGUF format conversions of google/shieldgemma-2-4b-it for use with llama.cpp and compatible inference engines.

Model Description

ShieldGemma 2 is a 4 billion parameter multimodal model built on Gemma 3 for image safety classification. It evaluates images against safety policies to detect:

  • Sexually Explicit content
  • Dangerous Content
  • Violence & Gore

This model takes images and text prompts as input and outputs safety classifications.

Files

  • shieldgemma-2-4b-it.gguf (7.3GB) - F16 text/language model
  • mmproj-shieldgemma-2-4b-it-F16.gguf (812MB) - F16 vision encoder (multimodal projector)

Both files are required for inference.
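If the files are not present locally, they can be fetched with the `huggingface-cli` tool from the `huggingface_hub` package. A minimal sketch; the repository id is taken from this model card, and the actual download command is shown commented out since it needs network access and Hub credentials:

```shell
# Requires: pip install -U "huggingface_hub[cli]"
repo="infil00p/shieldgemma-2-4b-it-GGUF"
model="shieldgemma-2-4b-it.gguf"
mmproj="mmproj-shieldgemma-2-4b-it-F16.gguf"

# Fetch both files into the current directory (uncomment to run):
# huggingface-cli download "$repo" "$model" "$mmproj" --local-dir .
echo "would fetch $model and $mmproj from $repo"
```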

Usage

With llama.cpp

# Build llama.cpp with multimodal support
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --target llama-mtmd-cli

# Run inference
./build/bin/llama-mtmd-cli \
  -m shieldgemma-2-4b-it.gguf \
  --mmproj mmproj-shieldgemma-2-4b-it-F16.gguf \
  --image <your_image.jpg> \
  -p "You are a policy expert trying to help determine whether a generated or real image is in violation of the safety policy below. Your decision must very clearly follow the specific language used in the safety policy and should not include any interpretation outside of the safety policy.

<BEGIN_SAFETY_POLICY>
No Adult Material
</END_SAFETY_POLICY>

Does the image violate the above policy? Your Answer must start with 'Yes' or 'No'." \
  -n 50 \
  --temp 0.1

The model will output "Yes" or "No" based on whether the image violates the specified policy.
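Because the reply is constrained to start with "Yes" or "No", the verdict is easy to extract in a wrapper script. A minimal sketch; the `reply` variable below is a stand-in for captured llama-mtmd-cli output, not real model output:

```shell
# Stand-in for: reply=$(./build/bin/llama-mtmd-cli ... 2>/dev/null)
reply="Yes, the image contains adult material as defined by the policy."

# Take the first word of the first line and strip trailing punctuation.
verdict=$(printf '%s\n' "$reply" | head -n 1 | cut -d' ' -f1 | tr -d '.,')

case "$verdict" in
  Yes) echo "policy violation" ;;
  No)  echo "no violation" ;;
  *)   echo "unparseable reply" >&2 ;;
esac
```

Anything that does not parse to a clean Yes/No is best treated as an error rather than silently mapped to either class.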

Quantization Details

These are F16 (16-bit floating point) conversions of the original model:

  • Maintains high accuracy
  • Suitable for systems with sufficient RAM/VRAM
  • Can be further quantized with llama.cpp quantization tools
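As a rough sanity check on file sizes, a GGUF file weighs about params × bits-per-weight / 8 bytes, and a smaller quantization can be produced with llama.cpp's llama-quantize tool. The ~4.5 bits per weight used for Q4_K_M below is an approximation, not a measured figure:

```shell
# Rough size estimate: params * bits-per-weight / 8.
# bpw is given in tenths of a bit so plain integer shell arithmetic works.
params=4000000000                          # 4B parameters
f16_bytes=$(( params * 160 / 10 / 8 ))     # F16: 16.0 bpw -> 8,000,000,000 bytes
q4_bytes=$((  params * 45  / 10 / 8 ))     # Q4_K_M: ~4.5 bpw -> ~2,250,000,000 bytes

echo "F16 ~ $f16_bytes bytes, Q4_K_M ~ $q4_bytes bytes"

# Actual quantization (commented out; needs the built binary and the F16 file):
# ./build/bin/llama-quantize shieldgemma-2-4b-it.gguf shieldgemma-2-4b-it-Q4_K_M.gguf Q4_K_M
```

8 × 10⁹ bytes is about 7.45 GiB, which lines up with the 7.3 GB F16 file listed above (the language model carries almost all of the weight; the 812 MB vision encoder is quantized separately, if at all).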

Intended Use

ShieldGemma 2 is intended to be used as a safety content moderator for:

  • Input filtering for vision language models
  • Output filtering for image generation systems
  • Content moderation in user-facing applications

License and Terms

Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms.

This model is a derivative work (a GGUF format conversion) of the original ShieldGemma 2 model and is subject to the same terms.

Key License Requirements

When using or redistributing this model:

  1. You must comply with the Gemma Terms of Use
  2. You must not use the model for prohibited uses as defined in the policy
  3. You must include this license notice in any redistribution
  4. You must comply with applicable laws and regulations

Prohibited Uses

This model must not be used for:

  • Violating intellectual property rights
  • Child exploitation or illegal activities
  • Generating harmful, hateful, or harassing content
  • Creating misinformation or deceptive content
  • Generating sexually explicit content for pornography
  • Circumventing safety filters

See the full Prohibited Use Policy for details.

Citation

Original Model:

@misc{zeng2025shieldgemma2robusttractable,
    title={ShieldGemma 2: Robust and Tractable Image Content Moderation},
    author={Wenjun Zeng and Dana Kurniawan and Ryan Mullins and Yuchi Liu and Tamoghna Saha and Dirichi Ike-Njoku and Jindong Gu and Yiwen Song and Cai Xu and Jingjing Zhou and Aparna Joshi and Shravan Dheep and Mani Malek and Hamid Palangi and Joon Baek and Rick Pereira and Karthik Narasimhan},
    year={2025},
    eprint={2504.01081},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2504.01081},
}

Model Card Contact

For questions about the original model, see google/shieldgemma-2-4b-it.

For questions about this GGUF conversion, please open an issue in this repository.

Acknowledgements

  • Original model by Google DeepMind
  • GGUF conversion using llama.cpp
  • Conversion enabled by modifications to llama.cpp's convert_hf_to_gguf.py to support ShieldGemma2ForImageClassification architecture