CPU support

#12
by travisking - opened

Given that gemma-3n is meant for edge devices, is there/will there be support for CPU inference (even quantized)? All examples seem to use CUDA, and when I tried some attempts at CPU inference (both naive and with torchao quant) it just hung at inference forever and never finished.

I understand that there are llama.cpp quants but they are text-only (at time of writing) so a no-go for me as I need the multimodal input features.

I've used CPU inference, and it works - it's just VERY slow, around 1-3 tokens/second for text-only input.

Still working on image processing, though.
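For reference, here is roughly what the plain (unquantized) CPU path looks like with transformers. This is only a minimal sketch: the model id and the `Gemma3nForConditionalGeneration` / `AutoProcessor` class names are assumptions based on the Gemma 3n model cards, so adjust them to whatever checkpoint and transformers version you are actually using.

```python
# Minimal sketch of plain (unquantized) CPU text inference with transformers.
# Assumptions: the "google/gemma-3n-E2B-it" model id and the
# Gemma3nForConditionalGeneration class match your transformers version.
import torch
from transformers import AutoProcessor, Gemma3nForConditionalGeneration

model_id = "google/gemma-3n-E2B-it"  # assumed model id

processor = AutoProcessor.from_pretrained(model_id)
model = Gemma3nForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # fp32: bf16 matmuls can be slow on older CPUs
).eval()  # no device_map -> loads on CPU by default

messages = [
    {"role": "user", "content": [{"type": "text", "text": "Give me one fact about hedgehogs."}]}
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)

with torch.inference_mode():
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens
print(processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

This is the unoptimized path - it completes, just at the token rates mentioned above.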

Google org

Hi,

Yes, there is support for CPU inference for quantized Gemma models, including the multimodal ones. The reason your attempts hung is most likely an unoptimized setup: standard inference with libraries like transformers is tuned primarily for GPUs, and naive CPU execution can be extremely slow, as you experienced.
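As a hedged illustration of the quantized transformers path (not an official recipe): transformers can hand weight-only quantization off to torchao at load time via `TorchAoConfig`. Whether the "int8_weight_only" quant type has fast kernels on your particular CPU, and the model id used here, are assumptions you should verify.

```python
# Sketch of loading a quantized Gemma model for CPU via transformers + torchao.
# Assumptions: torchao is installed and "int8_weight_only" is supported for
# CPU in your torchao build; the model id is the same example as above.
from transformers import Gemma3nForConditionalGeneration, TorchAoConfig

quant_config = TorchAoConfig("int8_weight_only")  # weight-only int8 quantization

model = Gemma3nForConditionalGeneration.from_pretrained(
    "google/gemma-3n-E2B-it",          # assumed model id
    quantization_config=quant_config,  # quantize weights at load time
).eval()

# Generation is then the same as in the unquantized sketch earlier in the thread.
```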

The llama.cpp project has implemented multimodal support through a dedicated library and a new command-line tool. You will need to compile llama.cpp from source, or use a package that pulls from the latest HEAD of the repository, to access this feature.

Kindly refer to this link for more information. It points to the document that explains how to use multimodal models with the llama.cpp framework, including the requirement for the separate mmproj file.
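To make the mmproj requirement concrete, here is a sketch of driving the llama.cpp multimodal CLI from Python. The binary name and flags are assumptions based on current llama.cpp builds, so check them against the linked document; the GGUF filenames are placeholders.

```python
# Sketch of invoking llama.cpp's multimodal CLI for an image + text prompt.
# Assumptions: llama.cpp was built from a recent HEAD, the tool is named
# llama-mtmd-cli, and it accepts -m / --mmproj / --image / -p flags.
import subprocess

cmd = [
    "./build/bin/llama-mtmd-cli",           # assumed binary name/path
    "-m", "gemma-3n-E2B-it-Q4_K_M.gguf",    # quantized text model (example filename)
    "--mmproj", "mmproj-gemma-3n.gguf",     # separate multimodal projector file
    "--image", "photo.jpg",
    "-p", "Describe this image in one sentence.",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout)
```

The key point is the separate mmproj file: a text-only GGUF on its own gives you exactly the text-only behaviour described earlier in this thread.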

Thank you.
