This is an MXFP4_MOE quantization of the MiniMax-M2 model.

Original model: https://huggingface.co/unsloth/MiniMax-M2

Download the latest llama.cpp to use it.
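
As a minimal sketch, you can serve the quant with llama.cpp's OpenAI-compatible server. The GGUF filename below is an assumption; point `-m` at the file you actually downloaded (for multi-part files, pass the first part and llama.cpp will load the rest), and make sure your build is recent enough to support the minimax-m2 architecture:

```bash
# Serve the quant locally; -ngl 99 offloads as many layers to the GPU as fit,
# and -c sets the context length. The filename here is illustrative.
./llama-server -m MiniMax-M2-MXFP4_MOE.gguf -c 8192 -ngl 99 --port 8080
```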

It seems that the original model I quantized had chat template problems, so I re-quantized the unsloth version, which includes template fixes. Please delete the old quant and download the new one.

Also keep in mind that this is a coding model.
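
Since it targets coding tasks, a quick way to exercise it (and the fixed chat template) once llama-server is running is the OpenAI-compatible chat endpoint; a minimal sketch, assuming the server from the example above:

```bash
# Send a coding prompt to llama-server's OpenAI-compatible chat API.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a Python function that reverses a linked list."}]}'
```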

GGUF details:
- Model size: 229B params
- Architecture: minimax-m2
- Quantization: 4-bit (MXFP4_MOE)
