soundsgoodai/GLM-4.7-NVFP4-KV-cache-BF16
Tags: Text Generation · Safetensors · glm4_moe · conversational · 8-bit precision · modelopt
License: apache-2.0
A quantization setup used for GLM-4.7:
Weights: NVFP4
KV cache: BF16
Tooling: NVIDIA/Model-Optimizer
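To make the weight format above concrete, here is a small illustrative sketch of NVFP4-style block quantization: 4-bit E2M1 values (magnitudes 0, 0.5, 1, 1.5, 2, 3, 4, 6) with one scale per 16-element block. This is not the Model-Optimizer implementation; real NVFP4 also stores the block scale in FP8 (E4M3) alongside a per-tensor FP32 scale, which is simplified away here by keeping the scale in full precision.

```python
# Sketch of NVFP4-style block quantization (illustrative, not the
# actual NVIDIA Model-Optimizer code path). One scale per 16-element
# block; real NVFP4 stores this scale in FP8 E4M3, simplified here.

E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # representable FP4 magnitudes

def quantize_block(block):
    """Quantize one block to (scale, signed E2M1 codes)."""
    amax = max(abs(x) for x in block)
    scale = amax / 6.0 if amax > 0 else 1.0  # map the largest magnitude onto 6.0
    codes = []
    for x in block:
        mag = abs(x) / scale
        q = min(E2M1_VALUES, key=lambda v: abs(v - mag))  # round to nearest FP4 value
        codes.append(q if x >= 0 else -q)
    return scale, codes

def dequantize_block(scale, codes):
    """Recover approximate values from the block scale and codes."""
    return [scale * c for c in codes]

# One 16-element block of toy weights.
weights = [0.9, -0.1, 0.35, -0.6, 0.05, 1.2, -0.45, 0.0,
           0.25, -0.9, 0.7, 0.15, -0.3, 0.55, -0.05, 0.4]
scale, codes = quantize_block(weights)
restored = dequantize_block(scale, codes)
```

The point of the block scale is that each group of 16 weights gets its own dynamic range, so outliers in one block do not crush the precision of the rest of the tensor.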
Deploy with TensorRT-LLM
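For deployment, TensorRT-LLM ships an OpenAI-compatible `trtllm-serve` entry point. A minimal sketch, assuming TensorRT-LLM is installed and the checkpoint is available locally or by repo id (host/port flags are illustrative defaults):

```shell
# Sketch only: serve the quantized checkpoint with TensorRT-LLM's
# OpenAI-compatible server. Repo id taken from this card; flags are
# illustrative, check `trtllm-serve --help` for your installed version.
trtllm-serve soundsgoodai/GLM-4.7-NVFP4-KV-cache-BF16 \
  --host 0.0.0.0 --port 8000
```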
Downloads last month: 63
Model size: 177B params
Tensor types: BF16 · F32 · F8_E4M3 · U8
Base model: zai-org/GLM-4.7 (this model is a quantized variant)