See the main model card: https://huggingface.co/brucethemoose/Yi-34B-200K-RPMerge

Experimental imatrix quantizations, computed at 16K context, using a cocktail of calibration data taken from the exllamav2 repo, kalomaze's random-token tests, and some stories.

Consider them experimental! They may be better than the equivalent non-imatrix GGUFs, or they may not!
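As a rough usage sketch, assuming you load one of the quants with llama-cpp-python (the exact .gguf filename below is a placeholder, not a file guaranteed to be in this repo):

```python
# Minimal sketch with llama-cpp-python; substitute whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Yi-34B-200K-RPMerge-iMat-Q2_K.gguf",  # placeholder filename
    n_ctx=16384,       # matches the 16K context used for the imatrix data
    n_gpu_layers=-1,   # offload all layers if your VRAM allows
)

output = llm(
    "Write the opening paragraph of a story set aboard a generation ship.",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```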