---
license: apache-2.0
base_model:
- allenai/Olmo-3.1-32B-Think
tags:
- llmcompressor
---

This is [allenai/Olmo-3.1-32B-Think](https://huggingface.co/allenai/Olmo-3.1-32B-Think) quantized with [LLM Compressor](https://github.com/vllm-project/llm-compressor) using the recipe in the `recipe.yaml` file.

**Not tested.**

How the models perform (token efficiency, accuracy per domain, ...) and how to use them: [Quantizing Olmo 3: Most Efficient and Accurate Formats](https://kaitchup.substack.com/p/quantizing-olmo-3-most-efficient)

![image](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F64b93e6bd6c468ac7536607e%2FH3JWV_ha07IrN-Sz6C7VL.png)

- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **License:** Apache 2.0

## How to Support My Work

Subscribe to [The Kaitchup](https://kaitchup.substack.com/subscribe). Or you can "[buy me a kofi](https://ko-fi.com/bnjmn_marie)".
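Checkpoints quantized with LLM Compressor can typically be served directly with vLLM, which reads the quantization config from the repo. The command below is a sketch, not a tested recipe: `<this-repo-id>` is a placeholder for this repository's Hugging Face id, and `--max-model-len` is an illustrative setting, not a requirement.

```shell
# Serve the quantized checkpoint with an OpenAI-compatible API via vLLM.
# LLM Compressor outputs (compressed-tensors format) load natively in
# recent vLLM versions; no extra conversion step is needed.
# Replace <this-repo-id> with this repository's Hugging Face id.
vllm serve <this-repo-id> --max-model-len 8192
```

Requires a GPU with enough memory for the quantized 32B weights; see the linked article for per-format accuracy and efficiency measurements.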