Inference Providers
Active filters: vLLM
QuantTrio/Qwen3.6-35B-A3B-AWQ
Image-Text-to-Text • 36B • Updated • 15.6k • 10
RohitUltimate/Qwen3.5_VL_2B_12k
Image-Text-to-Text • 2B • Updated • 305 • 7
mistralai/Mistral-Small-4-119B-2603
119B • Updated • 80.9k • 356
QuantTrio/Qwen3.5-27B-AWQ
Image-Text-to-Text • 28B • Updated • 383k • 41
unsloth/Mistral-Small-4-119B-2603-GGUF
119B • Updated • 27.6k • 61
QuantTrio/gemma-4-31B-it-AWQ
Image-Text-to-Text • 31B • Updated • 88.7k • 6
mistralai/Mistral-Small-4-119B-2603-eagle
Updated • 282 • 46
QuantTrio/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-v2-AWQ
Image-Text-to-Text • 28B • Updated • 49.8k • 12
QuantTrio/Qwopus3.5-27B-v3-AWQ
Image-Text-to-Text • 27B • Updated • 22.6k • 9
JunHowie/Qwen3-4B-Instruct-2507-GPTQ-Int4
Text Generation • 4B • Updated • 110k • 3
QuantTrio/Qwen3-VL-30B-A3B-Instruct-AWQ
Text Generation • 31B • Updated • 737k • 42
QuantTrio/GLM-4.7-Flash-AWQ
Text Generation • 31B • Updated • 89.1k • 12
QuantTrio/MiniMax-M2.5-AWQ
Text Generation • 229B • Updated • 89.7k • 15
Text Generation • 586B • Updated • 5.25k • 6
Image-Text-to-Text • 5B • Updated • 42.8k • 8
unsloth/Mistral-Small-4-119B-2603
119B • Updated • 161 • 4
QuantTrio/gemma-4-31B-it-AWQ-6Bit
Image-Text-to-Text • 31B • Updated • 12.4k • 7
Xingyu-Zheng/Qwen3.5-9B-GLM5.1-Distill-v1-INT4-FOEM
Image-Text-to-Text • 9B • Updated • 18 • 1
Xingyu-Zheng/Qwopus3.5-27B-v3.5-INT4-FOEM
Image-Text-to-Text • 27B • Updated • 57 • 1
model-scope/glm-4-9b-chat-GPTQ-Int4
Text Generation • 9B • Updated • 127 • 6
model-scope/glm-4-9b-chat-GPTQ-Int8
Text Generation • 9B • Updated • 8 • 2
tclf90/qwen2.5-72b-instruct-gptq-int4
Text Generation • 73B • Updated • 91 • 2
tclf90/qwen2.5-72b-instruct-gptq-int3
Text Generation • 69B • Updated • 66
prithivMLmods/Nu2-Lupi-Qwen-14B
Text Generation • 15B • Updated • 6 • 2
mradermacher/Nu2-Lupi-Qwen-14B-GGUF
15B • Updated • 111 • 1
mradermacher/Nu2-Lupi-Qwen-14B-i1-GGUF
15B • Updated • 235 • 1
JunHowie/Qwen3-0.6B-GPTQ-Int4
Text Generation • 0.6B • Updated • 84 • 1
JunHowie/Qwen3-0.6B-GPTQ-Int8
Text Generation • 0.6B • Updated • 9
JunHowie/Qwen3-1.7B-GPTQ-Int4
Text Generation • 2B • Updated • 150 • 1
JunHowie/Qwen3-1.7B-GPTQ-Int8
Text Generation • 2B • Updated • 12
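The counts in the listing above use abbreviated suffixes (e.g. 15.6k, 383k). If you need to sort or compare entries numerically, a minimal sketch of a helper that expands those strings into integers, assuming "k" means thousands and "M" means millions:

```python
def expand_count(s: str) -> int:
    """Expand an abbreviated count like '15.6k' or '305' to an integer.

    Assumes 'k' = thousands and 'M' = millions, matching the
    abbreviations used in the listing above.
    """
    s = s.strip()
    multipliers = {"k": 1_000, "M": 1_000_000}
    if s and s[-1] in multipliers:
        # round() guards against float artifacts, e.g. 15.6 * 1000
        return int(round(float(s[:-1]) * multipliers[s[-1]]))
    return int(s)

print(expand_count("15.6k"))  # 15600
print(expand_count("305"))    # 305
```

This makes it easy to, for example, sort the entries by download count before picking a checkpoint.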