Sam McLeod
smcleod
AI & ML interests
cool things
Recent Activity
- liked a model 4 days ago: mlx-community/Mistral-Medium-3.5-128B-4bit
- new activity 6 days ago: Intel/Qwen3.6-27B-4.5b-mlx-AutoRound ("AutoRound quant fails to load with mlx-lm")
- liked a model 9 days ago: z-lab/Qwen3.6-27B-DFlash
AutoRound quant fails to load with mlx-lm · 👍 1 · 1 reply · #1 opened 6 days ago by smcleod
27B is out! & DFlash model conversion script availability? · 👍 4 · #5 opened 11 days ago by smcleod
ONNX conversion script? · #1 opened 20 days ago by smcleod
fix space, update packages · 1 reply · #75 opened 22 days ago by smcleod
This version was mistakenly uploaded as the 200G version. · 👀 1 · 3 replies · #1 opened 22 days ago by vanch007
What organisation & project does this relate to? (I'm an r/localllama mod) · #1 opened 2 months ago by smcleod
iMatrix quants · 2 replies · #2 opened 3 months ago by smcleod
New refresh with tool calling added to the calibration dataset and improved imatrix · ❤️ 9 · 9 replies · #21 opened 3 months ago by danielhanchen
Day 0 llama.cpp support? · ❤️👍 4 · 3 replies · #3 opened 4 months ago by sbeltz
GGUF or MLX support? · 👍 18 · 5 replies · #2 opened 7 months ago by smcleod
llama.cpp support · 👀👍 5 · 5 replies · #1 opened 9 months ago by djuna
Context size? · 3 replies · #4 opened 8 months ago by smcleod
Thinking tokens issue · 👍 2 · 12 replies · #9 opened 9 months ago by iyanello
jinja2 chat template is malformed · 1 reply · #13 opened 9 months ago by smcleod
When GGUF? · 🔥🚀 16 · 7 replies · #6 opened 9 months ago by ChuckMcSneed
I have a draft PR up on llama.cpp, keen for your input · ❤️ 7 · #4 opened 9 months ago by smcleod
Any chance of a smaller coding model in the 30-70B range? · 🚀❤️ 50 · 4 replies · #6 opened 9 months ago by smcleod
GGUF version? · 👍 23 · 2 replies · #6 opened 10 months ago by smcleod
Are you planning on adding llama.cpp support? · 1 reply · #1 opened 10 months ago by smcleod
Any chance of creating these with RoPE/YaRN for a context size larger than 32k? · 5 replies · #2 opened 12 months ago by smcleod