Could you please upload a 99-100 GB MLX-quantized version of the model so that it can be deployed locally on a Mac with 128 GB of RAM? Thank you very much!
#3 opened 4 months ago by mimeng1990
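For context on why a 99-100 GB quant is the target for a 128 GB machine, here is a back-of-the-envelope sketch. The ~200B parameter count below is an illustrative assumption (the thread does not state this model's size), and the formula ignores activation memory and quantization-scale overhead:

```python
# Rough weight-storage estimate for a quantized model.
# NOTE: n_params = 200e9 is a hypothetical figure for illustration only;
# it is not taken from this repo.
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes),
    ignoring activations and per-group quantization scales."""
    return n_params * bits_per_weight / 8 / 1e9

# A ~200B-parameter model at 4 bits per weight is roughly 100 GB,
# which leaves some headroom on a 128 GB unified-memory Mac.
print(quantized_size_gb(200e9, 4.0))  # → 100.0
```

On Apple-silicon Macs the GPU shares unified memory with the OS, so the quant needs to land comfortably under the physical RAM total, which is why a 99-100 GB artifact is a reasonable ceiling for 128 GB machines.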
Thank you
👍 2
#1 opened 6 months ago by AliceThirty