mlx-community/whisper-large-v3-asr-fp16

This model was converted to MLX format from openai/whisper-large-v3 using mlx-audio version 0.3.0. Refer to the original model card for more details on the model.

Use with mlx-audio

    pip install -U mlx-audio

CLI Example:

    python -m mlx_audio.stt.generate --model mlx-community/whisper-large-v3-asr-fp16 --audio "audio.wav"
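For batch jobs, the same documented CLI invocation can be driven from Python. This is a minimal sketch: the helper name `build_cmd` and the file names are illustrative, and running the commented-out loop requires mlx-audio to be installed.

```python
import subprocess

MODEL = "mlx-community/whisper-large-v3-asr-fp16"

def build_cmd(audio_path, model=MODEL):
    """Assemble the documented mlx_audio.stt.generate command line."""
    return [
        "python", "-m", "mlx_audio.stt.generate",
        "--model", model,
        "--audio", audio_path,
    ]

# Transcribe several files in sequence (requires mlx-audio installed):
# for path in ["a.wav", "b.wav"]:
#     subprocess.run(build_cmd(path), check=True)
```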

Python Example:

    from mlx_audio.stt.utils import load_model
    from mlx_audio.stt.generate import generate_transcription
    model = load_model("mlx-community/whisper-large-v3-asr-fp16")
    transcription = generate_transcription(
        model=model,
        audio_path="path_to_audio.wav",
        output_path="path_to_output.txt",
        format="txt",
        verbose=True,
    )
    print(transcription.text)
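Whisper models operate on 16 kHz mono audio. Whether mlx-audio resamples other input automatically is not stated on this card, so a quick standard-library check of a PCM WAV file can catch surprises before a long transcription run (the helper name `wav_info` is illustrative):

```python
import wave

def wav_info(path):
    """Return (sample_rate_hz, channels, duration_seconds) for a PCM WAV file."""
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        channels = wf.getnchannels()
        duration = wf.getnframes() / rate
    return rate, channels, duration

# Example: warn if the file is not 16 kHz mono.
# rate, channels, _ = wav_info("path_to_audio.wav")
# if (rate, channels) != (16000, 1):
#     print(f"note: {rate} Hz, {channels} ch input; Whisper expects 16 kHz mono")
```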