This model is a fine-tune of OpenAI's Whisper Large v3 Turbo (https://huggingface.co/openai/whisper-large-v3-turbo) on the following Korean datasets:

- https://huggingface.co/datasets/Junhoee/STT_Korean_Dataset_80000
- https://huggingface.co/datasets/Bingsu/zeroth-korean

Combined, they contain roughly 102k sentences.

This is the last checkpoint, which achieved a word error rate (WER) of roughly 16 (down from roughly 24).
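WER here is the standard word error rate: the word-level edit distance between the reference transcript and the model's hypothesis, divided by the number of reference words. A minimal sketch of how such a score can be computed (this is the generic metric, not the author's evaluation script):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four gives a WER of 25%.
print(wer("a b c d", "a x c d"))  # → 25.0
```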

Training ran for 10,000 iterations.
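Since the weights are distributed in GGML format (per the repository name), they are meant to be run with whisper.cpp rather than loaded directly via the transformers library. A minimal usage sketch, assuming whisper.cpp is built locally and the downloaded model file is named `ggml-model.bin` (hypothetical filename; substitute whatever the repository actually provides):

```shell
# Transcribe a Korean audio file with whisper.cpp (paths are examples).
./main -m ggml-model.bin -l ko -f audio.wav
```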

Model size: 0.8B parameters, F32 tensors (safetensors format).
