---
license: apache-2.0
language:
- nl
base_model: openai/whisper-tiny
tags:
- automatic-speech-recognition
- whisper
- dutch
- speech
- audio
- synthetic-data
- asr
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_17_0
- yuriyvnv/synthetic_transcript_nl
model-index:
- name: whisper-tiny-high-mixed-nl
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Common Voice 17.0 (Dutch)
      type: mozilla-foundation/common_voice_17_0
      config: nl
      split: test
    metrics:
    - type: wer
      value: 25.51
      name: Test WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Multilingual LibriSpeech (Dutch)
      type: facebook/multilingual_librispeech
      config: dutch
      split: test
    metrics:
    - type: wer
      value: 43.76
      name: Test WER (MLS)
pipeline_tag: automatic-speech-recognition
library_name: transformers
---

# Whisper-Tiny Dutch - High-Quality Filtered Synthetic Data

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) for Dutch automatic speech recognition (ASR). It was trained on Common Voice 17.0 Dutch combined with **WAVe-filtered synthetic speech data** using a strict high-quality threshold (q ≥ 0.8).

## Introduction

### How the Data Was Created

The training data combines real speech from Common Voice 17.0 with synthetic speech generated through a three-stage pipeline:

1. **Transcript Generation**: We used GPT-4o-mini to generate Dutch transcripts that match the word count distribution observed in Common Voice, ensuring realistic utterance lengths and diverse linguistic content.

2. **Speech Synthesis**: Each transcript was converted to audio using OpenAI's TTS-1 model with 9 different voice variants (alloy, ash, coral, echo, fable, nova, onyx, sage, shimmer), producing 34,898 synthetic samples.

3. **Quality Filtering with WAVe**: Raw synthetic speech often contains defects such as mispronunciations, omitted words, or prosodic anomalies. To address this, we applied **WAVe (Word-Aligned Verification)**, a model that assesses audio-text alignment at the word level rather than the sentence level. WAVe uses multi-head attention to align each word to its corresponding audio frames and assigns per-word confidence scores via a GLU-based scorer. For this model, only samples scoring above the strict threshold (q ≥ 0.8) were retained, resulting in 10,555 high-quality synthetic samples; a minimal filtering sketch follows this list.
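
In code, the filtering step reduces to a threshold over per-sample quality scores. A minimal sketch, assuming the dataset has a `train` split and exposes the WAVe score in a hypothetical `wave_score` column (the actual field name may differ):

```python
from datasets import load_dataset

QUALITY_THRESHOLD = 0.8  # strict "high-quality" cutoff (q >= 0.8)

synthetic = load_dataset("yuriyvnv/synthetic_transcript_nl", split="train")

# Keep only samples whose WAVe utterance score clears the threshold.
# "wave_score" is an assumed column name for illustration.
high_quality = synthetic.filter(lambda ex: ex["wave_score"] >= QUALITY_THRESHOLD)

print(f"kept {len(high_quality)} of {len(synthetic)} samples")  # expected ~10,555 of 34,898
```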

### How the Model Was Created

The model was fine-tuned from `openai/whisper-tiny` using the Hugging Face Transformers library with the following approach:

1. **Mixed Training**: Combined 34,952 real speech samples from Common Voice 17.0 Dutch with 10,555 strictly WAVe-filtered synthetic samples (45,507 total).

2. **Optimization**: Trained for 5 epochs with a learning rate of 5e-5, a global batch size of 256, and BF16 precision on an NVIDIA H200 GPU.

3. **Checkpoint Selection**: The best checkpoint was selected by validation loss; it occurred at step 700 with a validation loss of 0.3323.

This high-quality filtering approach achieves a **35% reduction in training steps** compared to using all synthetic data, while maintaining competitive ASR performance.
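
The mixed training set can be assembled with the `datasets` library. A minimal sketch, assuming standard Common Voice column names, a `text` column in the synthetic set, and the hypothetical `high_quality` subset from the filtering sketch above:

```python
from datasets import load_dataset, concatenate_datasets, Audio

# Real speech: Common Voice 17.0 Dutch (requires accepting the dataset's terms on the Hub).
cv = load_dataset("mozilla-foundation/common_voice_17_0", "nl", split="train")
cv = cv.cast_column("audio", Audio(sampling_rate=16_000))
cv = cv.rename_column("sentence", "text")  # Common Voice stores transcripts in "sentence"
cv = cv.select_columns(["audio", "text"])

# Synthetic speech: the WAVe-filtered subset built earlier (hypothetical variable).
high_quality = high_quality.cast_column("audio", Audio(sampling_rate=16_000))
high_quality = high_quality.select_columns(["audio", "text"])

# 34,952 real + 10,555 synthetic = 45,507 training samples.
train_set = concatenate_datasets([cv, high_quality]).shuffle(seed=42)
```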

## Model Details

| Property | Value |
|----------|-------|
| **Base Model** | openai/whisper-tiny |
| **Language** | Dutch (nl) |
| **Task** | Automatic Speech Recognition (transcribe) |
| **Parameters** | 39M |
| **Training Data** | Common Voice 17.0 + High-Quality Synthetic (q ≥ 0.8) |
| **Total Training Samples** | 45,507 |
| **Sampling Rate** | 16 kHz |

## Evaluation Results

### This Model (whisper-tiny-high-mixed-nl)

| Metric | Value |
|--------|-------|
| **Validation Loss** | 0.3323 |
| **Validation WER** | 19.59% |
| **Test WER (Common Voice)** | 25.51% |
| **Test WER (MLS)** | 43.76% |
| **Best Checkpoint** | Step 700 |
| **Max Training Steps** | 890 |

### Comparison with Other Training Configurations

| Training Data | Max Steps | Val Loss | Val WER | Test WER (CV) | Test WER (MLS) |
|---------------|-----------|----------|---------|---------------|----------------|
| Common Voice Only | 680 | 0.3382 | 19.77% | 26.00% | 44.85% |
| **High-Quality Filtered + CV** | **890** | **0.3323** | **19.59%** | **25.51%** | **43.76%** |
| Mid-High Quality Filtered + CV | 1,270 | 0.3292 | 19.36% | 25.05% | 43.11% |
| All Synthetic + CV (Unfiltered) | 1,365 | 0.3207 | 19.61% | 24.93% | 43.12% |

### Key Performance Highlights

- **Most efficient training**: only 890 max steps (35% fewer than the unfiltered configuration)
- **1.9% relative improvement** on the Common Voice test set vs the CV-only baseline (25.51% vs 26.00%)
- **2.4% relative improvement** on the MLS benchmark vs the CV-only baseline (43.76% vs 44.85%)
- **Best quality-to-compute ratio**: strong results with minimal synthetic data
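
The test WER can be reproduced in outline with the `evaluate` library. A minimal sketch, assuming the pipeline from the Usage section below and a small slice of the Common Voice Dutch test split; the card does not state the text normalization applied before scoring, so raw WER from this sketch may not exactly match the reported 25.51%:

```python
import evaluate
from datasets import load_dataset, Audio
from transformers import pipeline

wer_metric = evaluate.load("wer")
transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-tiny-high-mixed-nl",
    device="cuda",
)

test = load_dataset("mozilla-foundation/common_voice_17_0", "nl", split="test")
test = test.cast_column("audio", Audio(sampling_rate=16_000)).select(range(100))  # small slice for illustration

predictions = [transcriber(ex["audio"])["text"] for ex in test]
references = [ex["sentence"] for ex in test]

print(100 * wer_metric.compute(predictions=predictions, references=references))
```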

## Training Data

### Dataset Composition

| Source | Samples | Description |
|--------|---------|-------------|
| [Common Voice 17.0 Dutch](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0) | 34,952 | Real speech from Mozilla's crowdsourced dataset |
| [Synthetic Transcript NL](https://huggingface.co/datasets/yuriyvnv/synthetic_transcript_nl) (q ≥ 0.8) | 10,555 | Strictly WAVe-filtered TTS audio |
| **Total** | **45,507** | |

### Synthetic Data Generation Pipeline

The synthetic dataset ([yuriyvnv/synthetic_transcript_nl](https://huggingface.co/datasets/yuriyvnv/synthetic_transcript_nl)) was generated in three steps (a synthesis sketch follows the list):

1. **Transcript Generation**: GPT-4o-mini, matching the Common Voice word count distribution
2. **Speech Synthesis**: OpenAI TTS-1 model with 9 voice variants (alloy, ash, coral, echo, fable, nova, onyx, sage, shimmer)
3. **Quality Filtering**: WAVe model with strict threshold q ≥ 0.8
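
Step 2 maps each transcript to audio with one of the nine voices. A minimal sketch against the OpenAI Python SDK; the voice rotation and file naming are illustrative assumptions, not the authors' exact procedure:

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
VOICES = ["alloy", "ash", "coral", "echo", "fable", "nova", "onyx", "sage", "shimmer"]

transcripts = ["Goedemorgen, hoe gaat het met je?"]  # illustrative Dutch transcript

for i, text in enumerate(transcripts):
    voice = VOICES[i % len(VOICES)]  # rotate through the voice variants
    response = client.audio.speech.create(model="tts-1", voice=voice, input=text)
    response.write_to_file(Path(f"synthetic_{i:05d}_{voice}.mp3"))
```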

### WAVe Quality Distribution (Dutch Synthetic Data)

| Quality Level | Samples | Percentage | Used in This Model |
|--------------|---------|------------|-------------------|
| High (q ≥ 0.8) | 10,555 | 30.2% | ✓ |
| Medium (0.5 ≤ q < 0.8) | 19,627 | 56.2% | ✗ |
| Low (q < 0.5) | 4,716 | 13.5% | ✗ |

## Training Procedure

### Hyperparameters

| Parameter | Value |
|-----------|-------|
| Learning Rate | 5e-5 |
| Batch Size (Global) | 256 |
| Warmup Steps | 200 |
| Max Epochs | 5 |
| Precision | BF16 |
| Optimizer | AdamW (fused) |
| Eval Steps | 50 |
| Metric for Best Model | eval_loss |
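
Expressed as Transformers training arguments, the table corresponds roughly to the following. A minimal sketch, assuming a single GPU (so the per-device batch size equals the global 256), a recent transformers release, and a save cadence matching the eval cadence:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-high-mixed-nl",
    learning_rate=5e-5,
    per_device_train_batch_size=256,   # one H200, so per-device == global batch size
    num_train_epochs=5,
    warmup_steps=200,
    bf16=True,
    optim="adamw_torch_fused",
    eval_strategy="steps",             # "evaluation_strategy" in older transformers releases
    eval_steps=50,
    save_steps=50,                     # assumed; lets best-checkpoint selection work
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    predict_with_generate=True,
)
```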

### Training Infrastructure

- **GPU**: NVIDIA H200 (141 GB VRAM)
- **Operating System**: Ubuntu 22.04
- **Framework**: Hugging Face Transformers

### Training Curve

```
Step 100: val_loss = 0.4770
Step 250: val_loss = 0.3746
Step 400: val_loss = 0.3457
Step 550: val_loss = 0.3341
Step 700: val_loss = 0.3323 ← Best checkpoint
Step 850: val_loss = 0.3358
```

## Usage

### Transcription Pipeline

```python
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-tiny-high-mixed-nl",
    device="cuda",
)

result = transcriber("path/to/dutch_audio.wav")
print(result["text"])
```

### Direct Model Usage

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
import librosa

processor = WhisperProcessor.from_pretrained("yuriyvnv/whisper-tiny-high-mixed-nl")
model = WhisperForConditionalGeneration.from_pretrained("yuriyvnv/whisper-tiny-high-mixed-nl")
model.to("cuda")

# Load and resample the audio to the model's expected 16 kHz input rate.
audio, sr = librosa.load("path/to/dutch_audio.wav", sr=16000)
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features.to("cuda")

predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)
```

### Specifying Language

To force Dutch transcription rather than relying on Whisper's language detection:

```python
model.generation_config.language = "nl"
model.generation_config.task = "transcribe"
```
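
The same forcing works through the pipeline API via `generate_kwargs`, without touching the generation config directly:

```python
result = transcriber(
    "path/to/dutch_audio.wav",
    generate_kwargs={"language": "nl", "task": "transcribe"},
)
print(result["text"])
```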

## Methodology

This model leverages **WAVe (Word-Aligned Verification)**, a word-level quality assessment method for filtering synthetic speech data. Unlike sentence-level filtering approaches, WAVe:

- Aligns each word to its corresponding audio frames using multi-head attention
- Assigns per-word confidence scores via a GLU-based scorer
- Detects localized synthesis errors (mispronunciations, omitted words, prosodic anomalies)
- Achieves a **6.5% improvement** over sentence-level filtering methods

The strict threshold (q ≥ 0.8) retains only the top 30.2% of synthetic samples, prioritizing quality over quantity for maximum training efficiency.
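
How the per-word confidences collapse into the utterance-level q used for thresholding is not spelled out in this card; a tiny illustrative sketch, with mean aggregation as an explicit assumption (the WAVe paper defines the actual aggregation):

```python
def utterance_quality(word_scores: list[float]) -> float:
    """Aggregate WAVe per-word confidences into one utterance score (assumed: mean)."""
    return sum(word_scores) / len(word_scores)

# One badly synthesized word drags the whole utterance below the strict cutoff.
print(utterance_quality([0.95, 0.9, 0.92, 0.35]) >= 0.8)  # False
```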

## When to Use This Model

This model is a good fit when:

- **Compute resources are limited**: 35% fewer training steps than the unfiltered approach
- **Quick fine-tuning is needed**: the smaller dataset enables faster iteration
- **A modest improvement over the baseline is sufficient**: 1.9% relative WER improvement over CV-only training

Consider the [mid-high quality filtered model](https://huggingface.co/yuriyvnv/whisper-tiny-mixed-nl) if you need better absolute performance and have a larger compute budget.

## Limitations

- **Model capacity**: Whisper-Tiny (39M parameters) has limited representational power
- **Domain specificity**: optimized for general Dutch; may underperform on technical domains
- **Acoustic conditions**: trained on clean speech; noise robustness is not guaranteed
- **Dialect coverage**: performance may vary across Dutch regional variants

## Citation

```bibtex
@article{perezhohin2024enhancing,
  title={Enhancing Automatic Speech Recognition: Effects of Semantic Audio Filtering on Models Performance},
  author={Perezhohin, Yuriy and Santos, Tiago and Costa, Victor and Peres, Fernando and Castelli, Mauro},
  journal={IEEE Access},
  year={2024},
  publisher={IEEE}
}
```

## References

- **Base Model**: [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny)
- **Training Data (Real)**: [mozilla-foundation/common_voice_17_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0)
- **Training Data (Synthetic)**: [yuriyvnv/synthetic_transcript_nl](https://huggingface.co/datasets/yuriyvnv/synthetic_transcript_nl)
- **Whisper Paper**: [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
- **IEEE Access Paper**: [Enhancing ASR with Semantic Audio Filtering](https://ieeexplore.ieee.org/document/10720758)

## License

Apache 2.0