Whisper-Large-v3 Portuguese - High-Quality Filtered Synthetic Data

This model is a fine-tuned version of openai/whisper-large-v3 for Portuguese automatic speech recognition (ASR). It was trained on Common Voice 17.0 Portuguese combined with only the high-quality portion of a synthetic speech dataset, selected by WAVe filtering at a strict threshold (q ≥ 0.8).

Purpose

This model demonstrates the effectiveness of quality-over-quantity filtering for synthetic speech data. By retaining only the top 33.3% of synthetic samples (those with WAVe scores ≥ 0.8), this model achieves:

  • 32.6% relative WER improvement over the CV-only baseline on Common Voice (7.94% vs 11.78%)
  • 18.9% relative improvement in cross-domain generalization on MLS (12.41% vs 15.31%)
  • Only a 34% increase in training steps over the baseline (575 vs 430)

The model is part of a comprehensive study on WAVe (Word-Aligned Verification) filtering for Portuguese ASR, demonstrating that strict quality filtering provides an optimal balance between performance gains and computational efficiency.

Model Details

| Property | Value |
|---|---|
| Base Model | openai/whisper-large-v3 |
| Language | Portuguese (pt) |
| Task | Automatic Speech Recognition (transcribe) |
| Parameters | 1550M |
| Training Data | Common Voice 17.0 + High-Quality Synthetic (q ≥ 0.8) |
| Total Training Samples | 29,178 |
| Sampling Rate | 16 kHz |

Evaluation Results

This Model (whisper-large-v3-high-mixed-pt)

| Metric | Value |
|---|---|
| Validation Loss | 0.1045 |
| Validation WER | 7.33% |
| Test WER (Common Voice) | 7.94% |
| Test WER (MLS) | 12.41% |
| Best Checkpoint | Step 200 |
| Max Training Steps | 575 |

Comparison with Other Training Configurations (Whisper-Large-v3 Portuguese)

| Training Data | Max Steps | Val Loss | Val WER | Test WER (CV) | Test WER (MLS) |
|---|---|---|---|---|---|
| Common Voice Only | 430 | 0.1260 | 11.38% | 11.78% | 15.31% |
| High-Quality (q ≥ 0.8) + CV (this model) | 575 | 0.1045 | 7.33% | 7.94% | 12.41% |
| Mid-High (q ≥ 0.5) + CV | 805 | 0.1040 | 7.73% | 8.33% | 10.27% |
| All Synthetic + CV | 860 | 0.1050 | 7.57% | 8.33% | 13.43% |
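For reference, WER figures like those above can be computed with the Hugging Face evaluate library. The snippet below is a minimal sketch; the exact text normalization the authors applied before scoring is not specified in this card.

import evaluate

# Word error rate: (substitutions + insertions + deletions) / reference words
wer_metric = evaluate.load("wer")

predictions = ["olá mundo"]          # model transcriptions
references = ["olá mundo inteiro"]   # ground-truth transcripts
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2%}")  # one deletion over three reference words -> 33.33%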

Key Performance Highlights

  • Best in-domain performance: Lowest Test WER (7.94%) on Common Voice among filtered models
  • Strong cross-domain: 18.9% relative improvement on MLS vs baseline
  • Most efficient filtering: Only 33.4% more training samples than the baseline, and 33% fewer than the unfiltered configuration
  • Optimal quality-to-compute ratio: Achieves near-best performance with minimal synthetic data

Training Data

Dataset Composition

| Source | Samples | Description |
|---|---|---|
| Common Voice 17.0 Portuguese | 21,866 | Real speech from Mozilla's crowdsourced dataset |
| Synthetic Transcript PT (q ≥ 0.8) | 7,312 | Strictly WAVe-filtered TTS audio (high quality only) |
| Total | 29,178 | |

Synthetic Data Generation Pipeline

The synthetic dataset (yuriyvnv/synthetic_transcript_pt) was generated using:

  1. Transcript Generation: GPT-4o-mini, matching Common Voice word count distribution
  2. Speech Synthesis: OpenAI TTS-1 model with 9 voice variants (alloy, ash, coral, echo, fable, nova, onyx, sage, shimmer)
  3. Quality Filtering: WAVe model with strict threshold q ≥ 0.8
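As an illustration of step 2 above, a single utterance could be synthesized with the OpenAI Python SDK roughly as follows. This is a hedged sketch of the general approach, not the authors' actual generation script; the example sentence and output path are placeholders.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Synthesize one Portuguese transcript with one of the nine voices
response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="O tempo hoje está ótimo para uma caminhada.",
    response_format="wav",
)

with open("sample_0001.wav", "wb") as f:
    f.write(response.read())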

WAVe Quality Distribution (Portuguese Synthetic Data)

| Quality Level | Samples | Percentage | Used in This Model |
|---|---|---|---|
| High (q ≥ 0.8) | 7,312 | 33.3% | ✓ |
| Medium (0.5 ≤ q < 0.8) | 11,869 | 54.0% | ✗ |
| Low (q < 0.5) | 2,787 | 12.7% | ✗ |

This strict threshold retains only the top 33.3% of synthetic samples, prioritizing quality over quantity for maximum training efficiency.
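Reproducing this split amounts to thresholding on the per-sample WAVe score. A minimal sketch with the datasets library is shown below; the split name and the score column name (here wave_score) are assumptions, so check the dataset's actual schema.

from datasets import load_dataset

# Split name and score column are assumptions about the dataset schema
ds = load_dataset("yuriyvnv/synthetic_transcript_pt", split="train")

# Keep only strictly filtered, high-quality samples (q >= 0.8)
high_quality = ds.filter(lambda ex: ex["wave_score"] >= 0.8)
print(len(high_quality))  # expected: 7,312 samples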

Training Procedure

Hyperparameters

| Parameter | Value |
|---|---|
| Learning Rate | 5e-6 |
| Batch Size (Global) | 256 |
| Warmup Steps | 200 |
| Max Epochs | 5 |
| Precision | BF16 |
| Optimizer | AdamW (fused) |
| Eval Steps | 50 |
| Metric for Best Model | eval_loss |
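A Seq2SeqTrainingArguments configuration matching this table might look like the sketch below. The per-device batch size / gradient accumulation split and the output directory are assumptions, since only the global batch size of 256 is reported; eval_strategy requires a recent transformers release.

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-high-mixed-pt",  # hypothetical path
    per_device_train_batch_size=32,   # 32 x 8 accumulation = 256 global (assumed split)
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    warmup_steps=200,
    num_train_epochs=5,
    bf16=True,
    optim="adamw_torch_fused",
    eval_strategy="steps",
    eval_steps=50,
    save_steps=50,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    predict_with_generate=True,
)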

Training Infrastructure

  • GPU: NVIDIA H200 (141GB VRAM)
  • Operating System: Ubuntu 22.04
  • Framework: Hugging Face Transformers

Usage

Transcription Pipeline

from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline
# (requires a CUDA GPU; pass device="cpu" to run on CPU)
transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-large-v3-high-mixed-pt",
    device="cuda"
)

# The pipeline resamples the input to 16 kHz automatically
result = transcriber("path/to/portuguese_audio.wav")
print(result["text"])
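For recordings longer than Whisper's 30-second context window, the pipeline can chunk and stitch the input. The settings below (chunk length, timestamps) are common choices rather than anything specified by this model card.

# Long-form transcription: process the audio in 30 s windows
transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-large-v3-high-mixed-pt",
    device="cuda",
    chunk_length_s=30
)
result = transcriber("path/to/long_portuguese_audio.wav", return_timestamps=True)
print(result["text"])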

Direct Model Usage

from transformers import WhisperProcessor, WhisperForConditionalGeneration
import librosa
import torch

processor = WhisperProcessor.from_pretrained("yuriyvnv/whisper-large-v3-high-mixed-pt")
model = WhisperForConditionalGeneration.from_pretrained("yuriyvnv/whisper-large-v3-high-mixed-pt")
model.to("cuda")
model.eval()

# Whisper expects 16 kHz mono audio
audio, sr = librosa.load("path/to/portuguese_audio.wav", sr=16000)
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features.to("cuda")

with torch.no_grad():
    predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)

Specifying Language

# Pin decoding to Portuguese transcription (disables language auto-detection)
model.generation_config.language = "pt"
model.generation_config.task = "transcribe"
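The same thing can be done per call, which avoids mutating the shared generation config; generate() on Whisper models accepts language and task directly:

predicted_ids = model.generate(input_features, language="pt", task="transcribe")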

Methodology

This model leverages WAVe (Word-Aligned Verification), a word-level quality assessment method for filtering synthetic speech data. Unlike sentence-level filtering approaches, WAVe:

  • Aligns each word to its corresponding audio frames using multi-head attention
  • Assigns per-word confidence scores via a GLU-based scorer
  • Detects localized synthesis errors (mispronunciations, omitted words, prosodic anomalies)
  • Achieves 6.5% improvement over sentence-level filtering methods

The strict threshold (q ≥ 0.8) retains only the top 33.3% of synthetic samples, ensuring that only the highest-quality synthetic speech is used for training.
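While the full WAVe architecture is described in the under-review paper, the filtering step itself reduces to aggregating per-word confidences into an utterance-level score q and thresholding it. The sketch below assumes mean aggregation, which is an assumption, as is the list-of-floats data structure.

from typing import List

def passes_wave_filter(word_scores: List[float], threshold: float = 0.8) -> bool:
    """Aggregate per-word WAVe confidences into an utterance score q
    and keep the sample only if q >= threshold.

    Mean aggregation is an assumption; the actual WAVe aggregation
    rule is defined in the (under-review) paper."""
    q = sum(word_scores) / len(word_scores)
    return q >= threshold

# Example: one weak word drags the utterance below the strict cutoff
print(passes_wave_filter([0.95, 0.91, 0.40]))  # False (q ≈ 0.75)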

When to Use This Model

This model is ideal when:

  • Best in-domain accuracy required: Achieves 7.94% WER on Common Voice Portuguese
  • Compute efficiency matters: 33% fewer synthetic samples than unfiltered approach
  • Quick fine-tuning needed: Smaller dataset (29,178 samples) enables faster iteration
  • Quality over quantity: Only top-tier synthetic data (33.3%) for clean training signal

For other trade-offs, consider the alternative training configurations in the comparison table above: the mid-high (q ≥ 0.5) variant gives the best cross-domain MLS result, while the unfiltered mixture uses all synthetic data at higher compute cost.

Limitations

  • Domain specificity: Optimized for general Portuguese; may underperform on technical domains
  • Acoustic conditions: Trained on clean speech; noise robustness not guaranteed
  • Dialect coverage: Performance may vary across Portuguese regional variants (European vs Brazilian)

Citation

This model is part of research on WAVe (Word-Aligned Verification) for synthetic speech quality assessment. While the WAVe methodology paper is currently under review, please cite our previous work that motivated this research:

@article{perezhohin2024enhancing,
  title={Enhancing Automatic Speech Recognition: Effects of Semantic Audio Filtering on Models Performance},
  author={Perezhohin, Yuriy and Santos, Tiago and Costa, Victor and Peres, Fernando and Castelli, Mauro},
  journal={IEEE Access},
  year={2024},
  publisher={IEEE}
}

License

Apache 2.0
