Whisper-Large-v3 Portuguese - CAPES with WAVe Filtering (Beating State-of-the-Art)

This model is a fine-tuned version of openai/whisper-large-v3 for Portuguese automatic speech recognition (ASR). It was trained on Common Voice 17.0 Portuguese combined with the CAPES synthetic dataset filtered using WAVe (Word-Aligned Verification), achieving dramatic improvements over the baseline CAPES filtering approach.

Purpose

This model demonstrates the superiority of word-level filtering (WAVe) over sentence-level filtering by applying our new methodology to the established CAPES dataset. It is directly comparable with:

  1. my-north-ai/whisper-large-v3-pt: The current state-of-the-art Portuguese Whisper model using CAPES with sentence-level filtering
  2. whisper-large-v3-cv-capes-fs024-IEEE-pt: Our replication of the baseline CAPES methodology

Key Achievement

By applying WAVe filtering to the same CAPES dataset, this model achieves:

  • 49% relative improvement in cross-domain generalization (6.89% vs 13.54% MLS WER)
  • 5.7% relative improvement in in-domain performance (7.95% vs 8.43% CV WER)
  • 18% fewer training steps (880 vs 1,080)
  • 30% less synthetic data (23k vs 33k samples)

This demonstrates that quality filtering is more important than data quantity for robust ASR performance.

What Makes This Different?

CAPES Dataset Characteristics:

  • Source: Academic thesis transcripts (longer, more complex utterances than Common Voice)
  • Challenge: Synthesis errors can hide within extended passages
  • Previous filtering: Sentence-level assessment (cannot detect localized errors)

WAVe's Advantage:

  • Word-level alignment: Detects synthesis errors at individual word positions
  • Fine-grained filtering: Identifies mispronunciations, omitted words, and prosodic anomalies that sentence-level methods miss
  • Better for long utterances: Particularly effective for the longer CAPES samples where sentence-level filtering fails

Model Details

| Property | Value |
|----------|-------|
| Base Model | openai/whisper-large-v3 |
| Language | Portuguese (pt) |
| Task | Automatic Speech Recognition (transcribe) |
| Parameters | 1550M |
| Training Data | Common Voice 17.0 + CAPES Filtered (WAVe word-level) |
| Total Training Samples | ~45,000 |
| Sampling Rate | 16 kHz |
| Filtering Method | WAVe word-level (q ≥ 0.8) |

Evaluation Results

This Model (whisper-large-v3-cv-capes-filtered-pt)

| Metric | Value |
|--------|-------|
| Validation Loss | 0.1055 |
| Validation WER | 7.38% |
| Test WER (Common Voice) | 7.95% |
| Test WER (MLS) | 6.89% |
| Best Checkpoint | Step 300 |
| Max Training Steps | 880 |
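
The WER figures above can be reproduced along the following lines. This is a minimal sketch, not the exact evaluation script: the subset size and lower-casing normalization are assumptions.

import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-large-v3-cv-capes-filtered-pt",
    device="cuda",
)
wer_metric = evaluate.load("wer")

# Common Voice 17.0 Portuguese test split (gated dataset: requires accepting
# the Common Voice terms on the Hub), resampled to Whisper's 16 kHz.
dataset = load_dataset("mozilla-foundation/common_voice_17_0", "pt", split="test")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

predictions, references = [], []
for sample in dataset.select(range(100)):  # small subset for illustration
    output = transcriber(sample["audio"]["array"])
    predictions.append(output["text"].lower())
    references.append(sample["sentence"].lower())

print(f"WER: {wer_metric.compute(predictions=predictions, references=references):.4f}")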

Comparison: Sentence-Level vs Word-Level Filtering (Same CAPES Dataset)

| Model | Filtering Method | Synthetic Samples | Max Steps | Test WER (CV) | Test WER (MLS) | MLS Improvement |
|-------|------------------|-------------------|-----------|---------------|----------------|-----------------|
| my-north-ai/whisper-large-v3-pt | Sentence-level | ~33k | ~1,080 | ~8.4% | ~13.5% | Baseline |
| CAPES Baseline | Sentence-level | 33.2k | 1,080 | 8.43% | 13.54% | Baseline |
| This Model (WAVe) | Word-level | 23k | 880 | 7.95% | 6.89% | +49% |

Key Performance Highlights

  • Best cross-domain performance: 6.89% MLS WER (best among all Portuguese models evaluated)
  • 49% relative improvement over baseline CAPES on MLS benchmark
  • Superior in-domain: 7.95% CV WER (5.7% better than baseline)
  • Most efficient: 30% less synthetic data, 18% fewer training steps
  • Beats state-of-the-art: Outperforms my-north-ai/whisper-large-v3-pt on both in-domain (CV) and cross-domain (MLS) benchmarks

Comparison with All Portuguese Large-v3 Variants

| Model | Dataset | Filtering | Test WER (CV) | Test WER (MLS) | Best For |
|-------|---------|-----------|---------------|----------------|----------|
| CV Only | Common Voice | None | 11.78% | 15.31% | Baseline |
| High-Quality | Our Synthetic | q ≥ 0.8 | 7.94% | 12.41% | In-domain |
| Mixed | Our Synthetic | q ≥ 0.5 | 8.33% | 10.27% | Balanced |
| CAPES Baseline | CAPES | Sentence-level | 8.43% | 13.54% | State-of-the-art replication |
| CAPES WAVe (this) | CAPES | Word-level | 7.95% | 6.89% | Cross-domain champion |

Training Data

Dataset Composition

| Source | Samples | Description |
|--------|---------|-------------|
| Common Voice 17.0 Portuguese | 21,866 | Real crowdsourced speech |
| CAPES Filtered (WAVe q ≥ 0.8) | ~23,000 | Academic thesis-derived synthetic speech with word-level filtering |
| Total | ~45,000 | |
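
A minimal sketch of how the two sources might be combined, assuming the filtered CAPES set is available locally with matching audio/sentence columns (the directory path is a placeholder, not a published dataset ID):

from datasets import Audio, concatenate_datasets, load_dataset

cv = load_dataset("mozilla-foundation/common_voice_17_0", "pt", split="train")
capes = load_dataset("audiofolder", data_dir="capes_wave_filtered")["train"]  # placeholder path

# Align both sources on the 16 kHz sampling rate Whisper expects.
cv = cv.cast_column("audio", Audio(sampling_rate=16_000))
capes = capes.cast_column("audio", Audio(sampling_rate=16_000))

# concatenate_datasets requires identical features, so keep only shared columns.
cv = cv.select_columns(["audio", "sentence"])
capes = capes.select_columns(["audio", "sentence"])

train_dataset = concatenate_datasets([cv, capes]).shuffle(seed=42)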

WAVe Filtering Applied to CAPES

By applying WAVe's word-level filtering (q ≥ 0.8) to the CAPES dataset:

  • Original CAPES: 55k samples
  • Sentence-level filtering: Retained 33.2k samples (60%)
  • WAVe filtering: Retained 23k samples (42%)
  • Reduction: 30% fewer samples than sentence-level, but dramatically better performance
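
The thresholding step itself is simple once per-word scores exist. The sketch below assumes a minimum-word-score aggregation rule; the actual rule used to compute q is not specified here and may differ (e.g., mean):

def passes_wave_filter(word_scores, threshold=0.8):
    # Keep the utterance only if every word clears the quality bar,
    # so a single badly synthesized word rejects the whole sample.
    return min(word_scores) >= threshold

# Toy example: one low-scoring word filters out the utterance.
print(passes_wave_filter([0.95, 0.91, 0.88]))  # True  -> kept
print(passes_wave_filter([0.95, 0.42, 0.88]))  # False -> discarded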

Training Procedure

Hyperparameters

| Parameter | Value |
|-----------|-------|
| Learning Rate | 5e-6 |
| Batch Size (Global) | 256 |
| Warmup Steps | 200 |
| Max Epochs | 5 |
| Precision | BF16 |
| Optimizer | AdamW (fused) |
| Eval Steps | 50 |
| Metric for Best Model | eval_loss |
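
These settings map onto Hugging Face Seq2SeqTrainingArguments roughly as follows. The per-device batch size and gradient accumulation split of the 256 global batch are assumptions:

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-cv-capes-filtered-pt",
    learning_rate=5e-6,
    per_device_train_batch_size=32,  # assumption: 32 x 8 accumulation = 256 global
    gradient_accumulation_steps=8,
    warmup_steps=200,
    num_train_epochs=5,
    bf16=True,
    optim="adamw_torch_fused",
    eval_strategy="steps",
    eval_steps=50,
    save_strategy="steps",
    save_steps=50,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)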

Training Infrastructure

  • GPU: NVIDIA H200 (141GB VRAM)
  • Operating System: Ubuntu 22.04
  • Framework: Hugging Face Transformers

Usage

Transcription Pipeline

from transformers import pipeline

# Load the fine-tuned checkpoint into an ASR pipeline on GPU.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-large-v3-cv-capes-filtered-pt",
    device="cuda"
)

result = transcriber("path/to/portuguese_audio.wav")
print(result["text"])
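
For audio longer than Whisper's 30-second window, the pipeline's chunking can be enabled and the language pinned to Portuguese (illustrative settings):

result = transcriber(
    "path/to/long_portuguese_audio.wav",
    chunk_length_s=30,   # process long audio in 30 s windows
    batch_size=8,        # decode several chunks in parallel
    return_timestamps=True,
    generate_kwargs={"language": "pt", "task": "transcribe"},
)
print(result["text"])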

Direct Model Usage

from transformers import WhisperProcessor, WhisperForConditionalGeneration
import librosa

processor = WhisperProcessor.from_pretrained("yuriyvnv/whisper-large-v3-cv-capes-filtered-pt")
model = WhisperForConditionalGeneration.from_pretrained("yuriyvnv/whisper-large-v3-cv-capes-filtered-pt")
model.to("cuda")

# Load audio at Whisper's expected 16 kHz sampling rate.
audio, sr = librosa.load("path/to/portuguese_audio.wav", sr=16000)
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features.to("cuda")

# Generate token IDs and decode them to text.
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)

Specifying Language

To pin decoding to Portuguese transcription (avoiding language auto-detection on short or ambiguous clips):

# Force Portuguese transcription for all subsequent generate() calls.
model.generation_config.language = "pt"
model.generation_config.task = "transcribe"
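
On recent versions of transformers, the same options can instead be passed per call:

predicted_ids = model.generate(input_features, language="pt", task="transcribe")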

Methodology: Why Word-Level Filtering Wins

WAVe (Word-Aligned Verification) achieves superior performance through:

  1. Fine-grained alignment: Maps each word to its corresponding audio frames using multi-head attention
  2. Per-word quality scores: a GLU-based scorer assigns confidence to individual words, not entire sentences (see the conceptual sketch after this list)
  3. Localized error detection: Identifies synthesis defects (mispronunciations, omissions, prosodic anomalies) that hide in long utterances
  4. Measured gains: 6.5% improvement over sentence-level filtering methods
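
Conceptually, the per-word scoring head can be pictured as follows. This is a hedged toy sketch, not the published architecture; dimensions and layer choices are assumptions:

import torch
import torch.nn as nn

class WordQualityScorer(nn.Module):
    """Toy sketch of a WAVe-style per-word scorer (dimensions assumed)."""

    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        # Word embeddings query the audio frame features (alignment step).
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Project to 2*d_model so the GLU gate halves it back to d_model.
        self.proj = nn.Linear(d_model, 2 * d_model)
        self.glu = nn.GLU(dim=-1)
        self.head = nn.Linear(d_model, 1)

    def forward(self, word_emb, audio_feats):
        # word_emb: (batch, n_words, d_model); audio_feats: (batch, n_frames, d_model)
        aligned, _ = self.cross_attn(word_emb, audio_feats, audio_feats)
        gated = self.glu(self.proj(aligned))
        return torch.sigmoid(self.head(gated)).squeeze(-1)  # (batch, n_words) in [0, 1]

scores = WordQualityScorer()(torch.randn(1, 6, 512), torch.randn(1, 200, 512))
print(scores.shape)  # torch.Size([1, 6]): one quality score per word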

Why It Matters for CAPES:

  • CAPES contains longer, more complex utterances from academic theses
  • Synthesis errors can hide within extended passages
  • Sentence-level filtering misses these localized defects
  • WAVe's word-level attention catches what sentence-level methods miss

When to Use This Model

This model is ideal when:

  • Best cross-domain performance required: 6.89% MLS WER (best among all Portuguese models)
  • Robust generalization needed: Excels on out-of-domain data (MLS benchmark)
  • Quality over quantity: Achieves superior results with less data
  • Comparing filtering methodologies: Demonstrates effectiveness of word-level vs sentence-level filtering

Research Impact

This model proves a fundamental principle in synthetic speech augmentation:

Word-level quality filtering is more effective than sentence-level filtering for ASR training, especially with longer utterances.

The 49% improvement in cross-domain generalization while using 30% less data demonstrates that:

  • Quality > Quantity for synthetic speech
  • Fine-grained filtering > Coarse-grained filtering
  • Word-level alignment > Sentence-level assessment

Limitations

  • Domain specificity: Optimized for general Portuguese; may underperform on technical domains
  • Acoustic conditions: Trained on clean speech; noise robustness not guaranteed
  • Dialect coverage: Performance may vary across Portuguese regional variants (European vs Brazilian)

Citation

This model demonstrates WAVe (Word-Aligned Verification) filtering applied to the CAPES dataset. While the WAVe methodology paper is currently under review, the CAPES dataset and sentence-level filtering baseline are from our previous work:

@article{perezhohin2024enhancing,
  title={Enhancing Automatic Speech Recognition: Effects of Semantic Audio Filtering on Models Performance},
  author={Perezhohin, Yuriy and Santos, Tiago and Costa, Victor and Peres, Fernando and Castelli, Mauro},
  journal={IEEE Access},
  year={2024},
  publisher={IEEE}
}

License

Apache 2.0
