# whisper-large-v3-med-pl-lora
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0768
- Model Preparation Time: 0.022
- WER: 6.9829
- CER: 3.1870
## Model description
More information needed
## Intended uses & limitations
More information needed
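Since the card does not yet document usage, here is a hedged inference sketch: it loads the LoRA adapter on top of the base `openai/whisper-large-v3` checkpoint with PEFT. The dummy waveform, fp16 dtype, and the `language="pl"` decoding options are illustrative assumptions, not settings documented by the author.

```python
# Hedged inference sketch (assumptions: GPU with fp16, 16 kHz mono audio).
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "AleksanderObuchowski/whisper-large-v3-med-pl-lora"
)
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")

# Placeholder: replace with a real 16 kHz mono waveform (e.g. via librosa/soundfile).
audio = np.zeros(16_000, dtype=np.float32)

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
features = inputs.input_features.to(model.device, dtype=torch.float16)

with torch.no_grad():
    ids = model.generate(input_features=features, language="pl", task="transcribe")
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```

For deployment you can optionally fold the adapter into the base weights with `model.merge_and_unload()`, which removes the LoRA indirection at inference time.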
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged training-arguments sketch mirroring them follows the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
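A minimal `Seq2SeqTrainingArguments` sketch that mirrors the values above. Dataset loading, the LoRA configuration, the data collator, and the trainer wiring are omitted; `output_dir` is illustrative, and treating `train_batch_size` as per-device is an assumption.

```python
# Hedged sketch mirroring the listed hyperparameters; not the author's script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-med-pl-lora",  # illustrative
    learning_rate=1e-4,
    per_device_train_batch_size=32,  # assumes batch size is per device
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed-precision training
)
```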
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | WER | CER |
|---|---|---|---|---|---|---|
| 0.1664 | 0.4996 | 602 | 0.1688 | 0.022 | 15.9176 | 5.4619 |
| 0.1296 | 0.9992 | 1204 | 0.1216 | 0.022 | 11.5699 | 4.7418 |
| 0.082 | 1.4988 | 1806 | 0.0956 | 0.022 | 11.4329 | 5.6251 |
| 0.0863 | 1.9983 | 2408 | 0.0833 | 0.022 | 9.2359 | 4.2807 |
| 0.0543 | 2.4979 | 3010 | 0.0779 | 0.022 | 9.8054 | 5.3263 |
| 0.0505 | 2.9975 | 3612 | 0.0712 | 0.022 | 7.3651 | 3.1850 |
| 0.0371 | 3.4971 | 4214 | 0.0725 | 0.022 | 9.3228 | 5.1141 |
| 0.0334 | 3.9967 | 4816 | 0.0704 | 0.022 | 8.2300 | 4.2879 |
| 0.0204 | 4.4963 | 5418 | 0.0707 | 0.022 | 7.1219 | 3.1809 |
| 0.0213 | 4.9959 | 6020 | 0.0700 | 0.022 | 6.8613 | 3.1437 |
| 0.0151 | 5.4954 | 6622 | 0.0716 | 0.022 | 7.2146 | 3.3620 |
| 0.0106 | 5.9950 | 7224 | 0.0701 | 0.022 | 6.9385 | 3.2018 |
| 0.0082 | 6.4946 | 7826 | 0.0730 | 0.022 | 6.7416 | 3.1243 |
| 0.009 | 6.9942 | 8428 | 0.0726 | 0.022 | 6.9771 | 3.3038 |
| 0.0062 | 7.4938 | 9030 | 0.0741 | 0.022 | 6.8941 | 3.2291 |
| 0.0052 | 7.9934 | 9632 | 0.0737 | 0.022 | 6.9578 | 3.3553 |
| 0.0045 | 8.4929 | 10234 | 0.0751 | 0.022 | 6.6721 | 3.0858 |
| 0.0038 | 8.9925 | 10836 | 0.0756 | 0.022 | 7.1953 | 3.2857 |
| 0.0038 | 9.4921 | 11438 | 0.0766 | 0.022 | 7.0775 | 3.2391 |
| 0.0033 | 9.9917 | 12040 | 0.0768 | 0.022 | 6.9829 | 3.1870 |
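The WER and CER columns appear to be percentages (error rate × 100). Below is a hedged sketch of how such figures are typically computed with the `evaluate` library; the strings are placeholders, not data from this model's evaluation set.

```python
# Hedged sketch of WER/CER computation; requires `pip install evaluate jiwer`.
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")

predictions = ["przykladowa transkrypcja modelu"]   # hypothetical model output
references = ["przykładowa transkrypcja wzorcowa"]  # hypothetical reference

print("WER (%):", 100 * wer.compute(predictions=predictions, references=references))
print("CER (%):", 100 * cer.compute(predictions=predictions, references=references))
```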
### Framework versions
- PEFT 0.17.1
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 4.3.0
- Tokenizers 0.21.4