---
language:
- en
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
base_model: facebook/bart-base
model-index:
- name: bart-base-multi-news
  results:
  - task:
      type: text2text-generation
      name: Sequence-to-sequence Language Modeling
    dataset:
      name: multi_news
      type: multi_news
      config: default
      split: validation
      args: default
    metrics:
    - type: rouge
      value: 26.31
      name: Rouge1
    - type: rouge
      value: 9.6
      name: Rouge2
    - type: rouge
      value: 20.87
      name: Rougel
    - type: rouge
      value: 21.54
      name: Rougelsum
---
# bart-base-multi-news

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4147
- Rouge1: 26.31
- Rouge2: 9.6
- Rougel: 20.87
- Rougelsum: 21.54

## Intended uses & limitations
The intended use of this model is text summarization.
Since it was fine-tuned on only a subset of multi_news for a single epoch, the model would require additional training to perform better at summarization.
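
A minimal usage sketch with the transformers pipeline API is shown below. The model id is a placeholder for this repository's Hub path, and the generation lengths are illustrative assumptions rather than settings taken from this card.

```python
from transformers import pipeline

# Placeholder model id: substitute the actual Hub path of this checkpoint.
summarizer = pipeline("summarization", model="bart-base-multi-news")

# multi_news inputs are one or more concatenated news articles.
article = "..."  # replace with the text to summarize
result = summarizer(article, max_length=142, min_length=56, truncation=True)
print(result[0]["summary_text"])
```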
## Training and evaluation data

The training data were 10,000 samples from the multi_news training split,
and the evaluation data were 500 samples from the multi_news validation split.
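
Assuming the samples were taken from the head of each split (the card does not say how they were selected), the subsets could be loaded as follows:

```python
from datasets import load_dataset

# Assumption: the first N examples of each split; the selection
# method for the 10,000 / 500 samples is not stated in the card.
raw_train = load_dataset("multi_news", split="train[:10000]")
raw_eval = load_dataset("multi_news", split="validation[:500]")
```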
## Training procedure

Training was performed with the Seq2SeqTrainer class from the transformers library.
### Training hyperparameters

The following hyperparameters were passed to the Seq2SeqTrainingArguments class from the transformers library and used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
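
A hedged reconstruction of the training run under the settings above is sketched below; the tokenization lengths, `output_dir`, `evaluation_strategy`, and `predict_with_generate` are assumptions, since the card lists only the hyperparameters themselves. The listed optimizer settings match the Trainer's default AdamW configuration, so no explicit optimizer argument is needed.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Subsets as described in "Training and evaluation data" above.
raw_train = load_dataset("multi_news", split="train[:10000]")
raw_eval = load_dataset("multi_news", split="validation[:500]")

def preprocess(batch):
    # multi_news examples have a "document" (source articles) and a "summary".
    # The max lengths here are assumptions, not values from the card.
    model_inputs = tokenizer(batch["document"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_data = raw_train.map(preprocess, batched=True, remove_columns=raw_train.column_names)
eval_data = raw_eval.map(preprocess, batched=True, remove_columns=raw_eval.column_names)

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-base-multi-news",  # assumed output path
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch results table
    predict_with_generate=True,   # assumption: needed to compute ROUGE during eval
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_data,
    eval_dataset=eval_data,
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

With 10,000 training samples and a batch size of 8, one epoch corresponds to the 1250 steps shown in the results table below.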
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.4041        | 1.0   | 1250 | 2.4147          | 26.31  | 9.6    | 20.87  | 21.54     |
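
The ROUGE values in the table are on a 0-100 scale. The card does not include the metric code, but a sketch of computing such scores with the `evaluate` library might look like:

```python
import evaluate

# Hypothetical example strings; the actual predictions would come from
# trainer.evaluate() / model.generate() on the validation subset.
rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the model's generated summary"],
    references=["the reference summary from multi_news"],
)
# evaluate returns fractions in [0, 1]; scale to the 0-100 values reported above.
print({k: round(v * 100, 2) for k, v in scores.items()})
```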
### Framework versions

- Transformers 4.30.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3