# train_sst2_42_1763998301
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the sst2 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0963
- Num Input Tokens Seen: 30603904
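
Below is a minimal usage sketch for loading the adapter with PEFT. It assumes the repository hosts a PEFT adapter for the base model listed above; the card does not specify the adapter type, and the prompt template in the example is illustrative, not necessarily the one used in training.

```python
# Minimal sketch: load the adapter on top of the base model and run one
# sentiment query. The prompt format is an assumption; the card does not
# document how sst2 examples were templated during fine-tuning.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "rbelanec/train_sst2_42_1763998301"
base_id = "meta-llama/Llama-3.2-1B-Instruct"

model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

prompt = (
    "Classify the sentiment of the sentence as positive or negative.\n"
    "Sentence: a moving and thoughtful film\n"
    "Sentiment:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```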
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
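
For reference, here is a hypothetical reconstruction of this configuration as `transformers.TrainingArguments`. The training script is not included in the card, so this is a sketch of one plausible mapping, not the actual setup:

```python
# Sketch only: maps the hyperparameter list above onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_sst2_42_1763998301",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,   # 10% of total training steps spent in linear warmup
    num_train_epochs=10,
)
```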
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---|---|---|---|---|
| 0.1189 | 0.5000 | 15154 | 0.1524 | 1530848 |
| 0.2758 | 1.0000 | 30308 | 0.1132 | 3059440 |
| 0.2144 | 1.5000 | 45462 | 0.1114 | 4588624 |
| 0.0135 | 2.0001 | 60616 | 0.1139 | 6120448 |
| 0.1894 | 2.5001 | 75770 | 0.0963 | 7647648 |
| 0.0008 | 3.0001 | 90924 | 0.1020 | 9179728 |
| 0.0013 | 3.5001 | 106078 | 0.1038 | 10712368 |
| 0.3086 | 4.0001 | 121232 | 0.1014 | 12240816 |
| 0.2025 | 4.5001 | 136386 | 0.0991 | 13771360 |
| 0.0009 | 5.0002 | 151540 | 0.0985 | 15302384 |
| 0.0008 | 5.5002 | 166694 | 0.0990 | 16833680 |
| 0.4378 | 6.0002 | 181848 | 0.0998 | 18362464 |
| 0.1183 | 6.5002 | 197002 | 0.1090 | 19889616 |
| 0.2138 | 7.0002 | 212156 | 0.1024 | 21421568 |
| 0.4953 | 7.5002 | 227310 | 0.1068 | 22953920 |
| 0.0010 | 8.0003 | 242464 | 0.1034 | 24483488 |
| 0.0016 | 8.5003 | 257618 | 0.1026 | 26013248 |
| 0.0005 | 9.0003 | 272772 | 0.1033 | 27544816 |
| 0.0002 | 9.5003 | 287926 | 0.1060 | 29075248 |
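
The reported evaluation loss of 0.0963 matches the epoch-2.5 row (step 75770), which suggests the best checkpoint by validation loss was kept; later checkpoints plateau slightly above it. As a worked example of the schedule: the table implies roughly 30,308 optimizer steps per epoch, so 10 epochs give about 303,080 total steps and, with `warmup_ratio=0.1`, about 30,308 warmup steps. The sketch below mirrors the shape of `transformers.get_cosine_schedule_with_warmup` under those inferred step counts:

```python
# Sketch of a cosine schedule with 10% linear warmup, using step counts
# inferred from the results table (not stated explicitly in the card).
import math

TOTAL_STEPS = 303_080   # ~30,308 steps/epoch x 10 epochs (inferred)
WARMUP_STEPS = 30_308   # warmup_ratio = 0.1
PEAK_LR = 5e-5

def lr_at(step: int) -> float:
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS  # linear warmup to the peak LR
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return PEAK_LR * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay to 0

print(f"{lr_at(15_154):.2e}")   # mid-warmup, around epoch 0.5
print(f"{lr_at(75_770):.2e}")   # epoch 2.5, where the best eval loss occurs
```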
### Framework versions
- PEFT 0.17.1
- Transformers 4.51.3
- Pytorch 2.9.1+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4