Paper: Model Stock: All we need is just a few fine-tuned models (arXiv:2403.19522)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with Youlln/ECE-PRYMMAL-YL-1B-SLERP-V2 as the base.
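For intuition, the sketch below shows roughly what the Model Stock method computes for a single weight tensor: it moves the base weights toward the average of the fine-tuned weights by a ratio t derived from the angle between the fine-tuned models' task vectors. This is a minimal numpy illustration of the paper's formula, not mergekit's actual implementation, and `model_stock_merge` is a hypothetical helper name:

```python
import numpy as np

def model_stock_merge(base: np.ndarray, finetuned: list[np.ndarray]) -> np.ndarray:
    """Interpolate one weight tensor the way Model Stock does (illustrative only)."""
    k = len(finetuned)
    assert k >= 2, "Model Stock needs at least two fine-tuned models"
    # Task vectors: how each fine-tuned model moved away from the base weights.
    deltas = [(w - base).ravel() for w in finetuned]
    # Estimate cos(theta) as the mean pairwise cosine similarity of task vectors.
    sims = [
        deltas[i] @ deltas[j]
        / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j]) + 1e-12)
        for i in range(k)
        for j in range(i + 1, k)
    ]
    cos_theta = float(np.mean(sims))
    # Interpolation ratio from the paper: t = k*cos(theta) / (1 + (k-1)*cos(theta)).
    t = k * cos_theta / (1.0 + (k - 1) * cos_theta)
    # Move from the base toward the average of the fine-tuned weights by t.
    w_avg = np.mean(np.stack(finetuned), axis=0)
    return t * w_avg + (1.0 - t) * base
```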
The following models were included in the merge:
- goulue5/merging_LLM
- goulue5/fusion
- Youlln/ECE-PRYMMAL-YL-1B-SLERP-V1
- Sakalti/Saba1.5-1.5B
- Sakalti/Saba1.5-Pro
The following YAML configuration was used to produce this model:
models:
- model: goulue5/merging_LLM
- model: goulue5/fusion
- model: Youlln/ECE-PRYMMAL-YL-1B-SLERP-V1
- model: Youlln/ECE-PRYMMAL-YL-1B-SLERP-V2
- model: Sakalti/Saba1.5-1.5B
- model: Sakalti/Saba1.5-Pro
base_model: Youlln/ECE-PRYMMAL-YL-1B-SLERP-V2
merge_method: model_stock
parameters:
normalize: true
dtype: float32
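Assuming the configuration above is saved as a file such as config.yaml, the merge itself would typically be run with mergekit (for example via its mergekit-yaml command-line tool), and the resulting checkpoint loads like any other causal language model. A minimal loading sketch, with the repository id as a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; substitute the merged model's local path or Hub id.
repo_id = "path/to/merged-model"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```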