Merge method paper: Model Breadcrumbs: Scaling Multi-Task Model Merging with Sparse Masks (arXiv:2312.06795)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Breadcrumbs merge method, with meta-llama/Meta-Llama-3-8B-Instruct as the base model.
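The Breadcrumbs method sparsifies each fine-tuned model's task vector (its parameter delta from the base) by discarding both the largest-magnitude outliers and the smallest-magnitude entries, then adds the weighted, masked deltas back onto the base weights. The sketch below illustrates that idea for a single parameter tensor. It is a conceptual approximation, not mergekit's implementation; the function names, default values, and exact masking details are assumptions.

```python
import torch

def breadcrumbs_mask(delta: torch.Tensor, density: float = 0.9, gamma: float = 0.01) -> torch.Tensor:
    """Keep a middle band of task-vector magnitudes: drop the top `gamma`
    fraction (outliers) and keep only `density` of the entries overall.
    Parameter names mirror mergekit's breadcrumbs options, but the defaults
    here are illustrative only."""
    flat = delta.abs().flatten()
    n = flat.numel()
    k_top = int(n * gamma)      # largest-magnitude entries to drop
    k_keep = int(n * density)   # fraction of entries to retain
    order = torch.argsort(flat, descending=True)
    keep_idx = order[k_top:k_top + k_keep]  # the surviving middle band
    mask = torch.zeros(n, dtype=torch.bool, device=delta.device)
    mask[keep_idx] = True
    return delta * mask.view_as(delta)

def merge_breadcrumbs(base_w, finetuned_ws, weights, density=0.9, gamma=0.01):
    """Merge one parameter tensor: base + weighted sum of sparsified deltas."""
    merged = base_w.clone()
    for w, ft in zip(weights, finetuned_ws):
        merged += w * breadcrumbs_mask(ft - base_w, density, gamma)
    return merged
```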
The following models were included in the merge:
* shenzhi-wang/Llama3-8B-Chinese-Chat
The following YAML configuration was used to produce this model:
```yaml
merge_method: breadcrumbs
models:
  - model: shenzhi-wang/Llama3-8B-Chinese-Chat
    parameters:
      weight: 0.5
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      weight: 0.5
parameters: {}
dtype: bfloat16
tokenizer:
  source: union
base_model: meta-llama/Meta-Llama-3-8B-Instruct
write_readme: README.md
```
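Assuming mergekit is installed, a configuration like this is normally applied with the `mergekit-yaml` command line tool, for example `mergekit-yaml config.yaml ./merged-model` (the config and output paths here are placeholders). The breadcrumbs-specific `density` and `gamma` parameters are not set in this configuration, so mergekit's defaults for them would apply.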