Merge method reference: *Model Stock: All we need is just a few fine-tuned models* (arXiv:2403.19522)
This is a merge of pre-trained language models created using mergekit.
This merge was inspired by Yoesph/Haphazard-v1.1-24b.
v1.2
This model was merged using the Model Stock merge method, with arcee-ai/Arcee-Blitz as the base.
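For intuition, Model Stock averages the fine-tuned models and then interpolates that average back toward the base, with a per-layer ratio derived from the angle between the models' task vectors. The sketch below is a minimal illustration of the rule described in the paper (t = N·cosθ / (1 + (N−1)·cosθ)), not mergekit's actual implementation, which may differ in details; the function and tensor names are illustrative.

```python
import torch

def model_stock_layer(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge one layer's weights with the Model Stock rule (illustrative sketch).

    Assumes at least two fine-tuned models, all with the same shape as `base`.
    """
    n = len(tuned)
    # Task vectors: each fine-tuned model's deviation from the base weights.
    deltas = [w - base for w in tuned]
    # Average pairwise cosine similarity between task vectors (cos of theta).
    cos_sims = []
    for i in range(n):
        for j in range(i + 1, n):
            a, b = deltas[i].flatten(), deltas[j].flatten()
            cos_sims.append(torch.dot(a, b) / (a.norm() * b.norm() + 1e-8))
    cos_theta = torch.stack(cos_sims).mean()
    # Interpolation ratio from the paper: closer-aligned models pull the
    # result further from the base; disagreeing models pull it back toward it.
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    w_avg = torch.stack(tuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```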
The following models were included in the merge:
- aixonlab/Eurydice-24b-v2
- TheDrummer/Cydonia-24B-v2.1
- ReadyArt/Broken-Tutu-24B
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
The following YAML configuration was used to produce this model:
```yaml
base_model: arcee-ai/Arcee-Blitz
merge_method: model_stock
dtype: bfloat16
models:
  - model: aixonlab/Eurydice-24b-v2                     # storytelling / RP
  - model: TheDrummer/Cydonia-24B-v2.1                  # uncensor
  - model: ReadyArt/Broken-Tutu-24B                     # uncensor + nsfw + Cydonia
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b  # prompt adherence
```
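A merge like this is typically produced by pointing mergekit's `mergekit-yaml` entry point at the configuration above, and the resulting checkpoint loads like any other Transformers model. Below is a minimal usage sketch; `your-name/merged-24b` is a placeholder for wherever the merged weights end up, and `device_map="auto"` assumes the `accelerate` package is installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: substitute the actual path or Hub name of the merge.
model_id = "your-name/merged-24b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

prompt = "Tell me a short story about a lighthouse keeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```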