# vim
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Jackrong/gpt-oss-120b-Distill-Llama3.1-8B-v2](https://huggingface.co/Jackrong/gpt-oss-120b-Distill-Llama3.1-8B-v2) as the base model.
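For intuition, here is a minimal sketch of what DARE TIES does to a single weight tensor: each donor model's difference from the base is randomly sparsified and rescaled (DARE), then only the contributions that agree with the majority sign are kept (TIES). This is illustrative only, not mergekit's actual implementation, which operates on whole checkpoints and handles masking and normalization per the config below.

```python
# Illustrative DARE-TIES merge of a single weight tensor.
# Assumes two donor models and a shared base; not mergekit's real code.
import torch

def dare_ties(base, donors, weights, density=0.5):
    deltas = []
    for donor, w in zip(donors, weights):
        delta = donor - base                       # task vector
        keep = torch.bernoulli(torch.full_like(delta, density))
        deltas.append(w * delta * keep / density)  # DARE: drop and rescale
    stacked = torch.stack(deltas)
    sign = torch.sign(stacked.sum(dim=0))          # TIES: elect dominant sign
    agree = torch.sign(stacked) == sign            # keep agreeing contributions
    return base + (stacked * agree).sum(dim=0)

# e.g. merging one tensor with this card's density/weight values:
# merged = dare_ties(base_W, [apollo_W, soulbound_W], weights=[0.4, 0.4], density=0.5)
```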
### Models Merged

The following models were included in the merge:

* [Locutusque/Apollo-2.0-Llama-3.1-8B](https://huggingface.co/Locutusque/Apollo-2.0-Llama-3.1-8B)
* [Akashiurahara/Soulbound-8B](https://huggingface.co/Akashiurahara/Soulbound-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Jackrong/gpt-oss-120b-Distill-Llama3.1-8B-v2
chat_template: llama3
dtype: float32
merge_method: dare_ties
modules:
  default:
    slices:
      - sources:
          - layer_range: [0, 32]
            model: Locutusque/Apollo-2.0-Llama-3.1-8B
            parameters:
              density: 0.5
              weight: 0.4
          - layer_range: [0, 32]
            model: Akashiurahara/Soulbound-8B
            parameters:
              density: 0.5
              weight: 0.4
          - layer_range: [0, 32]
            model: Jackrong/gpt-oss-120b-Distill-Llama3.1-8B-v2
parameters:
  int8_mask: 1.0
  normalize: 0.0
tokenizer:
  pad_to_multiple_of: 32
```
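Since the config sets `chat_template: llama3`, the merged model can be used like any Llama-3.1-style chat checkpoint. A minimal usage sketch follows; the repo id is a placeholder to be replaced with this repository's actual published name.

```python
# Minimal inference sketch; "your-username/vim" is a placeholder repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/vim"  # placeholder, substitute the real repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```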