# vim

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the Karcher Mean merge method.
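
The Karcher mean (also called the Fréchet mean) generalizes the arithmetic mean to curved spaces: it is the point minimizing the sum of squared geodesic distances to the inputs, found by fixed-point iteration rather than in closed form. As a rough illustration only (not mergekit's actual implementation), the sketch below runs that iteration for unit vectors on a hypersphere; the `max_iter` and `tol` arguments mirror the `parameters` block in the configuration further down.

```python
import numpy as np


def karcher_mean_sphere(points, max_iter=10, tol=1e-5):
    """Fixed-point iteration for the Karcher (Frechet) mean of unit vectors.

    Illustrative sketch only -- mergekit's karcher method operates on model
    weight tensors and differs in implementation detail.
    """
    pts = [p / np.linalg.norm(p) for p in points]  # project onto unit sphere
    mu = pts[0].copy()
    for _ in range(int(max_iter)):
        # Average the log maps: tangent vectors pointing from mu toward each point.
        tangent = np.zeros_like(mu)
        for p in pts:
            cos_t = np.clip(np.dot(mu, p), -1.0, 1.0)
            theta = np.arccos(cos_t)
            if theta > 1e-12:
                tangent += theta * (p - cos_t * mu) / np.sin(theta)
        tangent /= len(pts)
        step = np.linalg.norm(tangent)
        if step < tol:  # converged: the mean update is below tolerance
            break
        # Exponential map: move along the geodesic in the update direction.
        mu = np.cos(step) * mu + np.sin(step) * tangent / step
    return mu


# Example: the geodesic mean of three orthogonal directions on the 2-sphere.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([0.0, 0.0, 1.0])
print(karcher_mean_sphere([a, b, c]))  # converges toward (1, 1, 1) / sqrt(3)
```

For merging, the appeal of this geodesic averaging is that it respects the geometry of the weight space rather than blending parameters with a plain element-wise mean.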

### Models Merged

The following models were included in the merge:

* Locutusque/Apollo-2.0-Llama-3.1-8B
* Akashiurahara/Soulbound-8B
* Jackrong/gpt-oss-120b-Distill-Llama3.1-8B-v2

### Configuration

The following YAML configuration was used to produce this model:

```yaml
chat_template: llama3
dtype: float32
merge_method: karcher
modules:
  default:
    slices:
    - sources:
      - layer_range: [0, 32]
        model: Locutusque/Apollo-2.0-Llama-3.1-8B
      - layer_range: [0, 32]
        model: Akashiurahara/Soulbound-8B
      - layer_range: [0, 32]
        model: Jackrong/gpt-oss-120b-Distill-Llama3.1-8B-v2
parameters:
  max_iter: 10.0
  tol: 1.0e-05
tokenizer:
  pad_to_multiple_of: 32
```
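
To reproduce the merge, this configuration can be saved as, e.g., `config.yaml` and rendered with mergekit's `mergekit-yaml` entry point (`mergekit-yaml config.yaml ./output-directory`). Note that `karcher` is a base-model-free method: the configuration lists only the three source models, and `max_iter`/`tol` bound the fixed-point iteration sketched above.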
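
Since the configuration sets `chat_template: llama3`, the merged model can be prompted through the standard chat API. A minimal inference sketch with Hugging Face Transformers, assuming this merge is published as `kromcomp/L3.1-Vimv2-8B`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kromcomp/L3.1-Vimv2-8B"  # assumed repository id for this merge

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # weights are stored in float32; "auto" keeps that dtype
    device_map="auto",
)

# chat_template: llama3 means messages are formatted with the Llama 3 template.
messages = [{"role": "user", "content": "Briefly explain what a model merge is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```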