Paper: [Editing Models with Task Arithmetic](https://arxiv.org/abs/2212.04089)
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EstherXC/mixtral_task_arithmetic")
model = AutoModelForCausalLM.from_pretrained("EstherXC/mixtral_task_arithmetic")
```

This is a merge of pre-trained language models created using mergekit.
This model was merged with the Task Arithmetic merge method described in the paper above, using mistralai/Mistral-7B-v0.1 as the base.
The following models were included in the merge:

- EstherXC/mixtral_7b_protein_pretrain
- wanglab/mixtral_7b_dna_pretrain
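Conceptually, task arithmetic builds a task vector for each fine-tuned model, τᵢ = θᵢ − θ_base, and adds their weighted sum back to the base: θ_merged = θ_base + Σᵢ wᵢ τᵢ (here wᵢ = 0.3 for both models, per the config below). A minimal sketch of that update, assuming plain PyTorch state dicts; the actual merge here was performed by mergekit, not this function:

```python
# Sketch of the task-arithmetic update over state dicts.
# Inputs are dicts of torch.Tensor keyed by parameter name.
import torch

def task_arithmetic_merge(base_sd, finetuned_sds, weights):
    """merged = base + sum_i(weight_i * (finetuned_i - base))"""
    merged = {}
    for name, base_param in base_sd.items():
        # Weighted sum of task vectors for this parameter.
        delta = sum(
            w * (sd[name].float() - base_param.float())
            for w, sd in zip(weights, finetuned_sds)
        )
        merged[name] = (base_param.float() + delta).to(base_param.dtype)
    return merged
```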
The following YAML configuration was used to produce this model:
```yaml
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: EstherXC/mixtral_7b_protein_pretrain
    parameters:
      weight: 0.3
  - model: wanglab/mixtral_7b_dna_pretrain # dnagpt/llama-dna
    parameters:
      weight: 0.3
merge_method: task_arithmetic
dtype: float16
tokenizer_source: "base"
```
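Assuming mergekit is installed (`pip install mergekit`), a merge like this is reproduced by saving the configuration to a file and running its CLI, e.g. `mergekit-yaml config.yaml ./mixtral_task_arithmetic`; the exact invocation used for this model is not recorded in the card.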
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="EstherXC/mixtral_task_arithmetic")
```
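For example, generation could then look like the following; the prompt and sampling settings are hypothetical illustrations, chosen because the merged models were pretrained on protein and DNA sequences:

```python
# Hypothetical usage: prompt and generation parameters are illustrative only.
output = pipe("ATGGCC", max_new_tokens=32, do_sample=True)
print(output[0]["generated_text"])
```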