Upload 12 files

- README.md +8 -39
- config.json +1 -1
- generation_config(1).json +6 -0
- output-00001-of-00002.safetensors +2 -2
- output-00002-of-00002.safetensors +2 -2
- tokenizer_config.json +1 -2
README.md
CHANGED

@@ -6,50 +6,22 @@ language:
 - de
 - es
 - en
-widget:
-- text: "[INST] You are a pirate chatbot who always responds with Arr and pirate speak!\nThere's a llama on my lawn, how can I get rid of him? [/INST]"
-  output:
-    text: "Arr, shiver me timbers! Ye have a llama on yer lawn, ye say? Well, that be a new one for me! Here's what I'd suggest, arr:\n\n1. Firstly, ensure yer safety. Llamas may look gentle, but they can be protective if they feel threatened.\n2. Try to make the area less appealing to the llama. Remove any food sources or water that might be attracting it.\n3. Contact local animal control or a wildlife rescue organization. They be the experts and can provide humane ways to remove the llama from yer property.\n4. If ye have any experience with animals, you could try to gently herd the llama towards a nearby field or open space. But be careful, arr!\n\nRemember, arr, it be important to treat the llama with respect and care. It be a creature just trying to survive, like the rest of us."
 ---
 # Model Card for Mixtral-8x7B
-The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The
+The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
 
 For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
 
 ## Warning
 This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
 
-## Instruction format
-
-This format must be strictly respected, otherwise the model will generate sub-optimal outputs.
-
-The template used to build a prompt for the Instruct model is defined as follows:
-```
-<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
-```
-Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS), while [INST] and [/INST] are regular strings.
-
-As reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
-```python
-def tokenize(text):
-    return tok.encode(text, add_special_tokens=False)
-
-[BOS_ID] +
-tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
-tokenize(BOT_MESSAGE_1) + [EOS_ID] +
-…
-tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
-tokenize(BOT_MESSAGE_N) + [EOS_ID]
-```
-
-In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.
-
 ## Run the model
+
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_id = "mistralai/Mixtral-8x7B-
+model_id = "mistralai/Mixtral-8x7B-v0.1"
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 
 model = AutoModelForCausalLM.from_pretrained(model_id)

@@ -74,7 +46,7 @@ Note `float16` precision only works on GPU devices
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_id = "mistralai/Mixtral-8x7B-
+model_id = "mistralai/Mixtral-8x7B-v0.1"
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 
 model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)

@@ -96,7 +68,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_id = "mistralai/Mixtral-8x7B-
+model_id = "mistralai/Mixtral-8x7B-v0.1"
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 
 model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

@@ -118,7 +90,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_id = "mistralai/Mixtral-8x7B-
+model_id = "mistralai/Mixtral-8x7B-v0.1"
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 
 model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)

@@ -131,11 +103,8 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```
 </details>
 
-##
-
-The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
-It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
-make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
+## Notice
+Mixtral-8x7B is a pretrained base model and therefore does not have any moderation mechanisms.
 
 # The Mistral AI Team
 Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
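The fine-tuning tokenization pseudo-code removed from the README above can be made runnable. A minimal sketch, assuming a toy word-level tokenizer as a stand-in for the real SentencePiece `tok` (the token ids below are illustrative only):

```python
# Runnable sketch of the tokenization scheme from the removed "Instruction
# format" section. The toy tokenizer is a hypothetical stand-in for the real
# SentencePiece tokenizer: it adds no BOS/EOS itself and prepends a prefix
# space, as the pseudo-code requires.
BOS_ID, EOS_ID = 1, 2

VOCAB = {}

def tokenize(text):
    # No special tokens added automatically; ids 0-2 are reserved.
    ids = []
    for word in (" " + text).split():
        ids.append(VOCAB.setdefault(word, len(VOCAB) + 3))
    return ids

def build_training_ids(turns):
    """turns: list of (user_message, bot_message) pairs."""
    ids = [BOS_ID]
    for user_msg, bot_msg in turns:
        ids += tokenize("[INST]") + tokenize(user_msg) + tokenize("[/INST]")
        ids += tokenize(bot_msg) + [EOS_ID]
    return ids

print(build_training_ids([("Hello", "Hi there")]))  # [1, 3, 4, 5, 6, 7, 2]
```

Note how EOS closes each assistant turn while `[INST]`/`[/INST]` are tokenized as ordinary text, exactly as the pseudo-code prescribes.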
config.json
CHANGED

@@ -20,7 +20,7 @@
 "rms_norm_eps": 1e-05,
 "rope_theta": 1000000.0,
 "router_aux_loss_coef": 0.02,
-"sliding_window":
+"sliding_window": null,
 "tie_word_embeddings": false,
 "torch_dtype": "bfloat16",
 "transformers_version": "4.36.0.dev0",
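A quick standalone check (no model download) that the edited fragment is valid JSON and that `null` surfaces in Python as `None`, which transformers reads as "no sliding-window limit" on attention:

```python
import json

# The config.json fragment as changed in this commit ("sliding_window" is
# now explicitly null).
fragment = """
{
  "rms_norm_eps": 1e-05,
  "rope_theta": 1000000.0,
  "router_aux_loss_coef": 0.02,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16"
}
"""

config = json.loads(fragment)
print(config["sliding_window"])  # None
```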
generation_config(1).json
ADDED

@@ -0,0 +1,6 @@
+{
+  "_from_model_config": true,
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "transformers_version": "4.36.0.dev0"
+}
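The `eos_token_id` added here is what tells decoding when to stop. A toy sketch of that stopping rule, with a hypothetical `step_fn` standing in for a real model's next-token call:

```python
# Illustration of what eos_token_id in generation_config.json controls:
# generation halts once the model emits token id 2 (the </s> token).
EOS_TOKEN_ID = 2

def generate(step_fn, max_new_tokens=10):
    """step_fn() returns the next token id; stop at EOS or the length cap."""
    out = []
    for _ in range(max_new_tokens):
        tok = step_fn()
        if tok == EOS_TOKEN_ID:
            break
        out.append(tok)
    return out

# A fake "model" that emits 5, 7, then EOS; the trailing 9 is never reached.
stream = iter([5, 7, 2, 9])
print(generate(lambda: next(stream)))  # [5, 7]
```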
output-00001-of-00002.safetensors
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:2f68f7563643d2b28fd9aae0fb347987cc9854cedaa2dd0b180842e50e6ae089
+size 8580184024
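Both `.safetensors` entries in this commit are Git LFS pointer files (three `key value` lines: `version`, `oid`, `size`), not the raw weights. A minimal parser for that pointer format:

```python
# Parse a Git LFS pointer file into its key/value fields. A pointer holds
# only metadata; the actual blob is fetched from LFS storage by its oid.
def parse_lfs_pointer(text):
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The first shard's pointer, as uploaded in this commit.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:2f68f7563643d2b28fd9aae0fb347987cc9854cedaa2dd0b180842e50e6ae089
size 8580184024
"""

fields = parse_lfs_pointer(pointer)
print(fields["size"])  # 8580184024
```

The `size` field is in bytes, so the two shards together weigh roughly 14.3 GB.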
output-00002-of-00002.safetensors
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:79bc6287bc244ff0ba40ad2075c672bccd48c9a2c7c5e96bc0a07e28ae19db80
+size 5745889064
tokenizer_config.json
CHANGED

@@ -38,6 +38,5 @@
 "spaces_between_special_tokens": false,
 "tokenizer_class": "LlamaTokenizer",
 "unk_token": "<unk>",
-"use_default_system_prompt": false,
-"chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}"
+"use_default_system_prompt": false
 }
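The `chat_template` removed above encodes the `[INST] … [/INST]` conversation convention as a Jinja template. A pure-Python sketch of what that template renders, for illustration only (in practice you would call `tokenizer.apply_chat_template`, which evaluates the Jinja string itself):

```python
# Pure-Python rendering of the removed chat_template's logic: BOS first,
# strictly alternating user/assistant turns, user turns wrapped in
# [INST] ... [/INST], assistant turns closed with EOS.
BOS, EOS = "<s>", "</s>"

def render_chat(messages):
    out = BOS
    for i, message in enumerate(messages):
        if (message["role"] == "user") != (i % 2 == 0):
            raise ValueError("Conversation roles must alternate user/assistant/...")
        if message["role"] == "user":
            out += "[INST] " + message["content"] + " [/INST]"
        elif message["role"] == "assistant":
            out += message["content"] + EOS
        else:
            raise ValueError("Only user and assistant roles are supported!")
    return out

prompt = render_chat([
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
])
print(prompt)  # <s>[INST] Hi [/INST]Hello!</s>
```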