kaiiddo committed
Commit 0e97338 · verified · 1 parent: 53c183b

Update README.md

Files changed (1): README.md (+6 −3)
README.md CHANGED
````diff
@@ -35,7 +35,7 @@ Welcome to **A3ON-1B**, the enhanced version of the A3ON AI assistant! With **1.
 | **Architecture** | Transformer-based neural network |
 | **Model Type** | Causal language model |
 | **Parameters** | 1.1 Billion (1,137,207,296) |
-| **Vocabulary Size** | `{len(tokenizer):,}` tokens |
+| **Vocabulary Size** | 49,152 tokens |
 | **Context Length** | Up to 32,768 tokens |
 | **Precision** | FP32/FP16 support |
 
@@ -52,13 +52,16 @@ Welcome to **A3ON-1B**, the enhanced version of the A3ON AI assistant! With **1.
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-tokenizer = AutoTokenizer.from_pretrained("{repo_name}")
-model = AutoModelForCausalLM.from_pretrained("{repo_name}")
+# Load the tokenizer and model
+tokenizer = AutoTokenizer.from_pretrained("kaiiddo/A3ON-1B")
+model = AutoModelForCausalLM.from_pretrained("kaiiddo/A3ON-1B")
 
 # Generate text
 inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt")
 outputs = model.generate(**inputs, max_length=500)
 response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+
+# Print the response
 print(response)
 ```
````
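As a rough guide to the FP32/FP16 entry in the table above, the weight-memory footprint can be estimated directly from the parameter count (1,137,207,296). A minimal sketch; the helper name is illustrative, not part of the repo, and it ignores activation and KV-cache memory:

```python
PARAMS = 1_137_207_296  # parameter count from the model table


def checkpoint_size_gib(n_params: int, bytes_per_param: int) -> float:
    """Approximate in-memory weight size in GiB."""
    return n_params * bytes_per_param / (1024 ** 3)


# FP32 uses 4 bytes per parameter, FP16 uses 2
fp32 = checkpoint_size_gib(PARAMS, 4)
fp16 = checkpoint_size_gib(PARAMS, 2)
print(f"FP32 ≈ {fp32:.2f} GiB, FP16 ≈ {fp16:.2f} GiB")
```

Loading in FP16 therefore roughly halves the weight memory (about 2.1 GiB instead of 4.2 GiB), which is why the model card advertises both precisions.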