---
pipeline_tag: text-generation
library_name: transformers
language:
- en
tags:
- transformers
- safetensors
- gguf
- gemma3
- image-text-to-text
- research
- conversational-ai
- conversational
- cognitive-architectures
- reasoning
- alignment
- gemma
- vanta-research
- chat-ai
- LLM
- fine-tune
- cognitive
- cognitive-fit
- ai-research
- ai-alignment-research
- ai-alignment
- ai-behavior-research
- human-ai-collaboration
- text-generation-inference
---
# Atom v1 Preview

A collaborative AI research assistant developed by VANTA Research.

## Quick Start

```bash
pip install transformers torch accelerate
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "vanta-research/atom-v1-preview"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Build a chat-formatted prompt with the model's chat template
messages = [{"role": "user", "content": "Explain quantum computing"}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sample a response and decode only the newly generated tokens
outputs = model.generate(input_ids, max_new_tokens=512, temperature=0.8, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```
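GGUF builds are also listed among the repository tags. If you prefer a quantized local runtime, a sketch along the lines of the following llama-cpp-python snippet should work; the quantization filename pattern is an assumption, so check the repository's file list for the actual GGUF names.

```python
# Minimal sketch for the GGUF build via llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="vanta-research/atom-v1-preview",
    filename="*q4_k_m.gguf",  # hypothetical quantization; adjust to a file that actually exists in the repo
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    temperature=0.8,
    top_p=0.9,
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```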
## Key Features

- Collaborative exploration through clarifying questions (a multi-turn sketch follows this list)
- Analogical reasoning and metaphor-based explanations
- Enthusiastic, pedagogical interaction style
- Detailed responses for complex topics
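Because the model is tuned for back-and-forth collaboration, the clarifying-question behavior is easiest to see over multiple turns. The sketch below simply continues the Quick Start example by appending the assistant's reply and a follow-up user message before re-applying the chat template; the `model`, `tokenizer`, `messages`, `input_ids`, and `outputs` objects are carried over from above, and the follow-up question is just an illustration.

```python
# Continue the conversation from the Quick Start example by feeding the full
# message history back through the chat template on each turn.
reply = tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True)
messages += [
    {"role": "assistant", "content": reply},
    {"role": "user", "content": "Can you walk through a concrete example of superposition?"},
]

input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512, temperature=0.8, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```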
## License

CC BY-NC 4.0: non-commercial use only.

## Documentation

See the full model card on Hugging Face for technical details, limitations, and evaluation results.