# Uploaded model
- Developed by: uisikdag
- License: apache-2.0
- Finetuned from model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with Unsloth.
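
The snippet below loads the 4-bit checkpoint with Unsloth, applies the Gemma 3 chat template, and asks a Turkish multiple-choice science question ("Which gas do plants produce during photosynthesis?"):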
```python
from unsloth import FastModel
from unsloth.chat_templates import get_chat_template

# Load the 4-bit finetuned checkpoint
model, tokenizer = FastModel.from_pretrained(
    "uisikdag/gemma3-1b-arc-tr",
    max_seq_length=2048,
    load_in_4bit=True,
)
tokenizer = get_chat_template(tokenizer, chat_template="gemma-3")
FastModel.for_inference(model)  # enable Unsloth's faster inference mode

# Build a multiple-choice prompt:
# "Which gas do plants produce during photosynthesis?" with options A-D
question = "Fotosentez sırasında bitkiler hangi gazı üretir?"
options = ["A Karbondioksit", "B Oksijen", "C Azot", "D Hidrojen"]
prompt = f"Soru: {question}\n\nSeçenekler:\n" + "\n".join(options) + "\n\nDoğru cevap hangisi?"

messages = [{"role": "user", "content": [{"type": "text", "text": prompt}]}]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=64, temperature=1.0, top_p=0.95)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
).strip()
print(f"Cevap: {response}")  # "Cevap" = "Answer"
```
## Model tree for uisikdag/gemma3-1b-arc-tr

- Base model: google/gemma-3-1b-pt
- Finetuned: google/gemma-3-1b-it
- Quantized: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
