---
license: gemma
base_model: google/gemma-3-270m
tags:
- text-generation
- prompt-generation
- role-playing
- creative-writing
- fine-tuned
- gemma
- stacks
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
pipeline_tag: text-generation
library_name: transformers
model_type: gemma
---

# STACKS

**STACKS** (Specialized Text AI for Creative Knowledge Synthesis) is a fine-tuned version of **google/gemma-3-270m** specifically trained to generate creative and detailed prompts for role-playing scenarios. Instead of responding to prompts, STACKS **creates** prompts when given a role or character type.

## 🎯 Model Description

STACKS transforms any role or profession into detailed, engaging prompts perfect for:
- **Role-playing games** and creative writing
- **Educational scenarios** and training simulations
- **Content creation** and storytelling
- **Character development** exercises

## 🚀 Quick Start

The snippet below loads the model and defines a small helper that turns a role name into a generated prompt. The repository id shown is a placeholder; replace it with the actual model path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gouthamsai78/STACKS"  # placeholder; substitute the actual repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, attn_implementation="eager")

def generate_prompt(input_text: str) -> str:
    inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=200,
            temperature=0.8,
            do_sample=True,
            top_p=0.9,
            repetition_penalty=1.1,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Decode the first (and only) sequence, then strip the echoed input.
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response[len(input_text):].strip()
```



## 📋 Example Outputs

**Input:** `"chef"`
**STACKS Output:** *"Act as a world-renowned chef who specializes in fusion cuisine. You've just opened a new restaurant that combines traditional techniques with molecular gastronomy. Explain your signature dish and the story behind its creation."*

**Input:** `"time traveler"`
**STACKS Output:** *"You are a time traveler who has just arrived in the year 2024 from 1850. Describe your amazement at modern technology, but also share wisdom from the past that people today have forgotten."*

**Input:** `"detective"`
**STACKS Output:** *"Act as a private detective in 1940s noir-style setting. You've been hired to investigate a mysterious disappearance at a high-society party. Describe the case and your first impressions of the suspects."*

## 🎭 Supported Role Categories

STACKS excels at generating prompts for:

- **Professional Roles**: doctors, lawyers, teachers, engineers, scientists
- **Creative Roles**: artists, writers, musicians, designers, filmmakers
- **Historical Figures**: philosophers, explorers, inventors, rulers
- **Fictional Characters**: superheroes, fantasy characters, sci-fi roles
- **Specialized Experts**: consultants, coaches, advisors, mentors
- **Adventure Roles**: explorers, adventurers, survivalists, travelers

## 🔧 Technical Details

### Training Configuration
- **Base Model**: google/gemma-3-270m (268M parameters)
- **Training Type**: Complete fine-tuning (all parameters trainable)
- **Dataset**: fka/awesome-chatgpt-prompts
- **Format**: Role → Prompt generation patterns (see the sketch after this list)
- **Precision**: BF16 optimized
- **Context Length**: 768 tokens
- **Training Date**: 2025-08-20
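
As a rough illustration of the Role → Prompt pattern above, the sketch below shows one way the dataset rows could be flattened into training text. It assumes the dataset's `act` and `prompt` columns and a simple newline-joined template; the exact template used during fine-tuning is not documented here.

```python
from datasets import load_dataset

# Illustrative sketch only: role name ("act") as input, full prompt as target.
dataset = load_dataset("fka/awesome-chatgpt-prompts", split="train")

def to_training_text(example):
    return {"text": f"{example['act']}\n{example['prompt']}"}

train_data = dataset.map(to_training_text)
print(train_data[0]["text"][:200])
```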

### Model Specifications
- **Architecture**: Gemma-3
- **Parameters**: 268,098,176
- **Format**: Safetensors
- **Size**: ~536MB
- **Hardware**: Optimized for GPU inference
- **Attention**: Eager implementation (required for Gemma-3)

## 📊 Performance & Quality

STACKS generates:
- **Coherent prompts** that match the requested role
- **Creative scenarios** with engaging storylines
- **Detailed instructions** for effective role-playing
- **Varied outputs** avoiding repetitive patterns
- **Contextually appropriate** content for each role

## 🎯 Usage Patterns

### Basic Generation

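A minimal example using the `generate_prompt` helper defined in the Quick Start section above, with role names taken from the examples shown earlier:

```python
# Turn short role names into full role-playing prompts.
for role in ["chef", "time traveler", "detective"]:
    print(f"{role}:\n{generate_prompt(role)}\n")
```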

## 📋 License

This model is released under the **Gemma License**. Please see the [Gemma License](https://ai.google.dev/gemma/terms) for complete terms and conditions.

---

**Built with ❤️ by gouthamsai78**
*Transforming roles into creative prompts, one generation at a time.*