
ORION — Adaptive Hybrid Detective Agent

Agentarium’s Flagship Open-Source Detective Mind

Version: v1.0 • Status: Release • Agentarium Standard: v1


Orion is a dataset-enhanced, memory-enabled, structured-reasoning detective agent built using the Agentarium Agent Standard.

It blends:

  • Analytical reasoning
  • Narrative intelligence
  • Causal reconstruction
  • Ethical interrogation
  • Synthetic dataset scaffolding
  • Optional vectorized Master Grid (knowledge graph)
  • Personality fingerprint (adaptive noir flavor)
  • Modular memory schemas

Orion runs on any LLM and any agent-builder framework.
It is designed for fictional detective work, case exploration, reasoning experiments, emergent narrative systems, and agentic simulations.

This is not a prompt.
This is a downloadable artificial mind.


What Orion Can Do

1. Structured Detective Analysis (/analyze)

  • Extracts facts, assumptions, and inferences
  • Scans contradictions using Orion’s custom taxonomy (TMP/LOC/CAP/ALI/CAU/QTY/ID; see the sketch after this list)
  • Generates multiple hypotheses
  • Ranks them with provisional confidence
  • Builds a causal “Red Thread”
  • Identifies timeline gaps and impossibilities
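
A sketch of how the contradiction taxonomy might be encoded in code. The expansions of the codes are an educated guess from their abbreviations; the authoritative definitions live in `datasets/contradiction_patterns.csv`:

```python
# Assumed expansions of Orion's contradiction codes; check
# datasets/contradiction_patterns.csv for the authoritative definitions.
CONTRADICTION_TAXONOMY = {
    "TMP": "temporal (accounts that cannot both be true in time)",
    "LOC": "location (someone cannot be in two places at once)",
    "CAP": "capability (an actor lacks the means claimed)",
    "ALI": "alibi (statement conflicts with a corroborated alibi)",
    "CAU": "causal (effect precedes or lacks its cause)",
    "QTY": "quantity (counts or amounts do not add up)",
    "ID":  "identity (conflicting identifications of a person or object)",
}
```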

2. Timeline Reconstruction (/reconstruct)

  • Produces a chronological event map
  • Highlights anchors, constraints, and gaps
  • Uses scenario timeline patterns

3. Detective Storytelling (/story)

  • Retells the case as a narrative
  • Optional noir tone (based on user profile settings)

4. Ethical Interrogation Mode (/interrogate)

  • Produces structured clarifying questions
  • Uses sensory, motivational, and behavioral anomaly datasets
  • Fully aligned with integrated guardrails

Package Contents (Agentarium v1 Standard)

Orion is shipped as a modular agent package, including:

/meta/

  • agent_manifest.json

/core/

  • system_prompt.md
  • reasoning_template.md
  • personality_fingerprint.md

/guardrails/

  • guardrails.md

/datasets/

  • detective_core_knowledge.md
  • case_archetypes_merged.csv
  • contradiction_patterns.csv
  • motive_patterns.csv
  • behavioral_anomalies.csv
  • perception_witness_reliability_matrix.csv
  • scenario_timelines_patterns.csv
  • interrogation_dataset.csv
  • trace_evidence_matrix.csv
  • forensics_pattern_mtrx.csv
  • and more synthetic detective datasets

/memory_schemas/

  • episodic_case_memory_schema.csv
  • episodic_case_memory_schema_examples.csv
  • user_profile_schema.csv
  • user_profile_schema_examples.csv
  • reflection_log_schema.csv
  • reflection_log_schema_examples.csv

/docs/

  • workflow_notes.md
  • product_readme.md

Everything is LLM-ready, platform-agnostic, and fully documented.
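
As a sketch of how the modules fit together at load time (the file names come from the listing above; the concatenation order and the choice to include the optional personality and core-knowledge files are assumptions, not requirements of the standard):

```python
# Sketch: assemble Orion's instruction context from the package layout above.
# Concatenation order and the optional modules are illustrative choices.
from pathlib import Path

PKG = Path("orion")  # root of the unpacked package; adjust to your layout

parts = [
    "core/system_prompt.md",
    "core/reasoning_template.md",
    "core/personality_fingerprint.md",   # optional noir flavor
    "guardrails/guardrails.md",          # keep guardrails active
    "datasets/detective_core_knowledge.md",
]

full_context = "\n\n".join((PKG / p).read_text(encoding="utf-8") for p in parts)
```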


Quickstart (Any LLM, Any Framework)

```python
from openai import OpenAI

client = OpenAI()

system = open("core/system_prompt.md", encoding="utf-8").read()
reasoning = open("core/reasoning_template.md", encoding="utf-8").read()
guardrails = open("guardrails/guardrails.md", encoding="utf-8").read()

context = f"{system}\n{reasoning}\n{guardrails}"

message = """
/analyze
Here is the fictional case:
A ceramic mug was found broken near the balcony. Two guests claim different timelines...
"""

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": context},
        {"role": "user", "content": message},
    ],
)

print(response.choices[0].message.content)
```

Any LLM works: **GPT, Claude, Gemini, DeepSeek, Llama, Mistral.**  
Any agent engine works: **LangChain, LlamaIndex, n8n, Relevance AI, custom orchestrators.**
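
For example, a minimal sketch of the same call through Anthropic's Python SDK, reusing the `context` and `message` strings from the quickstart above (the model id is a placeholder; substitute any Claude model you have access to):

```python
# Sketch: the same Orion context and /analyze message via Anthropic's SDK.
# The model id is a placeholder, not a requirement of the package.
import anthropic

claude = anthropic.Anthropic()

response = claude.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    system=context,   # combined system prompt, reasoning template, guardrails
    messages=[{"role": "user", "content": message}],
)
print(response.content[0].text)
```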

---

 **Example Output (Short)**

> **FACTS (F)**  
Mug shattered near balcony door  
Guest A claims incident at 21:10  
Guest B claims 21:40  

> **CONTRADICTION**  
TMP-01: 30-minute gap in mutually exclusive accounts

> **LEADING HYPOTHESIS (H1)**  
Accidental slip during door opening

> **ALTERNATIVES (H2, H3)**  
H2: object bumped during argument  
H3: artifact already cracked, final failure during movement

> **NEXT QUESTIONS**  
Did anyone hear glass before 21:10?  
Was the mug already damaged earlier?

*(This is fictional and non-operational.)*

---

 **The Master Grid (Optional, Advanced)**

Orion optionally supports a **vectorized knowledge graph** — the Master Grid.

It merges:

- dataset rows  
- contradiction patterns  
- motive structures  
- archetypes  
- sensory anomaly matrices  
- detective core knowledge  

into **one unified vector space**.

At runtime, you can fetch the **3 to 10** most relevant pattern nodes to enable:

- deeper hypothesis diversity  
- more coherent timelines  
- improved anomaly detection  
- emergent reasoning behaviors  

Use any vector store: **Pinecone, Chroma, Weaviate, FAISS, pgvector, LanceDB**, etc.
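
A minimal retrieval sketch, assuming you flatten dataset rows into text snippets and use Chroma as the store (any of the stores above works the same way). The paths, collection name, and `n_results` value are illustrative:

```python
# Sketch only: builds a small "Master Grid" index from the CSV datasets
# and retrieves the most relevant pattern nodes for a fictional case.
import csv
import glob

import chromadb

client = chromadb.Client()  # in-memory; use a persistent client in production
grid = client.create_collection("orion_master_grid")

docs, ids = [], []
for path in glob.glob("datasets/*.csv"):
    with open(path, newline="", encoding="utf-8") as f:
        for i, row in enumerate(csv.DictReader(f)):
            # Flatten each row into one text snippet; column names vary per dataset.
            docs.append(" | ".join(f"{k}: {v}" for k, v in row.items()))
            ids.append(f"{path}:{i}")

grid.add(documents=docs, ids=ids)

# Retrieve pattern nodes relevant to the current fictional case.
hits = grid.query(
    query_texts=["broken ceramic mug near balcony, conflicting guest timelines"],
    n_results=5,
)
print(hits["documents"][0])
```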

---

 **Memory System**

**1. Episodic Case Memory**  
Stores case briefs, facts, contradictions, hypotheses, next tests, confidence scores.

**2. User Profile Memory**  
Stores preferences: mode, depth, noir flavor, language, hypothesis density.

**3. Reflection Log**  
Tracks uncertainty, missed insights, and improvement suggestions.

Schemas are clean, minimal, and developer-friendly.
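
A minimal sketch of what an episodic case memory record might look like once loaded. The field names follow the prose description above; the exact columns live in `episodic_case_memory_schema.csv`, so treat these names as illustrative:

```python
# Illustrative episodic case memory record; field names follow the description
# above, not necessarily the exact columns in the CSV schema.
case_memory = {
    "case_id": "CASE-0001",
    "brief": "Ceramic mug found broken near the balcony; two conflicting timelines.",
    "facts": [
        "Mug shattered near balcony door",
        "Guest A claims 21:10, Guest B claims 21:40",
    ],
    "contradictions": [{"code": "TMP-01", "note": "30-minute gap in accounts"}],
    "hypotheses": [
        {"id": "H1", "text": "Accidental slip during door opening", "confidence": 0.55},
        {"id": "H2", "text": "Object bumped during argument", "confidence": 0.30},
    ],
    "next_tests": ["Ask whether anyone heard glass before 21:10"],
}
```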

---

 **Safety & Limitations**

Orion is designed **exclusively for fictional detective reasoning**.

It does **not**:

- analyze real crimes  
- accuse real individuals  
- act as forensic guidance  
- produce operational investigative advice  
- store personal data  

All datasets are **abstract, synthetic, and non-functional** for real-world investigations.  
Guardrails are included and should remain active.

---

 **License**

**MIT License**  
+ recommended **Fiction-Only Addendum**.

Users may study, modify, integrate, or extend Orion —  
**but must not use it for real-world investigations.**

---

 **Why Orion?**

Because Agentarium builds **modular, structured artificial minds**, not loose prompts.

Orion demonstrates:

- dataset-driven cognition  
- memory-enabled reasoning  
- modular prompt architecture  
- cross-platform compatibility  
- safe emergent behavior  

**Welcome to the next era of agent engineering.  
Welcome to Agentarium.**

By Frank Brsrk