⚠️ Warning: This model can produce narratives and RP that contain violent and graphic erotic content. Adjust your system prompt accordingly, and use the Mistral Tekken chat template.

🌌 Magistaroth 24B v1.1


Merge Method

This model uses a custom merge method called pdq. Instead of having its own YAML config, pdq acts as a post-merge processor applied directly to the merged model, reusing the original merge's YAML. It aims to enhance creativity by re-scanning the original donor models and drawing on the 'dark matter' regions of their weight vectors, synergistically augmenting the merged base with more novelty. For Magistaroth v1.1, I tested both orderings: the v1 Della → PDQ → MPOA and Della → MPOA → PDQ.

Both turn out to be very creative. MPOA → PDQ is interesting because it doesn't re-introduce any refusals; however, PDQ → MPOA is noticeably smarter, and the difference in the Q0 bench scores reflects this (9451 vs 12648). A scale of 1.2 was the ablation threshold required to disable refusals. The result is the most creative, detailed, and uncensored variant of the configurations tested.
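The actual pdq implementation is not published, but the idea described above can be sketched roughly: take a donor's task vector relative to the base, isolate its low-magnitude ("dark matter") components, and fold them back into the merged weights with a scaling factor. Everything here (function name, the quantile cutoff, the use of NumPy) is an illustrative assumption, not the real method; only the scale of 1.2 comes from the card.

```python
# Hedged sketch of a "dark matter" post-merge pass. NOT the actual pdq
# implementation; names, the quantile cutoff, and the selection rule are
# illustrative assumptions. Only scale=1.2 is taken from the model card.
import numpy as np

def dark_matter_pass(merged: np.ndarray, donor: np.ndarray,
                     base: np.ndarray, scale: float = 1.2,
                     quantile: float = 0.5) -> np.ndarray:
    delta = donor - base                                  # donor task vector
    cutoff = np.quantile(np.abs(delta), quantile)         # magnitude threshold
    dark = np.where(np.abs(delta) < cutoff, delta, 0.0)   # keep only small components
    return merged + scale * dark                          # fold scaled components back in

# Toy example: small donor components survive (scaled), large ones are dropped.
merged = np.zeros(4)
donor = np.array([0.01, -0.02, 1.0, -1.0])
base = np.zeros(4)
out = dark_matter_pass(merged, donor, base)
```

A real pass would iterate over every tensor in the safetensors checkpoints and respect per-layer structure; the point here is only the shape of the idea.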

Bugs

There is a small risk of increased artifacts (missing spaces, misspelled or repeated words) because pdq pushes the limits of what's possible with transformers. These are rare and can be edited out if needed.

Fully Uncensored

An unablated PDQ version was also tested (it retains refusals), but the ablated versions seem to be more popular, so I'm only releasing this one for now.

Settings

  • Recommended temperature 1.0 and top-nsigma 1.25
  • Mistral Tekken chat template
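The settings above can be passed to any backend that exposes a top-n-sigma sampler. A minimal sketch of a chat-completion request payload follows; the parameter name `top_n_sigma` and the endpoint shape are assumptions about your backend (llama.cpp-style servers use this name, but check your backend's docs), and the model name is illustrative.

```python
# Sketch of an OpenAI-compatible chat request using the recommended
# sampler settings. The "top_n_sigma" key is an assumption about the
# backend; verify the exact parameter name for your server.
import json

payload = {
    "model": "Magistaroth-24B-v1.1",        # illustrative model name
    "messages": [
        {"role": "user", "content": "Write a short scene."},
    ],
    "temperature": 1.0,                      # recommended by this card
    "top_n_sigma": 1.25,                     # recommended by this card
}
print(json.dumps(payload, indent=2))
```

Make sure the server applies the Mistral Tekken chat template when formatting the messages.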