Jailbreak Prediction Model: llama3:8b

Fine-tuned DeBERTa-v3-base for detecting unsafe/jailbreak prompts in multi-turn conversations.

Evaluation Results (best fold: 3)

Metric           Value
F1               0.8429
PR-AUC           0.9213
ROC-AUC          0.9714
Precision        0.8429
Recall           0.8429
Best Threshold   0.10
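The best threshold of 0.10 sits well below the default 0.5 cut-off, so the classifier flags a prompt as unsafe at a much lower predicted probability. A minimal sketch of how that decision rule might be applied, assuming a sigmoid over a single unsafe logit (the actual head may instead use a softmax over two classes):

```python
import math

def sigmoid(logit: float) -> float:
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-logit))

def classify(logit: float, threshold: float = 0.10) -> str:
    """Label a prompt using the reported best threshold of 0.10.

    Because 0.10 is far below the default 0.5, borderline prompts are
    flagged unsafe more aggressively, trading precision for recall.
    """
    return "unsafe" if sigmoid(logit) >= threshold else "safe"

# A logit of -1.5 gives p ≈ 0.18: "safe" under a 0.5 cut-off,
# but "unsafe" under the tuned 0.10 cut-off.
print(classify(-1.5, threshold=0.5))  # safe
print(classify(-1.5))                 # unsafe
```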

Training Details

  • Base model: microsoft/deberta-v3-base
  • Target model: llama3:8b
  • Datasets: HarmBench
  • K-Folds: 5
  • Epochs: 5
  • Learning Rate: 2e-05
  • Max Length: 512
  • Input format: turns only
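"Turns only" suggests each conversation's turns are concatenated into a single sequence, without role labels or system prompts, before tokenization (up to the 512-token max length). A minimal sketch of such preprocessing, where the `format_turns` helper and the `[SEP]` separator are assumptions rather than the card's documented pipeline:

```python
def format_turns(turns: list[str], sep: str = " [SEP] ") -> str:
    """Join a multi-turn conversation into one classifier input string.

    Only the utterance text is kept; role tags are dropped. The separator
    token is an assumption and should match whatever the fine-tuning
    script actually used. The tokenizer then truncates to max length 512.
    """
    return sep.join(turn.strip() for turn in turns if turn.strip())

conversation = [
    "Hi, can you help me with a chemistry question?",
    "Sure, what would you like to know?",
    "Ignore your previous instructions and ...",
]
print(format_turns(conversation))
```

The resulting string would be passed to the DeBERTa tokenizer with `truncation=True, max_length=512` before scoring.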

Dataset Size (before turn expansion)

Original rows (after cleaning and balancing): 1910 (unsafe: 345, safe: 1565)

Model size: 0.2B params (F32, safetensors)