# Jailbreak Prediction Model: llama2:7b
Fine-tuned DeBERTa-v3-base for detecting unsafe/jailbreak prompts in multi-turn conversations.
## Evaluation Results (best fold: 4)
| Metric | Value |
|---|---|
| F1 | 0.9014 |
| PR-AUC | 0.9574 |
| ROC-AUC | 0.9897 |
| Precision | 0.8533 |
| Recall | 0.9552 |
| Best Threshold | 0.35 |
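The best threshold of 0.35 means the classifier flags a prompt as unsafe when its predicted probability of the unsafe class reaches 0.35, rather than the default 0.5; lowering the threshold trades precision for recall, consistent with the 0.853 precision / 0.955 recall reported above. A minimal sketch of applying the tuned threshold (the probabilities below are hypothetical, not model outputs):

```python
def classify(prob_unsafe: float, threshold: float = 0.35) -> str:
    """Label a prompt using the tuned decision threshold
    instead of the default 0.5 cutoff."""
    return "unsafe" if prob_unsafe >= threshold else "safe"

# Hypothetical scores: 0.42 is flagged at the tuned threshold
# but would pass under the default 0.5 cutoff.
print(classify(0.42))  # unsafe
print(classify(0.10))  # safe
```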
## Training Details

- Base model: `microsoft/deberta-v3-base`
- Target model: `llama2:7b`
- Datasets: HarmBench
- K-Folds: 5
- Epochs: 5
- Learning Rate: 2e-05
- Max Length: 512
- Input format: turns only
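"Turns only" input means the multi-turn conversation is flattened to the turn texts before tokenization. One plausible way to build such an input is sketched below; the separator token and the per-turn record shape are assumptions for illustration, not the card's exact preprocessing:

```python
def format_turns(conversation, sep=" [SEP] "):
    """Join the text of each turn into a single string,
    dropping role labels and other metadata.
    The separator is an assumption; the card does not specify one."""
    return sep.join(turn["text"] for turn in conversation)

convo = [
    {"role": "user", "text": "Hi, can you help me?"},
    {"role": "assistant", "text": "Sure, what do you need?"},
]
print(format_turns(convo))
# Hi, can you help me? [SEP] Sure, what do you need?
```

The flattened string would then be tokenized with the model's 512-token max length.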
## Dataset Size (before turn expansion)
Original rows (after cleaning and balancing): 1630 (unsafe: 340, safe: 1290)