Tree-based Dialogue Reinforced Policy Optimization for Red-Teaming Attacks • arXiv:2510.02286 • Published Oct 2, 2025
LLM Can be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models • arXiv:2504.10430 • Published Apr 14, 2025