Abstract
Sequence-Level PPO addresses instability in long-chain-of-thought reasoning by reformulating the process as a contextual bandit problem with decoupled value functions for improved efficiency.
Proximal Policy Optimization (PPO) is central to aligning Large Language Models (LLMs) in reasoning tasks with verifiable rewards. However, standard token-level PPO struggles in this setting due to the instability of temporal credit assignment over long Chain-of-Thought (CoT) horizons and the prohibitive memory cost of the value model. While critic-free alternatives like GRPO mitigate these issues, they incur significant computational overhead by requiring multiple samples for baseline estimation, severely limiting training throughput. In this paper, we introduce Sequence-Level PPO (SPPO), a scalable algorithm that harmonizes the sample efficiency of PPO with the stability of outcome-based updates. SPPO reformulates the reasoning process as a Sequence-Level Contextual Bandit problem, employing a decoupled scalar value function to derive low-variance advantage signals without multi-sampling. Extensive experiments on mathematical benchmarks demonstrate that SPPO significantly surpasses standard PPO and matches the performance of computation-heavy group-based methods, offering a resource-efficient framework for aligning reasoning LLMs.
Community
We introduce SPPO (Sequence-Level PPO), a scalable RL algorithm for aligning reasoning LLMs that resolves the fundamental tension between PPO's unstable credit assignment and GRPO's costly multi-sampling.
Standard token-level PPO struggles in long Chain-of-Thought (CoT) reasoning due to the "Tail Effect": the critic overfits positional cues and fails to propagate sparse rewards across thousands of tokens. While GRPO sidesteps this with group-based baselines, it demands N>1 samples per prompt, severely bottlenecking training throughput.
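To make the cost concrete, here is a minimal sketch of the group-relative baseline that GRPO-style methods use; the function name is ours for illustration, but the computation (normalizing each of the N sampled rewards against the group mean and standard deviation) is why N>1 rollouts per prompt are unavoidable there:

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages, GRPO-style: each of the N sampled
    completions for one prompt is scored against the group mean and
    std, so no critic is needed, but N > 1 rollouts are required."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + 1e-8)

# N = 8 rollouts for one prompt under a binary verifiable reward:
# correct completions get positive advantage, incorrect get negative.
adv = grpo_advantages([1, 0, 0, 1, 0, 0, 0, 0])
```

With N=1 the group std collapses to zero and the baseline is degenerate, which is exactly the regime SPPO targets.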
Our key insight: GRPO's success stems from implicitly treating reasoning as a Sequence-Level Contextual Bandit. SPPO makes this explicit, collapsing the entire reasoning chain into a single atomic action and employing a learned scalar value function V(s_p) to estimate prompt solvability, enabling stable single-sample (N=1) updates.
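The bandit view above can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: the advantage is the outcome reward minus the learned prompt value V(s_p), and the PPO clipped surrogate is applied once per full sequence rather than per token (the function names are ours):

```python
import numpy as np

def sppo_advantage(reward, v_prompt):
    # Whole CoT treated as one atomic action: the advantage is the
    # verifiable outcome reward minus the scalar critic estimate
    # V(s_p) of prompt solvability, so a single rollout suffices.
    return reward - v_prompt

def clipped_surrogate(ratio, adv, eps=0.2):
    # Standard PPO clipping, but at sequence level: ratio is
    # pi_new(y|x) / pi_old(y|x) over the entire completion y.
    return min(ratio * adv, float(np.clip(ratio, 1 - eps, 1 + eps)) * adv)

# A solved prompt (reward 1) whose critic predicted 0.3 solvability
# yields advantage 0.7; a ratio of 1.5 gets clipped to 1.2.
a = sppo_advantage(1.0, 0.3)       # 0.7
obj = clipped_surrogate(1.5, a)    # 0.84
```

Because the baseline comes from a decoupled critic rather than sibling samples, the variance reduction of GRPO is retained at N=1.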
Highlights:
- Outperforms standard PPO and matches GRPO (N=8) on AIME24/25, AMC23, MATH500, and Minerva Math at both 1.5B and 7B scales
- 5.9× training speedup over GRPO with single-sample efficiency
- Decoupled Critic: a lightweight 1.5B critic successfully aligns a 7B policy, reducing VRAM by 12.8% while achieving the highest average score (58.56)
- Validated beyond LLMs on classic control tasks (CartPole, Hopper, MountainCar, LunarLander, Pendulum) under the RLVR framework
Paper (ACL 2026 Main): https://arxiv.org/abs/2604.08865
Code: https://github.com/sustech-nlp/SPPO