Consolidating Reinforcement Learning for Multimodal Discrete Diffusion Models
Abstract
MaskGRPO addresses challenges in optimizing discrete diffusion models with rewards through effective importance sampling and modality-specific adaptations, improving reasoning and generation quality.
Optimizing discrete diffusion models (DDMs) with rewards remains a challenge: the non-autoregressive paradigm makes importance sampling intractable and rollouts complex, confounding reinforcement learning methods such as Group Relative Policy Optimization (GRPO). In this study, we introduce MaskGRPO, the first viable approach to enable scalable multimodal reinforcement learning in discrete diffusion with effective importance sampling and modality-specific adaptations. To this end, we first clarify the theoretical foundation of DDMs, which facilitates building an importance estimator that captures valuable token fluctuations for gradient updates. We then tailor the rollout method for visual sequences, yielding diverse completions and reliable optimization gradients. On math reasoning, coding, and visual generation benchmarks, MaskGRPO delivers more stable and efficient updates, leading to stronger reasoning performance and better generation quality. This study establishes MaskGRPO as a systematic policy optimization approach and the first practical route to reinforcement learning for discretized visual diffusion.
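As a rough illustration of the group-relative, importance-weighted update the abstract describes, the sketch below normalizes per-prompt rewards into advantages and weights per-token log-probability ratios over the positions predicted during the rollout. The function name, tensor shapes, and PPO-style clipping are illustrative assumptions, not the paper's actual estimator, which derives its importance weights from the diffusion formulation itself.

```python
import torch

def grpo_masked_diffusion_loss(logp_new, logp_old, rewards, mask, clip_eps=0.2):
    """Hypothetical GRPO-style surrogate for a masked discrete diffusion policy.

    logp_new, logp_old: (G, L) per-token log-probs under the current / rollout policy
    rewards:            (G,)   scalar reward per completion in the group
    mask:               (G, L) 1 where a token was actually predicted in this rollout
    """
    # Group-relative advantage: normalize rewards within the group of G rollouts.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)          # (G,)

    # Token-level importance ratio between the current and rollout policies.
    ratio = torch.exp(logp_new - logp_old)                             # (G, L)

    # PPO-style clipped surrogate, averaged over the tokens that were predicted.
    unclipped = ratio * adv[:, None]
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv[:, None]
    per_token = torch.minimum(unclipped, clipped)
    loss = -(per_token * mask).sum() / mask.sum().clamp(min=1)
    return loss
```

Restricting the surrogate to masked-then-predicted positions is the key difference from the autoregressive case, where every token has a well-defined likelihood; the paper's importance estimator addresses this gap more rigorously.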
Community
Our project has been open-sourced at https://github.com/martian422/MaskGRPO
In this repo, we release:
- Improved importance estimation for reinforcing DDMs with controlled randomness across devices.
- AR-like reversing for RL training on math reasoning and coding tasks (see the sketch after this list).
- Emerge sampler for image generation and RL training.
- Detailed SFT, RL, and evaluation scripts.
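One way to read the "AR-like reversing" rollout is as a block-wise, left-to-right unmasking schedule for a masked diffusion sampler. The sketch below is a minimal illustration under that assumption; the model call signature, block size, steps per block, and confidence-based unmasking rule are all hypothetical and not the repository's API.

```python
import torch

@torch.no_grad()
def ar_like_rollout(model, prompt_ids, gen_len=256, block_size=32, mask_id=0, steps_per_block=8):
    """Hypothetical AR-like rollout: unmask the response block by block, left to right."""
    device = prompt_ids.device
    seq = torch.cat([prompt_ids, torch.full((gen_len,), mask_id, device=device)])
    start = prompt_ids.numel()

    for blk in range(start, start + gen_len, block_size):
        blk_end = min(blk + block_size, start + gen_len)
        for _ in range(steps_per_block):
            logits = model(seq.unsqueeze(0)).squeeze(0)      # (L, V); assumed interface
            conf, pred = logits.softmax(-1).max(-1)
            masked = (seq == mask_id)
            masked[:blk] = False                             # only touch the current block
            masked[blk_end:] = False
            if not masked.any():
                break
            # Unmask the most confident half of the remaining positions in this block.
            k = max(1, masked.sum().item() // 2)
            cand = torch.where(masked, conf, torch.full_like(conf, -1.0))
            idx = cand.topk(k).indices
            seq[idx] = pred[idx]
    return seq[start:]
```

Committing tokens in left-to-right blocks keeps each rollout close to an autoregressive trajectory, which is what makes per-token importance ratios between old and new policies meaningful for RL updates; the released code may schedule unmasking differently.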
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Inpainting-Guided Policy Optimization for Diffusion Large Language Models (2025)
- MDPO: Overcoming the Training-Inference Divide of Masked Diffusion Language Models (2025)
- DiFFPO: Training Diffusion LLMs to Reason Fast and Furious via Reinforcement Learning (2025)
- STAGE: Stable and Generalizable GRPO for Autoregressive Image Generation (2025)
- Plug-and-Play Prompt Refinement via Latent Feedback for Diffusion Model Alignment (2025)
- d2: Improved Techniques for Training Reasoning Diffusion Language Models (2025)
- RFG: Test-Time Scaling for Diffusion Large Language Model Reasoning with Reward-Free Guidance (2025)
