PACEvolve: Enabling Long-Horizon Progress-Aware Consistent Evolution
Abstract
The PACEvolve framework addresses key failure modes in LLM-based evolutionary search through hierarchical context management, momentum-based backtracking, and adaptive sampling policies, enabling consistent self-improvement and solution discovery.
Large Language Models (LLMs) have emerged as powerful operators for evolutionary search, yet the design of efficient search scaffolds remains ad hoc: current LLM-in-the-loop systems lack a systematic approach to managing the evolutionary process. We identify three distinct failure modes: Context Pollution, where accumulated experiment history biases future candidate generation; Mode Collapse, where agents stagnate in local minima due to a poor exploration-exploitation balance; and Weak Collaboration, where rigid crossover strategies fail to leverage parallel search trajectories effectively. To address these challenges, we introduce Progress-Aware Consistent Evolution (PACEvolve), a framework designed to robustly govern the agent's context and search dynamics. PACEvolve combines hierarchical context management (HCM) with pruning to counter context pollution; momentum-based backtracking (MBB) to escape local minima; and a self-adaptive sampling policy that unifies backtracking and crossover into dynamic search coordination (CE), allowing agents to balance internal refinement with cross-trajectory collaboration. We demonstrate that PACEvolve provides a systematic path to consistent, long-horizon self-improvement, achieving state-of-the-art results on LLM-SR and KernelBench and discovering solutions that surpass the record on Modded NanoGPT.
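The abstract does not specify how momentum-based backtracking is implemented, but the core idea can be sketched in a toy form: track an exponential moving average of per-step fitness improvement as a "momentum" signal, and when it stalls, backtrack the search to the best checkpoint seen so far instead of continuing from a stagnant candidate. The function below is a minimal illustrative sketch under that assumption; the function name, hyperparameters (`beta`, `threshold`), and the greedy acceptance rule are all hypothetical stand-ins, not PACEvolve's actual algorithm.

```python
import random

def momentum_backtracking_search(score, mutate, init, steps=200,
                                 beta=0.9, threshold=1e-3):
    """Toy evolutionary loop with momentum-based backtracking (illustrative).

    `score` maps a candidate to a fitness value (higher is better);
    `mutate` proposes a new candidate from the current one. An
    exponential moving average of per-step improvement serves as a
    momentum signal; when it decays below `threshold`, the search
    backtracks to the best checkpoint seen so far.
    """
    current = best = init
    cur_score = best_score = score(init)
    momentum = 1.0  # optimistic start so we don't backtrack immediately
    for _ in range(steps):
        cand = mutate(current)
        s = score(cand)
        improvement = max(s - cur_score, 0.0)
        momentum = beta * momentum + (1 - beta) * improvement
        if s > best_score:
            best, best_score = cand, s  # record new checkpoint
        if momentum < threshold:
            current, cur_score = best, best_score  # backtrack to checkpoint
            momentum = 1.0  # reset the stall detector after backtracking
        elif s >= cur_score:
            current, cur_score = cand, s  # greedy accept
    return best, best_score

# Usage on a trivial 1-D objective (maximize -(x - 3)^2):
random.seed(0)
best, val = momentum_backtracking_search(
    score=lambda x: -(x - 3.0) ** 2,
    mutate=lambda x: x + random.uniform(-0.5, 0.5),
    init=0.0,
)
```

In an LLM-in-the-loop setting, `mutate` would be a model call that proposes a revised program and `score` a benchmark evaluation; the backtracking rule is what distinguishes this loop from plain greedy hill climbing, since it gives the agent a principled way out of stagnant trajectories.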