Query-focused and Memory-aware Reranker for Long Context Processing
Abstract
Building on existing analyses of retrieval heads in large language models, we propose an alternative reranking framework that trains models to estimate passage-query relevance from the attention scores of selected heads. This approach provides a listwise solution that leverages holistic information across the entire candidate shortlist during ranking. At the same time, it naturally produces continuous relevance scores, enabling training on arbitrary retrieval datasets without requiring Likert-scale supervision. Our framework is lightweight and effective, requiring only small-scale models (e.g., 4B parameters) to achieve strong performance. Extensive experiments demonstrate that our method outperforms existing state-of-the-art pointwise and listwise rerankers across multiple domains, including Wikipedia and long narrative datasets. It further establishes a new state of the art on the LoCoMo benchmark, which assesses dialogue understanding and memory usage. We also demonstrate that our framework supports flexible extensions: augmenting candidate passages with contextual information further improves ranking accuracy, while training attention heads from middle layers enhances efficiency without sacrificing performance.
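To make the core idea concrete, here is a minimal sketch of attention-based listwise scoring. It assumes a hypothetical setup: the candidate passages and the query are concatenated into one sequence, and each passage is scored by the length-normalized attention mass that query tokens place on its tokens, averaged over a set of selected heads. The function names, tensor layout, and normalization are illustrative assumptions, not the paper's exact implementation (which additionally trains the selected heads); a real pipeline would obtain `attn` from a language model's attention outputs.

```python
import numpy as np

def rerank_by_attention(attn, passage_spans, query_span, selected_heads):
    """Sketch of attention-based listwise reranking (hypothetical API).

    attn           -- [num_heads, seq_len, seq_len] attention weights for one
                      forward pass over the concatenated shortlist + query.
    passage_spans  -- list of (start, end) token spans, one per passage.
    query_span     -- (start, end) token span of the query.
    selected_heads -- indices of the retrieval-style heads used for scoring.

    Returns passage indices sorted by descending relevance, plus the
    continuous scores (usable as soft supervision without Likert labels).
    """
    q0, q1 = query_span
    scores = []
    for p0, p1 in passage_spans:
        # Attention mass flowing from query tokens to this passage's tokens,
        # restricted to the selected heads; average over heads.
        mass = attn[selected_heads, q0:q1, p0:p1].sum(axis=(1, 2)).mean()
        scores.append(mass / (p1 - p0))  # normalize by passage length
    order = np.argsort(scores)[::-1]
    return order.tolist(), scores

# Toy demo: two 3-token passages at [0:3] and [3:6], a 2-token query at [6:8].
attn = np.zeros((2, 8, 8))
attn[:, 6:8, 0:3] = 0.1   # query attends weakly to passage 0
attn[:, 6:8, 3:6] = 0.3   # query attends strongly to passage 1
order, scores = rerank_by_attention(attn, [(0, 3), (3, 6)], (6, 8), [0, 1])
print(order)  # passage 1 ranks first
```

Because all passages sit in one sequence, each score is computed in the context of the full shortlist, which is what makes the procedure listwise rather than pointwise.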
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- OptiSet: Unified Optimizing Set Selection and Ranking for Retrieval-Augmented Generation (2026)
- DF-RAG: Query-Aware Diversity for Retrieval-Augmented Generation (2026)
- ReAttn: Improving Attention-based Re-ranking via Attention Re-weighting (2026)
- Implicit Graph, Explicit Retrieval: Towards Efficient and Interpretable Long-horizon Memory for Large Language Models (2026)
- S3-Attention: Attention-Aligned Endogenous Retrieval for Memory-Bounded Long-Context Inference (2026)
- CIRAG: Construction-Integration Retrieval and Adaptive Generation for Multi-hop Question Answering (2026)
- Exploring Fine-Tuning for In-Context Retrieval and Efficient KV-Caching in Long-Context Language Models (2026)
Memory-aware reranking for long contexts is a practical approach, and the query-focused part makes a lot of sense for real retrieval pipelines. Found a nice breakdown of this one: https://arxivexplained.com/papers/query-focused-and-memory-aware-reranker-for-long-context-processing
Models citing this paper: 1
Datasets citing this paper: 0
Spaces citing this paper: 0