Chunks as Arms: Multi-Armed Bandit-Guided Sampling for Long-Context LLM Preference Optimization
Long-context modeling is critical for a wide range of real-world tasks, including long-context question answering, summarization, and complex reasoning. Recent studies have explored fine-tuning Large Language Models (LLMs) with synthetic data to enhance their long-context capabilities. However, the effectiveness of such approaches is often limited by the low diversity of and factual inconsistencies in the generated data.
To address these challenges, we propose LongMab, a novel framework that leverages a Multi-Armed Bandit (MAB) rollout strategy to identify the most informative chunks of a given long context, sample high-quality and diverse responses, and construct preference data pairs for Direct Preference Optimization (DPO) training. Specifically, we treat context chunks as the arms of a MAB, select chunks according to their expected reward scores as input for the LLM to generate responses, and iteratively update these scores based on reward feedback. Balancing exploration and exploitation during the rollout process enables the LLM to focus on the most relevant context segments, thereby generating and collecting high-quality and diverse responses.
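The rollout loop described above can be sketched as follows. This is a minimal illustration only: the abstract does not specify the selection rule, so a standard UCB1 policy stands in for "select chunks based on their expected reward scores," and the `generate_fn`/`reward_fn` interfaces and hyperparameters are assumptions for the sketch.

```python
import math

def select_chunk(counts, values, t, c=1.0):
    """UCB1-style arm selection over context chunks: try each chunk once,
    then pick the chunk maximizing mean reward plus an exploration bonus."""
    for i, n in enumerate(counts):
        if n == 0:
            return i  # explore untried arms first
    return max(
        range(len(counts)),
        key=lambda i: values[i] / counts[i] + c * math.sqrt(math.log(t) / counts[i]),
    )

def mab_rollout(chunks, generate_fn, reward_fn, num_rollouts=20, c=1.0):
    """MAB-guided rollout: each round selects a chunk (arm), generates a
    response conditioned on it, scores the response, and updates that
    arm's statistics. Returns (chunk_index, response, reward) triples,
    from which preference pairs (highest- vs. lowest-reward responses)
    can later be assembled for DPO training."""
    counts = [0] * len(chunks)    # pulls per arm
    values = [0.0] * len(chunks)  # cumulative reward per arm
    history = []
    for t in range(1, num_rollouts + 1):
        i = select_chunk(counts, values, t, c)
        response = generate_fn(chunks[i])  # stand-in for an LLM call
        r = reward_fn(response)
        counts[i] += 1
        values[i] += r
        history.append((i, response, r))
    return history
```

In this sketch, arms with high observed reward are pulled more often (exploitation) while the bonus term keeps occasionally revisiting other chunks (exploration), so the collected responses are both high-quality and diverse.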
Experimental results on both Llama and Qwen models demonstrate the effectiveness of LongMab, which achieves more than a 4% improvement on long-context reasoning benchmarks. All data and code will be released on this https URL. Comments: 17 pages. Subjects: Computation and Language (cs.CL). Cite as: arXiv:2508.13993 (this version: arXiv:2508.13993v2). DOI: https://doi.org/10.48550/arXiv.2508.13993 (arXiv-issued DOI via DataCite). Submission history: v1 submitted Tue, 19 Aug 2025 16:33:55 UTC; v2 submitted Thu, 9 Apr 2026 06:47:24 UTC.