[Submitted on 12 Mar 2025 (v1), last revised 19 Mar 2025 (this version, v2)]
Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning
Bowen Jin and 5 other authors
Abstract: Efficiently acquiring external knowledge and up-to-date information is essential for effective reasoning and text generation in large language models (LLMs). Simply prompting advanced LLMs with reasoning capabilities to use search engines during inference is suboptimal, since the model never learns how best to interact with the search engine. This paper introduces Search-R1, an extension of the DeepSeek-R1 model where the LLM learns -- solely through reinforcement learning (RL) -- to autonomously generate (multiple) search queries during step-by-step reasoning with real-time retrieval. Search-R1 optimizes LLM rollouts with multi-turn search interactions, leveraging retrieved token masking for stable RL training and a simple outcome-based reward function. Experiments on seven question-answering datasets show that Search-R1 improves performance by 26% (Qwen2.5-7B), 21% (Qwen2.5-3B), and 10% (LLaMA3.2-3B) over strong baselines. This paper further provides empirical insights into RL optimization methods, LLM choices, and response length dynamics in retrieval-augmented reasoning. The code and model checkpoints are available at this https URL.
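To make the "retrieved token masking" idea concrete: the abstract describes masking tokens that were copied from search results out of the RL loss, so the policy gradient only flows through tokens the model itself generated. The sketch below is a minimal, hypothetical illustration of that idea using a PPO-style clipped objective; the function name `masked_policy_loss`, the argument names, and the specific PPO form are assumptions for illustration, not the paper's actual implementation (the paper studies multiple RL optimization methods).

```python
import torch

def masked_policy_loss(logprobs, old_logprobs, advantages,
                       generated_mask, clip_eps=0.2):
    """PPO-style clipped policy loss with retrieved-token masking (sketch).

    logprobs, old_logprobs, advantages: per-token tensors over the rollout.
    generated_mask: 1.0 for tokens the LLM generated itself, 0.0 for tokens
        inserted from search-engine results; masked tokens contribute no
        gradient, which is the stabilizing trick the abstract refers to.
    """
    ratio = torch.exp(logprobs - old_logprobs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    per_token = -torch.min(unclipped, clipped)
    # Zero out the loss on retrieved tokens and average over generated
    # tokens only, so the model is never trained to imitate retrieved text.
    per_token = per_token * generated_mask
    return per_token.sum() / generated_mask.sum().clamp(min=1.0)
```

Under this sketch, the simple outcome-based reward the abstract mentions would be assigned once per rollout (e.g. whether the final answer matches the gold answer) and broadcast into the per-token advantages; only the masking step above is specific to the multi-turn search setting.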
From: Bowen Jin
[v1] Wed, 12 Mar 2025 16:26:39 UTC (196 KB)
[v2] Wed, 19 Mar 2025 21:40:12 UTC (196 KB)