R-Search: Empowering LLM Reasoning with Search via Multi-Reward Reinforcement Learning
2025 · Open Access
DOI: https://doi.org/10.48550/arxiv.2506.04185
Large language models (LLMs) have notably progressed in multi-step and long-chain reasoning. However, extending their reasoning capabilities to encompass deep interactions with search remains a non-trivial challenge, as models often fail to identify optimal reasoning-search interaction trajectories, resulting in suboptimal responses. We propose R-Search, a novel reinforcement learning framework for Reasoning-Search integration, designed to enable LLMs to autonomously execute multi-step reasoning with deep search interaction, and learn optimal reasoning-search interaction trajectories via multi-reward signals, improving response quality in complex logic- and knowledge-intensive tasks. R-Search guides the LLM to dynamically decide when to retrieve or reason, while globally integrating key evidence to enhance deep knowledge interaction between reasoning and search. During RL training, R-Search provides multi-stage, multi-type rewards to jointly optimize the reasoning-search trajectory. Experiments on seven datasets show that R-Search outperforms advanced RAG baselines by up to 32.2% (in-domain) and 25.1% (out-of-domain). The code and data are available at https://github.com/QingFei1/R-Search.
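The abstract describes the interaction pattern (the model alternates free-form reasoning with search calls, and training scores the whole trajectory with several reward types) but includes no pseudocode here. Below is a minimal sketch of that general pattern; `llm`, `retriever`, `answer_f1`, and `evidence_score` are hypothetical stand-ins, and the tag names and reward weights are illustrative, not taken from the paper.

```python
import re

def rollout(llm, retriever, question, max_turns=4):
    """Interleave reasoning and search: the model either emits
    <search>query</search> to call the retriever, or <answer>...</answer>
    to terminate the trajectory."""
    trace = f"Question: {question}\n"
    for _ in range(max_turns):
        step = llm.generate(trace)                   # hypothetical LLM interface
        trace += step
        m = re.search(r"<search>(.*?)</search>", step, re.S)
        if m is None:
            break                                    # model chose to answer
        docs = retriever.search(m.group(1))          # hypothetical retriever
        trace += f"\n<evidence>{docs}</evidence>\n"  # inject retrieved evidence

    return trace

def multi_reward(trace, gold_answer, answer_f1, evidence_score):
    """Composite reward in the spirit of multi-stage, multi-type signals:
    final-answer quality plus evidence and format terms. The 0.8/0.1
    weights are illustrative, not the paper's."""
    m = re.search(r"<answer>(.*?)</answer>", trace, re.S)
    r_answer = answer_f1(m.group(1), gold_answer) if m else 0.0
    r_evidence = evidence_score(trace)               # e.g. judge the evidence
    r_format = 0.1 if m else -0.1                    # well-formed termination
    return 0.8 * r_answer + 0.1 * r_evidence + r_format
```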
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2506.04185
- PDF: https://arxiv.org/pdf/2506.04185
- OA Status: green
- OpenAlex ID: https://openalex.org/W4416075279
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4416075279 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2506.04185 (Digital Object Identifier)
- Title: R-Search: Empowering LLM Reasoning with Search via Multi-Reward Reinforcement Learning (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025
- Publication date: 2025-06-04 (full publication date if available)
- Authors: Qi Zhao, Ruobing Wang, Dongsheng Xu, Daren Zha, Limin Liu (in order)
- Landing page: https://arxiv.org/abs/2506.04185 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2506.04185 (direct link to the full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2506.04185 (direct OA link when available)
- Cited by: 0 (total citation count in OpenAlex)
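The fields above follow the OpenAlex works schema. For reference, here is a minimal sketch for fetching the same record from the public OpenAlex API; the endpoint pattern and field names come from the OpenAlex documentation, and error handling is kept to a bare `raise_for_status`.

```python
import requests

# Fetch this work from the public OpenAlex API (no authentication required).
resp = requests.get("https://api.openalex.org/works/W4416075279", timeout=30)
resp.raise_for_status()
work = resp.json()

print(work["display_name"])            # work title
print(work["publication_date"])        # e.g. 2025-06-04
print(work["open_access"]["oa_url"])   # direct OA link, if any
print([a["author"]["display_name"] for a in work["authorships"]])
```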
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4416075279 |
| doi | https://doi.org/10.48550/arxiv.2506.04185 |
| ids.doi | https://doi.org/10.48550/arxiv.2506.04185 |
| ids.openalex | https://openalex.org/W4416075279 |
| fwci | |
| type | preprint |
| title | R-Search: Empowering LLM Reasoning with Search via Multi-Reward Reinforcement Learning |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2506.04185 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | cc-by |
| locations[0].pdf_url | https://arxiv.org/pdf/2506.04185 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2506.04185 |
| locations[1].id | doi:10.48550/arxiv.2506.04185 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | cc-by |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | https://openalex.org/licenses/cc-by |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2506.04185 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5047419128 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-3054-8934 |
| authorships[0].author.display_name | Qi Zhao |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Zhao, Qingfei |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5100778735 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-2414-8777 |
| authorships[1].author.display_name | Ruobing Wang |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Wang, Ruobing |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5055908165 |
| authorships[2].author.orcid | https://orcid.org/0000-0002-8477-5377 |
| authorships[2].author.display_name | Dongsheng Xu |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Xu, Dingling |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5051666209 |
| authorships[3].author.orcid | https://orcid.org/0009-0002-6042-3454 |
| authorships[3].author.display_name | Daren Zha |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Zha, Daren |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5079281573 |
| authorships[4].author.orcid | https://orcid.org/0000-0001-8577-1661 |
| authorships[4].author.display_name | Limin Liu |
| authorships[4].author_position | last |
| authorships[4].raw_author_name | Liu, Limin |
| authorships[4].is_corresponding | False |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2506.04185 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | R-Search: Empowering LLM Reasoning with Search via Multi-Reward Reinforcement Learning |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-28T09:52:00.869722 |
| primary_topic | |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2506.04185 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2506.04185 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2506.04185 |
| primary_location.id | pmh:oai:arXiv.org:2506.04185 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | cc-by |
| primary_location.pdf_url | https://arxiv.org/pdf/2506.04185 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2506.04185 |
| publication_date | 2025-06-04 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-to-positions index of the abstract; the full abstract text appears above, and a reconstruction sketch follows this table) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 5 |
| citation_normalized_percentile |
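The `abstract_inverted_index` field stores the abstract as a map from each token to the positions it occupies, which is how OpenAlex serves abstract text. A minimal sketch to rebuild the plain abstract from that map; the helper name and the tiny example fragment are illustrative.

```python
def reconstruct_abstract(inverted_index: dict) -> str:
    """Rebuild plain text from an OpenAlex abstract_inverted_index,
    which maps each token to the list of positions it occupies."""
    slots = {}
    for token, positions in inverted_index.items():
        for pos in positions:
            slots[pos] = token
    # Join tokens in position order to recover the original word sequence.
    return " ".join(slots[pos] for pos in sorted(slots))

# Tiny illustrative fragment (single-position tokens only):
frag = {"Large": [0], "language": [1], "models": [2], "(LLMs)": [3]}
print(reconstruct_abstract(frag))  # Large language models (LLMs)
```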