V2X-UniPool: Unifying Multimodal Perception and Knowledge Reasoning for Autonomous Driving
2025 · Open Access · DOI: https://doi.org/10.48550/arxiv.2506.02580
Autonomous driving (AD) has achieved significant progress, yet single-vehicle perception remains constrained by sensing range and occlusions. Vehicle-to-Everything (V2X) communication addresses these limits by enabling collaboration across vehicles and infrastructure, but it also faces heterogeneity, synchronization, and latency constraints. Language models offer strong knowledge-driven reasoning and decision-making capabilities, but they are not inherently designed to process raw sensor streams and are prone to hallucination. We propose V2X-UniPool, the first framework that unifies V2X perception with language-based reasoning for knowledge-driven AD. It transforms multimodal V2X data into structured, language-based knowledge, organizes it in a time-indexed knowledge pool for temporally consistent reasoning, and employs Retrieval-Augmented Generation (RAG) to ground decisions in real-time context. Experiments on the real-world DAIR-V2X dataset show that V2X-UniPool achieves state-of-the-art planning accuracy and safety while reducing communication cost by more than 80%, achieving the lowest overhead among evaluated methods. These results highlight the promise of bridging V2X perception and language reasoning to advance scalable and trustworthy driving. Our code is available at: https://github.com/Xuewen2025/V2X-UniPool
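To make the data flow described in the abstract concrete, below is a minimal, hypothetical sketch of a time-indexed knowledge pool combined with retrieval-augmented prompting. The names (`KnowledgeEntry`, `KnowledgePool`, `retrieve`, `build_prompt`) and the fixed time-window retrieval are illustrative assumptions, not the authors' actual implementation.

```python
from bisect import insort
from dataclasses import dataclass, field


@dataclass(order=True)
class KnowledgeEntry:
    timestamp: float                  # time index used for temporal consistency
    text: str = field(compare=False)  # language-based description of V2X data


class KnowledgePool:
    """Hypothetical time-indexed pool of language-based V2X knowledge."""

    def __init__(self):
        self.entries: list[KnowledgeEntry] = []

    def add(self, timestamp: float, text: str) -> None:
        # Keep entries sorted by timestamp as they arrive.
        insort(self.entries, KnowledgeEntry(timestamp, text))

    def retrieve(self, now: float, window: float = 2.0) -> list[str]:
        """Return descriptions from the last `window` seconds (simple retrieval step)."""
        return [e.text for e in self.entries if now - window <= e.timestamp <= now]


def build_prompt(pool: KnowledgePool, now: float, query: str) -> str:
    """Ground a planning query in the retrieved real-time context (RAG-style)."""
    context = "\n".join(pool.retrieve(now))
    return f"Context:\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    pool = KnowledgePool()
    pool.add(10.0, "Infrastructure camera: pedestrian crossing 30 m ahead.")
    pool.add(10.5, "Ego LiDAR: lead vehicle braking, gap 12 m.")
    print(build_prompt(pool, now=11.0, query="Should the ego vehicle decelerate?"))
```

The retrieved context would then be passed to a language model for planning; how the real system encodes V2X data into text and selects entries is described in the paper itself.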
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2506.02580
- PDF: https://arxiv.org/pdf/2506.02580
- OA Status: green
- OpenAlex ID: https://openalex.org/W4415133854
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4415133854 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2506.02580 (Digital Object Identifier)
- Title: V2X-UniPool: Unifying Multimodal Perception and Knowledge Reasoning for Autonomous Driving
- Type: preprint (OpenAlex work type)
- Language: en
- Publication year: 2025
- Publication date: 2025-06-03
- Authors (in order): Xuewen Luo, Fengze Yang, Fan Ding, Xiangbo Gao, Shuo Xing, Yang Zhou, Zhengzhong Tu, Chenxi Liu
- Landing page: https://arxiv.org/abs/2506.02580
- PDF URL: https://arxiv.org/pdf/2506.02580
- Open access: Yes
- OA status: green (per OpenAlex)
- OA URL: https://arxiv.org/pdf/2506.02580
- Cited by: 0 (total citations in OpenAlex)
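The record above (and the full payload below) can be pulled directly from the OpenAlex API using the work ID. A minimal sketch, assuming the public works endpoint and the requests library; the field names used here all appear in the payload below:

```python
import requests

# Fetch the OpenAlex record for this work by its ID.
WORK_ID = "W4415133854"
resp = requests.get(f"https://api.openalex.org/works/{WORK_ID}", timeout=10)
resp.raise_for_status()
work = resp.json()

print(work["title"])                      # work title
print(work["doi"])                        # https://doi.org/10.48550/arxiv.2506.02580
print(work["open_access"]["oa_status"])   # e.g. "green"
print([a["author"]["display_name"] for a in work["authorships"]])
```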
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4415133854 |
| doi | https://doi.org/10.48550/arxiv.2506.02580 |
| ids.doi | https://doi.org/10.48550/arxiv.2506.02580 |
| ids.openalex | https://openalex.org/W4415133854 |
| fwci | |
| type | preprint |
| title | V2X-UniPool: Unifying Multimodal Perception and Knowledge Reasoning for Autonomous Driving |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10181 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9577000141143799 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1702 |
| topics[0].subfield.display_name | Artificial Intelligence |
| topics[0].display_name | Natural Language Processing Techniques |
| topics[1].id | https://openalex.org/T10028 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9491999745368958 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1702 |
| topics[1].subfield.display_name | Artificial Intelligence |
| topics[1].display_name | Topic Modeling |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2506.02580 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2506.02580 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2506.02580 |
| locations[1].id | doi:10.48550/arxiv.2506.02580 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2506.02580 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5102664645 |
| authorships[0].author.orcid | |
| authorships[0].author.display_name | Xuewen Luo |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Luo, Xuewen |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5102489788 |
| authorships[1].author.orcid | |
| authorships[1].author.display_name | Fengze Yang |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Yang, Fengze |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5101867313 |
| authorships[2].author.orcid | https://orcid.org/0000-0002-6672-2247 |
| authorships[2].author.display_name | Fan Ding |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Ding, Fan |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5003755037 |
| authorships[3].author.orcid | https://orcid.org/0000-0001-7123-2675 |
| authorships[3].author.display_name | Xiangbo Gao |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Gao, Xiangbo |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5100706352 |
| authorships[4].author.orcid | https://orcid.org/0000-0002-7502-6876 |
| authorships[4].author.display_name | S. Xing |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Xing, Shuo |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5052761636 |
| authorships[5].author.orcid | https://orcid.org/0000-0003-1229-3739 |
| authorships[5].author.display_name | Zhou Yang |
| authorships[5].author_position | middle |
| authorships[5].raw_author_name | Zhou, Yang |
| authorships[5].is_corresponding | False |
| authorships[6].author.id | https://openalex.org/A5015173810 |
| authorships[6].author.orcid | https://orcid.org/0000-0002-7594-2292 |
| authorships[6].author.display_name | Zhengzhong Tu |
| authorships[6].author_position | middle |
| authorships[6].raw_author_name | Tu, Zhengzhong |
| authorships[6].is_corresponding | False |
| authorships[7].author.id | https://openalex.org/A5101875470 |
| authorships[7].author.orcid | https://orcid.org/0000-0002-7993-8370 |
| authorships[7].author.display_name | Chenxi Liu |
| authorships[7].author_position | last |
| authorships[7].raw_author_name | Liu, Chenxi |
| authorships[7].is_corresponding | False |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2506.02580 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-14T00:00:00 |
| display_name | V2X-UniPool: Unifying Multimodal Perception and Knowledge Reasoning for Autonomous Driving |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10181 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9577000141143799 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1702 |
| primary_topic.subfield.display_name | Artificial Intelligence |
| primary_topic.display_name | Natural Language Processing Techniques |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2506.02580 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2506.02580 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2506.02580 |
| primary_location.id | pmh:oai:arXiv.org:2506.02580 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2506.02580 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2506.02580 |
| publication_date | 2025-06-03 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | word-to-position map encoding the abstract shown above (rows omitted here; see the reconstruction sketch after this table) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 8 |
| citation_normalized_percentile | |
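For reference, OpenAlex stores abstracts as an inverted index (word → list of positions) rather than plain text, which is why the `abstract_inverted_index` field above is only summarized. A minimal sketch of rebuilding the abstract from such a mapping; `reconstruct_abstract` and the tiny `inverted` example are illustrative, not part of the payload:

```python
def reconstruct_abstract(inverted: dict[str, list[int]]) -> str:
    """Rebuild abstract text from an OpenAlex-style abstract_inverted_index."""
    # Map each position back to its word, then join the words in positional order.
    positions = {pos: word for word, locs in inverted.items() for pos in locs}
    return " ".join(positions[i] for i in sorted(positions))


# Tiny example using the first few entries of the index (not the full payload):
inverted = {"Autonomous": [0], "driving": [1], "(AD)": [2], "has": [3], "achieved": [4]}
print(reconstruct_abstract(inverted))  # -> "Autonomous driving (AD) has achieved"
```

Applied to the full `abstract_inverted_index` from the payload, this reproduces the abstract shown at the top of the page.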