PhysVLM: Enabling Visual Language Models to Understand Robotic Physical Reachability
2025 · Open Access · DOI: https://doi.org/10.48550/arxiv.2503.08481
Understanding the environment and a robot's physical reachability is crucial for task execution. While state-of-the-art vision-language models (VLMs) excel in environmental perception, they often generate inaccurate or impractical responses in embodied visual reasoning tasks due to a lack of understanding of robotic physical reachability. To address this issue, we propose a unified representation of physical reachability across diverse robots, i.e., Space-Physical Reachability Map (S-P Map), and PhysVLM, a vision-language model that integrates this reachability information into visual reasoning. Specifically, the S-P Map abstracts a robot's physical reachability into a generalized spatial representation, independent of specific robot configurations, allowing the model to focus on reachability features rather than robot-specific parameters. Subsequently, PhysVLM extends traditional VLM architectures by incorporating an additional feature encoder to process the S-P Map, enabling the model to reason about physical reachability without compromising its general vision-language capabilities. To train and evaluate PhysVLM, we constructed a large-scale multi-robot dataset, Phys100K, and a challenging benchmark, EQA-phys, which includes tasks for six different robots in both simulated and real-world environments. Experimental results demonstrate that PhysVLM outperforms existing models, achieving a 14% improvement over GPT-4o on EQA-phys and surpassing advanced embodied VLMs such as RoboMamba and SpatialVLM on the RoboVQA-val and OpenEQA benchmarks. Additionally, the S-P Map shows strong compatibility with various VLMs, and its integration into GPT-4o-mini yields a 7.1% performance improvement.
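The abstract describes the architecture only at a high level: a standard VLM is extended with an additional feature encoder for the S-P Map, and the resulting reachability features are used alongside visual features during reasoning. The paper defines the actual design; the snippet below is only a minimal, hypothetical sketch of that idea in PyTorch, with a toy patch encoder for a single-channel S-P Map and simple token concatenation in front of a stand-in language model. All module names, dimensions, and the fusion choice here are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class SPMapEncoder(nn.Module):
    """Toy encoder turning a single-channel S-P Map into a token sequence (illustrative)."""
    def __init__(self, embed_dim: int = 256, patch: int = 16):
        super().__init__()
        # Patchify the reachability map with a strided convolution.
        self.proj = nn.Conv2d(1, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, sp_map: torch.Tensor) -> torch.Tensor:
        x = self.proj(sp_map)                # (B, D, H/p, W/p)
        return x.flatten(2).transpose(1, 2)  # (B, N_sp, D)

class ToyReachabilityVLM(nn.Module):
    """Minimal sketch: image tokens and S-P Map tokens share one width and are
    concatenated before a stand-in language model processes the fused sequence."""
    def __init__(self, lm_dim: int = 256):
        super().__init__()
        self.sp_encoder = SPMapEncoder(lm_dim)
        self.img_proj = nn.LazyLinear(lm_dim)  # align precomputed image tokens with the LM width
        self.lm = nn.TransformerEncoder(       # placeholder for the actual language model
            nn.TransformerEncoderLayer(d_model=lm_dim, nhead=4, batch_first=True),
            num_layers=2,
        )

    def forward(self, image_tokens: torch.Tensor, sp_map: torch.Tensor) -> torch.Tensor:
        img = self.img_proj(image_tokens)            # (B, N_img, lm_dim)
        sp = self.sp_encoder(sp_map)                 # (B, N_sp, lm_dim)
        return self.lm(torch.cat([img, sp], dim=1))  # reachability-aware token stream

# Example with dummy inputs: 196 image tokens of width 768, a 224x224 S-P Map.
model = ToyReachabilityVLM()
out = model(torch.randn(1, 196, 768), torch.randn(1, 1, 224, 224))
print(out.shape)  # torch.Size([1, 392, 256])
```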
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2503.08481
- PDF: https://arxiv.org/pdf/2503.08481
- OA Status: green
- OpenAlex ID: https://openalex.org/W4416031512
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4416031512 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2503.08481 (Digital Object Identifier)
- Title: PhysVLM: Enabling Visual Language Models to Understand Robotic Physical Reachability (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025 (year of publication)
- Publication date: 2025-03-11 (full publication date if available)
- Authors: Weijie Zhou, Manli Tao, Chaoyang Zhao, Haiyun Guo, Honghui Dong, Ming Tang, Jinqiao Wang (list of authors in order)
- Landing page: https://arxiv.org/abs/2503.08481 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2503.08481 (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2503.08481 (direct OA link when available)
- Cited by: 0 (total citation count in OpenAlex)
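The record above follows the OpenAlex works schema, so the same fields can be retrieved programmatically. Below is a small example that queries the public OpenAlex works endpoint for this entry's ID (W4416031512); it assumes only the `requests` package and the documented https://api.openalex.org/works/<id> route.

```python
import requests

OPENALEX_ID = "W4416031512"  # this work's OpenAlex ID, from the record above

# Fetch the raw work object from the public OpenAlex API (no key required).
resp = requests.get(f"https://api.openalex.org/works/{OPENALEX_ID}", timeout=30)
resp.raise_for_status()
work = resp.json()

authors = [a["author"]["display_name"] for a in work["authorships"]]
print(work["display_name"])           # title
print(work["publication_date"])       # e.g. 2025-03-11
print(", ".join(authors))             # author list in order
print(work["open_access"]["oa_url"])  # direct OA link when available
print("cited by:", work["cited_by_count"])
```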
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4416031512 |
| doi | https://doi.org/10.48550/arxiv.2503.08481 |
| ids.doi | https://doi.org/10.48550/arxiv.2503.08481 |
| ids.openalex | https://openalex.org/W4416031512 |
| fwci | |
| type | preprint |
| title | PhysVLM: Enabling Visual Language Models to Understand Robotic Physical Reachability |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2503.08481 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2503.08481 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2503.08481 |
| locations[1].id | doi:10.48550/arxiv.2503.08481 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | cc-by |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | https://openalex.org/licenses/cc-by |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2503.08481 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5029399748 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-6094-3823 |
| authorships[0].author.display_name | Weijie Zhou |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Zhou, Weijie |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5047036609 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-7484-8173 |
| authorships[1].author.display_name | Manli Tao |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Tao, Manli |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5102990950 |
| authorships[2].author.orcid | https://orcid.org/0009-0005-3896-6542 |
| authorships[2].author.display_name | Chaoyang Zhao |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Zhao, Chaoyang |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5085707125 |
| authorships[3].author.orcid | https://orcid.org/0000-0001-9241-6211 |
| authorships[3].author.display_name | Haiyun Guo |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Guo, Haiyun |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5100389344 |
| authorships[4].author.orcid | https://orcid.org/0000-0001-6483-1426 |
| authorships[4].author.display_name | Honghui Dong |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Dong, Honghui |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5100782002 |
| authorships[5].author.orcid | https://orcid.org/0000-0001-8669-4186 |
| authorships[5].author.display_name | Ming Tang |
| authorships[5].author_position | middle |
| authorships[5].raw_author_name | Tang, Ming |
| authorships[5].is_corresponding | False |
| authorships[6].author.id | https://openalex.org/A5058420913 |
| authorships[6].author.orcid | https://orcid.org/0000-0002-9118-2780 |
| authorships[6].author.display_name | Jinqiao Wang |
| authorships[6].author_position | last |
| authorships[6].raw_author_name | Wang, Jinqiao |
| authorships[6].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2503.08481 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | PhysVLM: Enabling Visual Language Models to Understand Robotic Physical Reachability |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-09T23:09:16.995542 |
| primary_topic | |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2503.08481 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2503.08481 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2503.08481 |
| primary_location.id | pmh:oai:arXiv.org:2503.08481 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2503.08481 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2503.08481 |
| publication_date | 2025-03-11 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-to-position index of the abstract; readable text given in the abstract above) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 7 |
| citation_normalized_percentile | |
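One non-obvious part of the payload is `abstract_inverted_index`, which stores the abstract as a word-to-positions map rather than plain text. The readable abstract can be rebuilt by placing each word at its listed positions and joining them in order; a short sketch, assuming the work object has already been loaded as a Python dict (for example via the API call shown earlier):

```python
def rebuild_abstract(inverted_index: dict[str, list[int]]) -> str:
    """Rebuild plain abstract text from an OpenAlex abstract_inverted_index."""
    # Map every position back to its word, then read the positions in ascending order.
    positions = {pos: word for word, posns in inverted_index.items() for pos in posns}
    return " ".join(positions[i] for i in sorted(positions))

# Example with a tiny index in the same format as the payload above:
sample = {"Understanding": [0], "the": [1], "environment": [2]}
print(rebuild_abstract(sample))  # -> "Understanding the environment"
```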