TGP: Two-modal occupancy prediction with 3D Gaussian and sparse points for 3D Environment Awareness
2025 · Open Access · DOI: https://doi.org/10.48550/arxiv.2503.09941
3D semantic occupancy prediction has rapidly become a research focus in robotics and autonomous-driving environment perception, owing to its more faithful geometric representation and its close integration with downstream tasks. By predicting the occupancy of the 3D space around the vehicle, the capability and robustness of scene understanding can be effectively improved. However, existing occupancy prediction methods are primarily voxel- or point-cloud-based: voxel-based networks tend to lose spatial information during voxelization, while point-cloud-based methods, although better at retaining spatial location information, are limited in representing volumetric structural details. To address this issue, we propose a dual-modal prediction method based on 3D Gaussian sets and sparse points that balances spatial location and volumetric structural information, achieving higher accuracy in semantic occupancy prediction. Specifically, our method adopts a Transformer-based architecture, taking 3D Gaussian sets, sparse points, and queries as inputs. Through the multi-layer structure of the Transformer, the enhanced queries and 3D Gaussian sets jointly contribute to the semantic occupancy prediction, and an adaptive fusion mechanism integrates the semantic outputs of the two modalities into the final prediction. Additionally, to further improve accuracy, we dynamically refine the point cloud at each layer, providing more precise location information during occupancy prediction. Experiments on the Occ3D-nuScenes dataset demonstrate the superior performance of the proposed method on IoU-based metrics.
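The abstract does not spell out the adaptive fusion mechanism; a minimal PyTorch-style sketch of one plausible reading, in which a learned per-voxel gate blends the semantic logits of the Gaussian branch and the point branch (all module names, shapes, and the 18-class 200x200x16 grid are illustrative assumptions matching Occ3D-nuScenes conventions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Hypothetical gated fusion of two per-voxel semantic predictions.

    Assumes each branch already produces class logits on a shared voxel
    grid; a 1x1x1 convolution predicts a per-voxel, per-class mixing
    weight from the concatenated logits.
    """
    def __init__(self, num_classes: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv3d(2 * num_classes, num_classes, kernel_size=1),
            nn.Sigmoid(),  # mixing weight in [0, 1]
        )

    def forward(self, gaussian_logits: torch.Tensor,
                point_logits: torch.Tensor) -> torch.Tensor:
        # Both inputs: (B, num_classes, X, Y, Z) on the same grid.
        w = self.gate(torch.cat([gaussian_logits, point_logits], dim=1))
        return w * gaussian_logits + (1.0 - w) * point_logits

# Example: fuse two 18-class predictions on a 200x200x16 grid.
fusion = AdaptiveFusion(num_classes=18)
g = torch.randn(1, 18, 200, 200, 16)
p = torch.randn(1, 18, 200, 200, 16)
fused = fusion(g, p)  # (1, 18, 200, 200, 16)
```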
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2503.09941
- PDF: https://arxiv.org/pdf/2503.09941
- OA Status: green
- OpenAlex ID: https://openalex.org/W4416038072
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4416038072 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2503.09941
- Title: TGP: Two-modal occupancy prediction with 3D Gaussian and sparse points for 3D Environment Awareness
- Type: preprint (OpenAlex work type)
- Language: en
- Publication year: 2025
- Publication date: 2025-03-13
- Authors: Chen Mu, Wenyu Chen, Mingchuan Yang, Yuan Zhang, Tao Han, Xinchi Li, Yunlong Li, Huaici Zhao (in order)
- Landing page: https://arxiv.org/abs/2503.09941
- PDF URL: https://arxiv.org/pdf/2503.09941
- Open access: Yes
- OA status: green (per OpenAlex)
- OA URL: https://arxiv.org/pdf/2503.09941
- Cited by: 0 (total citation count in OpenAlex)
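This record can be fetched directly from the OpenAlex REST API. A minimal sketch using only the standard library (the endpoint and field names are standard OpenAlex and match the payload below; error handling is omitted):

```python
import json
import urllib.request

# Retrieve the work record shown in the payload table below.
url = "https://api.openalex.org/works/W4416038072"
with urllib.request.urlopen(url) as resp:
    work = json.load(resp)

print(work["display_name"])           # work title
print(work["doi"])                    # https://doi.org/10.48550/arxiv.2503.09941
print(work["open_access"]["oa_url"])  # best open-access URL
print(work["cited_by_count"])         # citation count at query time
```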
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4416038072 |
| doi | https://doi.org/10.48550/arxiv.2503.09941 |
| ids.doi | https://doi.org/10.48550/arxiv.2503.09941 |
| ids.openalex | https://openalex.org/W4416038072 |
| fwci | |
| type | preprint |
| title | TGP: Two-modal occupancy prediction with 3D Gaussian and sparse points for 3D Environment Awareness |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2503.09941 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2503.09941 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2503.09941 |
| locations[1].id | doi:10.48550/arxiv.2503.09941 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2503.09941 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5011469029 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-6329-5112 |
| authorships[0].author.display_name | Chen Mu |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Chen, Mu |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5100687323 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-9933-8014 |
| authorships[1].author.display_name | Wenyu Chen |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Chen, Wenyu |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5085862505 |
| authorships[2].author.orcid | https://orcid.org/0000-0002-6747-9218 |
| authorships[2].author.display_name | Mingchuan Yang |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Yang, Mingchuan |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5100368740 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-8842-2691 |
| authorships[3].author.display_name | Yuan Zhang |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Zhang, Yuan |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5101604804 |
| authorships[4].author.orcid | https://orcid.org/0000-0002-6626-1305 |
| authorships[4].author.display_name | Tao Han |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Han, Tao |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5056666001 |
| authorships[5].author.orcid | https://orcid.org/0000-0003-2456-7928 |
| authorships[5].author.display_name | Xinchi Li |
| authorships[5].author_position | middle |
| authorships[5].raw_author_name | Li, Xinchi |
| authorships[5].is_corresponding | False |
| authorships[6].author.id | https://openalex.org/A5100402190 |
| authorships[6].author.orcid | https://orcid.org/0000-0002-1350-5142 |
| authorships[6].author.display_name | Yunlong Li |
| authorships[6].author_position | middle |
| authorships[6].raw_author_name | Li, Yunlong |
| authorships[6].is_corresponding | False |
| authorships[7].author.id | https://openalex.org/A5063377357 |
| authorships[7].author.orcid | https://orcid.org/0000-0002-7772-8652 |
| authorships[7].author.display_name | Huaici Zhao |
| authorships[7].author_position | last |
| authorships[7].raw_author_name | Zhao, Huaici |
| authorships[7].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2503.09941 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | TGP: Two-modal occupancy prediction with 3D Gaussian and sparse points for 3D Environment Awareness |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-09T23:09:16.995542 |
| primary_topic | |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2503.09941 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2503.09941 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2503.09941 |
| primary_location.id | pmh:oai:arXiv.org:2503.09941 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2503.09941 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2503.09941 |
| publication_date | 2025-03-13 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | (token-to-position inverted index of the abstract reproduced above; full mapping omitted) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 8 |
| citation_normalized_percentile | |
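For completeness, abstract_inverted_index is OpenAlex's compact abstract encoding: each token maps to the list of word positions at which it occurs. A minimal sketch of decoding such an index back to plain text (the sample index is a toy fragment taken from the start of this record's abstract, not the full mapping):

```python
def decode_inverted_index(inv_index: dict[str, list[int]]) -> str:
    """Rebuild the abstract by placing each token at its positions."""
    positions = {}
    for token, idxs in inv_index.items():
        for i in idxs:
            positions[i] = token
    return " ".join(positions[i] for i in sorted(positions))

# Toy fragment in the same format as the field above.
sample = {"3D": [0], "semantic": [1], "occupancy": [2], "has": [3]}
print(decode_inverted_index(sample))  # "3D semantic occupancy has"
```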