OE-BevSeg: An Object Informed and Environment Aware Multimodal Framework for Bird's-eye-view Vehicle Semantic Segmentation
2024 · Open Access · DOI: https://doi.org/10.48550/arxiv.2407.13137
Bird's-eye-view (BEV) semantic segmentation is becoming crucial in autonomous driving systems. It provides perception of the ego vehicle's surrounding environment by projecting 2D multi-view images into 3D world space. Recently, BEV segmentation has made notable progress, attributed to better view transformation modules, larger image encoders, or more temporal information. However, two issues remain: 1) a lack of effective understanding and enhancement of BEV-space features, particularly in accurately capturing long-distance environmental features, and 2) insufficient recognition of the fine details of target objects. To address these issues, we propose OE-BevSeg, an end-to-end multimodal framework that enhances BEV segmentation performance through global environment-aware perception and local target-object enhancement. OE-BevSeg employs an environment-aware BEV compressor: based on prior knowledge that the main composition of the BEV surroundings varies with distance from the ego vehicle, it uses long-sequence global modeling to improve the model's understanding and perception of the environment. To enrich target-object information in the segmentation results, we introduce a center-informed object enhancement module, which uses centerness information to supervise and guide the segmentation head, thereby improving segmentation performance from a local-enhancement perspective. Additionally, we design a multimodal fusion branch that integrates multi-view RGB image features with radar/LiDAR features, achieving significant performance improvements. Extensive experiments show that, in both camera-only and multimodal-fusion BEV segmentation tasks, our approach achieves state-of-the-art results by a large margin on the nuScenes dataset for vehicle segmentation, demonstrating strong applicability to autonomous driving.
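The abstract describes the center-informed object enhancement module only at a high level. As a rough illustration, the following is a minimal sketch of how centerness information could supervise and guide a BEV segmentation head. It is not the authors' code: the channel count, layer layout, and loss weighting (`w_ctr`) are all assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterInformedHead(nn.Module):
    """Hypothetical sketch of a centerness-supervised BEV segmentation head.

    Layer shapes and the loss combination below are illustrative
    assumptions, not the OE-BevSeg implementation.
    """
    def __init__(self, in_ch: int = 128):
        super().__init__()
        self.seg_head = nn.Conv2d(in_ch, 1, kernel_size=1)     # vehicle-mask logits
        self.center_head = nn.Conv2d(in_ch, 1, kernel_size=1)  # centerness logits

    def forward(self, bev_feat: torch.Tensor):
        # bev_feat: (B, C, H, W) BEV feature map from the view-transform backbone
        return self.seg_head(bev_feat), self.center_head(bev_feat)

def center_informed_loss(seg_logits, ctr_logits, seg_gt, ctr_gt, w_ctr=0.5):
    # The auxiliary centerness target supervises the head alongside the
    # segmentation loss; the simple weighted sum is an assumed combination.
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, seg_gt)
    ctr_loss = F.binary_cross_entropy_with_logits(ctr_logits, ctr_gt)
    return seg_loss + w_ctr * ctr_loss
```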
Work metadata
- Type: preprint
- Language: en
- Landing page: http://arxiv.org/abs/2407.13137
- PDF: https://arxiv.org/pdf/2407.13137
- OA status: green
- Cited by: 1
- Related works: 10
- OpenAlex ID: https://openalex.org/W4406058803
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4406058803 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2407.13137 (Digital Object Identifier)
- Title: OE-BevSeg: An Object Informed and Environment Aware Multimodal Framework for Bird's-eye-view Vehicle Semantic Segmentation (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2024 (year of publication)
- Publication date: 2024-07-18 (full publication date if available)
- Authors: Jian Sun, Yuqi Dai, Chi‐Man Vong, Qing Xu, Songnian Li, Jianqiang Wang, Lei He, Keqiang Li (list of authors in order)
- Landing page: https://arxiv.org/abs/2407.13137 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2407.13137 (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2407.13137 (direct OA link when available)
- Concepts: Segmentation, Object (grammar), Computer vision, Artificial intelligence, Computer science, Human–computer interaction, Psychology (top concepts, fields/topics, attached by OpenAlex)
- Cited by: 1 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 1 (per-year citation counts, last 5 years)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4406058803 |
| doi | https://doi.org/10.48550/arxiv.2407.13137 |
| ids.doi | https://doi.org/10.48550/arxiv.2407.13137 |
| ids.openalex | https://openalex.org/W4406058803 |
| fwci | |
| type | preprint |
| title | OE-BevSeg: An Object Informed and Environment Aware Multimodal Framework for Bird's-eye-view Vehicle Semantic Segmentation |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T11099 |
| topics[0].field.id | https://openalex.org/fields/22 |
| topics[0].field.display_name | Engineering |
| topics[0].score | 0.9721999764442444 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/2203 |
| topics[0].subfield.display_name | Automotive Engineering |
| topics[0].display_name | Autonomous Vehicle Technology and Safety |
| topics[1].id | https://openalex.org/T10036 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9455999732017517 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Advanced Neural Network Applications |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C89600930 |
| concepts[0].level | 2 |
| concepts[0].score | 0.7048938870429993 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q1423946 |
| concepts[0].display_name | Segmentation |
| concepts[1].id | https://openalex.org/C2781238097 |
| concepts[1].level | 2 |
| concepts[1].score | 0.6222358345985413 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q175026 |
| concepts[1].display_name | Object (grammar) |
| concepts[2].id | https://openalex.org/C31972630 |
| concepts[2].level | 1 |
| concepts[2].score | 0.5349527597427368 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[2].display_name | Computer vision |
| concepts[3].id | https://openalex.org/C154945302 |
| concepts[3].level | 1 |
| concepts[3].score | 0.5331121683120728 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[3].display_name | Artificial intelligence |
| concepts[4].id | https://openalex.org/C41008148 |
| concepts[4].level | 0 |
| concepts[4].score | 0.5199993252754211 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[4].display_name | Computer science |
| concepts[5].id | https://openalex.org/C107457646 |
| concepts[5].level | 1 |
| concepts[5].score | 0.34273436665534973 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q207434 |
| concepts[5].display_name | Human–computer interaction |
| concepts[6].id | https://openalex.org/C15744967 |
| concepts[6].level | 0 |
| concepts[6].score | 0.3423153758049011 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q9418 |
| concepts[6].display_name | Psychology |
| keywords[0].id | https://openalex.org/keywords/segmentation |
| keywords[0].score | 0.7048938870429993 |
| keywords[0].display_name | Segmentation |
| keywords[1].id | https://openalex.org/keywords/object |
| keywords[1].score | 0.6222358345985413 |
| keywords[1].display_name | Object (grammar) |
| keywords[2].id | https://openalex.org/keywords/computer-vision |
| keywords[2].score | 0.5349527597427368 |
| keywords[2].display_name | Computer vision |
| keywords[3].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[3].score | 0.5331121683120728 |
| keywords[3].display_name | Artificial intelligence |
| keywords[4].id | https://openalex.org/keywords/computer-science |
| keywords[4].score | 0.5199993252754211 |
| keywords[4].display_name | Computer science |
| keywords[5].id | https://openalex.org/keywords/human–computer-interaction |
| keywords[5].score | 0.34273436665534973 |
| keywords[5].display_name | Human–computer interaction |
| keywords[6].id | https://openalex.org/keywords/psychology |
| keywords[6].score | 0.3423153758049011 |
| keywords[6].display_name | Psychology |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2407.13137 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2407.13137 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2407.13137 |
| locations[1].id | doi:10.48550/arxiv.2407.13137 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2407.13137 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5062319704 |
| authorships[0].author.orcid | https://orcid.org/0000-0001-5031-4938 |
| authorships[0].author.display_name | Jian Sun |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Sun, Jian |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5050656423 |
| authorships[1].author.orcid | https://orcid.org/0009-0009-3581-7639 |
| authorships[1].author.display_name | Yuqi Dai |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Dai, Yuqi |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5076922237 |
| authorships[2].author.orcid | https://orcid.org/0000-0001-7997-8279 |
| authorships[2].author.display_name | Chi‐Man Vong |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Vong, Chi-Man |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5100743481 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-6723-6072 |
| authorships[3].author.display_name | Qing Xu |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Xu, Qing |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5047703394 |
| authorships[4].author.orcid | https://orcid.org/0000-0002-8244-5681 |
| authorships[4].author.display_name | Songnian Li |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Li, Shengbo Eben |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5115596549 |
| authorships[5].author.orcid | |
| authorships[5].author.display_name | Jianqiang Wang |
| authorships[5].author_position | middle |
| authorships[5].raw_author_name | Wang, Jianqiang |
| authorships[5].is_corresponding | False |
| authorships[6].author.id | https://openalex.org/A5100693609 |
| authorships[6].author.orcid | https://orcid.org/0000-0003-3020-2984 |
| authorships[6].author.display_name | Lei He |
| authorships[6].author_position | middle |
| authorships[6].raw_author_name | He, Lei |
| authorships[6].is_corresponding | False |
| authorships[7].author.id | https://openalex.org/A5031855986 |
| authorships[7].author.orcid | https://orcid.org/0000-0002-9333-7416 |
| authorships[7].author.display_name | Keqiang Li |
| authorships[7].author_position | last |
| authorships[7].raw_author_name | Li, Keqiang |
| authorships[7].is_corresponding | False |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2407.13137 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | OE-BevSeg: An Object Informed and Environment Aware Multimodal Framework for Bird's-eye-view Vehicle Semantic Segmentation |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T11099 |
| primary_topic.field.id | https://openalex.org/fields/22 |
| primary_topic.field.display_name | Engineering |
| primary_topic.score | 0.9721999764442444 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/2203 |
| primary_topic.subfield.display_name | Automotive Engineering |
| primary_topic.display_name | Autonomous Vehicle Technology and Safety |
| related_works | https://openalex.org/W2772917594, https://openalex.org/W2036807459, https://openalex.org/W2058170566, https://openalex.org/W2755342338, https://openalex.org/W2166024367, https://openalex.org/W3116076068, https://openalex.org/W2229312674, https://openalex.org/W2951359407, https://openalex.org/W2079911747, https://openalex.org/W1969923398 |
| cited_by_count | 1 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 1 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2407.13137 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2407.13137 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2407.13137 |
| primary_location.id | pmh:oai:arXiv.org:2407.13137 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2407.13137 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2407.13137 |
| publication_date | 2024-07-18 |
| publication_year | 2024 |
| referenced_works_count | 0 |
| abstract_inverted_index | (machine-generated inverted index of the abstract; full abstract text reproduced above) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 8 |
| citation_normalized_percentile | |
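For readers who want the live record rather than this snapshot, the payload above can be re-fetched from the public OpenAlex API, and the abstract_inverted_index field (collapsed in the table above) can be expanded back into plain text by sorting words by their recorded positions. A minimal sketch, assuming only the `requests` package and that the record still exposes `abstract_inverted_index`:

```python
import requests

# Fetch the live OpenAlex record for this work (same payload as the table above).
work = requests.get("https://api.openalex.org/works/W4406058803").json()

def rebuild_abstract(inv_index: dict) -> str:
    """Expand OpenAlex's {word: [positions]} inverted index into plain text."""
    by_pos = {pos: word for word, positions in inv_index.items() for pos in positions}
    return " ".join(by_pos[i] for i in sorted(by_pos))

print(work["display_name"])
inv = work.get("abstract_inverted_index")
if inv:  # may be null when a publisher withholds the abstract
    print(rebuild_abstract(inv))
```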