CL-fusionBEV: 3D object detection method with camera-LiDAR fusion in Bird’s Eye View
2024 · Open Access · DOI: https://doi.org/10.1007/s40747-024-01567-0
In autonomous driving research, 3D object detection from the Bird’s Eye View (BEV) perspective has emerged as a pivotal area of focus. The core of this challenge is the effective fusion of camera and LiDAR data in the BEV. Current approaches predominantly train and predict in the front view and a Cartesian coordinate system, often overlooking the inherent structural and operational differences between cameras and LiDAR sensors. This paper introduces CL-FusionBEV, a 3D object detection method tailored for camera-LiDAR fusion in the BEV perspective. Our approach starts with a view transformation in which an implicit learning module lifts the camera view into BEV space, aligning it with the prediction module. To bring the LiDAR modality into the same BEV framework, we voxelize the point cloud into BEV space, generating LiDAR BEV spatial features. To integrate the BEV spatial features of camera and LiDAR, we develop a multi-modal cross-attention mechanism and an implicit multi-modal fusion network that strengthen the synergy between the dual-modal data. To compensate for the limited global reasoning and feature interaction of multi-modal cross-attention, we propose a BEV self-attention mechanism that performs comprehensive global feature operations. We evaluate our method on the nuScenes dataset, a large-scale autonomous driving benchmark. It achieves a mean Average Precision (mAP) of 73.3% and a nuScenes Detection Score (NDS) of 75.5%, with particularly high accuracy for cars (89%) and pedestrians (90.7%). CL-FusionBEV also outperforms existing comparative methods at detecting occluded and distant objects.
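The abstract outlines a concrete fusion pipeline: camera features lifted to BEV, LiDAR voxelized to BEV, cross-attention between the two modal feature maps, an implicit fusion step, and BEV self-attention for global context. The following is a minimal, hypothetical PyTorch sketch of that fusion stage only, assuming both modalities have already been encoded on a shared BEV grid; the module names, dimensions, and single-layer attention blocks are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of BEV-level camera-LiDAR fusion as described in the abstract.
# Assumption (not from the paper): both modalities arrive as BEV feature maps
# of shape (B, C, H, W); one cross-attention layer fuses camera and LiDAR
# tokens, and one self-attention layer adds global reasoning over the result.
import torch
import torch.nn as nn


class BEVCrossModalFusion(nn.Module):
    def __init__(self, channels: int = 256, num_heads: int = 8):
        super().__init__()
        # Multi-modal cross-attention: camera BEV tokens query LiDAR BEV tokens.
        self.cross_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # BEV self-attention for global feature interaction on the fused map.
        self.self_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Simple "implicit" fusion of the two modal token streams.
        self.fuse = nn.Sequential(
            nn.Linear(2 * channels, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, channels),
        )

    def forward(self, cam_bev: torch.Tensor, lidar_bev: torch.Tensor) -> torch.Tensor:
        # cam_bev, lidar_bev: (B, C, H, W) feature maps on the same BEV grid.
        b, c, h, w = cam_bev.shape
        cam_tokens = cam_bev.flatten(2).transpose(1, 2)      # (B, H*W, C)
        lidar_tokens = lidar_bev.flatten(2).transpose(1, 2)  # (B, H*W, C)

        # Cross-attention: camera queries attend to LiDAR keys/values.
        cross, _ = self.cross_attn(cam_tokens, lidar_tokens, lidar_tokens)

        # Fuse the attended camera stream with the LiDAR stream.
        fused = self.fuse(torch.cat([cross, lidar_tokens], dim=-1))

        # Global self-attention over the fused BEV tokens.
        fused, _ = self.self_attn(fused, fused, fused)
        return fused.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    cam = torch.randn(1, 256, 32, 32)    # camera features lifted to BEV
    lidar = torch.randn(1, 256, 32, 32)  # voxelized LiDAR features in BEV
    out = BEVCrossModalFusion()(cam, lidar)
    print(out.shape)  # torch.Size([1, 256, 32, 32])
```

A detection head would then operate on the fused (B, C, H, W) BEV map; that part is omitted here.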
- Type: article
- Language: en
- Landing Page: https://doi.org/10.1007/s40747-024-01567-0
- OA Status: gold
- Cited By: 10
- References: 47
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4401052722
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4401052722 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.1007/s40747-024-01567-0 (Digital Object Identifier)
- Title: CL-fusionBEV: 3D object detection method with camera-LiDAR fusion in Bird’s Eye View (work title)
- Type: article (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2024 (year of publication)
- Publication date: 2024-07-27 (full publication date if available)
- Authors: Peicheng Shi, Zhiqiang Liu, Xinlong Dong, Aixi Yang (list of authors in order)
- Landing page: https://doi.org/10.1007/s40747-024-01567-0 (publisher landing page)
- Open access: Yes (whether a free full text is available)
- OA status: gold (open access status per OpenAlex)
- OA URL: https://doi.org/10.1007/s40747-024-01567-0 (direct OA link when available)
- Concepts: Lidar, Computer vision, Artificial intelligence, Computational intelligence, Object (grammar), Computer science, Object detection, Fusion, Remote sensing, Geography, Pattern recognition (psychology), Philosophy, Linguistics (top concepts/fields attached by OpenAlex)
- Cited by: 10 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 9, 2024: 1 (per-year citation counts, last 5 years)
- References (count): 47 (number of works referenced by this work)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
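The fields above mirror what the OpenAlex works endpoint returns for this record. Below is a minimal sketch of retrieving the same record over HTTP and printing a few of those fields, assuming the `requests` package is installed; the `mailto` address is a placeholder for OpenAlex's polite-pool convention, not a required value.

```python
# Fetch this work's record from the OpenAlex API and print selected fields.
import requests

WORK_ID = "W4401052722"
url = f"https://api.openalex.org/works/{WORK_ID}"
work = requests.get(url, params={"mailto": "you@example.com"}, timeout=30).json()

print(work["display_name"])                       # work title
print(work["doi"])                                # DOI URL
print(work["open_access"]["oa_status"])           # e.g. "gold"
print(work["cited_by_count"])                     # total citations
print([a["author"]["display_name"] for a in work["authorships"]])  # author list
```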
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4401052722 |
| doi | https://doi.org/10.1007/s40747-024-01567-0 |
| ids.doi | https://doi.org/10.1007/s40747-024-01567-0 |
| ids.openalex | https://openalex.org/W4401052722 |
| fwci | 5.30157559 |
| type | article |
| title | CL-fusionBEV: 3D object detection method with camera-LiDAR fusion in Bird’s Eye View |
| biblio.issue | 6 |
| biblio.volume | 10 |
| biblio.last_page | 7696 |
| biblio.first_page | 7681 |
| topics[0].id | https://openalex.org/T10036 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9991999864578247 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Advanced Neural Network Applications |
| topics[1].id | https://openalex.org/T14413 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9959999918937683 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1702 |
| topics[1].subfield.display_name | Artificial Intelligence |
| topics[1].display_name | Advanced Technologies in Various Fields |
| topics[2].id | https://openalex.org/T10191 |
| topics[2].field.id | https://openalex.org/fields/22 |
| topics[2].field.display_name | Engineering |
| topics[2].score | 0.9828000068664551 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/2202 |
| topics[2].subfield.display_name | Aerospace Engineering |
| topics[2].display_name | Robotics and Sensor-Based Localization |
| is_xpac | False |
| apc_list.value | 1320 |
| apc_list.currency | GBP |
| apc_list.value_usd | 1619 |
| apc_paid.value | 1320 |
| apc_paid.currency | GBP |
| apc_paid.value_usd | 1619 |
| concepts[0].id | https://openalex.org/C51399673 |
| concepts[0].level | 2 |
| concepts[0].score | 0.7189284563064575 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q504027 |
| concepts[0].display_name | Lidar |
| concepts[1].id | https://openalex.org/C31972630 |
| concepts[1].level | 1 |
| concepts[1].score | 0.7125591039657593 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[1].display_name | Computer vision |
| concepts[2].id | https://openalex.org/C154945302 |
| concepts[2].level | 1 |
| concepts[2].score | 0.6696165800094604 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[2].display_name | Artificial intelligence |
| concepts[3].id | https://openalex.org/C139502532 |
| concepts[3].level | 2 |
| concepts[3].score | 0.6234382390975952 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q1122090 |
| concepts[3].display_name | Computational intelligence |
| concepts[4].id | https://openalex.org/C2781238097 |
| concepts[4].level | 2 |
| concepts[4].score | 0.6123619675636292 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q175026 |
| concepts[4].display_name | Object (grammar) |
| concepts[5].id | https://openalex.org/C41008148 |
| concepts[5].level | 0 |
| concepts[5].score | 0.49303504824638367 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[5].display_name | Computer science |
| concepts[6].id | https://openalex.org/C2776151529 |
| concepts[6].level | 3 |
| concepts[6].score | 0.47182995080947876 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q3045304 |
| concepts[6].display_name | Object detection |
| concepts[7].id | https://openalex.org/C158525013 |
| concepts[7].level | 2 |
| concepts[7].score | 0.44637003540992737 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q2593739 |
| concepts[7].display_name | Fusion |
| concepts[8].id | https://openalex.org/C62649853 |
| concepts[8].level | 1 |
| concepts[8].score | 0.33870914578437805 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q199687 |
| concepts[8].display_name | Remote sensing |
| concepts[9].id | https://openalex.org/C205649164 |
| concepts[9].level | 0 |
| concepts[9].score | 0.271365761756897 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q1071 |
| concepts[9].display_name | Geography |
| concepts[10].id | https://openalex.org/C153180895 |
| concepts[10].level | 2 |
| concepts[10].score | 0.2382054626941681 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q7148389 |
| concepts[10].display_name | Pattern recognition (psychology) |
| concepts[11].id | https://openalex.org/C138885662 |
| concepts[11].level | 0 |
| concepts[11].score | 0.0 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q5891 |
| concepts[11].display_name | Philosophy |
| concepts[12].id | https://openalex.org/C41895202 |
| concepts[12].level | 1 |
| concepts[12].score | 0.0 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q8162 |
| concepts[12].display_name | Linguistics |
| keywords[0].id | https://openalex.org/keywords/lidar |
| keywords[0].score | 0.7189284563064575 |
| keywords[0].display_name | Lidar |
| keywords[1].id | https://openalex.org/keywords/computer-vision |
| keywords[1].score | 0.7125591039657593 |
| keywords[1].display_name | Computer vision |
| keywords[2].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[2].score | 0.6696165800094604 |
| keywords[2].display_name | Artificial intelligence |
| keywords[3].id | https://openalex.org/keywords/computational-intelligence |
| keywords[3].score | 0.6234382390975952 |
| keywords[3].display_name | Computational intelligence |
| keywords[4].id | https://openalex.org/keywords/object |
| keywords[4].score | 0.6123619675636292 |
| keywords[4].display_name | Object (grammar) |
| keywords[5].id | https://openalex.org/keywords/computer-science |
| keywords[5].score | 0.49303504824638367 |
| keywords[5].display_name | Computer science |
| keywords[6].id | https://openalex.org/keywords/object-detection |
| keywords[6].score | 0.47182995080947876 |
| keywords[6].display_name | Object detection |
| keywords[7].id | https://openalex.org/keywords/fusion |
| keywords[7].score | 0.44637003540992737 |
| keywords[7].display_name | Fusion |
| keywords[8].id | https://openalex.org/keywords/remote-sensing |
| keywords[8].score | 0.33870914578437805 |
| keywords[8].display_name | Remote sensing |
| keywords[9].id | https://openalex.org/keywords/geography |
| keywords[9].score | 0.271365761756897 |
| keywords[9].display_name | Geography |
| keywords[10].id | https://openalex.org/keywords/pattern-recognition |
| keywords[10].score | 0.2382054626941681 |
| keywords[10].display_name | Pattern recognition (psychology) |
| language | en |
| locations[0].id | doi:10.1007/s40747-024-01567-0 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S3035462843 |
| locations[0].source.issn | 2198-6053, 2199-4536 |
| locations[0].source.type | journal |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | 2198-6053 |
| locations[0].source.is_core | True |
| locations[0].source.is_in_doaj | True |
| locations[0].source.display_name | Complex & Intelligent Systems |
| locations[0].source.host_organization | https://openalex.org/P4310319900 |
| locations[0].source.host_organization_name | Springer Science+Business Media |
| locations[0].source.host_organization_lineage | https://openalex.org/P4310319900, https://openalex.org/P4310319965 |
| locations[0].source.host_organization_lineage_names | Springer Science+Business Media, Springer Nature |
| locations[0].license | cc-by |
| locations[0].pdf_url | |
| locations[0].version | publishedVersion |
| locations[0].raw_type | journal-article |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | Complex & Intelligent Systems |
| locations[0].landing_page_url | https://doi.org/10.1007/s40747-024-01567-0 |
| locations[1].id | pmh:oai:doaj.org/article:e90820165ff64e448f3d1310127a12be |
| locations[1].is_oa | False |
| locations[1].source.id | https://openalex.org/S4306401280 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | False |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | DOAJ (DOAJ: Directory of Open Access Journals) |
| locations[1].source.host_organization | |
| locations[1].source.host_organization_name | |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | submittedVersion |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | False |
| locations[1].raw_source_name | Complex & Intelligent Systems, Vol 10, Iss 6, Pp 7681-7696 (2024) |
| locations[1].landing_page_url | https://doaj.org/article/e90820165ff64e448f3d1310127a12be |
| indexed_in | crossref, doaj |
| authorships[0].author.id | https://openalex.org/A5046985719 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-1533-8154 |
| authorships[0].author.display_name | Peicheng Shi |
| authorships[0].countries | CN |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I70908550 |
| authorships[0].affiliations[0].raw_affiliation_string | School of Mechanical and Automotive Engineering, Anhui Polytechnic University, Wuhu, 241000, China |
| authorships[0].institutions[0].id | https://openalex.org/I70908550 |
| authorships[0].institutions[0].ror | https://ror.org/041sj0284 |
| authorships[0].institutions[0].type | education |
| authorships[0].institutions[0].lineage | https://openalex.org/I70908550 |
| authorships[0].institutions[0].country_code | CN |
| authorships[0].institutions[0].display_name | Anhui Polytechnic University |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Peicheng Shi |
| authorships[0].is_corresponding | True |
| authorships[0].raw_affiliation_strings | School of Mechanical and Automotive Engineering, Anhui Polytechnic University, Wuhu, 241000, China |
| authorships[1].author.id | https://openalex.org/A5100677599 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-8077-5345 |
| authorships[1].author.display_name | Zhiqiang Liu |
| authorships[1].countries | CN |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I70908550 |
| authorships[1].affiliations[0].raw_affiliation_string | School of Mechanical and Automotive Engineering, Anhui Polytechnic University, Wuhu, 241000, China |
| authorships[1].institutions[0].id | https://openalex.org/I70908550 |
| authorships[1].institutions[0].ror | https://ror.org/041sj0284 |
| authorships[1].institutions[0].type | education |
| authorships[1].institutions[0].lineage | https://openalex.org/I70908550 |
| authorships[1].institutions[0].country_code | CN |
| authorships[1].institutions[0].display_name | Anhui Polytechnic University |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Zhiqiang Liu |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | School of Mechanical and Automotive Engineering, Anhui Polytechnic University, Wuhu, 241000, China |
| authorships[2].author.id | https://openalex.org/A5101334083 |
| authorships[2].author.orcid | https://orcid.org/0009-0004-3278-6252 |
| authorships[2].author.display_name | Xinlong Dong |
| authorships[2].countries | CN |
| authorships[2].affiliations[0].institution_ids | https://openalex.org/I70908550 |
| authorships[2].affiliations[0].raw_affiliation_string | School of Mechanical and Automotive Engineering, Anhui Polytechnic University, Wuhu, 241000, China |
| authorships[2].institutions[0].id | https://openalex.org/I70908550 |
| authorships[2].institutions[0].ror | https://ror.org/041sj0284 |
| authorships[2].institutions[0].type | education |
| authorships[2].institutions[0].lineage | https://openalex.org/I70908550 |
| authorships[2].institutions[0].country_code | CN |
| authorships[2].institutions[0].display_name | Anhui Polytechnic University |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Xinlong Dong |
| authorships[2].is_corresponding | False |
| authorships[2].raw_affiliation_strings | School of Mechanical and Automotive Engineering, Anhui Polytechnic University, Wuhu, 241000, China |
| authorships[3].author.id | https://openalex.org/A5102010695 |
| authorships[3].author.orcid | https://orcid.org/0009-0006-2029-0537 |
| authorships[3].author.display_name | Aixi Yang |
| authorships[3].countries | CN |
| authorships[3].affiliations[0].institution_ids | https://openalex.org/I4210132079, https://openalex.org/I76130692 |
| authorships[3].affiliations[0].raw_affiliation_string | Polytechnic Institute, Zhejiang University, Hangzhou, 310058, China |
| authorships[3].institutions[0].id | https://openalex.org/I4210132079 |
| authorships[3].institutions[0].ror | https://ror.org/03sxnxp24 |
| authorships[3].institutions[0].type | education |
| authorships[3].institutions[0].lineage | https://openalex.org/I4210132079 |
| authorships[3].institutions[0].country_code | CN |
| authorships[3].institutions[0].display_name | Hangzhou Wanxiang Polytechnic |
| authorships[3].institutions[1].id | https://openalex.org/I76130692 |
| authorships[3].institutions[1].ror | https://ror.org/00a2xv884 |
| authorships[3].institutions[1].type | education |
| authorships[3].institutions[1].lineage | https://openalex.org/I76130692 |
| authorships[3].institutions[1].country_code | CN |
| authorships[3].institutions[1].display_name | Zhejiang University |
| authorships[3].author_position | last |
| authorships[3].raw_author_name | Aixi Yang |
| authorships[3].is_corresponding | False |
| authorships[3].raw_affiliation_strings | Polytechnic Institute, Zhejiang University, Hangzhou, 310058, China |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://doi.org/10.1007/s40747-024-01567-0 |
| open_access.oa_status | gold |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | CL-fusionBEV: 3D object detection method with camera-LiDAR fusion in Bird’s Eye View |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T10036 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9991999864578247 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Advanced Neural Network Applications |
| related_works | https://openalex.org/W4319317934, https://openalex.org/W2901265155, https://openalex.org/W2956374172, https://openalex.org/W4281783339, https://openalex.org/W4319837668, https://openalex.org/W4308071650, https://openalex.org/W3188333020, https://openalex.org/W1964041166, https://openalex.org/W4292830139, https://openalex.org/W4319309705 |
| cited_by_count | 10 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 9 |
| counts_by_year[1].year | 2024 |
| counts_by_year[1].cited_by_count | 1 |
| locations_count | 2 |
| best_oa_location.id | doi:10.1007/s40747-024-01567-0 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S3035462843 |
| best_oa_location.source.issn | 2198-6053, 2199-4536 |
| best_oa_location.source.type | journal |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | 2198-6053 |
| best_oa_location.source.is_core | True |
| best_oa_location.source.is_in_doaj | True |
| best_oa_location.source.display_name | Complex & Intelligent Systems |
| best_oa_location.source.host_organization | https://openalex.org/P4310319900 |
| best_oa_location.source.host_organization_name | Springer Science+Business Media |
| best_oa_location.source.host_organization_lineage | https://openalex.org/P4310319900, https://openalex.org/P4310319965 |
| best_oa_location.source.host_organization_lineage_names | Springer Science+Business Media, Springer Nature |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | |
| best_oa_location.version | publishedVersion |
| best_oa_location.raw_type | journal-article |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | True |
| best_oa_location.raw_source_name | Complex & Intelligent Systems |
| best_oa_location.landing_page_url | https://doi.org/10.1007/s40747-024-01567-0 |
| primary_location.id | doi:10.1007/s40747-024-01567-0 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S3035462843 |
| primary_location.source.issn | 2198-6053, 2199-4536 |
| primary_location.source.type | journal |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | 2198-6053 |
| primary_location.source.is_core | True |
| primary_location.source.is_in_doaj | True |
| primary_location.source.display_name | Complex & Intelligent Systems |
| primary_location.source.host_organization | https://openalex.org/P4310319900 |
| primary_location.source.host_organization_name | Springer Science+Business Media |
| primary_location.source.host_organization_lineage | https://openalex.org/P4310319900, https://openalex.org/P4310319965 |
| primary_location.source.host_organization_lineage_names | Springer Science+Business Media, Springer Nature |
| primary_location.license | cc-by |
| primary_location.pdf_url | |
| primary_location.version | publishedVersion |
| primary_location.raw_type | journal-article |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | Complex & Intelligent Systems |
| primary_location.landing_page_url | https://doi.org/10.1007/s40747-024-01567-0 |
| publication_date | 2024-07-27 |
| publication_year | 2024 |
| referenced_works | https://openalex.org/W2963809933, https://openalex.org/W2798462325, https://openalex.org/W4386544112, https://openalex.org/W2963727135, https://openalex.org/W2897529137, https://openalex.org/W3167095230, https://openalex.org/W2968296999, https://openalex.org/W2740144340, https://openalex.org/W2755856537, https://openalex.org/W2991216808, https://openalex.org/W4383066393, https://openalex.org/W2555618208, https://openalex.org/W4387307565, https://openalex.org/W3035574168, https://openalex.org/W2768263011, https://openalex.org/W4327744039, https://openalex.org/W3109395584, https://openalex.org/W6601977772, https://openalex.org/W4321349508, https://openalex.org/W4225793049, https://openalex.org/W4312894406, https://openalex.org/W2798965597, https://openalex.org/W2949708697, https://openalex.org/W2981949127, https://openalex.org/W4366966647, https://openalex.org/W2964062501, https://openalex.org/W2970095196, https://openalex.org/W3004237909, https://openalex.org/W2962888833, https://openalex.org/W4312707458, https://openalex.org/W4390871713, https://openalex.org/W2194775991, https://openalex.org/W2193145675, https://openalex.org/W2963351448, https://openalex.org/W3035172746, https://openalex.org/W2150066425, https://openalex.org/W4385804883, https://openalex.org/W4312953085, https://openalex.org/W6600195515, https://openalex.org/W6601848475, https://openalex.org/W3109675406, https://openalex.org/W3035461736, https://openalex.org/W3017930107, https://openalex.org/W3170030651, https://openalex.org/W3209639308, https://openalex.org/W3107819843, https://openalex.org/W3106250896 |
| referenced_works_count | 47 |
| abstract_inverted_index | (word-to-position index of the abstract; its content is the abstract text shown at the top of this page) |
| cited_by_percentile_year.max | 99 |
| cited_by_percentile_year.min | 90 |
| corresponding_author_ids | https://openalex.org/A5046985719 |
| countries_distinct_count | 1 |
| institutions_distinct_count | 4 |
| corresponding_institution_ids | https://openalex.org/I70908550 |
| citation_normalized_percentile.value | 0.93753983 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | True |
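The `abstract_inverted_index` field in the payload stores the abstract as a map from each token to the list of word positions where it occurs. The short sketch below rebuilds plain text from such a map; the function name and the tiny sample input are illustrative, and applying it to this work's full index should reproduce the abstract shown at the top of this page.

```python
# Rebuild plain abstract text from an OpenAlex abstract_inverted_index,
# which maps each token to the list of word positions where it appears.
def reconstruct_abstract(inverted_index: dict[str, list[int]]) -> str:
    positions: dict[int, str] = {}
    for token, idxs in inverted_index.items():
        for idx in idxs:
            positions[idx] = token
    return " ".join(positions[i] for i in sorted(positions))


if __name__ == "__main__":
    # Tiny illustrative example; the real field for this work is much larger.
    sample = {"Abstract": [0], "In": [1], "the": [2], "wave": [3], "of": [4]}
    print(reconstruct_abstract(sample))  # "Abstract In the wave of"
```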