A Concise but High-performing Network for Image Guided Depth Completion in Autonomous Driving
2024 · Open Access · DOI: https://doi.org/10.48550/arxiv.2401.15902
Depth completion is a crucial task in autonomous driving, aiming to convert a sparse depth map into a dense depth prediction. Because of its rich semantic information, the RGB image is commonly fused with the sparse input to enhance completion. Image-guided depth completion involves three key challenges: 1) how to effectively fuse the two modalities; 2) how to better recover depth information; and 3) how to achieve real-time prediction for practical autonomous driving. To address these problems, we propose a concise but effective network, named CENet, that achieves high-performance depth completion with a simple and elegant structure. First, we use a fast guidance module to fuse the features of the two sensors, exploiting abundant auxiliary features extracted from the color space. Unlike other commonly used, complicated guidance modules, our approach is intuitive and low-cost. In addition, we identify and analyze the optimization inconsistency between observed and unobserved positions, and propose a decoupled depth prediction head to alleviate it. The decoupled head better predicts depth at valid and invalid positions with very little extra inference time. Built on a simple dual-encoder, single-decoder structure, CENet achieves a superior balance between accuracy and efficiency. On the KITTI depth completion benchmark, CENet attains competitive accuracy and inference speed compared with state-of-the-art methods. To validate the generalization of our method, we also evaluate on the indoor NYUv2 dataset, where CENet still achieves impressive results. The code of this work will be available at https://github.com/lmomoy/CHNet.
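The abstract's decoupled prediction head lends itself to a short illustration. Below is a minimal sketch, assuming a PyTorch-style decoder feature map and the sparse LiDAR depth as inputs; the module, layer choices, and variable names are hypothetical and not taken from the authors' code:

```python
import torch
import torch.nn as nn

class DecoupledDepthHead(nn.Module):
    """Hypothetical sketch of a decoupled prediction head: one branch for
    positions observed by the LiDAR (valid) and one for unobserved
    (invalid) positions, merged using the sparsity mask.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.valid_branch = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
        self.invalid_branch = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, feats: torch.Tensor, sparse_depth: torch.Tensor) -> torch.Tensor:
        # Mask is 1 where the sparse input has an observation, 0 elsewhere.
        mask = (sparse_depth > 0).float()
        d_valid = self.valid_branch(feats)
        d_invalid = self.invalid_branch(feats)
        # Each branch covers only its own region, so the two objectives
        # no longer compete at the same pixels.
        return mask * d_valid + (1.0 - mask) * d_invalid
```

Splitting the output this way lets each branch be supervised only on its own region, which is one way to sidestep the optimization inconsistency between observed and unobserved positions that the abstract describes.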
Overview
- Type: preprint
- Language: en
- Landing page: http://arxiv.org/abs/2401.15902
- PDF: https://arxiv.org/pdf/2401.15902
- OA status: green
- Related works: 10
- OpenAlex ID: https://openalex.org/W4391376583
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4391376583 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2401.15902
- Title: A Concise but High-performing Network for Image Guided Depth Completion in Autonomous Driving
- Type: preprint (OpenAlex work type)
- Language: en
- Publication year: 2024
- Publication date: 2024-01-29
- Authors: Moyun Liu, Youping Chen, Jingming Xie, Lei Yao, Yang Zhang, Joey Tianyi Zhou (in listed order)
- Landing page: https://arxiv.org/abs/2401.15902 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2401.15902 (direct link to the full-text PDF)
- Open access: yes
- OA status: green (per OpenAlex)
- OA URL: https://arxiv.org/pdf/2401.15902
- Concepts: Completion (oil and gas wells), Image (mathematics), Computer science, Computer vision, Artificial intelligence, Geology, Petroleum engineering (top concepts attached by OpenAlex)
- Cited by: 0 (total citation count in OpenAlex)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4391376583 |
| doi | https://doi.org/10.48550/arxiv.2401.15902 |
| ids.doi | https://doi.org/10.48550/arxiv.2401.15902 |
| ids.openalex | https://openalex.org/W4391376583 |
| fwci | |
| type | preprint |
| title | A Concise but High-performing Network for Image Guided Depth Completion in Autonomous Driving |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10531 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9979000091552734 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Advanced Vision and Imaging |
| topics[1].id | https://openalex.org/T10052 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9958000183105469 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Medical Image Segmentation Techniques |
| topics[2].id | https://openalex.org/T10481 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9894999861717224 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1704 |
| topics[2].subfield.display_name | Computer Graphics and Computer-Aided Design |
| topics[2].display_name | Computer Graphics and Visualization Techniques |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C2779538338 |
| concepts[0].level | 2 |
| concepts[0].score | 0.7476956844329834 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q2990590 |
| concepts[0].display_name | Completion (oil and gas wells) |
| concepts[1].id | https://openalex.org/C115961682 |
| concepts[1].level | 2 |
| concepts[1].score | 0.6071658134460449 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q860623 |
| concepts[1].display_name | Image (mathematics) |
| concepts[2].id | https://openalex.org/C41008148 |
| concepts[2].level | 0 |
| concepts[2].score | 0.5071735382080078 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[2].display_name | Computer science |
| concepts[3].id | https://openalex.org/C31972630 |
| concepts[3].level | 1 |
| concepts[3].score | 0.46658995747566223 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[3].display_name | Computer vision |
| concepts[4].id | https://openalex.org/C154945302 |
| concepts[4].level | 1 |
| concepts[4].score | 0.4659503400325775 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[4].display_name | Artificial intelligence |
| concepts[5].id | https://openalex.org/C127313418 |
| concepts[5].level | 0 |
| concepts[5].score | 0.23217156529426575 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q1069 |
| concepts[5].display_name | Geology |
| concepts[6].id | https://openalex.org/C78762247 |
| concepts[6].level | 1 |
| concepts[6].score | 0.09924891591072083 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q1273174 |
| concepts[6].display_name | Petroleum engineering |
| keywords[0].id | https://openalex.org/keywords/completion |
| keywords[0].score | 0.7476956844329834 |
| keywords[0].display_name | Completion (oil and gas wells) |
| keywords[1].id | https://openalex.org/keywords/image |
| keywords[1].score | 0.6071658134460449 |
| keywords[1].display_name | Image (mathematics) |
| keywords[2].id | https://openalex.org/keywords/computer-science |
| keywords[2].score | 0.5071735382080078 |
| keywords[2].display_name | Computer science |
| keywords[3].id | https://openalex.org/keywords/computer-vision |
| keywords[3].score | 0.46658995747566223 |
| keywords[3].display_name | Computer vision |
| keywords[4].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[4].score | 0.4659503400325775 |
| keywords[4].display_name | Artificial intelligence |
| keywords[5].id | https://openalex.org/keywords/geology |
| keywords[5].score | 0.23217156529426575 |
| keywords[5].display_name | Geology |
| keywords[6].id | https://openalex.org/keywords/petroleum-engineering |
| keywords[6].score | 0.09924891591072083 |
| keywords[6].display_name | Petroleum engineering |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2401.15902 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2401.15902 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2401.15902 |
| locations[1].id | doi:10.48550/arxiv.2401.15902 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2401.15902 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5016553220 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-4530-2606 |
| authorships[0].author.display_name | Moyun Liu |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Liu, Moyun |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5101513075 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-3635-085X |
| authorships[1].author.display_name | Youping Chen |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Chen, Youping |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5103166003 |
| authorships[2].author.orcid | https://orcid.org/0000-0001-5974-8871 |
| authorships[2].author.display_name | Jingming Xie |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Xie, Jingming |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5102628507 |
| authorships[3].author.orcid | |
| authorships[3].author.display_name | Lei Yao |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Yao, Lei |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5100354843 |
| authorships[4].author.orcid | https://orcid.org/0009-0006-3659-2353 |
| authorships[4].author.display_name | Yang Zhang |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Zhang, Yang |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5045125183 |
| authorships[5].author.orcid | https://orcid.org/0000-0002-4675-7055 |
| authorships[5].author.display_name | Joey Tianyi Zhou |
| authorships[5].author_position | last |
| authorships[5].raw_author_name | Zhou, Joey Tianyi |
| authorships[5].is_corresponding | False |
| has_content.pdf | True |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2401.15902 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2024-01-31T00:00:00 |
| display_name | A Concise but High-performing Network for Image Guided Depth Completion in Autonomous Driving |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10531 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9979000091552734 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Advanced Vision and Imaging |
| related_works | https://openalex.org/W2755342338, https://openalex.org/W2058170566, https://openalex.org/W2036807459, https://openalex.org/W2775347418, https://openalex.org/W1969923398, https://openalex.org/W2166024367, https://openalex.org/W2772917594, https://openalex.org/W3116076068, https://openalex.org/W2229312674, https://openalex.org/W2079911747 |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2401.15902 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2401.15902 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2401.15902 |
| primary_location.id | pmh:oai:arXiv.org:2401.15902 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2401.15902 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2401.15902 |
| publication_date | 2024-01-29 |
| publication_year | 2024 |
| referenced_works_count | 0 |
| abstract_inverted_index | word-to-positions map of the abstract (full text given above; see the reconstruction sketch after this table) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 6 |
| citation_normalized_percentile |
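OpenAlex ships abstracts as an `abstract_inverted_index`, a map from each token to the word positions where it occurs, rather than as plain text. A minimal sketch of rebuilding the abstract from that field (the field name and shape follow the OpenAlex schema; `reconstruct_abstract` is an illustrative helper, not a library function):

```python
def reconstruct_abstract(inverted_index: dict) -> str:
    """Rebuild abstract text from an OpenAlex abstract_inverted_index,
    which maps each token to its word positions in the abstract,
    e.g. {"Depth": [0], "completion": [1], "is": [2], ...}.
    """
    pairs = [
        (pos, token)
        for token, positions in inverted_index.items()
        for pos in positions
    ]
    pairs.sort()  # restore original word order
    return " ".join(token for _, token in pairs)


# First six tokens of this work's index:
index = {"Depth": [0], "completion": [1], "is": [2],
         "a": [3], "crucial": [4], "task": [5]}
print(reconstruct_abstract(index))  # "Depth completion is a crucial task"
```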