Learning Robust Multi-scale Representation for Neural Radiance Fields from Unposed Images
2023 · Open Access
DOI: https://doi.org/10.1007/s11263-023-01936-1
We introduce an improved solution to the neural image-based rendering problem in computer vision. Given a set of images taken from a freely moving camera at train time, the proposed approach can synthesize a realistic image of the scene from a novel viewpoint at test time. The key ideas presented in this paper are: (i) recovering accurate camera parameters from unposed day-to-day images via a robust pipeline is just as crucial to the neural novel view synthesis problem; (ii) it is more practical to model the scene content at multiple resolutions, since dramatic camera motion is highly likely in day-to-day unposed images. To incorporate these ideas, we leverage the fundamentals of scene rigidity, multi-scale neural scene representation, and single-image depth prediction. Concretely, the proposed approach treats the camera parameters as learnable within a neural fields-based modeling framework. Assuming that per-view depth prediction is given up to scale, we constrain the relative pose between successive frames. From the relative poses, absolute camera poses are estimated via graph-neural-network-based multiple motion averaging within the multi-scale neural-fields network, leading to a single loss function. Optimizing this loss function yields camera intrinsics, extrinsics, and image renderings from unposed images. We demonstrate, with examples, that for a unified framework to accurately model multi-scale neural scene representation from day-to-day unposed multi-view images, it is equally essential to have precise camera-pose estimates within the scene representation framework; without robustness measures in the camera-pose estimation pipeline, modeling for multi-scale aliasing artifacts can be counterproductive. Extensive experiments on several benchmark datasets demonstrate the suitability of our approach.
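The abstract describes a single loss that couples image rendering with camera poses constrained by up-to-scale monocular depth. As a rough, non-authoritative sketch of that coupling (the closed-form scale fit, the residual inputs, and the weight `lambda_pose` are illustrative stand-ins, not the paper's actual method):

```python
import numpy as np

def align_depth_scale(pred_depth, ref_depth):
    """Monocular depth is predicted only up to scale; a least-squares fit
    recovers the scalar s minimizing ||s * pred_depth - ref_depth||^2."""
    pred = np.asarray(pred_depth, dtype=float)
    ref = np.asarray(ref_depth, dtype=float)
    return float(pred @ ref / (pred @ pred))

def total_loss(rendered, target, rel_pose_residuals, lambda_pose=0.1):
    """Single objective: photometric rendering error plus a penalty on the
    disagreement between depth-derived relative poses and the current
    learnable camera poses (both terms are optimized jointly)."""
    photometric = np.mean((np.asarray(rendered) - np.asarray(target)) ** 2)
    pose_term = np.mean(np.asarray(rel_pose_residuals, dtype=float) ** 2)
    return float(photometric + lambda_pose * pose_term)
```

Minimizing `total_loss` over both the scene network's outputs (`rendered`) and the pose parameters (which produce `rel_pose_residuals`) mirrors, in miniature, the paper's idea of a single loss covering rendering, intrinsics, and extrinsics.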
Related Topics
- Type: article
- Language: en
- Landing Page: https://doi.org/10.1007/s11263-023-01936-1
- OA Status: hybrid
- Cited By: 5
- References: 46
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4388593728
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4388593728 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.1007/s11263-023-01936-1 (Digital Object Identifier)
- Title: Learning Robust Multi-scale Representation for Neural Radiance Fields from Unposed Images (work title)
- Type: article (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2023
- Publication date: 2023-11-11 (full publication date if available)
- Authors: Nishant Jain, Suryansh Kumar, Luc Van Gool (list of authors in order)
- Landing page: https://doi.org/10.1007/s11263-023-01936-1 (publisher landing page)
- Open access: Yes (whether a free full text is available)
- OA status: hybrid (open access status per OpenAlex)
- OA URL: https://doi.org/10.1007/s11263-023-01936-1 (direct OA link when available)
- Concepts: Artificial intelligence, Computer vision, Computer science, Rendering (computer graphics), Artificial neural network, Robustness (evolution), Leverage (statistics), View synthesis, Gene, Chemistry, Biochemistry (top concepts attached by OpenAlex)
- Cited by: 5 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 5 (per-year citation counts, last 5 years)
- References (count): 46 (number of works referenced by this work)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
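The identifiers above come from the OpenAlex REST API; a record like this one can be retrieved directly from its works endpoint. A minimal sketch (the `summarize` helper and its field selection are our own convenience, but the endpoint and the JSON field names are standard OpenAlex):

```python
import json
from urllib.request import urlopen

def work_url(work_id):
    # Every OpenAlex work lives at a stable REST endpoint.
    return f"https://api.openalex.org/works/{work_id}"

def summarize(work):
    # Extract the same fields shown in the list above from a raw work record.
    return {
        "title": work.get("title"),
        "doi": work.get("doi"),
        "year": work.get("publication_year"),
        "oa_status": work.get("open_access", {}).get("oa_status"),
        "cited_by": work.get("cited_by_count"),
    }

if __name__ == "__main__":
    with urlopen(work_url("W4388593728")) as resp:
        print(summarize(json.load(resp)))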
Full payload
| id | https://openalex.org/W4388593728 |
|---|---|
| doi | https://doi.org/10.1007/s11263-023-01936-1 |
| ids.doi | https://doi.org/10.1007/s11263-023-01936-1 |
| ids.openalex | https://openalex.org/W4388593728 |
| fwci | 0.9098417 |
| type | article |
| title | Learning Robust Multi-scale Representation for Neural Radiance Fields from Unposed Images |
| biblio.issue | 4 |
| biblio.volume | 132 |
| biblio.last_page | 1335 |
| biblio.first_page | 1310 |
| topics[0].id | https://openalex.org/T10531 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9998999834060669 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Advanced Vision and Imaging |
| topics[1].id | https://openalex.org/T10719 |
| topics[1].field.id | https://openalex.org/fields/22 |
| topics[1].field.display_name | Engineering |
| topics[1].score | 0.9987999796867371 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/2206 |
| topics[1].subfield.display_name | Computational Mechanics |
| topics[1].display_name | 3D Shape Modeling and Analysis |
| topics[2].id | https://openalex.org/T10481 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9986000061035156 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1704 |
| topics[2].subfield.display_name | Computer Graphics and Computer-Aided Design |
| topics[2].display_name | Computer Graphics and Visualization Techniques |
| is_xpac | False |
| apc_list.value | 2890 |
| apc_list.currency | EUR |
| apc_list.value_usd | 3690 |
| apc_paid.value | 2890 |
| apc_paid.currency | EUR |
| apc_paid.value_usd | 3690 |
| concepts[0].id | https://openalex.org/C154945302 |
| concepts[0].level | 1 |
| concepts[0].score | 0.7966893911361694 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[0].display_name | Artificial intelligence |
| concepts[1].id | https://openalex.org/C31972630 |
| concepts[1].level | 1 |
| concepts[1].score | 0.7011322379112244 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[1].display_name | Computer vision |
| concepts[2].id | https://openalex.org/C41008148 |
| concepts[2].level | 0 |
| concepts[2].score | 0.6914991736412048 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[2].display_name | Computer science |
| concepts[3].id | https://openalex.org/C205711294 |
| concepts[3].level | 2 |
| concepts[3].score | 0.6483374238014221 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q176953 |
| concepts[3].display_name | Rendering (computer graphics) |
| concepts[4].id | https://openalex.org/C50644808 |
| concepts[4].level | 2 |
| concepts[4].score | 0.5755181312561035 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q192776 |
| concepts[4].display_name | Artificial neural network |
| concepts[5].id | https://openalex.org/C63479239 |
| concepts[5].level | 3 |
| concepts[5].score | 0.5263639092445374 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q7353546 |
| concepts[5].display_name | Robustness (evolution) |
| concepts[6].id | https://openalex.org/C153083717 |
| concepts[6].level | 2 |
| concepts[6].score | 0.4711885452270508 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q6535263 |
| concepts[6].display_name | Leverage (statistics) |
| concepts[7].id | https://openalex.org/C2776449333 |
| concepts[7].level | 3 |
| concepts[7].score | 0.4648270905017853 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q7928781 |
| concepts[7].display_name | View synthesis |
| concepts[8].id | https://openalex.org/C104317684 |
| concepts[8].level | 2 |
| concepts[8].score | 0.0 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q7187 |
| concepts[8].display_name | Gene |
| concepts[9].id | https://openalex.org/C185592680 |
| concepts[9].level | 0 |
| concepts[9].score | 0.0 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q2329 |
| concepts[9].display_name | Chemistry |
| concepts[10].id | https://openalex.org/C55493867 |
| concepts[10].level | 1 |
| concepts[10].score | 0.0 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q7094 |
| concepts[10].display_name | Biochemistry |
| keywords[0].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[0].score | 0.7966893911361694 |
| keywords[0].display_name | Artificial intelligence |
| keywords[1].id | https://openalex.org/keywords/computer-vision |
| keywords[1].score | 0.7011322379112244 |
| keywords[1].display_name | Computer vision |
| keywords[2].id | https://openalex.org/keywords/computer-science |
| keywords[2].score | 0.6914991736412048 |
| keywords[2].display_name | Computer science |
| keywords[3].id | https://openalex.org/keywords/rendering |
| keywords[3].score | 0.6483374238014221 |
| keywords[3].display_name | Rendering (computer graphics) |
| keywords[4].id | https://openalex.org/keywords/artificial-neural-network |
| keywords[4].score | 0.5755181312561035 |
| keywords[4].display_name | Artificial neural network |
| keywords[5].id | https://openalex.org/keywords/robustness |
| keywords[5].score | 0.5263639092445374 |
| keywords[5].display_name | Robustness (evolution) |
| keywords[6].id | https://openalex.org/keywords/leverage |
| keywords[6].score | 0.4711885452270508 |
| keywords[6].display_name | Leverage (statistics) |
| keywords[7].id | https://openalex.org/keywords/view-synthesis |
| keywords[7].score | 0.4648270905017853 |
| keywords[7].display_name | View synthesis |
| language | en |
| locations[0].id | doi:10.1007/s11263-023-01936-1 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S25538012 |
| locations[0].source.issn | 0920-5691, 1573-1405 |
| locations[0].source.type | journal |
| locations[0].source.is_oa | False |
| locations[0].source.issn_l | 0920-5691 |
| locations[0].source.is_core | True |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | International Journal of Computer Vision |
| locations[0].source.host_organization | https://openalex.org/P4310319900 |
| locations[0].source.host_organization_name | Springer Science+Business Media |
| locations[0].source.host_organization_lineage | https://openalex.org/P4310319900, https://openalex.org/P4310319965 |
| locations[0].source.host_organization_lineage_names | Springer Science+Business Media, Springer Nature |
| locations[0].license | cc-by |
| locations[0].pdf_url | |
| locations[0].version | publishedVersion |
| locations[0].raw_type | journal-article |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | International Journal of Computer Vision |
| locations[0].landing_page_url | https://doi.org/10.1007/s11263-023-01936-1 |
| indexed_in | crossref |
| authorships[0].author.id | https://openalex.org/A5073570615 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-3260-2543 |
| authorships[0].author.display_name | Nishant Jain |
| authorships[0].countries | IN |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I154851008 |
| authorships[0].affiliations[0].raw_affiliation_string | IIT Roorkee, Roorkee, India |
| authorships[0].institutions[0].id | https://openalex.org/I154851008 |
| authorships[0].institutions[0].ror | https://ror.org/00582g326 |
| authorships[0].institutions[0].type | education |
| authorships[0].institutions[0].lineage | https://openalex.org/I154851008 |
| authorships[0].institutions[0].country_code | IN |
| authorships[0].institutions[0].display_name | Indian Institute of Technology Roorkee |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Nishant Jain |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | IIT Roorkee, Roorkee, India |
| authorships[1].author.id | https://openalex.org/A5002526108 |
| authorships[1].author.orcid | https://orcid.org/0000-0003-2755-8744 |
| authorships[1].author.display_name | Suryansh Kumar |
| authorships[1].countries | CH |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I35440088 |
| authorships[1].affiliations[0].raw_affiliation_string | CVL Lab at ETH, Zurich, Switzerland |
| authorships[1].institutions[0].id | https://openalex.org/I35440088 |
| authorships[1].institutions[0].ror | https://ror.org/05a28rw58 |
| authorships[1].institutions[0].type | education |
| authorships[1].institutions[0].lineage | https://openalex.org/I2799323385, https://openalex.org/I35440088 |
| authorships[1].institutions[0].country_code | CH |
| authorships[1].institutions[0].display_name | ETH Zurich |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Suryansh Kumar |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | CVL Lab at ETH, Zurich, Switzerland |
| authorships[2].author.id | https://openalex.org/A5001254143 |
| authorships[2].author.orcid | https://orcid.org/0000-0002-3445-5711 |
| authorships[2].author.display_name | Luc Van Gool |
| authorships[2].countries | CH |
| authorships[2].affiliations[0].institution_ids | https://openalex.org/I35440088 |
| authorships[2].affiliations[0].raw_affiliation_string | CVL Lab at ETH, Zurich, Switzerland |
| authorships[2].institutions[0].id | https://openalex.org/I35440088 |
| authorships[2].institutions[0].ror | https://ror.org/05a28rw58 |
| authorships[2].institutions[0].type | education |
| authorships[2].institutions[0].lineage | https://openalex.org/I2799323385, https://openalex.org/I35440088 |
| authorships[2].institutions[0].country_code | CH |
| authorships[2].institutions[0].display_name | ETH Zurich |
| authorships[2].author_position | last |
| authorships[2].raw_author_name | Luc Van Gool |
| authorships[2].is_corresponding | False |
| authorships[2].raw_affiliation_strings | CVL Lab at ETH, Zurich, Switzerland |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://doi.org/10.1007/s11263-023-01936-1 |
| open_access.oa_status | hybrid |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Learning Robust Multi-scale Representation for Neural Radiance Fields from Unposed Images |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T10531 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9998999834060669 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Advanced Vision and Imaging |
| related_works | https://openalex.org/W2611780867, https://openalex.org/W2731344982, https://openalex.org/W3104631102, https://openalex.org/W1491099440, https://openalex.org/W2073038808, https://openalex.org/W3010999348, https://openalex.org/W2049890183, https://openalex.org/W3206964005, https://openalex.org/W4226100462, https://openalex.org/W3035116386 |
| cited_by_count | 5 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 5 |
| locations_count | 1 |
| best_oa_location.id | doi:10.1007/s11263-023-01936-1 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S25538012 |
| best_oa_location.source.issn | 0920-5691, 1573-1405 |
| best_oa_location.source.type | journal |
| best_oa_location.source.is_oa | False |
| best_oa_location.source.issn_l | 0920-5691 |
| best_oa_location.source.is_core | True |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | International Journal of Computer Vision |
| best_oa_location.source.host_organization | https://openalex.org/P4310319900 |
| best_oa_location.source.host_organization_name | Springer Science+Business Media |
| best_oa_location.source.host_organization_lineage | https://openalex.org/P4310319900, https://openalex.org/P4310319965 |
| best_oa_location.source.host_organization_lineage_names | Springer Science+Business Media, Springer Nature |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | |
| best_oa_location.version | publishedVersion |
| best_oa_location.raw_type | journal-article |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | True |
| best_oa_location.raw_source_name | International Journal of Computer Vision |
| best_oa_location.landing_page_url | https://doi.org/10.1007/s11263-023-01936-1 |
| primary_location.id | doi:10.1007/s11263-023-01936-1 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S25538012 |
| primary_location.source.issn | 0920-5691, 1573-1405 |
| primary_location.source.type | journal |
| primary_location.source.is_oa | False |
| primary_location.source.issn_l | 0920-5691 |
| primary_location.source.is_core | True |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | International Journal of Computer Vision |
| primary_location.source.host_organization | https://openalex.org/P4310319900 |
| primary_location.source.host_organization_name | Springer Science+Business Media |
| primary_location.source.host_organization_lineage | https://openalex.org/P4310319900, https://openalex.org/P4310319965 |
| primary_location.source.host_organization_lineage_names | Springer Science+Business Media, Springer Nature |
| primary_location.license | cc-by |
| primary_location.pdf_url | |
| primary_location.version | publishedVersion |
| primary_location.raw_type | journal-article |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | International Journal of Computer Vision |
| primary_location.landing_page_url | https://doi.org/10.1007/s11263-023-01936-1 |
| publication_date | 2023-11-11 |
| publication_year | 2023 |
| referenced_works | https://openalex.org/W2002967403, https://openalex.org/W2163446794, https://openalex.org/W3005527513, https://openalex.org/W3203570626, https://openalex.org/W4386066457, https://openalex.org/W2605778869, https://openalex.org/W3204859697, https://openalex.org/W2594519801, https://openalex.org/W6736685754, https://openalex.org/W2105413810, https://openalex.org/W1501520364, https://openalex.org/W2264612925, https://openalex.org/W1993228150, https://openalex.org/W2102481828, https://openalex.org/W4248598408, https://openalex.org/W6658838613, https://openalex.org/W4304701439, https://openalex.org/W4380994277, https://openalex.org/W4214579621, https://openalex.org/W3205274379, https://openalex.org/W2738551266, https://openalex.org/W4306248542, https://openalex.org/W3204691786, https://openalex.org/W4214564845, https://openalex.org/W6781421651, https://openalex.org/W3180799285, https://openalex.org/W4200150166, https://openalex.org/W2109635530, https://openalex.org/W3110204544, https://openalex.org/W6755308174, https://openalex.org/W4214520160, https://openalex.org/W2471962767, https://openalex.org/W2519683295, https://openalex.org/W4214768561, https://openalex.org/W3214712237, https://openalex.org/W2124313187, https://openalex.org/W3186630079, https://openalex.org/W4312706422, https://openalex.org/W3177021849, https://openalex.org/W2962793285, https://openalex.org/W4221155806, https://openalex.org/W3112108866, https://openalex.org/W3176368002, https://openalex.org/W4200420145, https://openalex.org/W1981276685, https://openalex.org/W1992989752 |
| referenced_works_count | 46 |
| abstract_inverted_index.a | 15, 21, 33, 40, 60, 131, 166, 179, 204 |
| abstract_inverted_index.By | 136 |
| abstract_inverted_index.It | 77 |
| abstract_inverted_index.To | 100 |
| abstract_inverted_index.We | 0, 198, 252 |
| abstract_inverted_index.an | 2 |
| abstract_inverted_index.as | 128 |
| abstract_inverted_index.at | 25, 43, 86 |
| abstract_inverted_index.be | 250 |
| abstract_inverted_index.in | 11, 50, 70, 96, 130, 238 |
| abstract_inverted_index.is | 67, 78, 93, 142, 163, 221 |
| abstract_inverted_index.it | 220 |
| abstract_inverted_index.of | 17, 36, 109, 264 |
| abstract_inverted_index.on | 256 |
| abstract_inverted_index.to | 5, 82, 145, 178, 207, 224, 260 |
| abstract_inverted_index.up | 144 |
| abstract_inverted_index.we | 105, 147 |
| abstract_inverted_index.(i) | 54 |
| abstract_inverted_index.The | 46 |
| abstract_inverted_index.and | 116, 192 |
| abstract_inverted_index.are | 53 |
| abstract_inverted_index.can | 249 |
| abstract_inverted_index.for | 203, 245 |
| abstract_inverted_index.key | 47, 103 |
| abstract_inverted_index.our | 265 |
| abstract_inverted_index.per | 138 |
| abstract_inverted_index.set | 16 |
| abstract_inverted_index.the | 6, 28, 37, 102, 107, 121, 125, 149, 156, 173, 184, 230, 239, 262 |
| abstract_inverted_index.via | 59, 165 |
| abstract_inverted_index.(ii) | 76 |
| abstract_inverted_index.From | 155 |
| abstract_inverted_index.from | 20, 39, 63, 195, 214 |
| abstract_inverted_index.have | 225 |
| abstract_inverted_index.loss | 181, 186 |
| abstract_inverted_index.more | 80 |
| abstract_inverted_index.pose | 151, 161, 241 |
| abstract_inverted_index.test | 44 |
| abstract_inverted_index.that | 202 |
| abstract_inverted_index.this | 51 |
| abstract_inverted_index.view | 73, 139 |
| abstract_inverted_index.with | 200 |
| abstract_inverted_index.Given | 14 |
| abstract_inverted_index.could | 31 |
| abstract_inverted_index.depth | 118, 140 |
| abstract_inverted_index.given | 143 |
| abstract_inverted_index.ideas | 48 |
| abstract_inverted_index.image | 35, 193 |
| abstract_inverted_index.makes | 124 |
| abstract_inverted_index.model | 83, 209 |
| abstract_inverted_index.novel | 41, 72 |
| abstract_inverted_index.paper | 52 |
| abstract_inverted_index.scene | 38, 110, 114, 212, 231 |
| abstract_inverted_index.since | 89 |
| abstract_inverted_index.taken | 19 |
| abstract_inverted_index.time, | 27 |
| abstract_inverted_index.time. | 45 |
| abstract_inverted_index.train | 26 |
| abstract_inverted_index.camera | 24, 57, 91, 126, 160, 189, 240 |
| abstract_inverted_index.freely | 22 |
| abstract_inverted_index.highly | 94 |
| abstract_inverted_index.ideas, | 104 |
| abstract_inverted_index.images | 18, 66 |
| abstract_inverted_index.likely | 95 |
| abstract_inverted_index.motion | 92, 170 |
| abstract_inverted_index.moving | 23 |
| abstract_inverted_index.neural | 7, 71, 113, 132, 211 |
| abstract_inverted_index.poses, | 158 |
| abstract_inverted_index.rather | 79 |
| abstract_inverted_index.robust | 61 |
| abstract_inverted_index.scale, | 146 |
| abstract_inverted_index.single | 180 |
| abstract_inverted_index.within | 172, 229 |
| abstract_inverted_index.Without | 234 |
| abstract_inverted_index.between | 152 |
| abstract_inverted_index.content | 85 |
| abstract_inverted_index.crucial | 69 |
| abstract_inverted_index.equally | 68, 222 |
| abstract_inverted_index.frames. | 154 |
| abstract_inverted_index.images, | 219 |
| abstract_inverted_index.images. | 99, 197 |
| abstract_inverted_index.leading | 177 |
| abstract_inverted_index.modeled | 164 |
| abstract_inverted_index.precise | 226 |
| abstract_inverted_index.present | 253 |
| abstract_inverted_index.problem | 10 |
| abstract_inverted_index.several | 257 |
| abstract_inverted_index.unified | 205 |
| abstract_inverted_index.unposed | 64, 98, 196, 217 |
| abstract_inverted_index.vision. | 13 |
| abstract_inverted_index.absolute | 159 |
| abstract_inverted_index.accurate | 56 |
| abstract_inverted_index.acquired | 216 |
| abstract_inverted_index.aliasing | 247 |
| abstract_inverted_index.approach | 30, 123 |
| abstract_inverted_index.assuming | 137 |
| abstract_inverted_index.computer | 12 |
| abstract_inverted_index.datasets | 259 |
| abstract_inverted_index.dramatic | 90 |
| abstract_inverted_index.function | 187 |
| abstract_inverted_index.improved | 3 |
| abstract_inverted_index.leverage | 106 |
| abstract_inverted_index.measures | 237 |
| abstract_inverted_index.modeling | 134, 244 |
| abstract_inverted_index.multiple | 169 |
| abstract_inverted_index.network, | 176 |
| abstract_inverted_index.object's | 84 |
| abstract_inverted_index.pipeline | 62 |
| abstract_inverted_index.problem; | 75 |
| abstract_inverted_index.proposed | 29, 122 |
| abstract_inverted_index.provides | 188 |
| abstract_inverted_index.relative | 150, 157 |
| abstract_inverted_index.solution | 4 |
| abstract_inverted_index.approach. | 266 |
| abstract_inverted_index.artifacts | 248 |
| abstract_inverted_index.averaging | 171 |
| abstract_inverted_index.benchmark | 258 |
| abstract_inverted_index.constrain | 148 |
| abstract_inverted_index.different | 87 |
| abstract_inverted_index.essential | 223 |
| abstract_inverted_index.estimates | 228 |
| abstract_inverted_index.examples, | 201 |
| abstract_inverted_index.extensive | 254 |
| abstract_inverted_index.framework | 206 |
| abstract_inverted_index.function. | 182 |
| abstract_inverted_index.introduce | 1 |
| abstract_inverted_index.learnable | 129 |
| abstract_inverted_index.pipeline, | 243 |
| abstract_inverted_index.practical | 81 |
| abstract_inverted_index.presented | 49 |
| abstract_inverted_index.realistic | 34 |
| abstract_inverted_index.rendering | 9, 194 |
| abstract_inverted_index.rigidity, | 111 |
| abstract_inverted_index.synthesis | 74 |
| abstract_inverted_index.viewpoint | 42 |
| abstract_inverted_index.Optimizing | 183 |
| abstract_inverted_index.Recovering | 55 |
| abstract_inverted_index.accurately | 208 |
| abstract_inverted_index.day-to-day | 65, 97, 215 |
| abstract_inverted_index.estimation | 162, 242 |
| abstract_inverted_index.extrinsic, | 191 |
| abstract_inverted_index.framework. | 135, 233 |
| abstract_inverted_index.intrinsic, | 190 |
| abstract_inverted_index.introduced | 185 |
| abstract_inverted_index.multi-view | 218 |
| abstract_inverted_index.multiscale | 210 |
| abstract_inverted_index.parameters | 58, 127 |
| abstract_inverted_index.prediction | 141 |
| abstract_inverted_index.robustness | 236 |
| abstract_inverted_index.successive | 153 |
| abstract_inverted_index.synthesize | 32 |
| abstract_inverted_index.Concretely, | 120 |
| abstract_inverted_index.camera-pose | 227 |
| abstract_inverted_index.considering | 235 |
| abstract_inverted_index.demonstrate | 261 |
| abstract_inverted_index.experiments | 255 |
| abstract_inverted_index.image-based | 8 |
| abstract_inverted_index.incorporate | 101 |
| abstract_inverted_index.multi-scale | 112, 174, 246 |
| abstract_inverted_index.prediction. | 119 |
| abstract_inverted_index.resolutions | 88 |
| abstract_inverted_index.suitability | 263 |
| abstract_inverted_index.demonstrate, | 199 |
| abstract_inverted_index.fields-based | 133 |
| abstract_inverted_index.fundamentals | 108 |
| abstract_inverted_index.graph-neural | 167 |
| abstract_inverted_index.single-image | 117 |
| abstract_inverted_index.network-based | 168 |
| abstract_inverted_index.neural-fields | 175 |
| abstract_inverted_index.representation | 213, 232 |
| abstract_inverted_index.representation, | 115 |
| abstract_inverted_index.counterproductive. | 251 |
| cited_by_percentile_year.max | 98 |
| cited_by_percentile_year.min | 97 |
| countries_distinct_count | 2 |
| institutions_distinct_count | 3 |
| citation_normalized_percentile.value | 0.73784838 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | False |
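The `abstract_inverted_index` rows in the payload above are OpenAlex's storage format for the abstract: each word maps to the token positions where it occurs. The plain text can be reconstructed with a short decoder, a standard pattern when working with OpenAlex payloads:

```python
def decode_inverted_index(inverted):
    """Rebuild abstract text from OpenAlex's abstract_inverted_index,
    a mapping {word: [positions]}. Sorting positions restores word order."""
    by_position = {}
    for word, positions in inverted.items():
        for pos in positions:
            by_position[pos] = word
    return " ".join(word for _, word in sorted(by_position.items()))

# The first tokens of this work's index decode back to the abstract:
sample = {"We": [0], "introduce": [1], "an": [2], "improved": [3], "solution": [4]}
print(decode_inverted_index(sample))  # We introduce an improved solution
```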