Flow-NeRF: Joint Learning of Geometry, Poses, and Dense Flow within Unified Neural Representations
2025 · Open Access · DOI: https://doi.org/10.48550/arxiv.2503.10464
Learning accurate scene reconstruction without pose priors in neural radiance fields is challenging due to inherent geometric ambiguity. Recent developments either rely on correspondence priors for regularization or use off-the-shelf flow estimators to derive analytical poses. However, the potential for jointly learning scene geometry, camera poses, and dense flow within a unified neural representation remains largely unexplored. In this paper, we present Flow-NeRF, a unified framework that simultaneously optimizes scene geometry, camera poses, and dense optical flow, all on the fly. To enable the learning of dense flow within the neural radiance field, we design and build a bijective mapping for flow estimation, conditioned on pose. To make the scene reconstruction benefit from the flow estimation, we develop an effective feature-enhancement mechanism that passes canonical-space features to world-space representations, significantly improving scene geometry. We validate our model across four important tasks, i.e., novel view synthesis, depth estimation, camera pose prediction, and dense optical flow estimation, using several datasets. Our approach surpasses previous methods in almost all metrics for novel-view synthesis and depth estimation, and yields both qualitatively sound and quantitatively accurate novel-view flow. Our project page is https://zhengxunzhi.github.io/flownerf/.
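The abstract's central mechanism, a bijective flow mapping conditioned on camera pose, can be pictured with a small invertible coupling layer. The sketch below is an illustrative assumption rather than the authors' implementation: the module name, the 12-dimensional flattened pose code, and the affine-coupling form are all hypothetical choices used only to show why a pose-conditioned bijection gives consistent forward and inverse flow.

```python
# Minimal sketch (not the paper's code) of a pose-conditioned bijective mapping.
import torch
import torch.nn as nn

class PoseConditionedCoupling(nn.Module):
    """Affine coupling layer: transforms the y-coordinate as a function of the
    x-coordinate and a pose embedding, so the mapping is invertible by design."""
    def __init__(self, pose_dim=12, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # predicts (log-scale, shift) for y
        )

    def forward(self, pts, pose):
        # pts: (N, 2) pixel coordinates, pose: (N, pose_dim) flattened camera pose
        x, y = pts[:, :1], pts[:, 1:]
        log_s, t = self.net(torch.cat([x, pose], dim=-1)).chunk(2, dim=-1)
        return torch.cat([x, y * log_s.exp() + t], dim=-1)

    def inverse(self, pts, pose):
        # Exact inverse of forward(), using the same conditioning network.
        x, y = pts[:, :1], pts[:, 1:]
        log_s, t = self.net(torch.cat([x, pose], dim=-1)).chunk(2, dim=-1)
        return torch.cat([x, (y - t) * (-log_s).exp()], dim=-1)

# Dense flow between two views is then the displacement under the bijection.
pts = torch.rand(1024, 2)                       # sampled pixel coordinates
pose = torch.rand(1024, 12)                     # hypothetical per-point pose code
layer = PoseConditionedCoupling()
flow = layer(pts, pose) - pts                   # forward flow at these points
recon = layer.inverse(layer(pts, pose), pose)   # recovers pts up to float error
```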
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2503.10464
- PDF: https://arxiv.org/pdf/2503.10464
- OA Status: green
- OpenAlex ID: https://openalex.org/W4416039555
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4416039555 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2503.10464 (Digital Object Identifier)
- Title: Flow-NeRF: Joint Learning of Geometry, Poses, and Dense Flow within Unified Neural Representations (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025 (year of publication)
- Publication date: 2025-03-13 (full publication date if available)
- Authors: Xiaoming Zheng, Dan Xu (list of authors in order)
- Landing page: https://arxiv.org/abs/2503.10464 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2503.10464 (direct link to full text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2503.10464 (direct OA link when available)
- Cited by: 0 (total citation count in OpenAlex)
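For reference, a record like the one summarized above can typically be fetched from the public OpenAlex API. This is a minimal sketch assuming the standard works endpoint; the "Full payload" table below is a flattened view of the JSON such a request returns.

```python
# Minimal sketch: retrieve this work's record from the OpenAlex works endpoint.
import requests

resp = requests.get("https://api.openalex.org/works/W4416039555", timeout=30)
resp.raise_for_status()
work = resp.json()

print(work["display_name"])           # paper title
print(work["open_access"]["oa_url"])  # direct OA link, if any
```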
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4416039555 |
| doi | https://doi.org/10.48550/arxiv.2503.10464 |
| ids.doi | https://doi.org/10.48550/arxiv.2503.10464 |
| ids.openalex | https://openalex.org/W4416039555 |
| fwci | |
| type | preprint |
| title | Flow-NeRF: Joint Learning of Geometry, Poses, and Dense Flow within Unified Neural Representations |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2503.10464 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2503.10464 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2503.10464 |
| locations[1].id | doi:10.48550/arxiv.2503.10464 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2503.10464 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5081509849 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-0918-2624 |
| authorships[0].author.display_name | Xiaoming Zheng |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Zheng, Xunzhi |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5100341938 |
| authorships[1].author.orcid | https://orcid.org/0000-0003-0136-9603 |
| authorships[1].author.display_name | Dan Xu |
| authorships[1].author_position | last |
| authorships[1].raw_author_name | Xu, Dan |
| authorships[1].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2503.10464 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Flow-NeRF: Joint Learning of Geometry, Poses, and Dense Flow within Unified Neural Representations |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-09T23:09:16.995542 |
| primary_topic | |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2503.10464 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2503.10464 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2503.10464 |
| primary_location.id | pmh:oai:arXiv.org:2503.10464 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2503.10464 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2503.10464 |
| publication_date | 2025-03-13 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-position index of the abstract; omitted here, the abstract is shown in full above) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 2 |
| citation_normalized_percentile |
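The full payload also carries an abstract_inverted_index field (omitted from the table above), which OpenAlex uses to store the abstract as a mapping from each word to the list of positions where it occurs. A minimal, hypothetical sketch of turning such an index back into the plain-text abstract shown at the top of this page:

```python
# Minimal sketch: rebuild an abstract from an OpenAlex abstract_inverted_index.
def rebuild_abstract(inverted_index):
    """inverted_index: dict mapping each word to the list of its positions."""
    positions = {}
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions[i] = word
    return " ".join(positions[i] for i in sorted(positions))

# Tiny hypothetical example:
print(rebuild_abstract({"Learning": [0], "accurate": [1], "scene": [2]}))
# -> "Learning accurate scene"
```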