Cross-View Correspondence Modeling for Joint Representation Learning Between Egocentric and Exocentric Videos
2025 · Open Access · DOI: https://doi.org/10.1109/access.2025.3593474
Joint analysis of human action videos from egocentric and exocentric views enables a more comprehensive understanding of human behavior. While previous works leverage paired videos to align clip-level features across views, they often ignore the complex spatial and temporal misalignments inherent in such data. In this work, we propose a Cross-View Transformer that explicitly models fine-grained spatiotemporal correspondence between egocentric and exocentric videos. Our model incorporates self-attention to enhance intra-view context and cross-view attention to align features across space and time. To train the model, we introduce a hybrid loss function combining a triplet loss and a domain classification loss, further reinforced by a sample screening mechanism that emphasizes informative training pairs. We evaluate our method on multiple egocentric action recognition benchmarks, including Charades-Ego and EPIC-Kitchens. Experimental results demonstrate that our method consistently outperforms existing approaches, achieving state-of-the-art performance on several egocentric video understanding tasks.
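The abstract describes the architecture and training objective only at a high level. The following is a minimal PyTorch sketch, not the authors' released code, of the two pieces it names: self-attention for intra-view context plus cross-view attention for ego/exo alignment, and a hybrid objective combining a triplet loss with a domain-classification loss. All dimensions, the margin, the loss weight `lam`, and the negative-sampling choice are assumptions for illustration.

```python
# Minimal sketch of the ideas named in the abstract (assumptions noted inline).
import torch
import torch.nn as nn

class CrossViewBlock(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x, other):
        # Intra-view self-attention over one view's own spatiotemporal tokens.
        x = self.norm1(x + self.self_attn(x, x, x)[0])
        # Cross-view attention: queries from this view, keys/values from the other.
        x = self.norm2(x + self.cross_attn(x, other, other)[0])
        return x

def hybrid_loss(ego_emb, exo_pos, exo_neg, domain_logits, domain_labels, lam=0.5):
    """Triplet term pulls paired ego/exo clips together and pushes mismatched
    pairs apart; a domain-classification term encourages view-invariant features."""
    triplet = nn.functional.triplet_margin_loss(ego_emb, exo_pos, exo_neg, margin=1.0)
    domain = nn.functional.cross_entropy(domain_logits, domain_labels)
    return triplet + lam * domain

# Toy usage with random clip-token features (batch=2, tokens=16, dim=256).
ego = torch.randn(2, 16, 256)
exo = torch.randn(2, 16, 256)
block = CrossViewBlock()
ego_ctx = block(ego, exo)            # ego tokens attending to exo tokens
exo_ctx = block(exo, ego)            # and vice versa
ego_emb, exo_emb = ego_ctx.mean(1), exo_ctx.mean(1)
neg = exo_emb.roll(1, dims=0)        # mismatched pairs as negatives (assumption)
logits = nn.Linear(256, 2)(torch.cat([ego_emb, exo_emb]))
labels = torch.tensor([0, 0, 1, 1])  # 0 = egocentric view, 1 = exocentric view
print(hybrid_loss(ego_emb, exo_emb, neg, logits, labels))
```

The sample screening mechanism mentioned in the abstract (emphasizing informative training pairs) is omitted here, since the abstract does not specify how pairs are scored.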
- Type: article
- Language: en
- Landing page: https://doi.org/10.1109/access.2025.3593474
- OA status: gold
- References: 50
- Related works: 10
- OpenAlex ID: https://openalex.org/W4412721953
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4412721953 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.1109/access.2025.3593474 (Digital Object Identifier)
- Title: Cross-View Correspondence Modeling for Joint Representation Learning Between Egocentric and Exocentric Videos (work title)
- Type: article (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025 (year of publication)
- Publication date: 2025-01-01 (full publication date if available)
- Authors: Zhehao Zhu, Yoichi Sato (list of authors in order)
- Landing page: https://doi.org/10.1109/access.2025.3593474 (publisher landing page)
- Open access: Yes (whether a free full text is available)
- OA status: gold (open access status per OpenAlex)
- OA URL: https://doi.org/10.1109/access.2025.3593474 (direct OA link when available)
- Concepts: Endocentric and exocentric, Computer science, Representation (politics), Artificial intelligence, Joint (building), Human–computer interaction, Computer vision, Natural language processing, Engineering, Political science, Politics, Law, Architectural engineering, Noun phrase, Noun (top concepts/fields attached by OpenAlex)
- Cited by: 0 (total citation count in OpenAlex)
- References (count): 50 (number of works referenced by this work)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4412721953 |
| doi | https://doi.org/10.1109/access.2025.3593474 |
| ids.doi | https://doi.org/10.1109/access.2025.3593474 |
| ids.openalex | https://openalex.org/W4412721953 |
| fwci | 0.0 |
| type | article |
| title | Cross-View Correspondence Modeling for Joint Representation Learning Between Egocentric and Exocentric Videos |
| awards[0].id | https://openalex.org/G4769843218 |
| awards[0].funder_id | https://openalex.org/F4320334789 |
| awards[0].display_name | |
| awards[0].funder_award_id | JP24K02956 |
| awards[0].funder_display_name | Japan Science and Technology Agency |
| awards[1].id | https://openalex.org/G7747484221 |
| awards[1].funder_id | https://openalex.org/F4320334789 |
| awards[1].display_name | 人間中心のビジョン・メディア技術に関する国際共同研究ネットワークの構築 (Building an international collaborative research network on human-centered vision and media technologies) |
| awards[1].funder_award_id | JPMJAP2303 |
| awards[1].funder_display_name | Japan Science and Technology Agency |
| biblio.issue | |
| biblio.volume | 13 |
| biblio.last_page | 140741 |
| biblio.first_page | 140733 |
| topics[0].id | https://openalex.org/T11105 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9380000233650208 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Advanced Image Processing Techniques |
| topics[1].id | https://openalex.org/T10531 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9247999787330627 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Advanced Vision and Imaging |
| topics[2].id | https://openalex.org/T11448 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9067999720573425 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Face recognition and analysis |
| funders[0].id | https://openalex.org/F4320334789 |
| funders[0].ror | https://ror.org/00097mb19 |
| funders[0].display_name | Japan Science and Technology Agency |
| is_xpac | False |
| apc_list.value | 1850 |
| apc_list.currency | USD |
| apc_list.value_usd | 1850 |
| apc_paid.value | 1850 |
| apc_paid.currency | USD |
| apc_paid.value_usd | 1850 |
| concepts[0].id | https://openalex.org/C131042201 |
| concepts[0].level | 4 |
| concepts[0].score | 0.8998440504074097 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q493198 |
| concepts[0].display_name | Endocentric and exocentric |
| concepts[1].id | https://openalex.org/C41008148 |
| concepts[1].level | 0 |
| concepts[1].score | 0.7605962753295898 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[1].display_name | Computer science |
| concepts[2].id | https://openalex.org/C2776359362 |
| concepts[2].level | 3 |
| concepts[2].score | 0.6904965043067932 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q2145286 |
| concepts[2].display_name | Representation (politics) |
| concepts[3].id | https://openalex.org/C154945302 |
| concepts[3].level | 1 |
| concepts[3].score | 0.5933722853660583 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[3].display_name | Artificial intelligence |
| concepts[4].id | https://openalex.org/C18555067 |
| concepts[4].level | 2 |
| concepts[4].score | 0.5828400254249573 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q8375051 |
| concepts[4].display_name | Joint (building) |
| concepts[5].id | https://openalex.org/C107457646 |
| concepts[5].level | 1 |
| concepts[5].score | 0.4861569106578827 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q207434 |
| concepts[5].display_name | Human–computer interaction |
| concepts[6].id | https://openalex.org/C31972630 |
| concepts[6].level | 1 |
| concepts[6].score | 0.46857950091362 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[6].display_name | Computer vision |
| concepts[7].id | https://openalex.org/C204321447 |
| concepts[7].level | 1 |
| concepts[7].score | 0.3427775502204895 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q30642 |
| concepts[7].display_name | Natural language processing |
| concepts[8].id | https://openalex.org/C127413603 |
| concepts[8].level | 0 |
| concepts[8].score | 0.07353624701499939 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q11023 |
| concepts[8].display_name | Engineering |
| concepts[9].id | https://openalex.org/C17744445 |
| concepts[9].level | 0 |
| concepts[9].score | 0.0 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q36442 |
| concepts[9].display_name | Political science |
| concepts[10].id | https://openalex.org/C94625758 |
| concepts[10].level | 2 |
| concepts[10].score | 0.0 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q7163 |
| concepts[10].display_name | Politics |
| concepts[11].id | https://openalex.org/C199539241 |
| concepts[11].level | 1 |
| concepts[11].score | 0.0 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q7748 |
| concepts[11].display_name | Law |
| concepts[12].id | https://openalex.org/C170154142 |
| concepts[12].level | 1 |
| concepts[12].score | 0.0 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q150737 |
| concepts[12].display_name | Architectural engineering |
| concepts[13].id | https://openalex.org/C153962237 |
| concepts[13].level | 3 |
| concepts[13].score | 0.0 |
| concepts[13].wikidata | https://www.wikidata.org/wiki/Q1401131 |
| concepts[13].display_name | Noun phrase |
| concepts[14].id | https://openalex.org/C121934690 |
| concepts[14].level | 2 |
| concepts[14].score | 0.0 |
| concepts[14].wikidata | https://www.wikidata.org/wiki/Q1084 |
| concepts[14].display_name | Noun |
| keywords[0].id | https://openalex.org/keywords/endocentric-and-exocentric |
| keywords[0].score | 0.8998440504074097 |
| keywords[0].display_name | Endocentric and exocentric |
| keywords[1].id | https://openalex.org/keywords/computer-science |
| keywords[1].score | 0.7605962753295898 |
| keywords[1].display_name | Computer science |
| keywords[2].id | https://openalex.org/keywords/representation |
| keywords[2].score | 0.6904965043067932 |
| keywords[2].display_name | Representation (politics) |
| keywords[3].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[3].score | 0.5933722853660583 |
| keywords[3].display_name | Artificial intelligence |
| keywords[4].id | https://openalex.org/keywords/joint |
| keywords[4].score | 0.5828400254249573 |
| keywords[4].display_name | Joint (building) |
| keywords[5].id | https://openalex.org/keywords/human–computer-interaction |
| keywords[5].score | 0.4861569106578827 |
| keywords[5].display_name | Human–computer interaction |
| keywords[6].id | https://openalex.org/keywords/computer-vision |
| keywords[6].score | 0.46857950091362 |
| keywords[6].display_name | Computer vision |
| keywords[7].id | https://openalex.org/keywords/natural-language-processing |
| keywords[7].score | 0.3427775502204895 |
| keywords[7].display_name | Natural language processing |
| keywords[8].id | https://openalex.org/keywords/engineering |
| keywords[8].score | 0.07353624701499939 |
| keywords[8].display_name | Engineering |
| language | en |
| locations[0].id | doi:10.1109/access.2025.3593474 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S2485537415 |
| locations[0].source.issn | 2169-3536 |
| locations[0].source.type | journal |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | 2169-3536 |
| locations[0].source.is_core | True |
| locations[0].source.is_in_doaj | True |
| locations[0].source.display_name | IEEE Access |
| locations[0].source.host_organization | https://openalex.org/P4310319808 |
| locations[0].source.host_organization_name | Institute of Electrical and Electronics Engineers |
| locations[0].source.host_organization_lineage | https://openalex.org/P4310319808 |
| locations[0].source.host_organization_lineage_names | Institute of Electrical and Electronics Engineers |
| locations[0].license | cc-by |
| locations[0].pdf_url | |
| locations[0].version | publishedVersion |
| locations[0].raw_type | journal-article |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | IEEE Access |
| locations[0].landing_page_url | https://doi.org/10.1109/access.2025.3593474 |
| locations[1].id | pmh:oai:doaj.org/article:318a381f26db4c20b974ffee1ccef594 |
| locations[1].is_oa | False |
| locations[1].source.id | https://openalex.org/S4306401280 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | False |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | DOAJ (DOAJ: Directory of Open Access Journals) |
| locations[1].source.host_organization | |
| locations[1].source.host_organization_name | |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | submittedVersion |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | False |
| locations[1].raw_source_name | IEEE Access, Vol 13, Pp 140733-140741 (2025) |
| locations[1].landing_page_url | https://doaj.org/article/318a381f26db4c20b974ffee1ccef594 |
| indexed_in | crossref, doaj |
| authorships[0].author.id | https://openalex.org/A5088529030 |
| authorships[0].author.orcid | |
| authorships[0].author.display_name | Zhehao Zhu |
| authorships[0].affiliations[0].raw_affiliation_string | Institute of Industrial Science, University of Tokyo, Tokyo, Japan |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Zhehao Zhu |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | Institute of Industrial Science, University of Tokyo, Tokyo, Japan |
| authorships[1].author.id | https://openalex.org/A5045996641 |
| authorships[1].author.orcid | https://orcid.org/0000-0003-0097-4537 |
| authorships[1].author.display_name | Yoichi Sato |
| authorships[1].affiliations[0].raw_affiliation_string | Institute of Industrial Science, University of Tokyo, Tokyo, Japan |
| authorships[1].author_position | last |
| authorships[1].raw_author_name | Yoichi Sato |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | Institute of Industrial Science, University of Tokyo, Tokyo, Japan |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://doi.org/10.1109/access.2025.3593474 |
| open_access.oa_status | gold |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Cross-View Correspondence Modeling for Joint Representation Learning Between Egocentric and Exocentric Videos |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T11105 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9380000233650208 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Advanced Image Processing Techniques |
| related_works | https://openalex.org/W3188871044, https://openalex.org/W2761261542, https://openalex.org/W1992908276, https://openalex.org/W2037731480, https://openalex.org/W4391807812, https://openalex.org/W2441195170, https://openalex.org/W3093754161, https://openalex.org/W32450859, https://openalex.org/W1987124187, https://openalex.org/W2154188682 |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | doi:10.1109/access.2025.3593474 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S2485537415 |
| best_oa_location.source.issn | 2169-3536 |
| best_oa_location.source.type | journal |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | 2169-3536 |
| best_oa_location.source.is_core | True |
| best_oa_location.source.is_in_doaj | True |
| best_oa_location.source.display_name | IEEE Access |
| best_oa_location.source.host_organization | https://openalex.org/P4310319808 |
| best_oa_location.source.host_organization_name | Institute of Electrical and Electronics Engineers |
| best_oa_location.source.host_organization_lineage | https://openalex.org/P4310319808 |
| best_oa_location.source.host_organization_lineage_names | Institute of Electrical and Electronics Engineers |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | |
| best_oa_location.version | publishedVersion |
| best_oa_location.raw_type | journal-article |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | True |
| best_oa_location.raw_source_name | IEEE Access |
| best_oa_location.landing_page_url | https://doi.org/10.1109/access.2025.3593474 |
| primary_location.id | doi:10.1109/access.2025.3593474 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S2485537415 |
| primary_location.source.issn | 2169-3536 |
| primary_location.source.type | journal |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | 2169-3536 |
| primary_location.source.is_core | True |
| primary_location.source.is_in_doaj | True |
| primary_location.source.display_name | IEEE Access |
| primary_location.source.host_organization | https://openalex.org/P4310319808 |
| primary_location.source.host_organization_name | Institute of Electrical and Electronics Engineers |
| primary_location.source.host_organization_lineage | https://openalex.org/P4310319808 |
| primary_location.source.host_organization_lineage_names | Institute of Electrical and Electronics Engineers |
| primary_location.license | cc-by |
| primary_location.pdf_url | |
| primary_location.version | publishedVersion |
| primary_location.raw_type | journal-article |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | IEEE Access |
| primary_location.landing_page_url | https://doi.org/10.1109/access.2025.3593474 |
| publication_date | 2025-01-01 |
| publication_year | 2025 |
| referenced_works | https://openalex.org/W6801567822, https://openalex.org/W4312263092, https://openalex.org/W6849878650, https://openalex.org/W4386071476, https://openalex.org/W3195949276, https://openalex.org/W4210822358, https://openalex.org/W3203574385, https://openalex.org/W6846013553, https://openalex.org/W3146543470, https://openalex.org/W2799067027, https://openalex.org/W3092114159, https://openalex.org/W3177419066, https://openalex.org/W4387951823, https://openalex.org/W6853585542, https://openalex.org/W4390873422, https://openalex.org/W4404725431, https://openalex.org/W6791353385, https://openalex.org/W6739901393, https://openalex.org/W6749916090, https://openalex.org/W3207758636, https://openalex.org/W1977420968, https://openalex.org/W3127598906, https://openalex.org/W2997004687, https://openalex.org/W3034667697, https://openalex.org/W2795307598, https://openalex.org/W3133092047, https://openalex.org/W2993447238, https://openalex.org/W2963021791, https://openalex.org/W2886324878, https://openalex.org/W1931090275, https://openalex.org/W2503948444, https://openalex.org/W2610060393, https://openalex.org/W2982168673, https://openalex.org/W3118493476, https://openalex.org/W3151130473, https://openalex.org/W3096609285, https://openalex.org/W3170841864, https://openalex.org/W3171125843, https://openalex.org/W3171516518, https://openalex.org/W2963524571, https://openalex.org/W6765307894, https://openalex.org/W6638444622, https://openalex.org/W2550553598, https://openalex.org/W2109255472, https://openalex.org/W6728084366, https://openalex.org/W2737047298, https://openalex.org/W2963443993, https://openalex.org/W6765052341, https://openalex.org/W6750355821, https://openalex.org/W1955857676 |
| referenced_works_count | 50 |
| abstract_inverted_index.a | 12, 49, 87, 92, 96, 103 |
| abstract_inverted_index.In | 44 |
| abstract_inverted_index.To | 81 |
| abstract_inverted_index.We | 112 |
| abstract_inverted_index.by | 102 |
| abstract_inverted_index.in | 41 |
| abstract_inverted_index.of | 2, 16 |
| abstract_inverted_index.on | 116, 139 |
| abstract_inverted_index.to | 25, 67, 74 |
| abstract_inverted_index.we | 47, 85 |
| abstract_inverted_index.Our | 63 |
| abstract_inverted_index.and | 8, 37, 60, 71, 79, 95, 124 |
| abstract_inverted_index.our | 114, 130 |
| abstract_inverted_index.the | 34, 83 |
| abstract_inverted_index.from | 6 |
| abstract_inverted_index.loss | 89, 94 |
| abstract_inverted_index.more | 13 |
| abstract_inverted_index.such | 42 |
| abstract_inverted_index.that | 52, 107, 129 |
| abstract_inverted_index.they | 31 |
| abstract_inverted_index.this | 45 |
| abstract_inverted_index.Joint | 0 |
| abstract_inverted_index.While | 19 |
| abstract_inverted_index.align | 26, 75 |
| abstract_inverted_index.data. | 43 |
| abstract_inverted_index.human | 3, 17 |
| abstract_inverted_index.loss, | 99 |
| abstract_inverted_index.model | 64 |
| abstract_inverted_index.often | 32 |
| abstract_inverted_index.space | 78 |
| abstract_inverted_index.time. | 80 |
| abstract_inverted_index.train | 82 |
| abstract_inverted_index.video | 142 |
| abstract_inverted_index.views | 10 |
| abstract_inverted_index.work, | 46 |
| abstract_inverted_index.works | 21 |
| abstract_inverted_index.across | 29, 77 |
| abstract_inverted_index.action | 4, 119 |
| abstract_inverted_index.domain | 97 |
| abstract_inverted_index.hybrid | 88 |
| abstract_inverted_index.ignore | 33 |
| abstract_inverted_index.method | 115, 131 |
| abstract_inverted_index.model, | 84 |
| abstract_inverted_index.models | 54 |
| abstract_inverted_index.paired | 23 |
| abstract_inverted_index.pairs. | 111 |
| abstract_inverted_index.sample | 104 |
| abstract_inverted_index.tasks. | 144 |
| abstract_inverted_index.videos | 5, 24 |
| abstract_inverted_index.views, | 30 |
| abstract_inverted_index.between | 58 |
| abstract_inverted_index.complex | 35 |
| abstract_inverted_index.context | 70 |
| abstract_inverted_index.enables | 11 |
| abstract_inverted_index.enhance | 68 |
| abstract_inverted_index.further | 100 |
| abstract_inverted_index.propose | 48 |
| abstract_inverted_index.results | 127 |
| abstract_inverted_index.several | 140 |
| abstract_inverted_index.spatial | 36 |
| abstract_inverted_index.triplet | 93 |
| abstract_inverted_index.videos. | 62 |
| abstract_inverted_index.analysis | 1 |
| abstract_inverted_index.evaluate | 113 |
| abstract_inverted_index.existing | 134 |
| abstract_inverted_index.features | 28, 76 |
| abstract_inverted_index.function | 90 |
| abstract_inverted_index.inherent | 40 |
| abstract_inverted_index.leverage | 22 |
| abstract_inverted_index.multiple | 117 |
| abstract_inverted_index.previous | 20 |
| abstract_inverted_index.temporal | 38 |
| abstract_inverted_index.training | 110 |
| abstract_inverted_index.achieving | 136 |
| abstract_inverted_index.attention | 73 |
| abstract_inverted_index.behavior. | 18 |
| abstract_inverted_index.combining | 91 |
| abstract_inverted_index.including | 122 |
| abstract_inverted_index.introduce | 86 |
| abstract_inverted_index.mechanism | 106 |
| abstract_inverted_index.screening | 105 |
| abstract_inverted_index.Cross-View | 50 |
| abstract_inverted_index.clip-level | 27 |
| abstract_inverted_index.cross-view | 72 |
| abstract_inverted_index.egocentric | 7, 59, 118, 141 |
| abstract_inverted_index.emphasizes | 108 |
| abstract_inverted_index.exocentric | 9, 61 |
| abstract_inverted_index.explicitly | 53 |
| abstract_inverted_index.intra-view | 69 |
| abstract_inverted_index.reinforced | 101 |
| abstract_inverted_index.Transformer | 51 |
| abstract_inverted_index.approaches, | 135 |
| abstract_inverted_index.benchmarks, | 121 |
| abstract_inverted_index.demonstrate | 128 |
| abstract_inverted_index.informative | 109 |
| abstract_inverted_index.outperforms | 133 |
| abstract_inverted_index.performance | 138 |
| abstract_inverted_index.recognition | 120 |
| abstract_inverted_index.Charades-Ego | 123 |
| abstract_inverted_index.Experimental | 126 |
| abstract_inverted_index.consistently | 132 |
| abstract_inverted_index.fine-grained | 55 |
| abstract_inverted_index.incorporates | 65 |
| abstract_inverted_index.comprehensive | 14 |
| abstract_inverted_index.misalignments | 39 |
| abstract_inverted_index.understanding | 15, 143 |
| abstract_inverted_index.EPIC-Kitchens. | 125 |
| abstract_inverted_index.classification | 98 |
| abstract_inverted_index.correspondence | 57 |
| abstract_inverted_index.self-attention | 66 |
| abstract_inverted_index.spatiotemporal | 56 |
| abstract_inverted_index.state-of-the-art | 137 |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 2 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/10 |
| sustainable_development_goals[0].score | 0.6000000238418579 |
| sustainable_development_goals[0].display_name | Reduced inequalities |
| citation_normalized_percentile.value | 0.31026541 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | False |
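
The abstract_inverted_index fields in the payload above store the abstract as a mapping from each token to the word positions where it occurs, rather than as plain text. A small helper (hypothetical, not part of any OpenAlex client) can rebuild the readable abstract from that mapping; the tiny dictionary in the example stands in for the full index listed above.

```python
# Rebuild a plain-text abstract from an OpenAlex-style abstract_inverted_index,
# which maps each token to the list of positions where it appears.
def rebuild_abstract(inverted_index: dict[str, list[int]]) -> str:
    positions: dict[int, str] = {}
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions[i] = word
    return " ".join(positions[i] for i in sorted(positions))

# Illustrative subset of the index shown in the payload above.
print(rebuild_abstract({"Joint": [0], "analysis": [1], "of": [2], "human": [3]}))
# -> "Joint analysis of human"
```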