Exo2EgoDVC: Dense Video Captioning of Egocentric Procedural Activities Using Web Instructional Videos
2023 · Open Access
· DOI: https://doi.org/10.48550/arxiv.2311.16444
We propose a novel benchmark for cross-view knowledge transfer in dense video captioning, adapting models from exocentric web instructional videos to an egocentric view. While dense video captioning (predicting time segments and their captions) has primarily been studied with exocentric videos (e.g., YouCook2), egocentric benchmarks remain scarce due to limited data. To overcome this limited video availability, transferring knowledge from abundant exocentric web videos is a practical approach. However, learning the correspondence between exocentric and egocentric views is difficult due to their dynamic view changes: web videos contain shots showing either full-body or hand regions, while the egocentric view is constantly shifting. This necessitates an in-depth study of cross-view transfer under complex view changes. To this end, we first create a real-life egocentric dataset (EgoYC2) whose captions follow the definition of YouCook2 captions, enabling transfer learning between the two datasets with access to their ground truth. To bridge the view gaps, we propose a view-invariant learning method based on adversarial training, consisting of pre-training and fine-tuning stages. Our experiments confirm that the method overcomes the view change problem and effectively transfers knowledge to egocentric views. Our benchmark pushes the study of cross-view transfer into a new task domain, dense video captioning, and envisions methodologies that describe egocentric videos in natural language.
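The abstract describes the method only at a high level: adversarial training for view-invariant features, applied across pre-training and fine-tuning stages. As a minimal sketch of one common way to implement adversarial view-invariance (a gradient-reversal layer feeding a binary view discriminator), the snippet below illustrates the idea; the paper's actual architecture, losses, and feature dimensions are not given in this record, so every name and size here is a hypothetical stand-in.

```python
import torch
from torch import nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the
    backward pass, so the feature encoder learns to fool the view
    discriminator while the discriminator itself trains normally."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None


class ViewDiscriminator(nn.Module):
    """Binary classifier: does a clip feature come from an exo or ego view?
    (Hypothetical dimensions; not the paper's network.)"""

    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 2)
        )

    def forward(self, feats: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
        return self.net(GradReverse.apply(feats, lambd))


# Usage: add the adversarial loss to the captioning objective.
disc = ViewDiscriminator()
feats = torch.randn(8, 512)               # clip features from some encoder
view_labels = torch.randint(0, 2, (8,))   # 0 = exocentric, 1 = egocentric
adv_loss = nn.functional.cross_entropy(disc(feats), view_labels)
```

In a setup like this, the captioning encoder minimizes its task loss plus `adv_loss`, and gradient reversal pushes the encoder toward features the discriminator cannot tell apart by view.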
Related Topics
- Type: preprint
- Language: en
- Landing page: http://arxiv.org/abs/2311.16444
- PDF: https://arxiv.org/pdf/2311.16444
- OA status: green
- Cited by: 2
- Related works: 10
- OpenAlex ID: https://openalex.org/W4389217058
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4389217058 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2311.16444 (Digital Object Identifier)
- Title: Exo2EgoDVC: Dense Video Captioning of Egocentric Procedural Activities Using Web Instructional Videos (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2023 (year of publication)
- Publication date: 2023-11-28 (full publication date if available)
- Authors: Takehiko Ohkawa, Takuma Yagi, Taichi Nishimura, Ryosuke Furuta, Atsushi Hashimoto, Yoshitaka Ushiku, Yoichi Sato (list of authors in order)
- Landing page: https://arxiv.org/abs/2311.16444 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2311.16444 (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2311.16444 (direct OA link when available)
- Concepts: Closed captioning, Computer science, Transfer of learning, Artificial intelligence, Endocentric and exocentric, Human–computer interaction, Computer vision, Multimedia, Noun, Image (mathematics), Noun phrase (top concepts attached by OpenAlex)
- Cited by: 2 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 1, 2024: 1 (per-year citation counts, last 5 years)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
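The fields above mirror what the OpenAlex API returns for this work. As a quick way to pull the live record with only the standard library (the `/works/<ID>` endpoint is part of the public OpenAlex REST API; the printed values reflect this page's snapshot and may have changed since):

```python
import json
import urllib.request

# OpenAlex serves each work at /works/<OpenAlex ID>.
url = "https://api.openalex.org/works/W4389217058"
with urllib.request.urlopen(url) as resp:
    work = json.load(resp)

print(work["display_name"])              # the paper title
print(work["open_access"]["oa_status"])  # "green" in the snapshot above
print(work["cited_by_count"])            # 2 when this page was captured
```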
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4389217058 |
| doi | https://doi.org/10.48550/arxiv.2311.16444 |
| ids.doi | https://doi.org/10.48550/arxiv.2311.16444 |
| ids.openalex | https://openalex.org/W4389217058 |
| fwci | |
| type | preprint |
| title | Exo2EgoDVC: Dense Video Captioning of Egocentric Procedural Activities Using Web Instructional Videos |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T11714 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.998199999332428 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Multimodal Machine Learning Applications |
| topics[1].id | https://openalex.org/T10812 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9962000250816345 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Human Pose and Action Recognition |
| topics[2].id | https://openalex.org/T11439 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9807000160217285 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Video Analysis and Summarization |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C157657479 |
| concepts[0].level | 3 |
| concepts[0].score | 0.8477778434753418 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q2367247 |
| concepts[0].display_name | Closed captioning |
| concepts[1].id | https://openalex.org/C41008148 |
| concepts[1].level | 0 |
| concepts[1].score | 0.8034332990646362 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[1].display_name | Computer science |
| concepts[2].id | https://openalex.org/C150899416 |
| concepts[2].level | 2 |
| concepts[2].score | 0.5519678592681885 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q1820378 |
| concepts[2].display_name | Transfer of learning |
| concepts[3].id | https://openalex.org/C154945302 |
| concepts[3].level | 1 |
| concepts[3].score | 0.540024995803833 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[3].display_name | Artificial intelligence |
| concepts[4].id | https://openalex.org/C131042201 |
| concepts[4].level | 4 |
| concepts[4].score | 0.4844929873943329 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q493198 |
| concepts[4].display_name | Endocentric and exocentric |
| concepts[5].id | https://openalex.org/C107457646 |
| concepts[5].level | 1 |
| concepts[5].score | 0.4602223336696625 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q207434 |
| concepts[5].display_name | Human–computer interaction |
| concepts[6].id | https://openalex.org/C31972630 |
| concepts[6].level | 1 |
| concepts[6].score | 0.360365092754364 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[6].display_name | Computer vision |
| concepts[7].id | https://openalex.org/C49774154 |
| concepts[7].level | 1 |
| concepts[7].score | 0.3466804027557373 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q131765 |
| concepts[7].display_name | Multimedia |
| concepts[8].id | https://openalex.org/C121934690 |
| concepts[8].level | 2 |
| concepts[8].score | 0.0 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q1084 |
| concepts[8].display_name | Noun |
| concepts[9].id | https://openalex.org/C115961682 |
| concepts[9].level | 2 |
| concepts[9].score | 0.0 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q860623 |
| concepts[9].display_name | Image (mathematics) |
| concepts[10].id | https://openalex.org/C153962237 |
| concepts[10].level | 3 |
| concepts[10].score | 0.0 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q1401131 |
| concepts[10].display_name | Noun phrase |
| keywords[0].id | https://openalex.org/keywords/closed-captioning |
| keywords[0].score | 0.8477778434753418 |
| keywords[0].display_name | Closed captioning |
| keywords[1].id | https://openalex.org/keywords/computer-science |
| keywords[1].score | 0.8034332990646362 |
| keywords[1].display_name | Computer science |
| keywords[2].id | https://openalex.org/keywords/transfer-of-learning |
| keywords[2].score | 0.5519678592681885 |
| keywords[2].display_name | Transfer of learning |
| keywords[3].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[3].score | 0.540024995803833 |
| keywords[3].display_name | Artificial intelligence |
| keywords[4].id | https://openalex.org/keywords/endocentric-and-exocentric |
| keywords[4].score | 0.4844929873943329 |
| keywords[4].display_name | Endocentric and exocentric |
| keywords[5].id | https://openalex.org/keywords/human–computer-interaction |
| keywords[5].score | 0.4602223336696625 |
| keywords[5].display_name | Human–computer interaction |
| keywords[6].id | https://openalex.org/keywords/computer-vision |
| keywords[6].score | 0.360365092754364 |
| keywords[6].display_name | Computer vision |
| keywords[7].id | https://openalex.org/keywords/multimedia |
| keywords[7].score | 0.3466804027557373 |
| keywords[7].display_name | Multimedia |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2311.16444 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | cc-by |
| locations[0].pdf_url | https://arxiv.org/pdf/2311.16444 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2311.16444 |
| locations[1].id | doi:10.48550/arxiv.2311.16444 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | cc-by |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | https://openalex.org/licenses/cc-by |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2311.16444 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5040126789 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-2329-8797 |
| authorships[0].author.display_name | Takehiko Ohkawa |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Ohkawa, Takehiko |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5007247577 |
| authorships[1].author.orcid | https://orcid.org/0000-0003-4050-6543 |
| authorships[1].author.display_name | Takuma Yagi |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Yagi, Takuma |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5061593748 |
| authorships[2].author.orcid | https://orcid.org/0000-0001-8725-7164 |
| authorships[2].author.display_name | Taichi Nishimura |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Nishimura, Taichi |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5091227949 |
| authorships[3].author.orcid | https://orcid.org/0000-0003-1441-889X |
| authorships[3].author.display_name | Ryosuke Furuta |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Furuta, Ryosuke |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5038408644 |
| authorships[4].author.orcid | https://orcid.org/0000-0002-0799-4269 |
| authorships[4].author.display_name | Atsushi Hashimoto |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Hashimoto, Atsushi |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5077707500 |
| authorships[5].author.orcid | https://orcid.org/0000-0002-9014-1389 |
| authorships[5].author.display_name | Yoshitaka Ushiku |
| authorships[5].author_position | middle |
| authorships[5].raw_author_name | Ushiku, Yoshitaka |
| authorships[5].is_corresponding | False |
| authorships[6].author.id | https://openalex.org/A5045996641 |
| authorships[6].author.orcid | https://orcid.org/0000-0003-0097-4537 |
| authorships[6].author.display_name | Yoichi Sato |
| authorships[6].author_position | last |
| authorships[6].raw_author_name | Sato, Yoichi |
| authorships[6].is_corresponding | False |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2311.16444 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2023-12-01T00:00:00 |
| display_name | Exo2EgoDVC: Dense Video Captioning of Egocentric Procedural Activities Using Web Instructional Videos |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T11714 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.998199999332428 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Multimodal Machine Learning Applications |
| related_works | https://openalex.org/W4210416330, https://openalex.org/W2775506363, https://openalex.org/W3088136942, https://openalex.org/W4290852288, https://openalex.org/W2949362007, https://openalex.org/W4388893791, https://openalex.org/W4283207562, https://openalex.org/W2963177403, https://openalex.org/W2330246314, https://openalex.org/W2154188682 |
| cited_by_count | 2 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 1 |
| counts_by_year[1].year | 2024 |
| counts_by_year[1].cited_by_count | 1 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2311.16444 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2311.16444 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2311.16444 |
| primary_location.id | pmh:oai:arXiv.org:2311.16444 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | cc-by |
| primary_location.pdf_url | https://arxiv.org/pdf/2311.16444 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2311.16444 |
| publication_date | 2023-11-28 |
| publication_year | 2023 |
| referenced_works_count | 0 |
| abstract_inverted_index | (map from each abstract token to its word positions; the plain-text abstract is reproduced in full above) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 7 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/4 |
| sustainable_development_goals[0].score | 0.5600000023841858 |
| sustainable_development_goals[0].display_name | Quality Education |
| citation_normalized_percentile | |
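One quirk of the payload worth noting: OpenAlex stores abstracts as an `abstract_inverted_index`, a map from each token to the word positions where it appears, rather than as plain text. Below is a small, self-contained sketch of how such an index can be flattened back into text; the helper name is mine, not an OpenAlex API.

```python
def abstract_from_inverted_index(inv: dict) -> str:
    """Rebuild plain abstract text from an OpenAlex abstract_inverted_index,
    which maps each token to the list of word positions where it occurs."""
    pairs = [(pos, word) for word, positions in inv.items() for pos in positions]
    return " ".join(word for _, word in sorted(pairs))


# Tiny hand-made example (the real index covers the full abstract):
sample = {"We": [0], "propose": [1], "a": [2], "novel": [3], "benchmark": [4]}
print(abstract_from_inverted_index(sample))  # -> "We propose a novel benchmark"
```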