MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes
2024 · Open Access · DOI: https://doi.org/10.48550/arxiv.2410.06734
Talking face generation (TFG) aims to animate a target identity's face to create realistic talking videos. Personalized TFG is a variant that emphasizes the perceptual identity similarity of the synthesized result, in terms of both appearance and talking style. Previous works typically solve this problem by learning an individual neural radiance field (NeRF) for each identity to implicitly store its static and dynamic information, but we find this approach inefficient and poorly generalized because of the per-identity training framework and the limited training data. To this end, we propose MimicTalk, the first attempt to exploit the rich knowledge of a NeRF-based person-agnostic generic model to improve the efficiency and robustness of personalized TFG. Specifically, (1) we build a person-agnostic 3D TFG model as the base model and adapt it to a specific identity; (2) we propose a static-dynamic-hybrid adaptation pipeline that helps the model learn the personalized static appearance and facial dynamic features; (3) to generate facial motion with the personalized talking style, we propose an in-context stylized audio-to-motion model that mimics the implicit talking style provided in a reference video, avoiding the information loss incurred by an explicit style representation. Adaptation to an unseen identity can be performed in 15 minutes, 47 times faster than previous person-dependent methods. Experiments show that MimicTalk surpasses previous baselines in video quality, efficiency, and expressiveness. Source code and video samples are available at https://mimictalk.github.io.
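A schematic sketch of the three-stage flow the abstract describes; every function, method, and parameter name below is hypothetical and is not taken from the authors' released code:

```python
# Schematic sketch only: all names are hypothetical and NOT the authors' API.
# It mirrors the three steps in the abstract: (1) start from a person-agnostic
# base model, (2) run a static-dynamic-hybrid adaptation on one reference
# video, (3) drive the adapted avatar with in-context stylized audio-to-motion.

def adapt_to_identity(base_model, reference_video, minutes_budget=15):
    """Fine-tune the generic 3D TFG model so it captures the target's static
    appearance and facial dynamics (the 'static-dynamic-hybrid' adaptation)."""
    personalized = base_model.clone()
    personalized.finetune(reference_video,
                          targets=("static_appearance", "facial_dynamics"),
                          time_budget_min=minutes_budget)
    return personalized


def synthesize_talking_video(personalized_model, motion_model, audio, reference_video):
    """Predict expressive facial motion conditioned on the reference video's
    implicit talking style, then render it with the adapted identity model."""
    motion = motion_model.predict(audio, style_context=reference_video)
    return personalized_model.render(motion)
```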
Related Topics: Face recognition and analysis · Face and Expression Recognition · Generative Adversarial Networks and Image Synthesis
- Type: preprint
- Language: en
- Landing page: http://arxiv.org/abs/2410.06734
- PDF: https://arxiv.org/pdf/2410.06734
- OA status: green
- Cited by: 1
- Related works: 10
- OpenAlex ID: https://openalex.org/W4403345408
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4403345408 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2410.06734 (Digital Object Identifier)
- Title: MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2024
- Publication date: 2024-10-09
- Authors: Zhenhui Ye, Tianyun Zhong, Yi Ren, Ziyue Karen Jiang, Jiawei Huang, Rongjie Huang, Jinglin Liu, Jinzheng He, Chen Zhang, Zehan Wang, Xize Chen, Xiang Yin, Zhou Zhao (in order)
- Landing page: https://arxiv.org/abs/2410.06734 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2410.06734 (direct link to the full-text PDF)
- Open access: Yes (a free full text is available)
- OA status: green (open-access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2410.06734 (direct OA link)
- Concepts: Face (sociological concept), Internet privacy, Computer science, Psychology, Linguistics, Philosophy (top concepts attached by OpenAlex)
- Cited by: 1 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 1
- Related works (count): 10 (other works algorithmically related by OpenAlex)
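The fields above mirror the public OpenAlex record and can be fetched directly from the OpenAlex REST API. A minimal sketch (the endpoint and field names are OpenAlex's; the snippet itself is illustrative, not part of this page):

```python
# Minimal sketch: fetch this work from the OpenAlex API and read a few fields.
import requests

OPENALEX_ID = "W4403345408"  # this record's OpenAlex ID

resp = requests.get(f"https://api.openalex.org/works/{OPENALEX_ID}", timeout=30)
resp.raise_for_status()
work = resp.json()

print(work["display_name"])                 # title
print(work["publication_date"])             # 2024-10-09
print(work["open_access"]["oa_url"])        # direct OA PDF link
print([a["author"]["display_name"] for a in work["authorships"]])  # author list
```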
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4403345408 |
| doi | https://doi.org/10.48550/arxiv.2410.06734 |
| ids.doi | https://doi.org/10.48550/arxiv.2410.06734 |
| ids.openalex | https://openalex.org/W4403345408 |
| fwci | |
| type | preprint |
| title | MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T11448 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9980999827384949 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Face recognition and analysis |
| topics[1].id | https://openalex.org/T10057 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9455000162124634 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Face and Expression Recognition |
| topics[2].id | https://openalex.org/T10775 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9099000096321106 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Generative Adversarial Networks and Image Synthesis |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C2779304628 |
| concepts[0].level | 2 |
| concepts[0].score | 0.7141773700714111 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q3503480 |
| concepts[0].display_name | Face (sociological concept) |
| concepts[1].id | https://openalex.org/C108827166 |
| concepts[1].level | 1 |
| concepts[1].score | 0.42118561267852783 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q175975 |
| concepts[1].display_name | Internet privacy |
| concepts[2].id | https://openalex.org/C41008148 |
| concepts[2].level | 0 |
| concepts[2].score | 0.3887957036495209 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[2].display_name | Computer science |
| concepts[3].id | https://openalex.org/C15744967 |
| concepts[3].level | 0 |
| concepts[3].score | 0.37507903575897217 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q9418 |
| concepts[3].display_name | Psychology |
| concepts[4].id | https://openalex.org/C41895202 |
| concepts[4].level | 1 |
| concepts[4].score | 0.18385833501815796 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q8162 |
| concepts[4].display_name | Linguistics |
| concepts[5].id | https://openalex.org/C138885662 |
| concepts[5].level | 0 |
| concepts[5].score | 0.10057279467582703 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q5891 |
| concepts[5].display_name | Philosophy |
| keywords[0].id | https://openalex.org/keywords/face |
| keywords[0].score | 0.7141773700714111 |
| keywords[0].display_name | Face (sociological concept) |
| keywords[1].id | https://openalex.org/keywords/internet-privacy |
| keywords[1].score | 0.42118561267852783 |
| keywords[1].display_name | Internet privacy |
| keywords[2].id | https://openalex.org/keywords/computer-science |
| keywords[2].score | 0.3887957036495209 |
| keywords[2].display_name | Computer science |
| keywords[3].id | https://openalex.org/keywords/psychology |
| keywords[3].score | 0.37507903575897217 |
| keywords[3].display_name | Psychology |
| keywords[4].id | https://openalex.org/keywords/linguistics |
| keywords[4].score | 0.18385833501815796 |
| keywords[4].display_name | Linguistics |
| keywords[5].id | https://openalex.org/keywords/philosophy |
| keywords[5].score | 0.10057279467582703 |
| keywords[5].display_name | Philosophy |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2410.06734 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2410.06734 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2410.06734 |
| locations[1].id | doi:10.48550/arxiv.2410.06734 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2410.06734 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5016502904 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-7105-014X |
| authorships[0].author.display_name | Zhenhui Ye |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Ye, Zhenhui |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5112961844 |
| authorships[1].author.orcid | |
| authorships[1].author.display_name | Tianyun Zhong |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Zhong, Tianyun |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5101499461 |
| authorships[2].author.orcid | https://orcid.org/0000-0001-9360-8653 |
| authorships[2].author.display_name | Yi Ren |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Ren, Yi |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5083611896 |
| authorships[3].author.orcid | https://orcid.org/0009-0007-4158-4689 |
| authorships[3].author.display_name | Ziyue Karen Jiang |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Jiang, Ziyue |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5100601282 |
| authorships[4].author.orcid | https://orcid.org/0000-0001-7670-2971 |
| authorships[4].author.display_name | Jiawei Huang |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Huang, Jiawei |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5011787904 |
| authorships[5].author.orcid | https://orcid.org/0000-0002-1695-9000 |
| authorships[5].author.display_name | Rongjie Huang |
| authorships[5].author_position | middle |
| authorships[5].raw_author_name | Huang, Rongjie |
| authorships[5].is_corresponding | False |
| authorships[6].author.id | https://openalex.org/A5065126806 |
| authorships[6].author.orcid | https://orcid.org/0000-0002-9905-3887 |
| authorships[6].author.display_name | Jinglin Liu |
| authorships[6].author_position | middle |
| authorships[6].raw_author_name | Liu, Jinglin |
| authorships[6].is_corresponding | False |
| authorships[7].author.id | https://openalex.org/A5043581249 |
| authorships[7].author.orcid | https://orcid.org/0009-0003-3024-9624 |
| authorships[7].author.display_name | Jinzheng He |
| authorships[7].author_position | middle |
| authorships[7].raw_author_name | He, Jinzheng |
| authorships[7].is_corresponding | False |
| authorships[8].author.id | https://openalex.org/A5100374116 |
| authorships[8].author.orcid | https://orcid.org/0000-0002-4120-0151 |
| authorships[8].author.display_name | Chen Zhang |
| authorships[8].author_position | middle |
| authorships[8].raw_author_name | Zhang, Chen |
| authorships[8].is_corresponding | False |
| authorships[9].author.id | https://openalex.org/A5012728201 |
| authorships[9].author.orcid | https://orcid.org/0000-0002-9516-5611 |
| authorships[9].author.display_name | Zehan Wang |
| authorships[9].author_position | middle |
| authorships[9].raw_author_name | Wang, Zehan |
| authorships[9].is_corresponding | False |
| authorships[10].author.id | https://openalex.org/A5111363105 |
| authorships[10].author.orcid | |
| authorships[10].author.display_name | Xize Chen |
| authorships[10].author_position | middle |
| authorships[10].raw_author_name | Chen, Xize |
| authorships[10].is_corresponding | False |
| authorships[11].author.id | https://openalex.org/A5100446849 |
| authorships[11].author.orcid | https://orcid.org/0000-0002-6096-9943 |
| authorships[11].author.display_name | Xiang Yin |
| authorships[11].author_position | middle |
| authorships[11].raw_author_name | Yin, Xiang |
| authorships[11].is_corresponding | False |
| authorships[12].author.id | https://openalex.org/A5079260216 |
| authorships[12].author.orcid | https://orcid.org/0000-0001-6121-0384 |
| authorships[12].author.display_name | Zhou Zhao |
| authorships[12].author_position | last |
| authorships[12].raw_author_name | Zhao, Zhou |
| authorships[12].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2410.06734 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T11448 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9980999827384949 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Face recognition and analysis |
| related_works | https://openalex.org/W4391375266, https://openalex.org/W2748952813, https://openalex.org/W2390279801, https://openalex.org/W2358668433, https://openalex.org/W4396701345, https://openalex.org/W2376932109, https://openalex.org/W2001405890, https://openalex.org/W4396696052, https://openalex.org/W4402327032, https://openalex.org/W2382290278 |
| cited_by_count | 1 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 1 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2410.06734 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2410.06734 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2410.06734 |
| primary_location.id | pmh:oai:arXiv.org:2410.06734 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2410.06734 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2410.06734 |
| publication_date | 2024-10-09 |
| publication_year | 2024 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-to-position index of the abstract; content duplicates the abstract shown above) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 13 |
| citation_normalized_percentile |
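The abstract_inverted_index entry in the payload stores the abstract as a word-to-positions mapping rather than running text. A minimal sketch of reconstructing the plain abstract from that mapping (illustrative helper, not part of any OpenAlex client library):

```python
# Minimal sketch: rebuild plain text from an OpenAlex abstract_inverted_index,
# which maps each token to the list of word positions where it occurs.
def reconstruct_abstract(inverted_index: dict[str, list[int]]) -> str:
    pairs = [(pos, token)
             for token, positions in inverted_index.items()
             for pos in positions]
    return " ".join(token for _, token in sorted(pairs))


# Tiny example in the same shape as the payload's index (first five tokens).
example = {"Talking": [0], "face": [1], "generation": [2], "(TFG)": [3], "aims": [4]}
print(reconstruct_abstract(example))  # -> Talking face generation (TFG) aims
```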