Target Speaker Lipreading by Audio-Visual Self-Distillation Pretraining and Speaker Adaptation

2025 · Open Access · DOI: https://doi.org/10.48550/arxiv.2502.05758
Lipreading is an important technique for facilitating human-computer interaction in noisy environments. Our previously developed self-supervised learning method, AV2vec, which leverages multimodal self-distillation, has demonstrated promising performance in speaker-independent lipreading on the English LRS3 dataset. However, AV2vec faces challenges such as high training costs and a potential scarcity of audio-visual data for lipreading in languages other than English, such as Chinese. Additionally, most studies concentrate on speaker-independent lipreading models, which struggle to account for the substantial variation in speaking styles across different speakers. To address these issues, we propose a comprehensive approach. First, we investigate cross-lingual transfer learning, adapting a pre-trained AV2vec model from a source language and optimizing it for the lipreading task in a target language. Second, we enhance the accuracy of lipreading for specific target speakers through a speaker adaptation strategy, which has not been extensively explored in previous research. Third, after analyzing the complementary performance of lipreading with lip region-of-interest (ROI) and face inputs, we introduce a model ensembling strategy that integrates both, significantly boosting model performance. Our method achieved a character error rate (CER) of 77.3% on the evaluation set of the ChatCLR dataset, which is lower than the top result from the 2024 Chat-scenario Chinese Lipreading Challenge.
Related Topics
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2502.05758
- PDF: https://arxiv.org/pdf/2502.05758
- OA Status: green
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4407385333
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4407385333 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2502.05758 (Digital Object Identifier)
- Title: Target Speaker Lipreading by Audio-Visual Self-Distillation Pretraining and Speaker Adaptation
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025
- Publication date: 2025-02-09
- Authors: Jing-Xuan Zhang, Tingzhi Mao, Longjiang Guo, Jin Li, Lichen Zhang (in order)
- Landing page: https://arxiv.org/abs/2502.05758
- PDF URL: https://arxiv.org/pdf/2502.05758 (direct link to full text PDF)
- Open access: yes (a free full text is available)
- OA status: green (per OpenAlex)
- OA URL: https://arxiv.org/pdf/2502.05758
- Concepts (top fields/topics attached by OpenAlex): Speech recognition, Adaptation (eye), Audio visual, Speaker recognition, Speaker diarisation, Voice analysis, Distillation, Computer science, Psychology, Multimedia, Neuroscience, Organic chemistry, Chemistry
- Cited by: 0 (total citation count in OpenAlex)
- Related works (count): 10 (works algorithmically related by OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4407385333 |
| doi | https://doi.org/10.48550/arxiv.2502.05758 |
| ids.doi | https://doi.org/10.48550/arxiv.2502.05758 |
| ids.openalex | https://openalex.org/W4407385333 |
| fwci | 0.0 |
| type | preprint |
| title | Target Speaker Lipreading by Audio-Visual Self-Distillation Pretraining and Speaker Adaptation |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10860 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.8610000014305115 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1711 |
| topics[0].subfield.display_name | Signal Processing |
| topics[0].display_name | Speech and Audio Processing |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C28490314 |
| concepts[0].level | 1 |
| concepts[0].score | 0.701496422290802 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q189436 |
| concepts[0].display_name | Speech recognition |
| concepts[1].id | https://openalex.org/C139807058 |
| concepts[1].level | 2 |
| concepts[1].score | 0.5954622030258179 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q352374 |
| concepts[1].display_name | Adaptation (eye) |
| concepts[2].id | https://openalex.org/C3017588708 |
| concepts[2].level | 2 |
| concepts[2].score | 0.5301830172538757 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q758901 |
| concepts[2].display_name | Audio visual |
| concepts[3].id | https://openalex.org/C133892786 |
| concepts[3].level | 2 |
| concepts[3].score | 0.4921573996543884 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q1145189 |
| concepts[3].display_name | Speaker recognition |
| concepts[4].id | https://openalex.org/C149838564 |
| concepts[4].level | 3 |
| concepts[4].score | 0.48902592062950134 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q7574248 |
| concepts[4].display_name | Speaker diarisation |
| concepts[5].id | https://openalex.org/C182964821 |
| concepts[5].level | 2 |
| concepts[5].score | 0.48400136828422546 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q7939498 |
| concepts[5].display_name | Voice analysis |
| concepts[6].id | https://openalex.org/C204030448 |
| concepts[6].level | 2 |
| concepts[6].score | 0.4599752426147461 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q101017 |
| concepts[6].display_name | Distillation |
| concepts[7].id | https://openalex.org/C41008148 |
| concepts[7].level | 0 |
| concepts[7].score | 0.4500451982021332 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[7].display_name | Computer science |
| concepts[8].id | https://openalex.org/C15744967 |
| concepts[8].level | 0 |
| concepts[8].score | 0.3824290335178375 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q9418 |
| concepts[8].display_name | Psychology |
| concepts[9].id | https://openalex.org/C49774154 |
| concepts[9].level | 1 |
| concepts[9].score | 0.06980624794960022 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q131765 |
| concepts[9].display_name | Multimedia |
| concepts[10].id | https://openalex.org/C169760540 |
| concepts[10].level | 1 |
| concepts[10].score | 0.06118118762969971 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q207011 |
| concepts[10].display_name | Neuroscience |
| concepts[11].id | https://openalex.org/C178790620 |
| concepts[11].level | 1 |
| concepts[11].score | 0.0 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q11351 |
| concepts[11].display_name | Organic chemistry |
| concepts[12].id | https://openalex.org/C185592680 |
| concepts[12].level | 0 |
| concepts[12].score | 0.0 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q2329 |
| concepts[12].display_name | Chemistry |
| keywords[0].id | https://openalex.org/keywords/speech-recognition |
| keywords[0].score | 0.701496422290802 |
| keywords[0].display_name | Speech recognition |
| keywords[1].id | https://openalex.org/keywords/adaptation |
| keywords[1].score | 0.5954622030258179 |
| keywords[1].display_name | Adaptation (eye) |
| keywords[2].id | https://openalex.org/keywords/audio-visual |
| keywords[2].score | 0.5301830172538757 |
| keywords[2].display_name | Audio visual |
| keywords[3].id | https://openalex.org/keywords/speaker-recognition |
| keywords[3].score | 0.4921573996543884 |
| keywords[3].display_name | Speaker recognition |
| keywords[4].id | https://openalex.org/keywords/speaker-diarisation |
| keywords[4].score | 0.48902592062950134 |
| keywords[4].display_name | Speaker diarisation |
| keywords[5].id | https://openalex.org/keywords/voice-analysis |
| keywords[5].score | 0.48400136828422546 |
| keywords[5].display_name | Voice analysis |
| keywords[6].id | https://openalex.org/keywords/distillation |
| keywords[6].score | 0.4599752426147461 |
| keywords[6].display_name | Distillation |
| keywords[7].id | https://openalex.org/keywords/computer-science |
| keywords[7].score | 0.4500451982021332 |
| keywords[7].display_name | Computer science |
| keywords[8].id | https://openalex.org/keywords/psychology |
| keywords[8].score | 0.3824290335178375 |
| keywords[8].display_name | Psychology |
| keywords[9].id | https://openalex.org/keywords/multimedia |
| keywords[9].score | 0.06980624794960022 |
| keywords[9].display_name | Multimedia |
| keywords[10].id | https://openalex.org/keywords/neuroscience |
| keywords[10].score | 0.06118118762969971 |
| keywords[10].display_name | Neuroscience |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2502.05758 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | cc-by-nc-nd |
| locations[0].pdf_url | https://arxiv.org/pdf/2502.05758 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | https://openalex.org/licenses/cc-by-nc-nd |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2502.05758 |
| locations[1].id | doi:10.48550/arxiv.2502.05758 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2502.05758 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5073662121 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-4341-3174 |
| authorships[0].author.display_name | Jing-Xuan Zhang |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Zhang, Jing-Xuan |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5082781871 |
| authorships[1].author.orcid | https://orcid.org/0000-0001-9717-643X |
| authorships[1].author.display_name | Tingzhi Mao |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Mao, Tingzhi |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5065527161 |
| authorships[2].author.orcid | https://orcid.org/0000-0003-0720-2505 |
| authorships[2].author.display_name | Longjiang Guo |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Guo, Longjiang |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5100781249 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-0260-3169 |
| authorships[3].author.display_name | Jin Li |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Li, Jin |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5100600344 |
| authorships[4].author.orcid | https://orcid.org/0000-0002-7810-9941 |
| authorships[4].author.display_name | Lichen Zhang |
| authorships[4].author_position | last |
| authorships[4].raw_author_name | Zhang, Lichen |
| authorships[4].is_corresponding | False |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2502.05758 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Target Speaker Lipreading by Audio-Visual Self-Distillation Pretraining and Speaker Adaptation |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10860 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.8610000014305115 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1711 |
| primary_topic.subfield.display_name | Signal Processing |
| primary_topic.display_name | Speech and Audio Processing |
| related_works | https://openalex.org/W2206035908, https://openalex.org/W2149220986, https://openalex.org/W1493012537, https://openalex.org/W4247736853, https://openalex.org/W2162158162, https://openalex.org/W1999004162, https://openalex.org/W2125642021, https://openalex.org/W4406496871, https://openalex.org/W1521049138, https://openalex.org/W2023466863 |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2502.05758 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | cc-by-nc-nd |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2502.05758 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by-nc-nd |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2502.05758 |
| primary_location.id | pmh:oai:arXiv.org:2502.05758 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | cc-by-nc-nd |
| primary_location.pdf_url | https://arxiv.org/pdf/2502.05758 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | https://openalex.org/licenses/cc-by-nc-nd |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2502.05758 |
| publication_date | 2025-02-09 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-to-position map of the abstract text; duplicates the abstract shown above) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 5 |
| citation_normalized_percentile |
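OpenAlex stores abstracts as an inverted index, mapping each word to the token positions where it occurs. A minimal sketch of reconstructing plain text from such a map, using a toy excerpt rather than the full index for this work:

```python
def reconstruct_abstract(inverted_index):
    """Rebuild plain text from an OpenAlex-style abstract_inverted_index."""
    positions = {}
    for word, indices in inverted_index.items():
        for i in indices:
            positions[i] = word  # each token position holds exactly one word
    # Emit words in position order, joined by single spaces.
    return " ".join(positions[i] for i in sorted(positions))

# Toy excerpt mirroring the first entries of this work's index:
sample = {"Lipreading": [0], "is": [1], "an": [2], "important": [3], "technique": [4]}
print(reconstruct_abstract(sample))  # Lipreading is an important technique
```

Running this over the full `abstract_inverted_index` payload yields the abstract text reproduced near the top of this page.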