Improving Speech Related Facial Action Unit Recognition by Audiovisual Information Fusion
· 2017
· Open Access
· DOI: https://doi.org/10.48550/arxiv.1706.10197
It is challenging to recognize facial action units (AUs) from spontaneous facial displays, especially when they are accompanied by speech. The major reason is that, in current practice, information is extracted from a single source, i.e., the visual channel. However, facial activity is highly correlated with voice in natural human communication. Instead of solely improving visual observations, this paper presents a novel audiovisual fusion framework that makes the best use of visual and acoustic cues in recognizing speech-related facial AUs. In particular, a dynamic Bayesian network (DBN) is employed to explicitly model the semantic and dynamic physiological relationships between AUs and phonemes, as well as measurement uncertainty. A pilot audiovisual AU-coded database has been collected to evaluate the proposed framework; it consists of a "clean" subset containing frontal faces under well-controlled circumstances and a challenging subset with large head movements and occlusions. Experiments on this database demonstrate that the proposed framework yields a significant improvement in recognizing speech-related AUs compared to state-of-the-art visual-based methods, especially for those AUs whose visual observations are impaired during speech. More importantly, it also outperforms feature-level fusion methods by explicitly modeling and exploiting the physiological relationships between AUs and phonemes.
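The intuition behind the fusion can be illustrated with a toy calculation. This is not the authors' DBN (which also models temporal dynamics and AU–phoneme semantics); it is a minimal static Bayesian fusion sketch with made-up likelihoods, showing why an acoustic cue can rescue an AU whose visual evidence is degraded during speech:

```python
# Toy probabilistic audiovisual fusion for AU detection (hypothetical numbers).
# Assumes the visual and acoustic cues are conditionally independent
# given the AU state (naive Bayes), which the paper's DBN does not require.

def fuse(prior_au, p_vis_given_au, p_vis_given_not_au,
         p_aud_given_au, p_aud_given_not_au):
    """Posterior P(AU present | visual evidence, acoustic evidence)."""
    joint_au = prior_au * p_vis_given_au * p_aud_given_au
    joint_not = (1 - prior_au) * p_vis_given_not_au * p_aud_given_not_au
    return joint_au / (joint_au + joint_not)

# Visual-only posterior: audio terms set to 1.0, i.e. uninformative.
visual_only = fuse(0.3, 0.55, 0.45, 1.0, 1.0)
# Audiovisual posterior: a phoneme strongly associated with the AU is heard.
audiovisual = fuse(0.3, 0.55, 0.45, 0.9, 0.2)

print(f"visual only: {visual_only:.3f}")   # weak visual cue -> low posterior
print(f"audiovisual: {audiovisual:.3f}")   # acoustic cue lifts the posterior
```

With these illustrative numbers the visual-only posterior stays near the prior, while adding the acoustic likelihood pushes it well above 0.5, mirroring the paper's finding that audio helps most for AUs whose visual observations are impaired during speech.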
Related Topics
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/1706.10197
- PDF: https://arxiv.org/pdf/1706.10197
- OA Status: green
- Related Works: 10
- OpenAlex ID: https://openalex.org/W2726173167
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W2726173167 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.1706.10197 (Digital Object Identifier)
- Title: Improving Speech Related Facial Action Unit Recognition by Audiovisual Information Fusion
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2017
- Publication date: 2017-06-29
- Authors: Zibo Meng, Shizhong Han, Ping Liu, Yan Tong (in order)
- Landing page: https://arxiv.org/abs/1706.10197
- PDF URL: https://arxiv.org/pdf/1706.10197 (direct link to full-text PDF)
- Open access: Yes
- OA status: green (per OpenAlex)
- OA URL: https://arxiv.org/pdf/1706.10197
- Concepts: Computer science, Dynamic Bayesian network, Speech recognition, Feature (linguistics), Facial expression, Artificial intelligence, Action (physics), Face (sociological concept), Pattern recognition (psychology), Bayesian probability, Social science, Sociology, Linguistics, Philosophy, Physics, Quantum mechanics
- Cited by: 0 (total citation count in OpenAlex)
- Related works: 10 (algorithmically related by OpenAlex)
Full payload
| id | https://openalex.org/W2726173167 |
|---|---|
| doi | https://doi.org/10.48550/arxiv.1706.10197 |
| ids.doi | https://doi.org/10.48550/arxiv.1706.10197 |
| ids.mag | 2726173167 |
| ids.openalex | https://openalex.org/W2726173167 |
| fwci | 0.0 |
| type | preprint |
| title | Improving Speech Related Facial Action Unit Recognition by Audiovisual Information Fusion |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10860 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9984999895095825 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1711 |
| topics[0].subfield.display_name | Signal Processing |
| topics[0].display_name | Speech and Audio Processing |
| topics[1].id | https://openalex.org/T10667 |
| topics[1].field.id | https://openalex.org/fields/32 |
| topics[1].field.display_name | Psychology |
| topics[1].score | 0.9853000044822693 |
| topics[1].domain.id | https://openalex.org/domains/2 |
| topics[1].domain.display_name | Social Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/3205 |
| topics[1].subfield.display_name | Experimental and Cognitive Psychology |
| topics[1].display_name | Emotion and Mood Recognition |
| topics[2].id | https://openalex.org/T11448 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9722999930381775 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Face recognition and analysis |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.7748974561691284 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C82142266 |
| concepts[1].level | 3 |
| concepts[1].score | 0.6857552528381348 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q3456604 |
| concepts[1].display_name | Dynamic Bayesian network |
| concepts[2].id | https://openalex.org/C28490314 |
| concepts[2].level | 1 |
| concepts[2].score | 0.6735432147979736 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q189436 |
| concepts[2].display_name | Speech recognition |
| concepts[3].id | https://openalex.org/C2776401178 |
| concepts[3].level | 2 |
| concepts[3].score | 0.5938425064086914 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q12050496 |
| concepts[3].display_name | Feature (linguistics) |
| concepts[4].id | https://openalex.org/C195704467 |
| concepts[4].level | 2 |
| concepts[4].score | 0.5156594514846802 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q327968 |
| concepts[4].display_name | Facial expression |
| concepts[5].id | https://openalex.org/C154945302 |
| concepts[5].level | 1 |
| concepts[5].score | 0.5143245458602905 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[5].display_name | Artificial intelligence |
| concepts[6].id | https://openalex.org/C2780791683 |
| concepts[6].level | 2 |
| concepts[6].score | 0.4786421060562134 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q846785 |
| concepts[6].display_name | Action (physics) |
| concepts[7].id | https://openalex.org/C2779304628 |
| concepts[7].level | 2 |
| concepts[7].score | 0.41932213306427 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q3503480 |
| concepts[7].display_name | Face (sociological concept) |
| concepts[8].id | https://openalex.org/C153180895 |
| concepts[8].level | 2 |
| concepts[8].score | 0.37939321994781494 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q7148389 |
| concepts[8].display_name | Pattern recognition (psychology) |
| concepts[9].id | https://openalex.org/C107673813 |
| concepts[9].level | 2 |
| concepts[9].score | 0.3349160850048065 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q812534 |
| concepts[9].display_name | Bayesian probability |
| concepts[10].id | https://openalex.org/C36289849 |
| concepts[10].level | 1 |
| concepts[10].score | 0.0 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q34749 |
| concepts[10].display_name | Social science |
| concepts[11].id | https://openalex.org/C144024400 |
| concepts[11].level | 0 |
| concepts[11].score | 0.0 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q21201 |
| concepts[11].display_name | Sociology |
| concepts[12].id | https://openalex.org/C41895202 |
| concepts[12].level | 1 |
| concepts[12].score | 0.0 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q8162 |
| concepts[12].display_name | Linguistics |
| concepts[13].id | https://openalex.org/C138885662 |
| concepts[13].level | 0 |
| concepts[13].score | 0.0 |
| concepts[13].wikidata | https://www.wikidata.org/wiki/Q5891 |
| concepts[13].display_name | Philosophy |
| concepts[14].id | https://openalex.org/C121332964 |
| concepts[14].level | 0 |
| concepts[14].score | 0.0 |
| concepts[14].wikidata | https://www.wikidata.org/wiki/Q413 |
| concepts[14].display_name | Physics |
| concepts[15].id | https://openalex.org/C62520636 |
| concepts[15].level | 1 |
| concepts[15].score | 0.0 |
| concepts[15].wikidata | https://www.wikidata.org/wiki/Q944 |
| concepts[15].display_name | Quantum mechanics |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.7748974561691284 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/dynamic-bayesian-network |
| keywords[1].score | 0.6857552528381348 |
| keywords[1].display_name | Dynamic Bayesian network |
| keywords[2].id | https://openalex.org/keywords/speech-recognition |
| keywords[2].score | 0.6735432147979736 |
| keywords[2].display_name | Speech recognition |
| keywords[3].id | https://openalex.org/keywords/feature |
| keywords[3].score | 0.5938425064086914 |
| keywords[3].display_name | Feature (linguistics) |
| keywords[4].id | https://openalex.org/keywords/facial-expression |
| keywords[4].score | 0.5156594514846802 |
| keywords[4].display_name | Facial expression |
| keywords[5].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[5].score | 0.5143245458602905 |
| keywords[5].display_name | Artificial intelligence |
| keywords[6].id | https://openalex.org/keywords/action |
| keywords[6].score | 0.4786421060562134 |
| keywords[6].display_name | Action (physics) |
| keywords[7].id | https://openalex.org/keywords/face |
| keywords[7].score | 0.41932213306427 |
| keywords[7].display_name | Face (sociological concept) |
| keywords[8].id | https://openalex.org/keywords/pattern-recognition |
| keywords[8].score | 0.37939321994781494 |
| keywords[8].display_name | Pattern recognition (psychology) |
| keywords[9].id | https://openalex.org/keywords/bayesian-probability |
| keywords[9].score | 0.3349160850048065 |
| keywords[9].display_name | Bayesian probability |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:1706.10197 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/1706.10197 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/1706.10197 |
| locations[1].id | doi:10.48550/arxiv.1706.10197 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.1706.10197 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5036993002 |
| authorships[0].author.orcid | https://orcid.org/0000-0001-7299-7290 |
| authorships[0].author.display_name | Zibo Meng |
| authorships[0].affiliations[0].raw_affiliation_string | AI Research Center, Innopeak Technology Inc., Palo Alto, CA, USA |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Zibo Meng |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | AI Research Center, Innopeak Technology Inc., Palo Alto, CA, USA |
| authorships[1].author.id | https://openalex.org/A5071921121 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-3381-6992 |
| authorships[1].author.display_name | Shizhong Han |
| authorships[1].countries | US |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I4210149148 |
| authorships[1].affiliations[0].raw_affiliation_string | U.S. Research and Development Center, 12 Sigma Technologies, San Diego, CA, USA |
| authorships[1].institutions[0].id | https://openalex.org/I4210149148 |
| authorships[1].institutions[0].ror | https://ror.org/03nxga177 |
| authorships[1].institutions[0].type | company |
| authorships[1].institutions[0].lineage | https://openalex.org/I4210149148 |
| authorships[1].institutions[0].country_code | US |
| authorships[1].institutions[0].display_name | Sigma Technologies (United States) |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Shizhong Han |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | U.S. Research and Development Center, 12 Sigma Technologies, San Diego, CA, USA |
| authorships[2].author.id | https://openalex.org/A5100442358 |
| authorships[2].author.orcid | https://orcid.org/0000-0002-3170-3783 |
| authorships[2].author.display_name | Ping Liu |
| authorships[2].affiliations[0].raw_affiliation_string | Big Data Group, U.S. Research Center, JD Inc., Santa Clara, CA, USA |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Ping Liu |
| authorships[2].is_corresponding | False |
| authorships[2].raw_affiliation_strings | Big Data Group, U.S. Research Center, JD Inc., Santa Clara, CA, USA |
| authorships[3].author.id | https://openalex.org/A5036764694 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-5552-0199 |
| authorships[3].author.display_name | Yan Tong |
| authorships[3].countries | US |
| authorships[3].affiliations[0].institution_ids | https://openalex.org/I155781252 |
| authorships[3].affiliations[0].raw_affiliation_string | Department of Computer Science and Engineering, University of South Carolina, Columbia, SC, USA |
| authorships[3].institutions[0].id | https://openalex.org/I155781252 |
| authorships[3].institutions[0].ror | https://ror.org/02b6qw903 |
| authorships[3].institutions[0].type | education |
| authorships[3].institutions[0].lineage | https://openalex.org/I155781252 |
| authorships[3].institutions[0].country_code | US |
| authorships[3].institutions[0].display_name | University of South Carolina |
| authorships[3].author_position | last |
| authorships[3].raw_author_name | Yan Tong |
| authorships[3].is_corresponding | False |
| authorships[3].raw_affiliation_strings | Department of Computer Science and Engineering, University of South Carolina, Columbia, SC, USA |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/1706.10197 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Improving Speech Related Facial Action Unit Recognition by Audiovisual Information Fusion |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10860 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9984999895095825 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1711 |
| primary_topic.subfield.display_name | Signal Processing |
| primary_topic.display_name | Speech and Audio Processing |
| related_works | https://openalex.org/W1964038743, https://openalex.org/W2204775314, https://openalex.org/W2108579152, https://openalex.org/W695875, https://openalex.org/W2530648058, https://openalex.org/W2119218276, https://openalex.org/W3128072696, https://openalex.org/W2038391506, https://openalex.org/W1604786655, https://openalex.org/W2132337154 |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:1706.10197 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/1706.10197 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/1706.10197 |
| primary_location.id | pmh:oai:arXiv.org:1706.10197 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/1706.10197 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/1706.10197 |
| publication_date | 2017-06-29 |
| publication_year | 2017 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word → position map for the abstract; omitted here — the abstract is reproduced in full above) |
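OpenAlex stores abstracts as an inverted index, a mapping from each token to the list of word positions where it occurs, rather than as plain text. Rebuilding the readable abstract is just a matter of sorting tokens by position. A minimal sketch, using a small hand-made index (not the full index from this payload):

```python
# Rebuild a plain-text abstract from an OpenAlex-style abstract_inverted_index.
# The index maps each token to the word positions where it appears.

def reconstruct_abstract(inverted_index):
    # Collect (position, token) pairs, one per occurrence of each token.
    positions = []
    for token, idxs in inverted_index.items():
        for i in idxs:
            positions.append((i, token))
    # Sort by word position and join the tokens back into running text.
    return " ".join(token for _, token in sorted(positions))

# Tiny hypothetical example covering the opening words of this abstract:
index = {"It": [0], "is": [1], "challenging": [2], "to": [3], "recognize": [4]}
print(reconstruct_abstract(index))  # -> "It is challenging to recognize"
```

The same helper works on the real `abstract_inverted_index` object returned by the OpenAlex Works API, since repeated tokens simply contribute multiple positions.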
| cited_by_percentile_year | |
| countries_distinct_count | 1 |
| institutions_distinct_count | 4 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/16 |
| sustainable_development_goals[0].score | 0.6100000143051147 |
| sustainable_development_goals[0].display_name | Peace, Justice and strong institutions |
| citation_normalized_percentile |