Sequential Modeling by Leveraging Non-Uniform Distribution of Speech Emotion
Wei-Cheng Lin and Carlos Busso · 2023 · Open Access · DOI: https://doi.org/10.1109/taslp.2023.3244527
The expression and perception of human emotions are not uniformly distributed over time. Therefore, tracking local changes of emotion within a segment can lead to better models for speech emotion recognition (SER), even when the task is to provide a sentence-level prediction of the emotional content. A challenge to exploring local emotional changes within a sentence is that most existing emotional corpora only provide sentence-level annotations (i.e., one label per sentence). This labeling approach is not appropriate for leveraging the dynamic emotional trends within a sentence. We propose a framework that splits a sentence into a fixed number of chunks, generating chunk-level emotional patterns. The approach relies on emotion rankers to unveil the emotional pattern within a sentence, creating continuous emotional curves. Our approach trains the sentence-level SER model with a sequence-to-sequence formulation by leveraging the retrieved emotional curves. The proposed method achieves the best concordance correlation coefficient (CCC) prediction performance for arousal (0.7120), valence (0.3125), and dominance (0.6324) on the MSP-Podcast corpus. In addition, we validate the approach with experiments on the IEMOCAP and MSP-IMPROV databases. We further compare the retrieved curves with time-continuous emotional traces. The evaluation demonstrates that these retrieved chunk-label curves can effectively capture emotional trends within a sentence, displaying a time-consistency property that is similar to time-continuous traces annotated by human listeners. The proposed SER model learns meaningful, complementary, local information that contributes to the improvement of sentence-level predictions of emotional attributes.
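Two ingredients recur throughout the abstract: segmenting each sentence into a fixed number of chunks, and scoring attribute predictions with the concordance correlation coefficient (CCC). The following is a minimal Python sketch of both, not the authors' implementation; the feature layout and the chunk count are illustrative assumptions.

```python
import numpy as np

def split_into_chunks(features: np.ndarray, num_chunks: int) -> list[np.ndarray]:
    """Split a (num_frames, feat_dim) feature sequence into a fixed
    number of roughly equal chunks along the time axis."""
    return np.array_split(features, num_chunks, axis=0)

def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Concordance correlation coefficient (Lin, 1989): penalizes both
    weak correlation and shifts in mean/scale between the two series."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2.0 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)

# Example: chunk a 300-frame utterance (chunk count of 10 is illustrative,
# not the paper's setting) and score noisy sentence-level predictions.
rng = np.random.default_rng(0)
utterance = rng.normal(size=(300, 40))     # 300 frames, 40-dim features
chunks = split_into_chunks(utterance, 10)  # 10 chunk-level segments
labels = rng.normal(size=100)
print(f"CCC: {ccc(labels, labels + 0.1 * rng.normal(size=100)):.3f}")
```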
- Type: article
- Language: en
- Landing Page: https://doi.org/10.1109/taslp.2023.3244527
- PDF: https://ieeexplore.ieee.org/ielx7/6570655/9970249/10043704.pdf
- OA Status: bronze
- Cited By: 8
- References: 69
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4322707285
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4322707285 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.1109/taslp.2023.3244527 (Digital Object Identifier)
- Title: Sequential Modeling by Leveraging Non-Uniform Distribution of Speech Emotion (work title)
- Type: article (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2023
- Publication date: 2023-01-01 (full publication date if available)
- Authors: Wei-Cheng Lin, Carlos Busso (in order)
- Landing page: https://doi.org/10.1109/taslp.2023.3244527 (publisher landing page)
- PDF URL: https://ieeexplore.ieee.org/ielx7/6570655/9970249/10043704.pdf (direct link to the full-text PDF)
- Open access: Yes (a free full text is available)
- OA status: bronze (open-access status per OpenAlex)
- OA URL: https://ieeexplore.ieee.org/ielx7/6570655/9970249/10043704.pdf (direct OA link when available)
- Concepts: Sentence, Computer science, Expressed emotion, Natural language processing, Speech recognition, Artificial intelligence, Emotional expression, Psychology, Cognitive psychology, Psychiatry (top concepts attached by OpenAlex)
- Cited by: 8 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 2, 2024: 6
- References (count): 69 (works referenced by this work)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
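Every field above, and the full payload below, is flattened from the public OpenAlex work record. A minimal Python sketch of fetching the same record; the endpoint and field names follow the documented OpenAlex REST API:

```python
import requests

# Public OpenAlex REST API; no key required.
WORK_ID = "W4322707285"
resp = requests.get(f"https://api.openalex.org/works/{WORK_ID}", timeout=30)
resp.raise_for_status()
work = resp.json()

print(work["display_name"])           # work title
print(work["publication_year"])       # 2023
print(work["cited_by_count"])         # citation count
print(work["open_access"]["oa_url"])  # direct OA link, if any
```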
Full payload

| Field | Value |
|---|---|
| id | https://openalex.org/W4322707285 |
| doi | https://doi.org/10.1109/taslp.2023.3244527 |
| ids.doi | https://doi.org/10.1109/taslp.2023.3244527 |
| ids.openalex | https://openalex.org/W4322707285 |
| fwci | 3.33312171 |
| type | article |
| title | Sequential Modeling by Leveraging Non-Uniform Distribution of Speech Emotion |
| awards[0].id | https://openalex.org/G7065751278 |
| awards[0].funder_id | https://openalex.org/F4320335353 |
| awards[0].display_name | |
| awards[0].funder_award_id | CNS-2016719 |
| awards[0].funder_display_name | National Science Foundation of Sri Lanka |
| biblio.issue | |
| biblio.volume | 31 |
| biblio.last_page | 1099 |
| biblio.first_page | 1087 |
| topics[0].id | https://openalex.org/T10667 |
| topics[0].field.id | https://openalex.org/fields/32 |
| topics[0].field.display_name | Psychology |
| topics[0].score | 0.9998999834060669 |
| topics[0].domain.id | https://openalex.org/domains/2 |
| topics[0].domain.display_name | Social Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/3205 |
| topics[0].subfield.display_name | Experimental and Cognitive Psychology |
| topics[0].display_name | Emotion and Mood Recognition |
| topics[1].id | https://openalex.org/T10664 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9987999796867371 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1702 |
| topics[1].subfield.display_name | Artificial Intelligence |
| topics[1].display_name | Sentiment Analysis and Opinion Mining |
| topics[2].id | https://openalex.org/T10201 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9957000017166138 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1702 |
| topics[2].subfield.display_name | Artificial Intelligence |
| topics[2].display_name | Speech Recognition and Synthesis |
| funders[0].id | https://openalex.org/F4320335353 |
| funders[0].ror | https://ror.org/010xaa060 |
| funders[0].display_name | National Science Foundation of Sri Lanka |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C2777530160 |
| concepts[0].level | 2 |
| concepts[0].score | 0.720287024974823 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q41796 |
| concepts[0].display_name | Sentence |
| concepts[1].id | https://openalex.org/C41008148 |
| concepts[1].level | 0 |
| concepts[1].score | 0.5967459678649902 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[1].display_name | Computer science |
| concepts[2].id | https://openalex.org/C2778143943 |
| concepts[2].level | 2 |
| concepts[2].score | 0.5738113522529602 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q523747 |
| concepts[2].display_name | Expressed emotion |
| concepts[3].id | https://openalex.org/C204321447 |
| concepts[3].level | 1 |
| concepts[3].score | 0.5664839744567871 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q30642 |
| concepts[3].display_name | Natural language processing |
| concepts[4].id | https://openalex.org/C28490314 |
| concepts[4].level | 1 |
| concepts[4].score | 0.5420663356781006 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q189436 |
| concepts[4].display_name | Speech recognition |
| concepts[5].id | https://openalex.org/C154945302 |
| concepts[5].level | 1 |
| concepts[5].score | 0.4983217716217041 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[5].display_name | Artificial intelligence |
| concepts[6].id | https://openalex.org/C143110190 |
| concepts[6].level | 2 |
| concepts[6].score | 0.46725982427597046 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q5373787 |
| concepts[6].display_name | Emotional expression |
| concepts[7].id | https://openalex.org/C15744967 |
| concepts[7].level | 0 |
| concepts[7].score | 0.14716371893882751 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q9418 |
| concepts[7].display_name | Psychology |
| concepts[8].id | https://openalex.org/C180747234 |
| concepts[8].level | 1 |
| concepts[8].score | 0.11489468812942505 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q23373 |
| concepts[8].display_name | Cognitive psychology |
| concepts[9].id | https://openalex.org/C118552586 |
| concepts[9].level | 1 |
| concepts[9].score | 0.0 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q7867 |
| concepts[9].display_name | Psychiatry |
| keywords[0].id | https://openalex.org/keywords/sentence |
| keywords[0].score | 0.720287024974823 |
| keywords[0].display_name | Sentence |
| keywords[1].id | https://openalex.org/keywords/computer-science |
| keywords[1].score | 0.5967459678649902 |
| keywords[1].display_name | Computer science |
| keywords[2].id | https://openalex.org/keywords/expressed-emotion |
| keywords[2].score | 0.5738113522529602 |
| keywords[2].display_name | Expressed emotion |
| keywords[3].id | https://openalex.org/keywords/natural-language-processing |
| keywords[3].score | 0.5664839744567871 |
| keywords[3].display_name | Natural language processing |
| keywords[4].id | https://openalex.org/keywords/speech-recognition |
| keywords[4].score | 0.5420663356781006 |
| keywords[4].display_name | Speech recognition |
| keywords[5].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[5].score | 0.4983217716217041 |
| keywords[5].display_name | Artificial intelligence |
| keywords[6].id | https://openalex.org/keywords/emotional-expression |
| keywords[6].score | 0.46725982427597046 |
| keywords[6].display_name | Emotional expression |
| keywords[7].id | https://openalex.org/keywords/psychology |
| keywords[7].score | 0.14716371893882751 |
| keywords[7].display_name | Psychology |
| keywords[8].id | https://openalex.org/keywords/cognitive-psychology |
| keywords[8].score | 0.11489468812942505 |
| keywords[8].display_name | Cognitive psychology |
| language | en |
| locations[0].id | doi:10.1109/taslp.2023.3244527 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4210169297 |
| locations[0].source.issn | 2329-9290, 2329-9304 |
| locations[0].source.type | journal |
| locations[0].source.is_oa | False |
| locations[0].source.issn_l | 2329-9290 |
| locations[0].source.is_core | True |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | IEEE/ACM Transactions on Audio Speech and Language Processing |
| locations[0].source.host_organization | https://openalex.org/P4310319808 |
| locations[0].source.host_organization_name | Institute of Electrical and Electronics Engineers |
| locations[0].source.host_organization_lineage | https://openalex.org/P4310319808 |
| locations[0].source.host_organization_lineage_names | Institute of Electrical and Electronics Engineers |
| locations[0].license | |
| locations[0].pdf_url | https://ieeexplore.ieee.org/ielx7/6570655/9970249/10043704.pdf |
| locations[0].version | publishedVersion |
| locations[0].raw_type | journal-article |
| locations[0].license_id | |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | IEEE/ACM Transactions on Audio, Speech, and Language Processing |
| locations[0].landing_page_url | https://doi.org/10.1109/taslp.2023.3244527 |
| indexed_in | crossref |
| authorships[0].author.id | https://openalex.org/A5070819601 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-1933-1590 |
| authorships[0].author.display_name | Wei-Cheng Lin |
| authorships[0].countries | US |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I162577319 |
| authorships[0].affiliations[0].raw_affiliation_string | Erik Jonsson School of Engineering & Computer Science, The University of Texas at Dallas, Richardson, TX, USA |
| authorships[0].institutions[0].id | https://openalex.org/I162577319 |
| authorships[0].institutions[0].ror | https://ror.org/049emcs32 |
| authorships[0].institutions[0].type | education |
| authorships[0].institutions[0].lineage | https://openalex.org/I162577319 |
| authorships[0].institutions[0].country_code | US |
| authorships[0].institutions[0].display_name | The University of Texas at Dallas |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Wei-Cheng Lin |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | Erik Jonsson School of Engineering & Computer Science, The University of Texas at Dallas, Richardson, TX, USA |
| authorships[1].author.id | https://openalex.org/A5040793194 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-4075-4072 |
| authorships[1].author.display_name | Carlos Busso |
| authorships[1].countries | US |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I162577319 |
| authorships[1].affiliations[0].raw_affiliation_string | Erik Jonsson School of Engineering & Computer Science, The University of Texas at Dallas, Richardson, TX, USA |
| authorships[1].institutions[0].id | https://openalex.org/I162577319 |
| authorships[1].institutions[0].ror | https://ror.org/049emcs32 |
| authorships[1].institutions[0].type | education |
| authorships[1].institutions[0].lineage | https://openalex.org/I162577319 |
| authorships[1].institutions[0].country_code | US |
| authorships[1].institutions[0].display_name | The University of Texas at Dallas |
| authorships[1].author_position | last |
| authorships[1].raw_author_name | Carlos Busso |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | Erik Jonsson School of Engineering & Computer Science, The University of Texas at Dallas, Richardson, TX, USA |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://ieeexplore.ieee.org/ielx7/6570655/9970249/10043704.pdf |
| open_access.oa_status | bronze |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Sequential Modeling by Leveraging Non-Uniform Distribution of Speech Emotion |
| has_fulltext | True |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T10667 |
| primary_topic.field.id | https://openalex.org/fields/32 |
| primary_topic.field.display_name | Psychology |
| primary_topic.score | 0.9998999834060669 |
| primary_topic.domain.id | https://openalex.org/domains/2 |
| primary_topic.domain.display_name | Social Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/3205 |
| primary_topic.subfield.display_name | Experimental and Cognitive Psychology |
| primary_topic.display_name | Emotion and Mood Recognition |
| related_works | https://openalex.org/W2035950535, https://openalex.org/W2351555819, https://openalex.org/W2789919619, https://openalex.org/W4283585122, https://openalex.org/W2293457016, https://openalex.org/W3169305685, https://openalex.org/W2351428524, https://openalex.org/W2368779261, https://openalex.org/W1551406738, https://openalex.org/W159132833 |
| cited_by_count | 8 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 2 |
| counts_by_year[1].year | 2024 |
| counts_by_year[1].cited_by_count | 6 |
| locations_count | 1 |
| best_oa_location.id | doi:10.1109/taslp.2023.3244527 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4210169297 |
| best_oa_location.source.issn | 2329-9290, 2329-9304 |
| best_oa_location.source.type | journal |
| best_oa_location.source.is_oa | False |
| best_oa_location.source.issn_l | 2329-9290 |
| best_oa_location.source.is_core | True |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | IEEE/ACM Transactions on Audio Speech and Language Processing |
| best_oa_location.source.host_organization | https://openalex.org/P4310319808 |
| best_oa_location.source.host_organization_name | Institute of Electrical and Electronics Engineers |
| best_oa_location.source.host_organization_lineage | https://openalex.org/P4310319808 |
| best_oa_location.source.host_organization_lineage_names | Institute of Electrical and Electronics Engineers |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://ieeexplore.ieee.org/ielx7/6570655/9970249/10043704.pdf |
| best_oa_location.version | publishedVersion |
| best_oa_location.raw_type | journal-article |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | True |
| best_oa_location.raw_source_name | IEEE/ACM Transactions on Audio, Speech, and Language Processing |
| best_oa_location.landing_page_url | https://doi.org/10.1109/taslp.2023.3244527 |
| primary_location.id | doi:10.1109/taslp.2023.3244527 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4210169297 |
| primary_location.source.issn | 2329-9290, 2329-9304 |
| primary_location.source.type | journal |
| primary_location.source.is_oa | False |
| primary_location.source.issn_l | 2329-9290 |
| primary_location.source.is_core | True |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | IEEE/ACM Transactions on Audio Speech and Language Processing |
| primary_location.source.host_organization | https://openalex.org/P4310319808 |
| primary_location.source.host_organization_name | Institute of Electrical and Electronics Engineers |
| primary_location.source.host_organization_lineage | https://openalex.org/P4310319808 |
| primary_location.source.host_organization_lineage_names | Institute of Electrical and Electronics Engineers |
| primary_location.license | |
| primary_location.pdf_url | https://ieeexplore.ieee.org/ielx7/6570655/9970249/10043704.pdf |
| primary_location.version | publishedVersion |
| primary_location.raw_type | journal-article |
| primary_location.license_id | |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | IEEE/ACM Transactions on Audio, Speech, and Language Processing |
| primary_location.landing_page_url | https://doi.org/10.1109/taslp.2023.3244527 |
| publication_date | 2023-01-01 |
| publication_year | 2023 |
| referenced_works | https://openalex.org/W2550557083, https://openalex.org/W175750906, https://openalex.org/W2342475039, https://openalex.org/W2742542661, https://openalex.org/W6628256265, https://openalex.org/W2161450073, https://openalex.org/W99485238, https://openalex.org/W1489770186, https://openalex.org/W2045528981, https://openalex.org/W3095093565, https://openalex.org/W2132555391, https://openalex.org/W2408520939, https://openalex.org/W2893146414, https://openalex.org/W6733353833, https://openalex.org/W2973181312, https://openalex.org/W2165857685, https://openalex.org/W2899500748, https://openalex.org/W2407578400, https://openalex.org/W2338438311, https://openalex.org/W3095118468, https://openalex.org/W3164582967, https://openalex.org/W2889113107, https://openalex.org/W1785074626, https://openalex.org/W1981950962, https://openalex.org/W1668904664, https://openalex.org/W2117752179, https://openalex.org/W2912663049, https://openalex.org/W2518511005, https://openalex.org/W2885005742, https://openalex.org/W2087618018, https://openalex.org/W2625297138, https://openalex.org/W2748702193, https://openalex.org/W2891588573, https://openalex.org/W2123119128, https://openalex.org/W2000838212, https://openalex.org/W3120709499, https://openalex.org/W2578895956, https://openalex.org/W2153822685, https://openalex.org/W2127141656, https://openalex.org/W2295001676, https://openalex.org/W2900358852, https://openalex.org/W2058787788, https://openalex.org/W6767677336, https://openalex.org/W2163293471, https://openalex.org/W1993008008, https://openalex.org/W2491770404, https://openalex.org/W2026984028, https://openalex.org/W2508783453, https://openalex.org/W2404446881, https://openalex.org/W2673304402, https://openalex.org/W2401417847, https://openalex.org/W2525412388, https://openalex.org/W2146334809, https://openalex.org/W2144005487, https://openalex.org/W2085662862, https://openalex.org/W2131406347, https://openalex.org/W6631190155, https://openalex.org/W3097352614, https://openalex.org/W6780218876, https://openalex.org/W3209059054, https://openalex.org/W4361994820, https://openalex.org/W3162811262, https://openalex.org/W4284960025, https://openalex.org/W2979826702, https://openalex.org/W3162840325, https://openalex.org/W2346454595, https://openalex.org/W2028899742, https://openalex.org/W2583743457, https://openalex.org/W2971704668 |
| referenced_works_count | 69 |
| abstract_inverted_index | token-to-position index of the abstract (plain text shown at the top of this page) |
| cited_by_percentile_year.max | 98 |
| cited_by_percentile_year.min | 95 |
| countries_distinct_count | 1 |
| institutions_distinct_count | 2 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/4 |
| sustainable_development_goals[0].score | 0.6100000143051147 |
| sustainable_development_goals[0].display_name | Quality Education |
| citation_normalized_percentile.value | 0.87882656 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | True |
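OpenAlex distributes abstracts as the `abstract_inverted_index` field above, a mapping from each token to the positions where it occurs, rather than as plain text. A small sketch, assuming the usual OpenAlex shape, that rebuilds the readable string:

```python
def reconstruct_abstract(inverted_index: dict[str, list[int]]) -> str:
    """Rebuild the abstract from OpenAlex's token -> positions mapping."""
    by_position: dict[int, str] = {}
    for token, positions in inverted_index.items():
        for pos in positions:
            by_position[pos] = token
    return " ".join(by_position[pos] for pos in sorted(by_position))

# e.g. reconstruct_abstract(work["abstract_inverted_index"]) with the
# record fetched in the API sketch earlier on this page.
```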