Layer-Adapted Implicit Distribution Alignment Networks for Cross-Corpus Speech Emotion Recognition
· 2023
· Open Access
· DOI: https://doi.org/10.48550/arxiv.2310.03992
In this paper, we propose a new unsupervised domain adaptation (DA) method called layer-adapted implicit distribution alignment networks (LIDAN) to address the challenge of cross-corpus speech emotion recognition (SER). LIDAN extends our previous ICASSP work, deep implicit distribution alignment networks (DIDAN), whose key contribution lies in the introduction of a novel regularization term called implicit distribution alignment (IDA). This term allows DIDAN trained on source (training) speech samples to remain applicable to predicting emotion labels for target (testing) speech samples, regardless of corpus variance in cross-corpus SER. To further enhance this method, we extend IDA to layer-adapted IDA (LIDA), resulting in LIDAN. This layer-adapted extension consists of three modified IDA terms that consider emotion labels at different levels of granularity. These terms are strategically placed within different fully connected layers in LIDAN, matching the emotion-discriminative ability that increases with layer depth. This arrangement enables LIDAN to learn emotion-discriminative and corpus-invariant features for SER across various corpora more effectively than DIDAN. It is also worth mentioning that, unlike most existing methods that rely on estimating statistical moments to describe pre-assumed explicit distributions, both IDA and LIDA take a different approach: they utilize the idea of target sample reconstruction to directly bridge the feature distribution gap without making assumptions about the distribution type. As a result, DIDAN and LIDAN can be viewed as implicit cross-corpus SER methods. To evaluate LIDAN, we conducted extensive cross-corpus SER experiments on the EmoDB, eNTERFACE, and CASIA corpora. The experimental results demonstrate that LIDAN surpasses recent state-of-the-art explicit unsupervised DA methods in tackling cross-corpus SER tasks.
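The reconstruction-based intuition behind IDA can be illustrated with a minimal sketch. This is my own illustrative example, not the authors' implementation or loss function: it treats each target feature vector as a linear combination of source feature vectors, solves for the combination by least squares, and uses the residual as an alignment penalty, so no parametric form is assumed for either feature distribution.

```python
import numpy as np

def reconstruction_alignment_loss(src: np.ndarray, tgt: np.ndarray) -> float:
    """Hypothetical sketch of an implicit, reconstruction-based alignment
    penalty (assumed for illustration; not the paper's exact IDA term).

    src: (n_src, d) source feature matrix
    tgt: (n_tgt, d) target feature matrix

    Each target row is approximated as a linear combination of source rows;
    the mean squared residual measures how far target features fall outside
    the span of the source features, without assuming a distribution type.
    """
    # Solve min_C || src.T @ C - tgt.T ||_F^2 for C of shape (n_src, n_tgt)
    coeffs, *_ = np.linalg.lstsq(src.T, tgt.T, rcond=None)
    recon = (src.T @ coeffs).T          # (n_tgt, d) reconstructed targets
    return float(np.mean((tgt - recon) ** 2))
```

In a network, a penalty of this kind would be computed on intermediate-layer features and added to the emotion classification loss; the layer-adapted idea in the abstract corresponds to applying such terms at several fully connected layers rather than only at the deepest one.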
Related Topics
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2310.03992 (PDF: https://arxiv.org/pdf/2310.03992)
- OA Status: green
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4394645367
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4394645367 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2310.03992 (Digital Object Identifier)
- Title: Layer-Adapted Implicit Distribution Alignment Networks for Cross-Corpus Speech Emotion Recognition
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2023
- Publication date: 2023-10-06
- Authors: Yan Zhao, Yuan Zong, Jincen Wang, Hailun Lian, Cheng Lu, Li Zhao, Wenming Zheng (in order)
- Landing page: https://arxiv.org/abs/2310.03992 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2310.03992 (direct link to full-text PDF)
- Open access: Yes (a free full text is available)
- OA status: green (per OpenAlex)
- OA URL: https://arxiv.org/pdf/2310.03992 (direct OA link)
- Concepts: Discriminative model, Regularization (linguistics), Artificial intelligence, Computer science, Speech recognition, Natural language processing, Pattern recognition (psychology) (top concepts attached by OpenAlex)
- Cited by: 0 (total citation count in OpenAlex)
- Related works: 10 (other works algorithmically related by OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4394645367 |
| doi | https://doi.org/10.48550/arxiv.2310.03992 |
| ids.doi | https://doi.org/10.48550/arxiv.2310.03992 |
| ids.openalex | https://openalex.org/W4394645367 |
| fwci | |
| type | preprint |
| title | Layer-Adapted Implicit Distribution Alignment Networks for Cross-Corpus Speech Emotion Recognition |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10667 |
| topics[0].field.id | https://openalex.org/fields/32 |
| topics[0].field.display_name | Psychology |
| topics[0].score | 0.9934999942779541 |
| topics[0].domain.id | https://openalex.org/domains/2 |
| topics[0].domain.display_name | Social Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/3205 |
| topics[0].subfield.display_name | Experimental and Cognitive Psychology |
| topics[0].display_name | Emotion and Mood Recognition |
| topics[1].id | https://openalex.org/T10664 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9833999872207642 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1702 |
| topics[1].subfield.display_name | Artificial Intelligence |
| topics[1].display_name | Sentiment Analysis and Opinion Mining |
| topics[2].id | https://openalex.org/T10201 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9733999967575073 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1702 |
| topics[2].subfield.display_name | Artificial Intelligence |
| topics[2].display_name | Speech Recognition and Synthesis |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C97931131 |
| concepts[0].level | 2 |
| concepts[0].score | 0.7282556295394897 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q5282087 |
| concepts[0].display_name | Discriminative model |
| concepts[1].id | https://openalex.org/C2776135515 |
| concepts[1].level | 2 |
| concepts[1].score | 0.5293166041374207 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q17143721 |
| concepts[1].display_name | Regularization (linguistics) |
| concepts[2].id | https://openalex.org/C154945302 |
| concepts[2].level | 1 |
| concepts[2].score | 0.5281538963317871 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[2].display_name | Artificial intelligence |
| concepts[3].id | https://openalex.org/C41008148 |
| concepts[3].level | 0 |
| concepts[3].score | 0.5281246304512024 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[3].display_name | Computer science |
| concepts[4].id | https://openalex.org/C28490314 |
| concepts[4].level | 1 |
| concepts[4].score | 0.5082609057426453 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q189436 |
| concepts[4].display_name | Speech recognition |
| concepts[5].id | https://openalex.org/C204321447 |
| concepts[5].level | 1 |
| concepts[5].score | 0.39369890093803406 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q30642 |
| concepts[5].display_name | Natural language processing |
| concepts[6].id | https://openalex.org/C153180895 |
| concepts[6].level | 2 |
| concepts[6].score | 0.3863860070705414 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q7148389 |
| concepts[6].display_name | Pattern recognition (psychology) |
| keywords[0].id | https://openalex.org/keywords/discriminative-model |
| keywords[0].score | 0.7282556295394897 |
| keywords[0].display_name | Discriminative model |
| keywords[1].id | https://openalex.org/keywords/regularization |
| keywords[1].score | 0.5293166041374207 |
| keywords[1].display_name | Regularization (linguistics) |
| keywords[2].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[2].score | 0.5281538963317871 |
| keywords[2].display_name | Artificial intelligence |
| keywords[3].id | https://openalex.org/keywords/computer-science |
| keywords[3].score | 0.5281246304512024 |
| keywords[3].display_name | Computer science |
| keywords[4].id | https://openalex.org/keywords/speech-recognition |
| keywords[4].score | 0.5082609057426453 |
| keywords[4].display_name | Speech recognition |
| keywords[5].id | https://openalex.org/keywords/natural-language-processing |
| keywords[5].score | 0.39369890093803406 |
| keywords[5].display_name | Natural language processing |
| keywords[6].id | https://openalex.org/keywords/pattern-recognition |
| keywords[6].score | 0.3863860070705414 |
| keywords[6].display_name | Pattern recognition (psychology) |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2310.03992 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2310.03992 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2310.03992 |
| locations[1].id | doi:10.48550/arxiv.2310.03992 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2310.03992 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5100727732 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-4577-7078 |
| authorships[0].author.display_name | Yan Zhao |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Yan Zhao |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5027316177 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-0839-8792 |
| authorships[1].author.display_name | Yuan Zong |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Yuan Zong |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5044597514 |
| authorships[2].author.orcid | https://orcid.org/0009-0001-9144-105X |
| authorships[2].author.display_name | Jincen Wang |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Jincen Wang |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5091060125 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-1355-9503 |
| authorships[3].author.display_name | Hailun Lian |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Hailun Lian |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5054796879 |
| authorships[4].author.orcid | https://orcid.org/0000-0002-1477-1020 |
| authorships[4].author.display_name | Cheng Lu |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Cheng Lu |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5101872700 |
| authorships[5].author.orcid | https://orcid.org/0000-0002-1067-0185 |
| authorships[5].author.display_name | Li Zhao |
| authorships[5].author_position | middle |
| authorships[5].raw_author_name | Li Zhao |
| authorships[5].is_corresponding | False |
| authorships[6].author.id | https://openalex.org/A5029771864 |
| authorships[6].author.orcid | https://orcid.org/0000-0002-7764-5179 |
| authorships[6].author.display_name | Wenming Zheng |
| authorships[6].author_position | last |
| authorships[6].raw_author_name | Wenming Zheng |
| authorships[6].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2310.03992 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Layer-Adapted Implicit Distribution Alignment Networks for Cross-Corpus Speech Emotion Recognition |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10667 |
| primary_topic.field.id | https://openalex.org/fields/32 |
| primary_topic.field.display_name | Psychology |
| primary_topic.score | 0.9934999942779541 |
| primary_topic.domain.id | https://openalex.org/domains/2 |
| primary_topic.domain.display_name | Social Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/3205 |
| primary_topic.subfield.display_name | Experimental and Cognitive Psychology |
| primary_topic.display_name | Emotion and Mood Recognition |
| related_works | https://openalex.org/W4389116644, https://openalex.org/W2153315159, https://openalex.org/W3103844505, https://openalex.org/W259157601, https://openalex.org/W4205463238, https://openalex.org/W2761785940, https://openalex.org/W1482209366, https://openalex.org/W2404514746, https://openalex.org/W1652783584, https://openalex.org/W2082783427 |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2310.03992 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2310.03992 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2310.03992 |
| primary_location.id | pmh:oai:arXiv.org:2310.03992 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2310.03992 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2310.03992 |
| publication_date | 2023-10-06 |
| publication_year | 2023 |
| referenced_works_count | 0 |
| abstract_inverted_index | (omitted: machine-readable inverted index duplicating the abstract above) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 7 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/10 |
| sustainable_development_goals[0].score | 0.6899999976158142 |
| sustainable_development_goals[0].display_name | Reduced inequalities |
| citation_normalized_percentile | |