Leveraging Cross-Attention Transformer and Multi-Feature Fusion for Cross-Linguistic Speech Emotion Recognition
2025 · Open Access · DOI: https://doi.org/10.48550/arxiv.2501.10408
Speech Emotion Recognition (SER) plays a crucial role in enhancing human-computer interaction. Cross-Linguistic SER (CLSER) is a challenging research problem due to significant variability in the linguistic and acoustic features of different languages. In this study, we propose HuMP-CAT, a novel approach that combines HuBERT, MFCC, and prosodic characteristics. These features are fused using a cross-attention transformer (CAT) mechanism during feature extraction. Transfer learning is applied to transfer knowledge from a source emotional speech dataset to the target corpus for emotion recognition. We use IEMOCAP as the source dataset to train the source model and evaluate the proposed method on seven datasets spanning five languages (English, German, Spanish, Italian, and Chinese). We show that, by fine-tuning the source model with a small portion of speech from the target datasets, HuMP-CAT achieves an average accuracy of 78.75% across the seven datasets, with notable performance of 88.69% on EMODB (German) and 79.48% on EMOVO (Italian). Our extensive evaluation demonstrates that HuMP-CAT outperforms existing methods across multiple target languages.
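The fusion step described in the abstract can be sketched as scaled dot-product cross-attention, where frames from one feature stream (standing in for HuBERT embeddings) query frames from another (standing in for MFCC/prosodic features). This is a minimal illustrative sketch, not the authors' implementation: the dimensions, the random stand-in features, and the absence of learned projection matrices are all assumptions.

```python
import numpy as np

def cross_attention(q, k, v):
    """Scaled dot-product cross-attention: each query frame attends
    over all key/value frames from the other feature stream."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                   # (Tq, Tk) similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                              # (Tq, d_v) fused frames

rng = np.random.default_rng(0)
hubert_frames = rng.standard_normal((50, 64))   # stand-in for HuBERT embeddings
mfcc_frames = rng.standard_normal((120, 64))    # stand-in for MFCC/prosodic frames
fused = cross_attention(hubert_frames, mfcc_frames, mfcc_frames)
print(fused.shape)  # one fused vector per query frame: (50, 64)
```

In a real CAT block the query, key, and value inputs would pass through learned linear projections, and multiple heads would run in parallel; the attention arithmetic itself is as above.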
Metadata
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2501.10408
- PDF: https://arxiv.org/pdf/2501.10408
- OA Status: green
- Cited By: 2
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4406692240
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4406692240 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2501.10408 (Digital Object Identifier)
- Title: Leveraging Cross-Attention Transformer and Multi-Feature Fusion for Cross-Linguistic Speech Emotion Recognition (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025 (year of publication)
- Publication date: 2025-01-06 (full publication date if available)
- Authors: Ruoyu Zhao, Xiantao Jiang, Fei Yu, Victor C. M. Leung, Tao Wang, Shaohu Zhang (list of authors in order)
- Landing page: https://arxiv.org/abs/2501.10408 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2501.10408 (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2501.10408 (direct OA link when available)
- Concepts: Transformer, Feature (linguistics), Speech recognition, Computer science, Emotion recognition, Fusion, Natural language processing, Linguistics, Psychology, Artificial intelligence, Engineering, Electrical engineering, Voltage, Philosophy (top concepts attached by OpenAlex)
- Cited by: 2 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 2 (per-year citation counts, last 5 years)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
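The fields above come from the OpenAlex works endpoint, and a record like this one can be retrieved directly from the public API at `https://api.openalex.org/works/{id}`. A minimal sketch; the helper names are ours, and the network call is left to the caller:

```python
import json
import urllib.request

OPENALEX_WORKS = "https://api.openalex.org/works/"

def openalex_work_url(work_id: str) -> str:
    """Accept a bare ID ('W4406692240') or a full openalex.org URL
    and return the corresponding API endpoint."""
    return OPENALEX_WORKS + work_id.rsplit("/", 1)[-1]

def fetch_work(work_id: str) -> dict:
    """Fetch the raw work record as a dict (requires network access)."""
    with urllib.request.urlopen(openalex_work_url(work_id)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # e.g. fetch_work("W4406692240")["cited_by_count"]
    print(openalex_work_url("https://openalex.org/W4406692240"))
```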
Full payload
| id | https://openalex.org/W4406692240 |
|---|---|
| doi | https://doi.org/10.48550/arxiv.2501.10408 |
| ids.doi | https://doi.org/10.48550/arxiv.2501.10408 |
| ids.openalex | https://openalex.org/W4406692240 |
| fwci | |
| type | preprint |
| title | Leveraging Cross-Attention Transformer and Multi-Feature Fusion for Cross-Linguistic Speech Emotion Recognition |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10860 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9200999736785889 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1711 |
| topics[0].subfield.display_name | Signal Processing |
| topics[0].display_name | Speech and Audio Processing |
| topics[1].id | https://openalex.org/T10667 |
| topics[1].field.id | https://openalex.org/fields/32 |
| topics[1].field.display_name | Psychology |
| topics[1].score | 0.909500002861023 |
| topics[1].domain.id | https://openalex.org/domains/2 |
| topics[1].domain.display_name | Social Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/3205 |
| topics[1].subfield.display_name | Experimental and Cognitive Psychology |
| topics[1].display_name | Emotion and Mood Recognition |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C66322947 |
| concepts[0].level | 3 |
| concepts[0].score | 0.6962109208106995 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q11658 |
| concepts[0].display_name | Transformer |
| concepts[1].id | https://openalex.org/C2776401178 |
| concepts[1].level | 2 |
| concepts[1].score | 0.5606569647789001 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q12050496 |
| concepts[1].display_name | Feature (linguistics) |
| concepts[2].id | https://openalex.org/C28490314 |
| concepts[2].level | 1 |
| concepts[2].score | 0.5585553050041199 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q189436 |
| concepts[2].display_name | Speech recognition |
| concepts[3].id | https://openalex.org/C41008148 |
| concepts[3].level | 0 |
| concepts[3].score | 0.5374631881713867 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[3].display_name | Computer science |
| concepts[4].id | https://openalex.org/C2777438025 |
| concepts[4].level | 2 |
| concepts[4].score | 0.5309281945228577 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q1339090 |
| concepts[4].display_name | Emotion recognition |
| concepts[5].id | https://openalex.org/C158525013 |
| concepts[5].level | 2 |
| concepts[5].score | 0.4382564127445221 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q2593739 |
| concepts[5].display_name | Fusion |
| concepts[6].id | https://openalex.org/C204321447 |
| concepts[6].level | 1 |
| concepts[6].score | 0.4217851161956787 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q30642 |
| concepts[6].display_name | Natural language processing |
| concepts[7].id | https://openalex.org/C41895202 |
| concepts[7].level | 1 |
| concepts[7].score | 0.38014280796051025 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q8162 |
| concepts[7].display_name | Linguistics |
| concepts[8].id | https://openalex.org/C15744967 |
| concepts[8].level | 0 |
| concepts[8].score | 0.37598007917404175 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q9418 |
| concepts[8].display_name | Psychology |
| concepts[9].id | https://openalex.org/C154945302 |
| concepts[9].level | 1 |
| concepts[9].score | 0.3428187966346741 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[9].display_name | Artificial intelligence |
| concepts[10].id | https://openalex.org/C127413603 |
| concepts[10].level | 0 |
| concepts[10].score | 0.15079671144485474 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q11023 |
| concepts[10].display_name | Engineering |
| concepts[11].id | https://openalex.org/C119599485 |
| concepts[11].level | 1 |
| concepts[11].score | 0.06124117970466614 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q43035 |
| concepts[11].display_name | Electrical engineering |
| concepts[12].id | https://openalex.org/C165801399 |
| concepts[12].level | 2 |
| concepts[12].score | 0.05793207883834839 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q25428 |
| concepts[12].display_name | Voltage |
| concepts[13].id | https://openalex.org/C138885662 |
| concepts[13].level | 0 |
| concepts[13].score | 0.0 |
| concepts[13].wikidata | https://www.wikidata.org/wiki/Q5891 |
| concepts[13].display_name | Philosophy |
| keywords[0].id | https://openalex.org/keywords/transformer |
| keywords[0].score | 0.6962109208106995 |
| keywords[0].display_name | Transformer |
| keywords[1].id | https://openalex.org/keywords/feature |
| keywords[1].score | 0.5606569647789001 |
| keywords[1].display_name | Feature (linguistics) |
| keywords[2].id | https://openalex.org/keywords/speech-recognition |
| keywords[2].score | 0.5585553050041199 |
| keywords[2].display_name | Speech recognition |
| keywords[3].id | https://openalex.org/keywords/computer-science |
| keywords[3].score | 0.5374631881713867 |
| keywords[3].display_name | Computer science |
| keywords[4].id | https://openalex.org/keywords/emotion-recognition |
| keywords[4].score | 0.5309281945228577 |
| keywords[4].display_name | Emotion recognition |
| keywords[5].id | https://openalex.org/keywords/fusion |
| keywords[5].score | 0.4382564127445221 |
| keywords[5].display_name | Fusion |
| keywords[6].id | https://openalex.org/keywords/natural-language-processing |
| keywords[6].score | 0.4217851161956787 |
| keywords[6].display_name | Natural language processing |
| keywords[7].id | https://openalex.org/keywords/linguistics |
| keywords[7].score | 0.38014280796051025 |
| keywords[7].display_name | Linguistics |
| keywords[8].id | https://openalex.org/keywords/psychology |
| keywords[8].score | 0.37598007917404175 |
| keywords[8].display_name | Psychology |
| keywords[9].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[9].score | 0.3428187966346741 |
| keywords[9].display_name | Artificial intelligence |
| keywords[10].id | https://openalex.org/keywords/engineering |
| keywords[10].score | 0.15079671144485474 |
| keywords[10].display_name | Engineering |
| keywords[11].id | https://openalex.org/keywords/electrical-engineering |
| keywords[11].score | 0.06124117970466614 |
| keywords[11].display_name | Electrical engineering |
| keywords[12].id | https://openalex.org/keywords/voltage |
| keywords[12].score | 0.05793207883834839 |
| keywords[12].display_name | Voltage |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2501.10408 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2501.10408 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2501.10408 |
| locations[1].id | doi:10.48550/arxiv.2501.10408 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | cc-by |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | https://openalex.org/licenses/cc-by |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2501.10408 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5007599604 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-3631-1890 |
| authorships[0].author.display_name | Ruoyu Zhao |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Zhao, Ruoyu |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5103090237 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-2226-482X |
| authorships[1].author.display_name | Xiantao Jiang |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Jiang, Xiantao |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5003996424 |
| authorships[2].author.orcid | https://orcid.org/0000-0002-3091-7640 |
| authorships[2].author.display_name | Fei Yu |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Yu, F. Richard |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5035919267 |
| authorships[3].author.orcid | https://orcid.org/0000-0003-3529-2640 |
| authorships[3].author.display_name | Victor C. M. Leung |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Leung, Victor C. M. |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5100453544 |
| authorships[4].author.orcid | https://orcid.org/0000-0002-5121-0599 |
| authorships[4].author.display_name | Tao Wang |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Wang, Tao |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5012682606 |
| authorships[5].author.orcid | https://orcid.org/0000-0001-8985-515X |
| authorships[5].author.display_name | Shaohu Zhang |
| authorships[5].author_position | last |
| authorships[5].raw_author_name | Zhang, Shaohu |
| authorships[5].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2501.10408 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Leveraging Cross-Attention Transformer and Multi-Feature Fusion for Cross-Linguistic Speech Emotion Recognition |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10860 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9200999736785889 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1711 |
| primary_topic.subfield.display_name | Signal Processing |
| primary_topic.display_name | Speech and Audio Processing |
| related_works | https://openalex.org/W3147584709, https://openalex.org/W2099421762, https://openalex.org/W2530546662, https://openalex.org/W2967030268, https://openalex.org/W2977677679, https://openalex.org/W2185253430, https://openalex.org/W1992327129, https://openalex.org/W4210345652, https://openalex.org/W3126677997, https://openalex.org/W1610857240 |
| cited_by_count | 2 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 2 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2501.10408 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2501.10408 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2501.10408 |
| primary_location.id | pmh:oai:arXiv.org:2501.10408 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2501.10408 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2501.10408 |
| publication_date | 2025-01-06 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | word-to-positions map encoding the abstract text reproduced above |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 6 |
| citation_normalized_percentile | |
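OpenAlex stores abstracts as an `abstract_inverted_index`, a map from each word to its token positions, rather than as plain text. Rebuilding the readable abstract is a small exercise; a sketch, using a toy index covering the first four tokens of this paper's abstract:

```python
def reconstruct_abstract(inverted_index: dict) -> str:
    """Rebuild the plain-text abstract from an OpenAlex
    abstract_inverted_index (word -> list of token positions)."""
    by_position = {}
    for word, positions in inverted_index.items():
        for pos in positions:
            by_position[pos] = word
    # Emit words in position order, space-separated.
    return " ".join(by_position[pos] for pos in sorted(by_position))

sample = {"Speech": [0], "Emotion": [1], "Recognition": [2], "(SER)": [3]}
print(reconstruct_abstract(sample))  # Speech Emotion Recognition (SER)
```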