Joyful: Joint Modality Fusion and Graph Contrastive Learning for Multimodal Emotion Recognition
2023 · Open Access · DOI: https://doi.org/10.18653/v1/2023.emnlp-main.996
Multimodal emotion recognition aims to recognize emotions for each utterance from multiple modalities, which has received increasing attention for its application in human-machine interaction. Current graph-based methods fail to simultaneously depict global contextual features and local diverse uni-modal features in a dialogue. Furthermore, with the number of graph layers increasing, they easily fall into over-smoothing. In this paper, we propose a method for joint modality fusion and graph contrastive learning for multimodal emotion recognition (Joyful), where multimodality fusion, contrastive learning, and emotion recognition are jointly optimized. Specifically, we first design a new multimodal fusion mechanism that can provide deep interaction and fusion between the global contextual and uni-modal specific features. Then, we introduce a graph contrastive learning framework with inter- and intra-view contrastive losses to learn more distinguishable representations for samples with different sentiments. Extensive experiments on three benchmark datasets indicate that Joyful achieved state-of-the-art (SOTA) performance compared with all baselines. Code is released on Github (https://anonymous.4open.science/r/MERC-7F88).
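The abstract describes the contrastive objectives only at a high level. As a rough illustration of the general idea (not Joyful's actual loss), the sketch below shows a generic InfoNCE-style contrastive loss between two views of utterance embeddings in PyTorch; all names (`info_nce`, `emb_view1`, `emb_view2`) are hypothetical.

```python
import torch
import torch.nn.functional as F

def info_nce(view_a: torch.Tensor, view_b: torch.Tensor,
             temperature: float = 0.5) -> torch.Tensor:
    """Generic InfoNCE / NT-Xent loss between two views of the same N items.

    view_a, view_b: (N, D) embeddings of the same utterances under two
    graph views/augmentations; row i of each view is the positive pair,
    all other rows act as negatives.
    """
    a = F.normalize(view_a, dim=1)
    b = F.normalize(view_b, dim=1)
    logits = a @ b.t() / temperature                     # (N, N) similarity logits
    targets = torch.arange(a.size(0), device=a.device)   # positives on the diagonal
    # Symmetrize: view_a -> view_b and view_b -> view_a
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: 8 utterances, 16-dimensional embeddings from two graph views
emb_view1 = torch.randn(8, 16)
emb_view2 = torch.randn(8, 16)
print(info_nce(emb_view1, emb_view2).item())
```

In a graph contrastive setup like the one the abstract outlines, the two views would typically come from augmentations of the same dialogue graph; the paper additionally distinguishes inter- and intra-view loss terms, which this generic sketch does not.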
- Type: article
- Language: en
- Landing Page: https://doi.org/10.18653/v1/2023.emnlp-main.996
- PDF: https://aclanthology.org/2023.emnlp-main.996.pdf
- OA Status: gold
- Cited By: 28
- References: 95
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4389520025
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4389520025 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.18653/v1/2023.emnlp-main.996 (Digital Object Identifier)
- Title: Joyful: Joint Modality Fusion and Graph Contrastive Learning for Multimodal Emotion Recognition (work title)
- Type: article (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2023 (year of publication)
- Publication date: 2023-01-01 (full publication date if available)
- Authors: Dongyuan Li, Yusong Wang, Kotaro Funakoshi, Manabu Okumura (list of authors in order)
- Landing page: https://doi.org/10.18653/v1/2023.emnlp-main.996 (publisher landing page)
- PDF URL: https://aclanthology.org/2023.emnlp-main.996.pdf (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: gold (open access status per OpenAlex)
- OA URL: https://aclanthology.org/2023.emnlp-main.996.pdf (direct OA link when available)
- Concepts: Computer science, Modalities, Artificial intelligence, Utterance, Graph, Emotion recognition, Modality (human–computer interaction), Smoothing, Modal, Natural language processing, Machine learning, Theoretical computer science, Computer vision, Sociology, Polymer chemistry, Chemistry, Social science (top concepts/fields attached by OpenAlex)
- Cited by: 28 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 15, 2024: 11, 2023: 2 (per-year citation counts, last 5 years)
- References (count): 95 (number of works referenced by this work)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4389520025 |
| doi | https://doi.org/10.18653/v1/2023.emnlp-main.996 |
| ids.doi | https://doi.org/10.18653/v1/2023.emnlp-main.996 |
| ids.openalex | https://openalex.org/W4389520025 |
| fwci | 11.66592598 |
| type | article |
| title | Joyful: Joint Modality Fusion and Graph Contrastive Learning for Multimodal Emotion Recognition |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | 16069 |
| biblio.first_page | 16051 |
| topics[0].id | https://openalex.org/T10667 |
| topics[0].field.id | https://openalex.org/fields/32 |
| topics[0].field.display_name | Psychology |
| topics[0].score | 0.9980000257492065 |
| topics[0].domain.id | https://openalex.org/domains/2 |
| topics[0].domain.display_name | Social Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/3205 |
| topics[0].subfield.display_name | Experimental and Cognitive Psychology |
| topics[0].display_name | Emotion and Mood Recognition |
| topics[1].id | https://openalex.org/T10664 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9952999949455261 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1702 |
| topics[1].subfield.display_name | Artificial Intelligence |
| topics[1].display_name | Sentiment Analysis and Opinion Mining |
| topics[2].id | https://openalex.org/T13731 |
| topics[2].field.id | https://openalex.org/fields/33 |
| topics[2].field.display_name | Social Sciences |
| topics[2].score | 0.9233999848365784 |
| topics[2].domain.id | https://openalex.org/domains/2 |
| topics[2].domain.display_name | Social Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/3322 |
| topics[2].subfield.display_name | Urban Studies |
| topics[2].display_name | Advanced Computing and Algorithms |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.7376236915588379 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C2779903281 |
| concepts[1].level | 2 |
| concepts[1].score | 0.5966922640800476 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q6888026 |
| concepts[1].display_name | Modalities |
| concepts[2].id | https://openalex.org/C154945302 |
| concepts[2].level | 1 |
| concepts[2].score | 0.5423468351364136 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[2].display_name | Artificial intelligence |
| concepts[3].id | https://openalex.org/C2775852435 |
| concepts[3].level | 2 |
| concepts[3].score | 0.5423354506492615 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q258403 |
| concepts[3].display_name | Utterance |
| concepts[4].id | https://openalex.org/C132525143 |
| concepts[4].level | 2 |
| concepts[4].score | 0.5308317542076111 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q141488 |
| concepts[4].display_name | Graph |
| concepts[5].id | https://openalex.org/C2777438025 |
| concepts[5].level | 2 |
| concepts[5].score | 0.46034032106399536 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q1339090 |
| concepts[5].display_name | Emotion recognition |
| concepts[6].id | https://openalex.org/C2780226545 |
| concepts[6].level | 2 |
| concepts[6].score | 0.44812294840812683 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q6888030 |
| concepts[6].display_name | Modality (human–computer interaction) |
| concepts[7].id | https://openalex.org/C3770464 |
| concepts[7].level | 2 |
| concepts[7].score | 0.43757307529449463 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q775963 |
| concepts[7].display_name | Smoothing |
| concepts[8].id | https://openalex.org/C71139939 |
| concepts[8].level | 2 |
| concepts[8].score | 0.4190555810928345 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q910194 |
| concepts[8].display_name | Modal |
| concepts[9].id | https://openalex.org/C204321447 |
| concepts[9].level | 1 |
| concepts[9].score | 0.4054625332355499 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q30642 |
| concepts[9].display_name | Natural language processing |
| concepts[10].id | https://openalex.org/C119857082 |
| concepts[10].level | 1 |
| concepts[10].score | 0.32491403818130493 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q2539 |
| concepts[10].display_name | Machine learning |
| concepts[11].id | https://openalex.org/C80444323 |
| concepts[11].level | 1 |
| concepts[11].score | 0.1298409104347229 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q2878974 |
| concepts[11].display_name | Theoretical computer science |
| concepts[12].id | https://openalex.org/C31972630 |
| concepts[12].level | 1 |
| concepts[12].score | 0.11127299070358276 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[12].display_name | Computer vision |
| concepts[13].id | https://openalex.org/C144024400 |
| concepts[13].level | 0 |
| concepts[13].score | 0.0 |
| concepts[13].wikidata | https://www.wikidata.org/wiki/Q21201 |
| concepts[13].display_name | Sociology |
| concepts[14].id | https://openalex.org/C188027245 |
| concepts[14].level | 1 |
| concepts[14].score | 0.0 |
| concepts[14].wikidata | https://www.wikidata.org/wiki/Q750446 |
| concepts[14].display_name | Polymer chemistry |
| concepts[15].id | https://openalex.org/C185592680 |
| concepts[15].level | 0 |
| concepts[15].score | 0.0 |
| concepts[15].wikidata | https://www.wikidata.org/wiki/Q2329 |
| concepts[15].display_name | Chemistry |
| concepts[16].id | https://openalex.org/C36289849 |
| concepts[16].level | 1 |
| concepts[16].score | 0.0 |
| concepts[16].wikidata | https://www.wikidata.org/wiki/Q34749 |
| concepts[16].display_name | Social science |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.7376236915588379 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/modalities |
| keywords[1].score | 0.5966922640800476 |
| keywords[1].display_name | Modalities |
| keywords[2].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[2].score | 0.5423468351364136 |
| keywords[2].display_name | Artificial intelligence |
| keywords[3].id | https://openalex.org/keywords/utterance |
| keywords[3].score | 0.5423354506492615 |
| keywords[3].display_name | Utterance |
| keywords[4].id | https://openalex.org/keywords/graph |
| keywords[4].score | 0.5308317542076111 |
| keywords[4].display_name | Graph |
| keywords[5].id | https://openalex.org/keywords/emotion-recognition |
| keywords[5].score | 0.46034032106399536 |
| keywords[5].display_name | Emotion recognition |
| keywords[6].id | https://openalex.org/keywords/modality |
| keywords[6].score | 0.44812294840812683 |
| keywords[6].display_name | Modality (human–computer interaction) |
| keywords[7].id | https://openalex.org/keywords/smoothing |
| keywords[7].score | 0.43757307529449463 |
| keywords[7].display_name | Smoothing |
| keywords[8].id | https://openalex.org/keywords/modal |
| keywords[8].score | 0.4190555810928345 |
| keywords[8].display_name | Modal |
| keywords[9].id | https://openalex.org/keywords/natural-language-processing |
| keywords[9].score | 0.4054625332355499 |
| keywords[9].display_name | Natural language processing |
| keywords[10].id | https://openalex.org/keywords/machine-learning |
| keywords[10].score | 0.32491403818130493 |
| keywords[10].display_name | Machine learning |
| keywords[11].id | https://openalex.org/keywords/theoretical-computer-science |
| keywords[11].score | 0.1298409104347229 |
| keywords[11].display_name | Theoretical computer science |
| keywords[12].id | https://openalex.org/keywords/computer-vision |
| keywords[12].score | 0.11127299070358276 |
| keywords[12].display_name | Computer vision |
| language | en |
| locations[0].id | doi:10.18653/v1/2023.emnlp-main.996 |
| locations[0].is_oa | True |
| locations[0].source | |
| locations[0].license | cc-by |
| locations[0].pdf_url | https://aclanthology.org/2023.emnlp-main.996.pdf |
| locations[0].version | publishedVersion |
| locations[0].raw_type | proceedings-article |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing |
| locations[0].landing_page_url | https://doi.org/10.18653/v1/2023.emnlp-main.996 |
| indexed_in | crossref |
| authorships[0].author.id | https://openalex.org/A5047165324 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-4462-3563 |
| authorships[0].author.display_name | Dongyuan Li |
| authorships[0].countries | JP |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I114531698 |
| authorships[0].affiliations[0].raw_affiliation_string | Tokyo Institute of Technology, Tokyo, Japan |
| authorships[0].institutions[0].id | https://openalex.org/I114531698 |
| authorships[0].institutions[0].ror | https://ror.org/0112mx960 |
| authorships[0].institutions[0].type | education |
| authorships[0].institutions[0].lineage | https://openalex.org/I114531698 |
| authorships[0].institutions[0].country_code | JP |
| authorships[0].institutions[0].display_name | Tokyo Institute of Technology |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Dongyuan Li |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | Tokyo Institute of Technology, Tokyo, Japan |
| authorships[1].author.id | https://openalex.org/A5100706501 |
| authorships[1].author.orcid | https://orcid.org/0009-0002-6668-3230 |
| authorships[1].author.display_name | Yusong Wang |
| authorships[1].countries | JP |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I114531698 |
| authorships[1].affiliations[0].raw_affiliation_string | Tokyo Institute of Technology, Tokyo, Japan |
| authorships[1].institutions[0].id | https://openalex.org/I114531698 |
| authorships[1].institutions[0].ror | https://ror.org/0112mx960 |
| authorships[1].institutions[0].type | education |
| authorships[1].institutions[0].lineage | https://openalex.org/I114531698 |
| authorships[1].institutions[0].country_code | JP |
| authorships[1].institutions[0].display_name | Tokyo Institute of Technology |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Yusong Wang |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | Tokyo Institute of Technology, Tokyo, Japan |
| authorships[2].author.id | https://openalex.org/A5069989297 |
| authorships[2].author.orcid | https://orcid.org/0000-0002-4529-4634 |
| authorships[2].author.display_name | Kotaro Funakoshi |
| authorships[2].countries | JP |
| authorships[2].affiliations[0].institution_ids | https://openalex.org/I114531698 |
| authorships[2].affiliations[0].raw_affiliation_string | Tokyo Institute of Technology, Tokyo, Japan |
| authorships[2].institutions[0].id | https://openalex.org/I114531698 |
| authorships[2].institutions[0].ror | https://ror.org/0112mx960 |
| authorships[2].institutions[0].type | education |
| authorships[2].institutions[0].lineage | https://openalex.org/I114531698 |
| authorships[2].institutions[0].country_code | JP |
| authorships[2].institutions[0].display_name | Tokyo Institute of Technology |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Kotaro Funakoshi |
| authorships[2].is_corresponding | False |
| authorships[2].raw_affiliation_strings | Tokyo Institute of Technology, Tokyo, Japan |
| authorships[3].author.id | https://openalex.org/A5035876897 |
| authorships[3].author.orcid | |
| authorships[3].author.display_name | Manabu Okumura |
| authorships[3].countries | JP |
| authorships[3].affiliations[0].institution_ids | https://openalex.org/I114531698 |
| authorships[3].affiliations[0].raw_affiliation_string | Tokyo Institute of Technology, Tokyo, Japan |
| authorships[3].institutions[0].id | https://openalex.org/I114531698 |
| authorships[3].institutions[0].ror | https://ror.org/0112mx960 |
| authorships[3].institutions[0].type | education |
| authorships[3].institutions[0].lineage | https://openalex.org/I114531698 |
| authorships[3].institutions[0].country_code | JP |
| authorships[3].institutions[0].display_name | Tokyo Institute of Technology |
| authorships[3].author_position | last |
| authorships[3].raw_author_name | Manabu Okumura |
| authorships[3].is_corresponding | False |
| authorships[3].raw_affiliation_strings | Tokyo Institute of Technology, Tokyo, Japan |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://aclanthology.org/2023.emnlp-main.996.pdf |
| open_access.oa_status | gold |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Joyful: Joint Modality Fusion and Graph Contrastive Learning for Multimodal Emotion Recognition |
| has_fulltext | True |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T10667 |
| primary_topic.field.id | https://openalex.org/fields/32 |
| primary_topic.field.display_name | Psychology |
| primary_topic.score | 0.9980000257492065 |
| primary_topic.domain.id | https://openalex.org/domains/2 |
| primary_topic.domain.display_name | Social Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/3205 |
| primary_topic.subfield.display_name | Experimental and Cognitive Psychology |
| primary_topic.display_name | Emotion and Mood Recognition |
| related_works | https://openalex.org/W73545470, https://openalex.org/W4224266612, https://openalex.org/W2383394264, https://openalex.org/W4320153225, https://openalex.org/W4293261942, https://openalex.org/W3191326035, https://openalex.org/W3125968744, https://openalex.org/W2167701463, https://openalex.org/W2110287964, https://openalex.org/W4307407935 |
| cited_by_count | 28 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 15 |
| counts_by_year[1].year | 2024 |
| counts_by_year[1].cited_by_count | 11 |
| counts_by_year[2].year | 2023 |
| counts_by_year[2].cited_by_count | 2 |
| locations_count | 1 |
| best_oa_location.id | doi:10.18653/v1/2023.emnlp-main.996 |
| best_oa_location.is_oa | True |
| best_oa_location.source | |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | https://aclanthology.org/2023.emnlp-main.996.pdf |
| best_oa_location.version | publishedVersion |
| best_oa_location.raw_type | proceedings-article |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | True |
| best_oa_location.raw_source_name | Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing |
| best_oa_location.landing_page_url | https://doi.org/10.18653/v1/2023.emnlp-main.996 |
| primary_location.id | doi:10.18653/v1/2023.emnlp-main.996 |
| primary_location.is_oa | True |
| primary_location.source | |
| primary_location.license | cc-by |
| primary_location.pdf_url | https://aclanthology.org/2023.emnlp-main.996.pdf |
| primary_location.version | publishedVersion |
| primary_location.raw_type | proceedings-article |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing |
| primary_location.landing_page_url | https://doi.org/10.18653/v1/2023.emnlp-main.996 |
| publication_date | 2023-01-01 |
| publication_year | 2023 |
| referenced_works | https://openalex.org/W4385571826, https://openalex.org/W4287065249, https://openalex.org/W3198792454, https://openalex.org/W4285300583, https://openalex.org/W2970641574, https://openalex.org/W3200135021, https://openalex.org/W3211342368, https://openalex.org/W4385571296, https://openalex.org/W3167098825, https://openalex.org/W4283016415, https://openalex.org/W4283072378, https://openalex.org/W3104210310, https://openalex.org/W3099056802, https://openalex.org/W3035020961, https://openalex.org/W4385572291, https://openalex.org/W3045969489, https://openalex.org/W2883409523, https://openalex.org/W4313020336, https://openalex.org/W2110065044, https://openalex.org/W3173688449, https://openalex.org/W4226025958, https://openalex.org/W2807126412, https://openalex.org/W3174906557, https://openalex.org/W3211849317, https://openalex.org/W2996849360, https://openalex.org/W4294558607, https://openalex.org/W3007282427, https://openalex.org/W2963446712, https://openalex.org/W4224286930, https://openalex.org/W2963686995, https://openalex.org/W2116341502, https://openalex.org/W2146334809, https://openalex.org/W3007868330, https://openalex.org/W1882423120, https://openalex.org/W4224916778, https://openalex.org/W4313142416, https://openalex.org/W3086452730, https://openalex.org/W3105484484, https://openalex.org/W3173751215, https://openalex.org/W4385573848, https://openalex.org/W4229977739, https://openalex.org/W4290878031, https://openalex.org/W4293569541, https://openalex.org/W4283799299, https://openalex.org/W1832693441, https://openalex.org/W1538131130, https://openalex.org/W4205102929, https://openalex.org/W4221155339, https://openalex.org/W2964051877, https://openalex.org/W3098556456, https://openalex.org/W2556418146, https://openalex.org/W2808359495, https://openalex.org/W2798935874, https://openalex.org/W4287887917, https://openalex.org/W3093051361, https://openalex.org/W3169801598, https://openalex.org/W3199412388, https://openalex.org/W3168391820, https://openalex.org/W2964300796, https://openalex.org/W4379806227, https://openalex.org/W4298089296, https://openalex.org/W3175552668, https://openalex.org/W3176713826, https://openalex.org/W2026417691, https://openalex.org/W3095602948, https://openalex.org/W2985882473, https://openalex.org/W3176028309, https://openalex.org/W4285261162, https://openalex.org/W2963341956, https://openalex.org/W2963873807, https://openalex.org/W3205519684, https://openalex.org/W3115382905, https://openalex.org/W1522301498, https://openalex.org/W2982108874, https://openalex.org/W3034266838, https://openalex.org/W2085662862, https://openalex.org/W4306317319, https://openalex.org/W2941807126, https://openalex.org/W2964015378, https://openalex.org/W3128412859, https://openalex.org/W4221154966, https://openalex.org/W4297800176, https://openalex.org/W2005422315, https://openalex.org/W2964010806, https://openalex.org/W2965373594, https://openalex.org/W3156636935, https://openalex.org/W4229508619, https://openalex.org/W3167036875, https://openalex.org/W3176399185, https://openalex.org/W4224318954, https://openalex.org/W2964216663, https://openalex.org/W4385572018, https://openalex.org/W4223555601, https://openalex.org/W4283773758, https://openalex.org/W3033039844 |
| referenced_works_count | 95 |
| abstract_inverted_index | (word-to-position inverted index encoding the abstract shown above; omitted here for brevity) |
| cited_by_percentile_year.max | 100 |
| cited_by_percentile_year.min | 94 |
| countries_distinct_count | 1 |
| institutions_distinct_count | 4 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/4 |
| sustainable_development_goals[0].score | 0.5 |
| sustainable_development_goals[0].display_name | Quality Education |
| citation_normalized_percentile.value | 0.98242671 |
| citation_normalized_percentile.is_in_top_1_percent | True |
| citation_normalized_percentile.is_in_top_10_percent | True |
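For reference, the full payload above mirrors what the public OpenAlex works endpoint returns for this record. The following is a minimal sketch of fetching the record and rebuilding the abstract text from the `abstract_inverted_index` field, assuming network access and the `requests` package; error handling is omitted.

```python
import requests

# Fetch the raw OpenAlex record for this work (public endpoint, no API key required)
work = requests.get("https://api.openalex.org/works/W4389520025", timeout=30).json()

print(work["title"])
print(work["cited_by_count"])

# OpenAlex ships abstracts as an inverted index (word -> list of positions);
# flatten it back into plain text by sorting the positions.
inv = work.get("abstract_inverted_index") or {}
positions = sorted((pos, word) for word, places in inv.items() for pos in places)
abstract = " ".join(word for _, word in positions)
print(abstract[:200])
```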