VyAnG-Net: A Novel Multi-Modal Sarcasm Recognition Model by Uncovering Visual, Acoustic and Glossary Features
2024 · Open Access · DOI: https://doi.org/10.48550/arxiv.2408.10246
Various linguistic and non-linguistic cues, such as excessive emphasis on a word, a shift in the tone of voice, or an awkward expression, frequently convey sarcasm. Sarcasm recognition in conversation aims to identify the hidden sarcastic, criticizing, and metaphorical information embedded in everyday dialogue. Prior work on sarcasm recognition has focused mainly on text; however, reliable sarcasm identification requires considering the textual information together with the audio stream, facial expressions, and body posture. Hence, we propose a novel approach that combines a lightweight depth attention module with a self-regulated ConvNet to concentrate on the most crucial features of the visual data, and an attentional-tokenizer-based strategy to extract the most critical context-specific information from the textual data. Our key contributions to the task of multi-modal sarcasm recognition are: an attentional tokenizer branch that derives beneficial features from the glossary content provided by the subtitles; a visual branch that acquires the most prominent features from the video frames; utterance-level feature extraction from the acoustic content; and a multi-headed-attention-based feature fusion branch that blends the features obtained from the multiple modalities. Extensive testing on the benchmark video dataset MUStARD yielded an accuracy of 79.86% in the speaker-dependent and 76.94% in the speaker-independent configuration, demonstrating that our approach is superior to existing methods. We have also conducted a cross-dataset analysis to test the adaptability of VyAnG-Net on unseen samples of another dataset, MUStARD++.
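The abstract describes the fusion step only at a high level, so the following is a minimal PyTorch sketch of one way a multi-headed-attention-based blend of utterance-level visual, acoustic, and glossary (subtitle) features could be wired up. The class name, feature dimensions, projection layers, and mean-pooling choice are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): fusing three per-utterance
# modality embeddings with multi-head attention, as the abstract describes
# at a high level. All dimensions and pooling choices are assumptions.
import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Project each modality's utterance-level feature into a shared space.
        self.proj_visual = nn.Linear(512, dim)   # e.g. pooled video-frame features
        self.proj_audio = nn.Linear(128, dim)    # e.g. utterance-level acoustic features
        self.proj_text = nn.Linear(768, dim)     # e.g. subtitle/glossary summary vector
        # Multi-head attention lets each modality token attend to the others.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, 2)      # sarcastic vs. non-sarcastic

    def forward(self, visual, audio, text):
        # Each input: (batch, modality_dim). Stack as a 3-token sequence.
        tokens = torch.stack(
            [self.proj_visual(visual), self.proj_audio(audio), self.proj_text(text)],
            dim=1,
        )                                         # (batch, 3, dim)
        fused, _ = self.attn(tokens, tokens, tokens)
        pooled = fused.mean(dim=1)                # average the attended modality tokens
        return self.classifier(pooled)

# Example usage with random features for a batch of 4 utterances.
model = MultiModalFusion()
logits = model(torch.randn(4, 512), torch.randn(4, 128), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 2])
```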
Metadata
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2408.10246
- PDF: https://arxiv.org/pdf/2408.10246
- OA Status: green
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4403006760
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4403006760 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2408.10246 (Digital Object Identifier)
- Title: VyAnG-Net: A Novel Multi-Modal Sarcasm Recognition Model by Uncovering Visual, Acoustic and Glossary Features (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2024 (year of publication)
- Publication date: 2024-08-05 (full publication date if available)
- Authors: Ananya Pandey, Dinesh Kumar Vishwakarma (list of authors in order)
- Landing page: https://arxiv.org/abs/2408.10246 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2408.10246 (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open-access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2408.10246 (direct OA link when available)
- Concepts: Sarcasm, Glossary, Modal, Computer science, Artificial intelligence, Speech recognition, Net (polyhedron), Natural language processing, Pattern recognition (psychology), Linguistics, Irony, Mathematics, Polymer chemistry, Chemistry, Geometry, Philosophy (top concepts/fields/topics attached by OpenAlex)
- Cited by: 0 (total citation count in OpenAlex)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4403006760 |
| doi | https://doi.org/10.48550/arxiv.2408.10246 |
| ids.doi | https://doi.org/10.48550/arxiv.2408.10246 |
| ids.openalex | https://openalex.org/W4403006760 |
| fwci | |
| type | preprint |
| title | VyAnG-Net: A Novel Multi-Modal Sarcasm Recognition Model by Uncovering Visual, Acoustic and Glossary Features |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T11665 |
| topics[0].field.id | https://openalex.org/fields/13 |
| topics[0].field.display_name | Biochemistry, Genetics and Molecular Biology |
| topics[0].score | 0.9786999821662903 |
| topics[0].domain.id | https://openalex.org/domains/1 |
| topics[0].domain.display_name | Life Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1309 |
| topics[0].subfield.display_name | Developmental Biology |
| topics[0].display_name | Animal Vocal Communication and Behavior |
| topics[1].id | https://openalex.org/T10199 |
| topics[1].field.id | https://openalex.org/fields/23 |
| topics[1].field.display_name | Environmental Science |
| topics[1].score | 0.954800009727478 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/2303 |
| topics[1].subfield.display_name | Ecology |
| topics[1].display_name | Wildlife Ecology and Conservation |
| topics[2].id | https://openalex.org/T10992 |
| topics[2].field.id | https://openalex.org/fields/12 |
| topics[2].field.display_name | Arts and Humanities |
| topics[2].score | 0.9025999903678894 |
| topics[2].domain.id | https://openalex.org/domains/2 |
| topics[2].domain.display_name | Social Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1204 |
| topics[2].subfield.display_name | Archeology |
| topics[2].display_name | Forensic Anthropology and Bioarchaeology Studies |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C2776207355 |
| concepts[0].level | 3 |
| concepts[0].score | 0.9057430028915405 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q191035 |
| concepts[0].display_name | Sarcasm |
| concepts[1].id | https://openalex.org/C2780031656 |
| concepts[1].level | 2 |
| concepts[1].score | 0.8821939826011658 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q859161 |
| concepts[1].display_name | Glossary |
| concepts[2].id | https://openalex.org/C71139939 |
| concepts[2].level | 2 |
| concepts[2].score | 0.6869417428970337 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q910194 |
| concepts[2].display_name | Modal |
| concepts[3].id | https://openalex.org/C41008148 |
| concepts[3].level | 0 |
| concepts[3].score | 0.6390405297279358 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[3].display_name | Computer science |
| concepts[4].id | https://openalex.org/C154945302 |
| concepts[4].level | 1 |
| concepts[4].score | 0.5421867966651917 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[4].display_name | Artificial intelligence |
| concepts[5].id | https://openalex.org/C28490314 |
| concepts[5].level | 1 |
| concepts[5].score | 0.540007472038269 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q189436 |
| concepts[5].display_name | Speech recognition |
| concepts[6].id | https://openalex.org/C14166107 |
| concepts[6].level | 2 |
| concepts[6].score | 0.4955946207046509 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q253829 |
| concepts[6].display_name | Net (polyhedron) |
| concepts[7].id | https://openalex.org/C204321447 |
| concepts[7].level | 1 |
| concepts[7].score | 0.3892882466316223 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q30642 |
| concepts[7].display_name | Natural language processing |
| concepts[8].id | https://openalex.org/C153180895 |
| concepts[8].level | 2 |
| concepts[8].score | 0.3464050590991974 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q7148389 |
| concepts[8].display_name | Pattern recognition (psychology) |
| concepts[9].id | https://openalex.org/C41895202 |
| concepts[9].level | 1 |
| concepts[9].score | 0.2141011357307434 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q8162 |
| concepts[9].display_name | Linguistics |
| concepts[10].id | https://openalex.org/C2779975665 |
| concepts[10].level | 2 |
| concepts[10].score | 0.1448247730731964 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q131361 |
| concepts[10].display_name | Irony |
| concepts[11].id | https://openalex.org/C33923547 |
| concepts[11].level | 0 |
| concepts[11].score | 0.08483287692070007 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q395 |
| concepts[11].display_name | Mathematics |
| concepts[12].id | https://openalex.org/C188027245 |
| concepts[12].level | 1 |
| concepts[12].score | 0.0 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q750446 |
| concepts[12].display_name | Polymer chemistry |
| concepts[13].id | https://openalex.org/C185592680 |
| concepts[13].level | 0 |
| concepts[13].score | 0.0 |
| concepts[13].wikidata | https://www.wikidata.org/wiki/Q2329 |
| concepts[13].display_name | Chemistry |
| concepts[14].id | https://openalex.org/C2524010 |
| concepts[14].level | 1 |
| concepts[14].score | 0.0 |
| concepts[14].wikidata | https://www.wikidata.org/wiki/Q8087 |
| concepts[14].display_name | Geometry |
| concepts[15].id | https://openalex.org/C138885662 |
| concepts[15].level | 0 |
| concepts[15].score | 0.0 |
| concepts[15].wikidata | https://www.wikidata.org/wiki/Q5891 |
| concepts[15].display_name | Philosophy |
| keywords[0].id | https://openalex.org/keywords/sarcasm |
| keywords[0].score | 0.9057430028915405 |
| keywords[0].display_name | Sarcasm |
| keywords[1].id | https://openalex.org/keywords/glossary |
| keywords[1].score | 0.8821939826011658 |
| keywords[1].display_name | Glossary |
| keywords[2].id | https://openalex.org/keywords/modal |
| keywords[2].score | 0.6869417428970337 |
| keywords[2].display_name | Modal |
| keywords[3].id | https://openalex.org/keywords/computer-science |
| keywords[3].score | 0.6390405297279358 |
| keywords[3].display_name | Computer science |
| keywords[4].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[4].score | 0.5421867966651917 |
| keywords[4].display_name | Artificial intelligence |
| keywords[5].id | https://openalex.org/keywords/speech-recognition |
| keywords[5].score | 0.540007472038269 |
| keywords[5].display_name | Speech recognition |
| keywords[6].id | https://openalex.org/keywords/net |
| keywords[6].score | 0.4955946207046509 |
| keywords[6].display_name | Net (polyhedron) |
| keywords[7].id | https://openalex.org/keywords/natural-language-processing |
| keywords[7].score | 0.3892882466316223 |
| keywords[7].display_name | Natural language processing |
| keywords[8].id | https://openalex.org/keywords/pattern-recognition |
| keywords[8].score | 0.3464050590991974 |
| keywords[8].display_name | Pattern recognition (psychology) |
| keywords[9].id | https://openalex.org/keywords/linguistics |
| keywords[9].score | 0.2141011357307434 |
| keywords[9].display_name | Linguistics |
| keywords[10].id | https://openalex.org/keywords/irony |
| keywords[10].score | 0.1448247730731964 |
| keywords[10].display_name | Irony |
| keywords[11].id | https://openalex.org/keywords/mathematics |
| keywords[11].score | 0.08483287692070007 |
| keywords[11].display_name | Mathematics |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2408.10246 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2408.10246 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2408.10246 |
| locations[1].id | doi:10.48550/arxiv.2408.10246 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2408.10246 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5037343821 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-2419-6314 |
| authorships[0].author.display_name | Ananya Pandey |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Pandey, Ananya |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5021449557 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-1026-0047 |
| authorships[1].author.display_name | Dinesh Kumar Vishwakarma |
| authorships[1].author_position | last |
| authorships[1].raw_author_name | Vishwakarma, Dinesh Kumar |
| authorships[1].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2408.10246 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2024-10-01T00:00:00 |
| display_name | VyAnG-Net: A Novel Multi-Modal Sarcasm Recognition Model by Uncovering Visual, Acoustic and Glossary Features |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T11665 |
| primary_topic.field.id | https://openalex.org/fields/13 |
| primary_topic.field.display_name | Biochemistry, Genetics and Molecular Biology |
| primary_topic.score | 0.9786999821662903 |
| primary_topic.domain.id | https://openalex.org/domains/1 |
| primary_topic.domain.display_name | Life Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1309 |
| primary_topic.subfield.display_name | Developmental Biology |
| primary_topic.display_name | Animal Vocal Communication and Behavior |
| related_works | https://openalex.org/W2900446122, https://openalex.org/W3037315328, https://openalex.org/W4312684429, https://openalex.org/W3101138303, https://openalex.org/W2349372848, https://openalex.org/W2753593955, https://openalex.org/W2907442881, https://openalex.org/W3018705632, https://openalex.org/W4311456904, https://openalex.org/W3207396986 |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2408.10246 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2408.10246 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2408.10246 |
| primary_location.id | pmh:oai:arXiv.org:2408.10246 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2408.10246 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2408.10246 |
| publication_date | 2024-08-05 |
| publication_year | 2024 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-position index of the abstract; the abstract text appears in full above) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 2 |
| citation_normalized_percentile | |
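For context on the abstract_inverted_index field noted in the payload: OpenAlex stores abstracts not as plain text but as a map from each word to the positions where it occurs. A minimal sketch of decoding such an index back into text (the toy index below is made up for illustration):

```python
# Minimal sketch: reconstructing an abstract from an OpenAlex
# abstract_inverted_index, which maps each word to its list of
# positions in the text. The toy index below is illustrative only.
def decode_inverted_index(inverted_index: dict[str, list[int]]) -> str:
    positions = {}
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions[i] = word
    # Re-order words by position and join into the original string.
    return " ".join(positions[i] for i in sorted(positions))

toy_index = {"frequently": [0], "conveys": [1], "sarcasm": [2], "in": [3], "dialogue": [4]}
print(decode_inverted_index(toy_index))  # "frequently conveys sarcasm in dialogue"
```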