What Do Deep Saliency Models Learn about Visual Attention?
2023 · Open Access · DOI: https://doi.org/10.48550/arxiv.2310.09679
In recent years, deep saliency models have made significant progress in predicting human visual attention. However, the mechanisms behind their success remain largely unexplained due to the opaque nature of deep neural networks. In this paper, we present a novel analytic framework that sheds light on the implicit features learned by saliency models and provides principled interpretation and quantification of their contributions to saliency prediction. Our approach decomposes these implicit features into interpretable bases that are explicitly aligned with semantic attributes and reformulates saliency prediction as a weighted combination of probability maps connecting the bases and saliency. By applying our framework, we conduct extensive analyses from various perspectives, including the positive and negative weights of semantics, the impact of training data and architectural designs, the progressive influences of fine-tuning, and common failure patterns of state-of-the-art deep saliency models. Additionally, we demonstrate the effectiveness of our framework by exploring visual attention characteristics in various application scenarios, such as the atypical attention of people with autism spectrum disorder, attention to emotion-eliciting stimuli, and attention evolution over time. Our code is publicly available at \url{https://github.com/szzexpoi/saliency_analysis}.
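The key formulation in the abstract is that saliency prediction is rewritten as a weighted combination of probability maps, one per interpretable semantic basis. The snippet below is a minimal, hypothetical NumPy sketch of that idea under assumed shapes; the names (predict_saliency, basis, weights) are invented for exposition, and the authors' actual implementation lives in the linked repository.

```python
import numpy as np

def predict_saliency(feature_maps, basis, weights):
    """Sketch: saliency as a weighted combination of per-basis probability maps.

    feature_maps: (C, H, W) implicit features from a deep saliency network.
    basis:        (K, C) interpretable basis vectors aligned with semantics.
    weights:      (K,) signed contribution of each semantic basis.
    """
    C, H, W = feature_maps.shape
    flat = feature_maps.reshape(C, -1)            # (C, H*W)
    acts = basis @ flat                           # (K, H*W) per-basis activations
    acts -= acts.max(axis=1, keepdims=True)       # numerical stability
    probs = np.exp(acts)
    probs /= probs.sum(axis=1, keepdims=True)     # spatial probability map per basis
    saliency = (weights[:, None] * probs).sum(axis=0)  # weighted combination
    return saliency.reshape(H, W)

# Toy usage: 64 channels, 3 semantic bases, 16x16 feature grid
rng = np.random.default_rng(0)
sal = predict_saliency(rng.normal(size=(64, 16, 16)),
                       rng.normal(size=(3, 64)),
                       np.array([0.8, 0.3, -0.2]))
print(sal.shape)  # (16, 16)
```

Note the signed weights: this is what lets the framework speak of semantics with positive and negative contributions to saliency, as the abstract describes.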
Metadata
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2310.09679
- PDF: https://arxiv.org/pdf/2310.09679
- OA Status: green
- Cited By: 5
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4387723914
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4387723914 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2310.09679 (Digital Object Identifier)
- Title: What Do Deep Saliency Models Learn about Visual Attention? (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2023 (year of publication)
- Publication date: 2023-10-14 (full publication date if available)
- Authors: Shi Chen, Ming Jiang, Qi Zhao (list of authors in order)
- Landing page: https://arxiv.org/abs/2310.09679 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2310.09679 (direct link to full text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2310.09679 (direct OA link when available)
- Concepts: Computer science, Artificial intelligence, Deep learning, Semantics (computer science), Interpretation (philosophy), Code (set theory), Machine learning, Deep neural networks, Visual attention, Saliency map, Image (mathematics), Psychology, Cognition, Programming language, Neuroscience, Set (abstract data type) (top concepts attached by OpenAlex)
- Cited by: 5 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 2, 2024: 3 (per-year citation counts, last 5 years)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
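These fields mirror OpenAlex's public works endpoint, so the same record can be pulled live from the API. A minimal sketch, assuming only the standard https://api.openalex.org/works/{id} route and the field names shown in the payload below:

```python
import json
import urllib.request

# Fetch this work's record from the public OpenAlex API (no key required).
url = "https://api.openalex.org/works/W4387723914"
with urllib.request.urlopen(url) as resp:
    work = json.load(resp)

print(work["display_name"])           # the work's title
print(work["cited_by_count"])         # 5 at the time this page was generated
print(work["open_access"]["oa_url"])  # https://arxiv.org/pdf/2310.09679
```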
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4387723914 |
| doi | https://doi.org/10.48550/arxiv.2310.09679 |
| ids.doi | https://doi.org/10.48550/arxiv.2310.09679 |
| ids.openalex | https://openalex.org/W4387723914 |
| fwci | |
| type | preprint |
| title | What Do Deep Saliency Models Learn about Visual Attention? |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T11605 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9998999834060669 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Visual Attention and Saliency Detection |
| topics[1].id | https://openalex.org/T11094 |
| topics[1].field.id | https://openalex.org/fields/28 |
| topics[1].field.display_name | Neuroscience |
| topics[1].score | 0.9729999899864197 |
| topics[1].domain.id | https://openalex.org/domains/1 |
| topics[1].domain.display_name | Life Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/2805 |
| topics[1].subfield.display_name | Cognitive Neuroscience |
| topics[1].display_name | Face Recognition and Perception |
| topics[2].id | https://openalex.org/T10427 |
| topics[2].field.id | https://openalex.org/fields/28 |
| topics[2].field.display_name | Neuroscience |
| topics[2].score | 0.9279000163078308 |
| topics[2].domain.id | https://openalex.org/domains/1 |
| topics[2].domain.display_name | Life Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/2805 |
| topics[2].subfield.display_name | Cognitive Neuroscience |
| topics[2].display_name | Visual perception and processing mechanisms |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.7139437198638916 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C154945302 |
| concepts[1].level | 1 |
| concepts[1].score | 0.6796454191207886 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[1].display_name | Artificial intelligence |
| concepts[2].id | https://openalex.org/C108583219 |
| concepts[2].level | 2 |
| concepts[2].score | 0.6456137895584106 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q197536 |
| concepts[2].display_name | Deep learning |
| concepts[3].id | https://openalex.org/C184337299 |
| concepts[3].level | 2 |
| concepts[3].score | 0.5992783308029175 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q1437428 |
| concepts[3].display_name | Semantics (computer science) |
| concepts[4].id | https://openalex.org/C527412718 |
| concepts[4].level | 2 |
| concepts[4].score | 0.5334029197692871 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q855395 |
| concepts[4].display_name | Interpretation (philosophy) |
| concepts[5].id | https://openalex.org/C2776760102 |
| concepts[5].level | 3 |
| concepts[5].score | 0.5313117504119873 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q5139990 |
| concepts[5].display_name | Code (set theory) |
| concepts[6].id | https://openalex.org/C119857082 |
| concepts[6].level | 1 |
| concepts[6].score | 0.4743470251560211 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q2539 |
| concepts[6].display_name | Machine learning |
| concepts[7].id | https://openalex.org/C2984842247 |
| concepts[7].level | 3 |
| concepts[7].score | 0.4719747006893158 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q197536 |
| concepts[7].display_name | Deep neural networks |
| concepts[8].id | https://openalex.org/C2986089797 |
| concepts[8].level | 3 |
| concepts[8].score | 0.45510047674179077 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q6501338 |
| concepts[8].display_name | Visual attention |
| concepts[9].id | https://openalex.org/C2779679900 |
| concepts[9].level | 3 |
| concepts[9].score | 0.44303062558174133 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q25304431 |
| concepts[9].display_name | Saliency map |
| concepts[10].id | https://openalex.org/C115961682 |
| concepts[10].level | 2 |
| concepts[10].score | 0.2304506003856659 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q860623 |
| concepts[10].display_name | Image (mathematics) |
| concepts[11].id | https://openalex.org/C15744967 |
| concepts[11].level | 0 |
| concepts[11].score | 0.15787702798843384 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q9418 |
| concepts[11].display_name | Psychology |
| concepts[12].id | https://openalex.org/C169900460 |
| concepts[12].level | 2 |
| concepts[12].score | 0.10826238989830017 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q2200417 |
| concepts[12].display_name | Cognition |
| concepts[13].id | https://openalex.org/C199360897 |
| concepts[13].level | 1 |
| concepts[13].score | 0.0 |
| concepts[13].wikidata | https://www.wikidata.org/wiki/Q9143 |
| concepts[13].display_name | Programming language |
| concepts[14].id | https://openalex.org/C169760540 |
| concepts[14].level | 1 |
| concepts[14].score | 0.0 |
| concepts[14].wikidata | https://www.wikidata.org/wiki/Q207011 |
| concepts[14].display_name | Neuroscience |
| concepts[15].id | https://openalex.org/C177264268 |
| concepts[15].level | 2 |
| concepts[15].score | 0.0 |
| concepts[15].wikidata | https://www.wikidata.org/wiki/Q1514741 |
| concepts[15].display_name | Set (abstract data type) |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.7139437198638916 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[1].score | 0.6796454191207886 |
| keywords[1].display_name | Artificial intelligence |
| keywords[2].id | https://openalex.org/keywords/deep-learning |
| keywords[2].score | 0.6456137895584106 |
| keywords[2].display_name | Deep learning |
| keywords[3].id | https://openalex.org/keywords/semantics |
| keywords[3].score | 0.5992783308029175 |
| keywords[3].display_name | Semantics (computer science) |
| keywords[4].id | https://openalex.org/keywords/interpretation |
| keywords[4].score | 0.5334029197692871 |
| keywords[4].display_name | Interpretation (philosophy) |
| keywords[5].id | https://openalex.org/keywords/code |
| keywords[5].score | 0.5313117504119873 |
| keywords[5].display_name | Code (set theory) |
| keywords[6].id | https://openalex.org/keywords/machine-learning |
| keywords[6].score | 0.4743470251560211 |
| keywords[6].display_name | Machine learning |
| keywords[7].id | https://openalex.org/keywords/deep-neural-networks |
| keywords[7].score | 0.4719747006893158 |
| keywords[7].display_name | Deep neural networks |
| keywords[8].id | https://openalex.org/keywords/visual-attention |
| keywords[8].score | 0.45510047674179077 |
| keywords[8].display_name | Visual attention |
| keywords[9].id | https://openalex.org/keywords/saliency-map |
| keywords[9].score | 0.44303062558174133 |
| keywords[9].display_name | Saliency map |
| keywords[10].id | https://openalex.org/keywords/image |
| keywords[10].score | 0.2304506003856659 |
| keywords[10].display_name | Image (mathematics) |
| keywords[11].id | https://openalex.org/keywords/psychology |
| keywords[11].score | 0.15787702798843384 |
| keywords[11].display_name | Psychology |
| keywords[12].id | https://openalex.org/keywords/cognition |
| keywords[12].score | 0.10826238989830017 |
| keywords[12].display_name | Cognition |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2310.09679 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2310.09679 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2310.09679 |
| locations[1].id | doi:10.48550/arxiv.2310.09679 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | cc-by |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | https://openalex.org/licenses/cc-by |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2310.09679 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5100362202 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-3749-4767 |
| authorships[0].author.display_name | Shi Chen |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Chen, Shi |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5018896387 |
| authorships[1].author.orcid | https://orcid.org/0000-0001-6439-5476 |
| authorships[1].author.display_name | Ming Jiang |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Jiang, Ming |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5047419128 |
| authorships[2].author.orcid | https://orcid.org/0000-0003-3054-8934 |
| authorships[2].author.display_name | Qi Zhao |
| authorships[2].author_position | last |
| authorships[2].raw_author_name | Zhao, Qi |
| authorships[2].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2310.09679 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | What Do Deep Saliency Models Learn about Visual Attention? |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T11605 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9998999834060669 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Visual Attention and Saliency Detection |
| related_works | https://openalex.org/W4377865163, https://openalex.org/W3193857078, https://openalex.org/W2888956734, https://openalex.org/W3000197790, https://openalex.org/W4315865067, https://openalex.org/W2979433843, https://openalex.org/W3208304128, https://openalex.org/W2155482448, https://openalex.org/W2363309472, https://openalex.org/W1513816165 |
| cited_by_count | 5 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 2 |
| counts_by_year[1].year | 2024 |
| counts_by_year[1].cited_by_count | 3 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2310.09679 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2310.09679 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2310.09679 |
| primary_location.id | pmh:oai:arXiv.org:2310.09679 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2310.09679 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2310.09679 |
| publication_date | 2023-10-14 |
| publication_year | 2023 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-to-position index of the abstract; omitted here since the full abstract text appears at the top of this page) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 3 |
| citation_normalized_percentile |
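A note on the abstract_inverted_index field collapsed above: OpenAlex stores abstracts as a map from each word to the list of positions where it occurs, and the plain text is recovered by sorting words by position. A minimal sketch of that reconstruction:

```python
def reconstruct_abstract(inverted_index):
    """Rebuild the abstract from an OpenAlex abstract_inverted_index,
    a dict mapping each word to the positions where it occurs."""
    positioned = [(pos, word)
                  for word, positions in inverted_index.items()
                  for pos in positions]
    return " ".join(word for _, word in sorted(positioned))

# Tiny example using a fragment of this work's index
print(reconstruct_abstract({"In": [0], "recent": [1], "years,": [2]}))
# -> "In recent years,"
```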