Saliency-Aware Deep Learning Approach for Enhanced Endoscopic Image Super-Resolution
Mansoor Hayat · Supavadee Aramvith · 2024 · Open Access
DOI: https://doi.org/10.1109/access.2024.3402953
The adoption of stereo imaging technology in endoscopic procedures represents a transformative advancement in medical imaging, providing surgeons with depth perception and detailed views of internal anatomy for enhanced diagnostic accuracy and surgical precision. However, the practical application of stereo imaging in endoscopy faces challenges, including the generation of low-resolution and blurred images, which can hinder the effectiveness of medical diagnoses and interventions. Our research introduces an endoscopic image super-resolution (SR) model in response to these specific challenges. The model features an innovative feature extraction module and an advanced cross-view feature interaction module tailored to the intricacies of endoscopic imagery. Initially trained on the SCARED dataset, the model was rigorously tested across four additional publicly available endoscopic image datasets at scales ×2, ×4, and ×8, demonstrating substantial performance improvements in endoscopic SR. The results show that our model not only enhances the quality of endoscopic images but also consistently surpasses existing methods such as E-SEVSR, DCSSRNet, and CCSBESR across all tested datasets, both in quantitative measures such as PSNR and SSIM and in qualitative evaluations. The successful application of our SR model to endoscopic imaging has the potential to revolutionize medical diagnostics and surgery, significantly increasing the precision and effectiveness of endoscopic procedures. The code will be released on GitHub at https://github.com/cu-vtrg-lab/Saliency-Aware-Deep-Learning-Approach-for-Enhanced-Endoscopic-Image-SR.
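As a reference point for the evaluation the abstract describes, here is a minimal sketch of how an SR output is typically scored against its ground-truth frame with PSNR and SSIM, assuming scikit-image is installed; the synthetic arrays below stand in for real endoscopic frames and are not from the paper.

```python
# Minimal sketch: scoring a super-resolved frame against ground truth with
# PSNR and SSIM, the two quantitative metrics reported in the paper.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sr(sr: np.ndarray, hr: np.ndarray) -> tuple[float, float]:
    """Return (PSNR in dB, SSIM) for uint8 RGB images of equal shape."""
    psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
    ssim = structural_similarity(hr, sr, channel_axis=-1, data_range=255)
    return psnr, ssim

# Example with synthetic data (placeholder for decoded endoscopic frames):
hr = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
noise = np.random.randint(-5, 6, hr.shape)
sr = np.clip(hr.astype(int) + noise, 0, 255).astype(np.uint8)
print("PSNR: %.2f dB, SSIM: %.4f" % evaluate_sr(sr, hr))
```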
Record Details
- Type: article
- Language: en
- Landing Page: https://doi.org/10.1109/access.2024.3402953
- PDF: https://ieeexplore.ieee.org/ielx7/6287639/6514899/10534254.pdf
- OA Status: gold
- Cited By: 21
- References: 51
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4398150979
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4398150979 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.1109/access.2024.3402953 (Digital Object Identifier)
- Title: Saliency-Aware Deep Learning Approach for Enhanced Endoscopic Image Super-Resolution (work title)
- Type: article (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2024 (year of publication)
- Publication date: 2024-01-01 (full publication date if available)
- Authors: Mansoor Hayat, Supavadee Aramvith (list of authors in order)
- Landing page: https://doi.org/10.1109/access.2024.3402953 (publisher landing page)
- PDF URL: https://ieeexplore.ieee.org/ielx7/6287639/6514899/10534254.pdf (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: gold (open access status per OpenAlex)
- OA URL: https://ieeexplore.ieee.org/ielx7/6287639/6514899/10534254.pdf (direct OA link when available)
- Concepts: Computer science, Artificial intelligence, Deep learning, Computer vision, Image (mathematics), Image resolution (top concepts attached by OpenAlex)
- Cited by: 21 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 10, 2024: 11 (per-year citation counts, last 5 years)
- References (count): 51 (number of works referenced by this work)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
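These fields mirror what the OpenAlex works API returns for this record. A minimal sketch of retrieving it directly, assuming the requests package; the field names used here are confirmed by the payload below.

```python
# Minimal sketch: fetching this record from the OpenAlex works API.
# Endpoint form: https://api.openalex.org/works/{openalex_id}
import requests

work_id = "W4398150979"  # OpenAlex ID for this article
resp = requests.get(f"https://api.openalex.org/works/{work_id}", timeout=30)
resp.raise_for_status()
work = resp.json()

print(work["display_name"])              # title
print(work["doi"])                       # DOI URL
print(work["cited_by_count"])            # total citations (21 at capture time)
print(work["open_access"]["oa_status"])  # "gold"
```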
Full payload
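The rows below flatten the nested OpenAlex JSON into dotted key paths with bracketed list indexes (e.g. locations[0].source.issn). A minimal sketch of that flattening convention, using a hypothetical flatten helper:

```python
# Minimal sketch: flattening nested JSON into the dotted-path rows used in
# the table below. `flatten` is a hypothetical helper, not an OpenAlex API.
def flatten(obj, prefix: str = "") -> dict:
    rows = {}
    if isinstance(obj, dict):
        for key, val in obj.items():
            rows.update(flatten(val, f"{prefix}.{key}" if prefix else key))
    elif isinstance(obj, list):
        for i, val in enumerate(obj):
            rows.update(flatten(val, f"{prefix}[{i}]"))
    else:
        rows[prefix] = obj  # leaf value becomes one table row
    return rows

print(flatten({"locations": [{"source": {"issn": "2169-3536"}}]}))
# {'locations[0].source.issn': '2169-3536'}
```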
| Field | Value |
|---|---|
| id | https://openalex.org/W4398150979 |
| doi | https://doi.org/10.1109/access.2024.3402953 |
| ids.doi | https://doi.org/10.1109/access.2024.3402953 |
| ids.openalex | https://openalex.org/W4398150979 |
| fwci | 12.91363503 |
| type | article |
| title | Saliency-Aware Deep Learning Approach for Enhanced Endoscopic Image Super-Resolution |
| biblio.issue | |
| biblio.volume | 12 |
| biblio.last_page | 83465 |
| biblio.first_page | 83452 |
| topics[0].id | https://openalex.org/T11659 |
| topics[0].field.id | https://openalex.org/fields/22 |
| topics[0].field.display_name | Engineering |
| topics[0].score | 0.9642000198364258 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/2214 |
| topics[0].subfield.display_name | Media Technology |
| topics[0].display_name | Advanced Image Fusion Techniques |
| funders[0].id | https://openalex.org/F4320321557 |
| funders[0].ror | https://ror.org/028wp3y58 |
| funders[0].display_name | Chulalongkorn University |
| is_xpac | False |
| apc_list.value | 1850 |
| apc_list.currency | USD |
| apc_list.value_usd | 1850 |
| apc_paid.value | 1850 |
| apc_paid.currency | USD |
| apc_paid.value_usd | 1850 |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.7476553320884705 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C154945302 |
| concepts[1].level | 1 |
| concepts[1].score | 0.5673074722290039 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[1].display_name | Artificial intelligence |
| concepts[2].id | https://openalex.org/C108583219 |
| concepts[2].level | 2 |
| concepts[2].score | 0.5217681527137756 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q197536 |
| concepts[2].display_name | Deep learning |
| concepts[3].id | https://openalex.org/C31972630 |
| concepts[3].level | 1 |
| concepts[3].score | 0.4986722469329834 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[3].display_name | Computer vision |
| concepts[4].id | https://openalex.org/C115961682 |
| concepts[4].level | 2 |
| concepts[4].score | 0.4611641764640808 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q860623 |
| concepts[4].display_name | Image (mathematics) |
| concepts[5].id | https://openalex.org/C205372480 |
| concepts[5].level | 2 |
| concepts[5].score | 0.41840797662734985 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q210521 |
| concepts[5].display_name | Image resolution |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.7476553320884705 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[1].score | 0.5673074722290039 |
| keywords[1].display_name | Artificial intelligence |
| keywords[2].id | https://openalex.org/keywords/deep-learning |
| keywords[2].score | 0.5217681527137756 |
| keywords[2].display_name | Deep learning |
| keywords[3].id | https://openalex.org/keywords/computer-vision |
| keywords[3].score | 0.4986722469329834 |
| keywords[3].display_name | Computer vision |
| keywords[4].id | https://openalex.org/keywords/image |
| keywords[4].score | 0.4611641764640808 |
| keywords[4].display_name | Image (mathematics) |
| keywords[5].id | https://openalex.org/keywords/image-resolution |
| keywords[5].score | 0.41840797662734985 |
| keywords[5].display_name | Image resolution |
| language | en |
| locations[0].id | doi:10.1109/access.2024.3402953 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S2485537415 |
| locations[0].source.issn | 2169-3536 |
| locations[0].source.type | journal |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | 2169-3536 |
| locations[0].source.is_core | True |
| locations[0].source.is_in_doaj | True |
| locations[0].source.display_name | IEEE Access |
| locations[0].source.host_organization | https://openalex.org/P4310319808 |
| locations[0].source.host_organization_name | Institute of Electrical and Electronics Engineers |
| locations[0].source.host_organization_lineage | https://openalex.org/P4310319808 |
| locations[0].source.host_organization_lineage_names | Institute of Electrical and Electronics Engineers |
| locations[0].license | cc-by |
| locations[0].pdf_url | https://ieeexplore.ieee.org/ielx7/6287639/6514899/10534254.pdf |
| locations[0].version | publishedVersion |
| locations[0].raw_type | journal-article |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | IEEE Access |
| locations[0].landing_page_url | https://doi.org/10.1109/access.2024.3402953 |
| locations[1].id | pmh:oai:doaj.org/article:f91b4f5388514582a104eed6f10ba94d |
| locations[1].is_oa | False |
| locations[1].source.id | https://openalex.org/S4306401280 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | False |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | DOAJ (DOAJ: Directory of Open Access Journals) |
| locations[1].source.host_organization | |
| locations[1].source.host_organization_name | |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | submittedVersion |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | False |
| locations[1].raw_source_name | IEEE Access, Vol 12, Pp 83452-83465 (2024) |
| locations[1].landing_page_url | https://doaj.org/article/f91b4f5388514582a104eed6f10ba94d |
| indexed_in | crossref, doaj |
| authorships[0].author.id | https://openalex.org/A5101992584 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-1131-2398 |
| authorships[0].author.display_name | Mansoor Hayat |
| authorships[0].countries | TH |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I158708052 |
| authorships[0].affiliations[0].raw_affiliation_string | Department of Electrical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, Thailand |
| authorships[0].institutions[0].id | https://openalex.org/I158708052 |
| authorships[0].institutions[0].ror | https://ror.org/028wp3y58 |
| authorships[0].institutions[0].type | education |
| authorships[0].institutions[0].lineage | https://openalex.org/I158708052 |
| authorships[0].institutions[0].country_code | TH |
| authorships[0].institutions[0].display_name | Chulalongkorn University |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Mansoor Hayat |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | Department of Electrical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, Thailand |
| authorships[1].author.id | https://openalex.org/A5069698375 |
| authorships[1].author.orcid | https://orcid.org/0000-0001-9840-3171 |
| authorships[1].author.display_name | Supavadee Aramvith |
| authorships[1].countries | TH |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I158708052 |
| authorships[1].affiliations[0].raw_affiliation_string | Multimedia Data Analytics and Processing Research Unit, Department of Electrical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, Thailand |
| authorships[1].institutions[0].id | https://openalex.org/I158708052 |
| authorships[1].institutions[0].ror | https://ror.org/028wp3y58 |
| authorships[1].institutions[0].type | education |
| authorships[1].institutions[0].lineage | https://openalex.org/I158708052 |
| authorships[1].institutions[0].country_code | TH |
| authorships[1].institutions[0].display_name | Chulalongkorn University |
| authorships[1].author_position | last |
| authorships[1].raw_author_name | Supavadee Aramvith |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | Multimedia Data Analytics and Processing Research Unit, Department of Electrical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, Thailand |
| has_content.pdf | True |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://ieeexplore.ieee.org/ielx7/6287639/6514899/10534254.pdf |
| open_access.oa_status | gold |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Saliency-Aware Deep Learning Approach for Enhanced Endoscopic Image Super-Resolution |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T11659 |
| primary_topic.field.id | https://openalex.org/fields/22 |
| primary_topic.field.display_name | Engineering |
| primary_topic.score | 0.9642000198364258 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/2214 |
| primary_topic.subfield.display_name | Media Technology |
| primary_topic.display_name | Advanced Image Fusion Techniques |
| related_works | https://openalex.org/W2731899572, https://openalex.org/W3215138031, https://openalex.org/W2058170566, https://openalex.org/W2755342338, https://openalex.org/W2772917594, https://openalex.org/W2775347418, https://openalex.org/W2166024367, https://openalex.org/W3009238340, https://openalex.org/W3116076068, https://openalex.org/W2229312674 |
| cited_by_count | 21 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 10 |
| counts_by_year[1].year | 2024 |
| counts_by_year[1].cited_by_count | 11 |
| locations_count | 2 |
| best_oa_location.id | doi:10.1109/access.2024.3402953 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S2485537415 |
| best_oa_location.source.issn | 2169-3536 |
| best_oa_location.source.type | journal |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | 2169-3536 |
| best_oa_location.source.is_core | True |
| best_oa_location.source.is_in_doaj | True |
| best_oa_location.source.display_name | IEEE Access |
| best_oa_location.source.host_organization | https://openalex.org/P4310319808 |
| best_oa_location.source.host_organization_name | Institute of Electrical and Electronics Engineers |
| best_oa_location.source.host_organization_lineage | https://openalex.org/P4310319808 |
| best_oa_location.source.host_organization_lineage_names | Institute of Electrical and Electronics Engineers |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | https://ieeexplore.ieee.org/ielx7/6287639/6514899/10534254.pdf |
| best_oa_location.version | publishedVersion |
| best_oa_location.raw_type | journal-article |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | True |
| best_oa_location.raw_source_name | IEEE Access |
| best_oa_location.landing_page_url | https://doi.org/10.1109/access.2024.3402953 |
| primary_location.id | doi:10.1109/access.2024.3402953 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S2485537415 |
| primary_location.source.issn | 2169-3536 |
| primary_location.source.type | journal |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | 2169-3536 |
| primary_location.source.is_core | True |
| primary_location.source.is_in_doaj | True |
| primary_location.source.display_name | IEEE Access |
| primary_location.source.host_organization | https://openalex.org/P4310319808 |
| primary_location.source.host_organization_name | Institute of Electrical and Electronics Engineers |
| primary_location.source.host_organization_lineage | https://openalex.org/P4310319808 |
| primary_location.source.host_organization_lineage_names | Institute of Electrical and Electronics Engineers |
| primary_location.license | cc-by |
| primary_location.pdf_url | https://ieeexplore.ieee.org/ielx7/6287639/6514899/10534254.pdf |
| primary_location.version | publishedVersion |
| primary_location.raw_type | journal-article |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | IEEE Access |
| primary_location.landing_page_url | https://doi.org/10.1109/access.2024.3402953 |
| publication_date | 2024-01-01 |
| publication_year | 2024 |
| referenced_works | https://openalex.org/W2789029145, https://openalex.org/W2139355829, https://openalex.org/W3134642121, https://openalex.org/W3133432362, https://openalex.org/W4388901718, https://openalex.org/W4391974547, https://openalex.org/W3035606294, https://openalex.org/W2799046095, https://openalex.org/W3175560234, https://openalex.org/W4225805704, https://openalex.org/W4375928705, https://openalex.org/W2965669158, https://openalex.org/W2141568386, https://openalex.org/W2915878580, https://openalex.org/W4319337062, https://openalex.org/W3035302306, https://openalex.org/W2963774720, https://openalex.org/W3170026688, https://openalex.org/W4382393357, https://openalex.org/W2242218935, https://openalex.org/W3000775737, https://openalex.org/W6758681311, https://openalex.org/W2954930822, https://openalex.org/W3195836789, https://openalex.org/W2922435819, https://openalex.org/W3136148319, https://openalex.org/W3085848841, https://openalex.org/W2027755635, https://openalex.org/W2320725294, https://openalex.org/W2605737038, https://openalex.org/W2488588527, https://openalex.org/W2798302728, https://openalex.org/W2163935418, https://openalex.org/W2747898905, https://openalex.org/W4387806968, https://openalex.org/W54257720, https://openalex.org/W6864775960, https://openalex.org/W2996544626, https://openalex.org/W3092755853, https://openalex.org/W4389610328, https://openalex.org/W4390706297, https://openalex.org/W4386453635, https://openalex.org/W4292828587, https://openalex.org/W4284711554, https://openalex.org/W4226313662, https://openalex.org/W2964101377, https://openalex.org/W2999912553, https://openalex.org/W63091017, https://openalex.org/W2950217418, https://openalex.org/W3092721525, https://openalex.org/W4395064917 |
| referenced_works_count | 51 |
| abstract_inverted_index | (word-position index of the abstract, omitted here; the abstract is reproduced in full above. A reconstruction sketch follows this table.) |
| cited_by_percentile_year.max | 99 |
| cited_by_percentile_year.min | 98 |
| countries_distinct_count | 1 |
| institutions_distinct_count | 2 |
| citation_normalized_percentile.value | 0.98123694 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | True |
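OpenAlex ships abstracts as an inverted index mapping each word to the positions where it occurs, which is what the omitted abstract_inverted_index rows contained. A minimal sketch of reconstructing plain text from such an index; the toy dictionary is illustrative, while real input would come from work["abstract_inverted_index"].

```python
# Minimal sketch: rebuilding an abstract from OpenAlex's inverted index,
# which maps each word to the list of positions where it occurs.
def reconstruct_abstract(inverted_index: dict[str, list[int]]) -> str:
    positions = {}
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions[i] = word          # place each word at its positions
    return " ".join(positions[i] for i in sorted(positions))

# Toy example (real indexes come from work["abstract_inverted_index"]):
toy = {"The": [0], "adoption": [1], "of": [2], "Stereo": [3], "Imaging": [4]}
print(reconstruct_abstract(toy))  # -> "The adoption of Stereo Imaging"
```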