EAA-Net: Rethinking the Autoencoder Architecture with Intra-class Features for Medical Image Segmentation
2022 · Open Access · DOI: https://doi.org/10.48550/arxiv.2208.09197
Automatic image segmentation is critical to visual analysis. The autoencoder architecture has achieved satisfactory performance on a variety of image segmentation tasks. However, autoencoders based on convolutional neural networks (CNNs) appear to have reached a bottleneck in improving semantic segmentation accuracy. Increasing the inter-class distance between foreground and background is an inherent characteristic of a segmentation network. However, segmentation networks pay too much attention to the main visual differences between foreground and background and ignore detailed edge information, which reduces the accuracy of edge segmentation. In this paper, we propose a lightweight end-to-end segmentation framework based on multi-task learning, termed Edge Attention autoencoder Network (EAA-Net), to improve edge segmentation. Our approach not only uses the segmentation network to obtain inter-class features, but also applies a reconstruction network to extract intra-class features from the foreground. We further design an intra-class and inter-class feature fusion module, the I2 fusion module, which merges intra-class and inter-class features and uses a soft attention mechanism to remove invalid background information. Experimental results show that our method performs well on medical image segmentation tasks. EAA-Net is easy to implement and has a small computational cost.
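The fusion mechanism the abstract describes — merging intra-class and inter-class features, then applying soft attention to suppress background responses — can be caricatured in a few lines. This is our own minimal NumPy sketch of the general idea, not the paper's I2 fusion module: the additive merge and the sigmoid gate are assumptions, and real inputs would be multi-channel CNN feature maps.

```python
import numpy as np

def sigmoid(x):
    # Elementwise logistic function; maps values into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def i2_fusion_sketch(inter_feat, intra_feat):
    """Toy fusion of inter-class (segmentation) and intra-class
    (reconstruction) feature maps with a soft attention gate.
    The additive merge and sigmoid gate are our assumptions."""
    merged = inter_feat + intra_feat   # combine the two feature streams
    attention = sigmoid(merged)        # soft attention weights in (0, 1)
    return merged * attention          # down-weight low-response (background) regions

# Toy 4x4 "feature maps": zero inter-class response, uniform intra-class response.
inter = np.zeros((4, 4))
intra = np.ones((4, 4))
fused = i2_fusion_sketch(inter, intra)
print(fused.shape)  # (4, 4)
```

Regions where both streams respond weakly get a gate near 0.5 applied to an already-small value, so background is suppressed relative to strongly activated foreground.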
Record Summary
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2208.09197 (PDF: https://arxiv.org/pdf/2208.09197)
- OA Status: green
- Cited By: 3
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4292718586
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4292718586 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2208.09197 (Digital Object Identifier)
- Title: EAA-Net: Rethinking the Autoencoder Architecture with Intra-class Features for Medical Image Segmentation
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2022
- Publication date: 2022-08-19
- Authors: Shiqiang Ma, Xuejian Li, Jijun Tang, Fei Guo (in order)
- Landing page: https://arxiv.org/abs/2208.09197 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2208.09197 (direct link to full-text PDF)
- Open access: Yes
- OA status: green (per OpenAlex)
- OA URL: https://arxiv.org/pdf/2208.09197 (direct OA link)
- Concepts: Segmentation, Artificial intelligence, Computer science, Autoencoder, Scale-space segmentation, Pattern recognition (psychology), Convolutional neural network, Segmentation-based object categorization, Image segmentation, Deep learning, Enhanced Data Rates for GSM Evolution, Computer vision (top concepts attached by OpenAlex)
- Cited by: 3 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 1, 2023: 2
- Related works (count): 10 (works algorithmically related by OpenAlex)
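All of the fields above come from the OpenAlex work record, which can be retrieved from the public OpenAlex API via its documented `/works/{id}` route. A minimal stdlib sketch (the function names are ours):

```python
import json
from urllib.request import urlopen

OPENALEX_API = "https://api.openalex.org/works/"

def work_api_url(openalex_id):
    # Accepts either a bare ID ("W4292718586") or the full
    # https://openalex.org/W... URL and returns the API endpoint.
    short_id = openalex_id.rsplit("/", 1)[-1]
    return OPENALEX_API + short_id

def fetch_work(openalex_id):
    # Network call: download and parse the raw JSON work record.
    with urlopen(work_api_url(openalex_id)) as resp:
        return json.load(resp)

print(work_api_url("https://openalex.org/W4292718586"))
# https://api.openalex.org/works/W4292718586
```

`fetch_work(...)` returns a dict with the same keys shown in the payload below (`title`, `open_access`, `abstract_inverted_index`, and so on).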
Full payload
| id | https://openalex.org/W4292718586 |
|---|---|
| doi | https://doi.org/10.48550/arxiv.2208.09197 |
| ids.doi | https://doi.org/10.48550/arxiv.2208.09197 |
| ids.openalex | https://openalex.org/W4292718586 |
| fwci | |
| type | preprint |
| title | EAA-Net: Rethinking the Autoencoder Architecture with Intra-class Features for Medical Image Segmentation |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10036 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9972000122070312 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Advanced Neural Network Applications |
| topics[1].id | https://openalex.org/T10052 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9954000115394592 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Medical Image Segmentation Techniques |
| topics[2].id | https://openalex.org/T12422 |
| topics[2].field.id | https://openalex.org/fields/27 |
| topics[2].field.display_name | Medicine |
| topics[2].score | 0.9939000010490417 |
| topics[2].domain.id | https://openalex.org/domains/4 |
| topics[2].domain.display_name | Health Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/2741 |
| topics[2].subfield.display_name | Radiology, Nuclear Medicine and Imaging |
| topics[2].display_name | Radiomics and Machine Learning in Medical Imaging |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C89600930 |
| concepts[0].level | 2 |
| concepts[0].score | 0.8218520879745483 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q1423946 |
| concepts[0].display_name | Segmentation |
| concepts[1].id | https://openalex.org/C154945302 |
| concepts[1].level | 1 |
| concepts[1].score | 0.7744291424751282 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[1].display_name | Artificial intelligence |
| concepts[2].id | https://openalex.org/C41008148 |
| concepts[2].level | 0 |
| concepts[2].score | 0.773240327835083 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[2].display_name | Computer science |
| concepts[3].id | https://openalex.org/C101738243 |
| concepts[3].level | 3 |
| concepts[3].score | 0.6464253664016724 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q786435 |
| concepts[3].display_name | Autoencoder |
| concepts[4].id | https://openalex.org/C65885262 |
| concepts[4].level | 4 |
| concepts[4].score | 0.5861983299255371 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q7429708 |
| concepts[4].display_name | Scale-space segmentation |
| concepts[5].id | https://openalex.org/C153180895 |
| concepts[5].level | 2 |
| concepts[5].score | 0.5812259912490845 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q7148389 |
| concepts[5].display_name | Pattern recognition (psychology) |
| concepts[6].id | https://openalex.org/C81363708 |
| concepts[6].level | 2 |
| concepts[6].score | 0.5682346820831299 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q17084460 |
| concepts[6].display_name | Convolutional neural network |
| concepts[7].id | https://openalex.org/C25694479 |
| concepts[7].level | 5 |
| concepts[7].score | 0.565736711025238 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q7446278 |
| concepts[7].display_name | Segmentation-based object categorization |
| concepts[8].id | https://openalex.org/C124504099 |
| concepts[8].level | 3 |
| concepts[8].score | 0.5466279983520508 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q56933 |
| concepts[8].display_name | Image segmentation |
| concepts[9].id | https://openalex.org/C108583219 |
| concepts[9].level | 2 |
| concepts[9].score | 0.45529061555862427 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q197536 |
| concepts[9].display_name | Deep learning |
| concepts[10].id | https://openalex.org/C162307627 |
| concepts[10].level | 2 |
| concepts[10].score | 0.43221646547317505 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q204833 |
| concepts[10].display_name | Enhanced Data Rates for GSM Evolution |
| concepts[11].id | https://openalex.org/C31972630 |
| concepts[11].level | 1 |
| concepts[11].score | 0.41619497537612915 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[11].display_name | Computer vision |
| keywords[0].id | https://openalex.org/keywords/segmentation |
| keywords[0].score | 0.8218520879745483 |
| keywords[0].display_name | Segmentation |
| keywords[1].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[1].score | 0.7744291424751282 |
| keywords[1].display_name | Artificial intelligence |
| keywords[2].id | https://openalex.org/keywords/computer-science |
| keywords[2].score | 0.773240327835083 |
| keywords[2].display_name | Computer science |
| keywords[3].id | https://openalex.org/keywords/autoencoder |
| keywords[3].score | 0.6464253664016724 |
| keywords[3].display_name | Autoencoder |
| keywords[4].id | https://openalex.org/keywords/scale-space-segmentation |
| keywords[4].score | 0.5861983299255371 |
| keywords[4].display_name | Scale-space segmentation |
| keywords[5].id | https://openalex.org/keywords/pattern-recognition |
| keywords[5].score | 0.5812259912490845 |
| keywords[5].display_name | Pattern recognition (psychology) |
| keywords[6].id | https://openalex.org/keywords/convolutional-neural-network |
| keywords[6].score | 0.5682346820831299 |
| keywords[6].display_name | Convolutional neural network |
| keywords[7].id | https://openalex.org/keywords/segmentation-based-object-categorization |
| keywords[7].score | 0.565736711025238 |
| keywords[7].display_name | Segmentation-based object categorization |
| keywords[8].id | https://openalex.org/keywords/image-segmentation |
| keywords[8].score | 0.5466279983520508 |
| keywords[8].display_name | Image segmentation |
| keywords[9].id | https://openalex.org/keywords/deep-learning |
| keywords[9].score | 0.45529061555862427 |
| keywords[9].display_name | Deep learning |
| keywords[10].id | https://openalex.org/keywords/enhanced-data-rates-for-gsm-evolution |
| keywords[10].score | 0.43221646547317505 |
| keywords[10].display_name | Enhanced Data Rates for GSM Evolution |
| keywords[11].id | https://openalex.org/keywords/computer-vision |
| keywords[11].score | 0.41619497537612915 |
| keywords[11].display_name | Computer vision |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2208.09197 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2208.09197 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2208.09197 |
| locations[1].id | doi:10.48550/arxiv.2208.09197 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2208.09197 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5011505829 |
| authorships[0].author.orcid | https://orcid.org/0009-0004-2329-5873 |
| authorships[0].author.display_name | Shiqiang Ma |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Ma, Shiqiang |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5101810624 |
| authorships[1].author.orcid | https://orcid.org/0000-0001-5536-7940 |
| authorships[1].author.display_name | Xuejian Li |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Li, Xuejian |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5001619694 |
| authorships[2].author.orcid | https://orcid.org/0000-0002-6377-536X |
| authorships[2].author.display_name | Jijun Tang |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Tang, Jijun |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5100702161 |
| authorships[3].author.orcid | https://orcid.org/0000-0001-8346-0798 |
| authorships[3].author.display_name | Fei Guo |
| authorships[3].author_position | last |
| authorships[3].raw_author_name | Guo, Fei |
| authorships[3].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2208.09197 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2022-08-23T00:00:00 |
| display_name | EAA-Net: Rethinking the Autoencoder Architecture with Intra-class Features for Medical Image Segmentation |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10036 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9972000122070312 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Advanced Neural Network Applications |
| related_works | https://openalex.org/W3144569342, https://openalex.org/W2945274617, https://openalex.org/W2185902295, https://openalex.org/W2103507220, https://openalex.org/W2055202857, https://openalex.org/W2371519352, https://openalex.org/W4205800335, https://openalex.org/W2386644571, https://openalex.org/W2372421320, https://openalex.org/W2551987074 |
| cited_by_count | 3 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 1 |
| counts_by_year[1].year | 2023 |
| counts_by_year[1].cited_by_count | 2 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2208.09197 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2208.09197 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2208.09197 |
| primary_location.id | pmh:oai:arXiv.org:2208.09197 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2208.09197 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2208.09197 |
| publication_date | 2022-08-19 |
| publication_year | 2022 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-to-positions map encoding the abstract reproduced above; raw entries omitted) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 4 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/9 |
| sustainable_development_goals[0].score | 0.41999998688697815 |
| sustainable_development_goals[0].display_name | Industry, innovation and infrastructure |
| citation_normalized_percentile |
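The `abstract_inverted_index` field in the payload stores the abstract as a map from each word to the positions where it occurs. Rebuilding the plain text is a small sort-and-join; a sketch with a toy three-word index:

```python
def reconstruct_abstract(inverted_index):
    # Flatten {word: [positions]} into (position, word) pairs,
    # then sort by position and join back into running text.
    pairs = [(pos, word)
             for word, positions in inverted_index.items()
             for pos in positions]
    return " ".join(word for _, word in sorted(pairs))

# Toy index covering the first three words of the abstract above.
toy_index = {"Automatic": [0], "image": [1], "segmentation": [2]}
print(reconstruct_abstract(toy_index))  # Automatic image segmentation
```

Applied to the full `abstract_inverted_index` of this record, the same function yields the abstract shown at the top of the page.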