Korean Spoken Accent Identification Using T-vector Embeddings
2025 · Open Access · DOI: https://doi.org/10.11648/j.sr.20251302.11
In this paper, we introduce a spoken accent identification system for the Korean language that uses t-vector embeddings extracted from the state-of-the-art TitaNet neural network. To implement the system, we propose two approaches. First, we introduce a method for collecting training data for Korean spoken accent identification. Korean accents can be broadly classified into four categories: standard, southern, northwestern, and northeastern. Speech data for the standard accent can be obtained easily from various videos and websites, but data for the other accents are very rare and therefore difficult to collect. To mitigate this data scarcity, we introduce synthetic audio augmentation based on text-to-speech (TTS) synthesis, under the condition that the synthetic audio generated by TTS retains the accent information of the original speaker. Second, we propose an approach to building a deep neural network (DNN) for Korean spoken accent identification by fine-tuning the trainable parameters of a pre-trained TitaNet speaker recognition model on the aforementioned training dataset. Given the trained TitaNet model, accent identification is performed using t-vector embedding features extracted from that model together with a cosine distance function. Experimental results show that our proposed accent identification system outperforms systems based on other state-of-the-art DNNs such as x-vector and ECAPA-TDNN.
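The pipeline the abstract describes, extracting a fixed-size embedding (t-vector) per utterance from a TitaNet model and then scoring accents by cosine similarity, can be sketched as below. This is a minimal illustration assuming the NVIDIA NeMo toolkit and its public "titanet_large" checkpoint; the file names and the use of one centroid embedding per accent class are assumptions for the example, not details taken from the paper.

```python
# Minimal sketch of t-vector accent identification, assuming the NVIDIA NeMo
# toolkit and its public "titanet_large" checkpoint. File names and the use
# of one centroid embedding per accent are illustrative assumptions; the
# paper fine-tunes TitaNet on accent-labelled Korean speech before this step.
import numpy as np
import nemo.collections.asr as nemo_asr

model = nemo_asr.models.EncDecSpeakerLabelModel.from_pretrained("titanet_large")

def t_vector(wav_path: str) -> np.ndarray:
    """Extract an L2-normalised utterance embedding (t-vector)."""
    emb = model.get_embedding(wav_path).squeeze().detach().cpu().numpy()
    return emb / np.linalg.norm(emb)

# Hypothetical enrollment clips, one per accent class; in practice each
# centroid would average embeddings over many utterances of that accent.
ACCENTS = ["standard", "southern", "northwestern", "northeastern"]
centroids = {a: t_vector(f"enroll_{a}.wav") for a in ACCENTS}

def identify(wav_path: str) -> str:
    """Pick the accent whose centroid is nearest in cosine distance."""
    q = t_vector(wav_path)
    # On unit vectors, maximising the dot product minimises cosine distance.
    return max(ACCENTS, key=lambda a: float(np.dot(q, centroids[a])))

print(identify("test_utterance.wav"))
```

Because every embedding is unit-normalised at extraction time, cosine distance reduces to one minus a dot product, so the argmax over dot products above is equivalent to a nearest-centroid decision under cosine distance.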
Related Topics
- Type: article
- Language: en
- Landing Page: https://doi.org/10.11648/j.sr.20251302.11
- PDF: http://article.sciencepg.com/pdf/j.sr.20251302.11
- OA Status: diamond
- References: 32
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4411854620
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4411854620 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.11648/j.sr.20251302.11 (Digital Object Identifier)
- Title: Korean Spoken Accent Identification Using T-vector Embeddings (work title)
- Type: article (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025
- Publication date: 2025-06-30 (full publication date if available)
- Authors: Yong Om, Hak Yong Kim (list of authors in order)
- Landing page: https://doi.org/10.11648/j.sr.20251302.11 (publisher landing page)
- PDF URL: https://article.sciencepg.com/pdf/j.sr.20251302.11 (direct link to full-text PDF)
- Open access: yes (whether a free full text is available)
- OA status: diamond (open access status per OpenAlex)
- OA URL: https://article.sciencepg.com/pdf/j.sr.20251302.11 (direct OA link when available)
- Concepts: Stress (linguistics), Computer science, Speech recognition, Identification (biology), Artificial neural network, Pitch accent, Artificial intelligence, Natural language processing, Prosody, Botany, Biology (top concepts attached by OpenAlex)
- Cited by: 0 (total citation count in OpenAlex)
- References (count): 32 (number of works referenced by this work)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
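The fields summarised above come from the OpenAlex work record, and a record like this one can be retrieved directly from the public OpenAlex REST API. A minimal sketch, assuming only the Python standard library; the mailto address is a placeholder you would replace with your own:

```python
# Minimal sketch: fetch this work's record from the public OpenAlex REST API.
# The mailto parameter is a placeholder; OpenAlex asks polite clients to
# identify themselves this way, but the endpoint works without it.
import json
import urllib.request

url = "https://api.openalex.org/works/W4411854620?mailto=you@example.org"
with urllib.request.urlopen(url) as resp:
    work = json.load(resp)

# A few of the fields summarised above.
print(work["display_name"])               # work title
print(work["doi"])                        # DOI URL
print(work["publication_date"])           # "2025-06-30"
print(work["open_access"]["oa_status"])   # "diamond"
```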
Full payload
| id | https://openalex.org/W4411854620 |
|---|---|
| doi | https://doi.org/10.11648/j.sr.20251302.11 |
| ids.doi | https://doi.org/10.11648/j.sr.20251302.11 |
| ids.openalex | https://openalex.org/W4411854620 |
| fwci | 0.0 |
| type | article |
| title | Korean Spoken Accent Identification Using T-vector Embeddings |
| biblio.issue | 2 |
| biblio.volume | 13 |
| biblio.last_page | 20 |
| biblio.first_page | 13 |
| topics[0].id | https://openalex.org/T10201 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9997000098228455 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1702 |
| topics[0].subfield.display_name | Artificial Intelligence |
| topics[0].display_name | Speech Recognition and Synthesis |
| topics[1].id | https://openalex.org/T10860 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9950000047683716 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1711 |
| topics[1].subfield.display_name | Signal Processing |
| topics[1].display_name | Speech and Audio Processing |
| topics[2].id | https://openalex.org/T11309 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.982200026512146 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1711 |
| topics[2].subfield.display_name | Signal Processing |
| topics[2].display_name | Music and Audio Processing |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C2776756274 |
| concepts[0].level | 2 |
| concepts[0].score | 0.9307663440704346 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q181767 |
| concepts[0].display_name | Stress (linguistics) |
| concepts[1].id | https://openalex.org/C41008148 |
| concepts[1].level | 0 |
| concepts[1].score | 0.7252787947654724 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[1].display_name | Computer science |
| concepts[2].id | https://openalex.org/C28490314 |
| concepts[2].level | 1 |
| concepts[2].score | 0.6669312119483948 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q189436 |
| concepts[2].display_name | Speech recognition |
| concepts[3].id | https://openalex.org/C116834253 |
| concepts[3].level | 2 |
| concepts[3].score | 0.6393295526504517 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q2039217 |
| concepts[3].display_name | Identification (biology) |
| concepts[4].id | https://openalex.org/C50644808 |
| concepts[4].level | 2 |
| concepts[4].score | 0.567676842212677 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q192776 |
| concepts[4].display_name | Artificial neural network |
| concepts[5].id | https://openalex.org/C2777672088 |
| concepts[5].level | 3 |
| concepts[5].score | 0.5342637300491333 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q1441804 |
| concepts[5].display_name | Pitch accent |
| concepts[6].id | https://openalex.org/C154945302 |
| concepts[6].level | 1 |
| concepts[6].score | 0.5258943438529968 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[6].display_name | Artificial intelligence |
| concepts[7].id | https://openalex.org/C204321447 |
| concepts[7].level | 1 |
| concepts[7].score | 0.5028135180473328 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q30642 |
| concepts[7].display_name | Natural language processing |
| concepts[8].id | https://openalex.org/C542774811 |
| concepts[8].level | 2 |
| concepts[8].score | 0.2681894600391388 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q10880526 |
| concepts[8].display_name | Prosody |
| concepts[9].id | https://openalex.org/C59822182 |
| concepts[9].level | 1 |
| concepts[9].score | 0.0 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q441 |
| concepts[9].display_name | Botany |
| concepts[10].id | https://openalex.org/C86803240 |
| concepts[10].level | 0 |
| concepts[10].score | 0.0 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q420 |
| concepts[10].display_name | Biology |
| keywords[0].id | https://openalex.org/keywords/stress |
| keywords[0].score | 0.9307663440704346 |
| keywords[0].display_name | Stress (linguistics) |
| keywords[1].id | https://openalex.org/keywords/computer-science |
| keywords[1].score | 0.7252787947654724 |
| keywords[1].display_name | Computer science |
| keywords[2].id | https://openalex.org/keywords/speech-recognition |
| keywords[2].score | 0.6669312119483948 |
| keywords[2].display_name | Speech recognition |
| keywords[3].id | https://openalex.org/keywords/identification |
| keywords[3].score | 0.6393295526504517 |
| keywords[3].display_name | Identification (biology) |
| keywords[4].id | https://openalex.org/keywords/artificial-neural-network |
| keywords[4].score | 0.567676842212677 |
| keywords[4].display_name | Artificial neural network |
| keywords[5].id | https://openalex.org/keywords/pitch-accent |
| keywords[5].score | 0.5342637300491333 |
| keywords[5].display_name | Pitch accent |
| keywords[6].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[6].score | 0.5258943438529968 |
| keywords[6].display_name | Artificial intelligence |
| keywords[7].id | https://openalex.org/keywords/natural-language-processing |
| keywords[7].score | 0.5028135180473328 |
| keywords[7].display_name | Natural language processing |
| keywords[8].id | https://openalex.org/keywords/prosody |
| keywords[8].score | 0.2681894600391388 |
| keywords[8].display_name | Prosody |
| language | en |
| locations[0].id | doi:10.11648/j.sr.20251302.11 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4210220575 |
| locations[0].source.issn | 2329-0927, 2329-0935 |
| locations[0].source.type | journal |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | 2329-0927 |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | Science Research |
| locations[0].source.host_organization | https://openalex.org/P4310319272 |
| locations[0].source.host_organization_name | Science Publishing Group |
| locations[0].source.host_organization_lineage | https://openalex.org/P4310319272 |
| locations[0].source.host_organization_lineage_names | Science Publishing Group |
| locations[0].license | cc-by |
| locations[0].pdf_url | http://article.sciencepg.com/pdf/j.sr.20251302.11 |
| locations[0].version | publishedVersion |
| locations[0].raw_type | journal-article |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | Science Research |
| locations[0].landing_page_url | https://doi.org/10.11648/j.sr.20251302.11 |
| indexed_in | crossref |
| authorships[0].author.id | https://openalex.org/A5118742834 |
| authorships[0].author.orcid | |
| authorships[0].author.display_name | Yong Om |
| authorships[0].affiliations[0].raw_affiliation_string | Institute of AI Technology, University of Science, Pyongyang, Democratic People’s Republic of Korea |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Yong Om |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | Institute of AI Technology, University of Science, Pyongyang, Democratic People’s Republic of Korea |
| authorships[1].author.id | https://openalex.org/A5100610781 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-4133-124X |
| authorships[1].author.display_name | Hak Yong Kim |
| authorships[1].affiliations[0].raw_affiliation_string | Institute of AI Technology, University of Science, Pyongyang, Democratic People’s Republic of Korea |
| authorships[1].author_position | last |
| authorships[1].raw_author_name | Hak Kim |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | Institute of AI Technology, University of Science, Pyongyang, Democratic People’s Republic of Korea |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | http://article.sciencepg.com/pdf/j.sr.20251302.11 |
| open_access.oa_status | diamond |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Korean Spoken Accent Identification Using T-vector Embeddings |
| has_fulltext | True |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T10201 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9997000098228455 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1702 |
| primary_topic.subfield.display_name | Artificial Intelligence |
| primary_topic.display_name | Speech Recognition and Synthesis |
| related_works | https://openalex.org/W2088008556, https://openalex.org/W4360877803, https://openalex.org/W4298046075, https://openalex.org/W2334135487, https://openalex.org/W4207066001, https://openalex.org/W2381837697, https://openalex.org/W2072461533, https://openalex.org/W162378616, https://openalex.org/W4251666207, https://openalex.org/W2087397317 |
| cited_by_count | 0 |
| locations_count | 1 |
| best_oa_location.id | doi:10.11648/j.sr.20251302.11 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4210220575 |
| best_oa_location.source.issn | 2329-0927, 2329-0935 |
| best_oa_location.source.type | journal |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | 2329-0927 |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | Science Research |
| best_oa_location.source.host_organization | https://openalex.org/P4310319272 |
| best_oa_location.source.host_organization_name | Science Publishing Group |
| best_oa_location.source.host_organization_lineage | https://openalex.org/P4310319272 |
| best_oa_location.source.host_organization_lineage_names | Science Publishing Group |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | http://article.sciencepg.com/pdf/j.sr.20251302.11 |
| best_oa_location.version | publishedVersion |
| best_oa_location.raw_type | journal-article |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | True |
| best_oa_location.raw_source_name | Science Research |
| best_oa_location.landing_page_url | https://doi.org/10.11648/j.sr.20251302.11 |
| primary_location.id | doi:10.11648/j.sr.20251302.11 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4210220575 |
| primary_location.source.issn | 2329-0927, 2329-0935 |
| primary_location.source.type | journal |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | 2329-0927 |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | Science Research |
| primary_location.source.host_organization | https://openalex.org/P4310319272 |
| primary_location.source.host_organization_name | Science Publishing Group |
| primary_location.source.host_organization_lineage | https://openalex.org/P4310319272 |
| primary_location.source.host_organization_lineage_names | Science Publishing Group |
| primary_location.license | cc-by |
| primary_location.pdf_url | http://article.sciencepg.com/pdf/j.sr.20251302.11 |
| primary_location.version | publishedVersion |
| primary_location.raw_type | journal-article |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | Science Research |
| primary_location.landing_page_url | https://doi.org/10.11648/j.sr.20251302.11 |
| publication_date | 2025-06-30 |
| publication_year | 2025 |
| referenced_works | https://openalex.org/W1981320719, https://openalex.org/W2150769028, https://openalex.org/W2408021097, https://openalex.org/W2807627734, https://openalex.org/W2194775991, https://openalex.org/W4206878028, https://openalex.org/W2293441309, https://openalex.org/W3092813500, https://openalex.org/W3115366194, https://openalex.org/W2888867175, https://openalex.org/W2894279617, https://openalex.org/W3211614993, https://openalex.org/W3025384377, https://openalex.org/W2972369255, https://openalex.org/W3024869864, https://openalex.org/W4406858425, https://openalex.org/W4408354960, https://openalex.org/W4406482993, https://openalex.org/W3205878676, https://openalex.org/W3181865509, https://openalex.org/W4385807507, https://openalex.org/W4311000453, https://openalex.org/W3036601975, https://openalex.org/W4392903482, https://openalex.org/W3172148458, https://openalex.org/W3092028330, https://openalex.org/W3095173472, https://openalex.org/W3015537910, https://openalex.org/W2974231335, https://openalex.org/W3209984917, https://openalex.org/W2979578510, https://openalex.org/W3213029956 |
| referenced_works_count | 32 |
| abstract_inverted_index | word-to-position map encoding the abstract (same text as the abstract shown above; see the reconstruction sketch after this table) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 2 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/4 |
| sustainable_development_goals[0].score | 0.6800000071525574 |
| sustainable_development_goals[0].display_name | Quality Education |
| citation_normalized_percentile.value | 0.11356057 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | True |
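
OpenAlex stores abstracts not as plain text but as an inverted index mapping each word to the list of token positions where it occurs, which is what the abstract_inverted_index field above encodes. A minimal sketch of reconstructing readable text from that structure; the sample index is a truncated, illustrative fragment rather than the full field:

```python
# Minimal sketch: rebuild plain text from an OpenAlex abstract_inverted_index,
# which maps each word to the token positions where it occurs. The sample
# index below is a truncated, illustrative fragment of the real field.
def reconstruct_abstract(inverted_index: dict) -> str:
    positions = {}
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions[i] = word  # place the word at each recorded position
    return " ".join(positions[i] for i in sorted(positions))

sample = {"In": [0], "this": [1], "paper,": [2], "we": [3, 32], "introduce": [4]}
print(reconstruct_abstract(sample))  # -> "In this paper, we introduce we"
```

Note that a word appearing at several positions (like "we" above) is emitted once per position, which is how the full index round-trips back to the original word order.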