Do Compressed LLMs Forget Knowledge? An Experimental Study with Practical Implications
2023 · Open Access · DOI: https://doi.org/10.48550/arxiv.2310.00867
Compressing Large Language Models (LLMs) often leads to reduced performance, especially for knowledge-intensive tasks. In this work, we dive into how compression damages LLMs' inherent knowledge and the possible remedies. We start by proposing two conjectures on the nature of the damage: one is that certain knowledge is forgotten (or erased) after LLM compression, hence necessitating that the compressed model (re)learn from data with additional parameters; the other presumes that knowledge is internally displaced, and hence one merely requires "inference re-direction" with input-side augmentation, such as prompting, to recover the knowledge-related performance. Extensive experiments are then designed to (in)validate the two conjectures. We observe the promise of prompting in comparison to model tuning; we further unlock prompting's potential by introducing a variant called Inference-time Dynamic Prompting (IDP), which can effectively increase prompt diversity without incurring any inference overhead. Our experiments consistently suggest that, compared to classical re-training alternatives such as LoRA, prompting with IDP leads to better or comparable post-compression performance recovery, while requiring 21x fewer extra parameters and reducing inference latency by 60%. Our experiments hence strongly endorse the conjecture of "knowledge displaced" over "knowledge forgotten", and shed light on a new, efficient mechanism to restore compressed LLM performance. We additionally visualize and analyze the different attention and activation patterns between prompted and re-trained models, demonstrating that they achieve performance recovery in two different regimes.
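The abstract's description of IDP suggests per-input selection from a set of prompts at inference time. Below is a toy NumPy sketch of that selection step only; it is not the authors' method. In particular, the paper states IDP adds no inference overhead, whereas this toy simply prepends the chosen soft prompt, and the cosine-similarity selection rule, names, and dimensions are all invented for illustration.

```python
# Toy illustration of inference-time dynamic prompting (IDP) as described in
# the abstract: keep a pool of pre-obtained soft prompts and, per input, pick
# one at inference time instead of re-training the compressed model.
# All names, sizes, and the similarity-based selection rule are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
d_model, prompt_len, n_prompts = 64, 8, 4

# Pool of candidate soft prompts (e.g. obtained once, then frozen).
prompt_pool = rng.normal(size=(n_prompts, prompt_len, d_model))

def select_prompt(input_embeds):
    """Pick the pool prompt whose mean vector is most similar to the input's mean."""
    query = input_embeds.mean(axis=0)                      # (d_model,)
    keys = prompt_pool.mean(axis=1)                        # (n_prompts, d_model)
    sims = keys @ query / (np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-9)
    return prompt_pool[int(np.argmax(sims))]               # (prompt_len, d_model)

def prepend_prompt(input_embeds):
    """Prepend the selected soft prompt to the token embeddings of one input."""
    return np.concatenate([select_prompt(input_embeds), input_embeds], axis=0)

tokens = rng.normal(size=(12, d_model))   # embeddings of a 12-token input
augmented = prepend_prompt(tokens)
print(augmented.shape)                    # (20, 64): prompt_len + input length
```

The point of contrast with LoRA-style recovery in the abstract is that nothing here is trained against downstream data; only the input side is augmented, which is why the extra parameter cost stays small.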
Related Topics
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2310.00867
- PDF: https://arxiv.org/pdf/2310.00867
- OA Status: green
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4387323970
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4387323970 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2310.00867 (Digital Object Identifier)
- Title: Do Compressed LLMs Forget Knowledge? An Experimental Study with Practical Implications
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2023
- Publication date: 2023-10-02
- Authors: Duc Hoang, Minsik Cho, Thomas Merth, Mohammad Rastegari, Zhangyang Wang (in order)
- Landing page: https://arxiv.org/abs/2310.00867
- PDF URL: https://arxiv.org/pdf/2310.00867 (direct link to the full-text PDF)
- Open access: Yes (a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2310.00867
- Concepts: Inference, Computer science, Latency (audio), Overhead (engineering), Cognitive psychology, Artificial intelligence, Psychology, Operating system, Telecommunications (top concepts attached by OpenAlex)
- Cited by: 0 (total citation count in OpenAlex)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4387323970 |
| doi | https://doi.org/10.48550/arxiv.2310.00867 |
| ids.doi | https://doi.org/10.48550/arxiv.2310.00867 |
| ids.openalex | https://openalex.org/W4387323970 |
| fwci | |
| type | preprint |
| title | Do Compressed LLMs Forget Knowledge? An Experimental Study with Practical Implications |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10028 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9987999796867371 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1702 |
| topics[0].subfield.display_name | Artificial Intelligence |
| topics[0].display_name | Topic Modeling |
| topics[1].id | https://openalex.org/T12026 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9847999811172485 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1702 |
| topics[1].subfield.display_name | Artificial Intelligence |
| topics[1].display_name | Explainable Artificial Intelligence (XAI) |
| topics[2].id | https://openalex.org/T10181 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9787999987602234 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1702 |
| topics[2].subfield.display_name | Artificial Intelligence |
| topics[2].display_name | Natural Language Processing Techniques |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C2776214188 |
| concepts[0].level | 2 |
| concepts[0].score | 0.8016951680183411 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q408386 |
| concepts[0].display_name | Inference |
| concepts[1].id | https://openalex.org/C41008148 |
| concepts[1].level | 0 |
| concepts[1].score | 0.6539021730422974 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[1].display_name | Computer science |
| concepts[2].id | https://openalex.org/C82876162 |
| concepts[2].level | 2 |
| concepts[2].score | 0.5813801288604736 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q17096504 |
| concepts[2].display_name | Latency (audio) |
| concepts[3].id | https://openalex.org/C2779960059 |
| concepts[3].level | 2 |
| concepts[3].score | 0.5182164311408997 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q7113681 |
| concepts[3].display_name | Overhead (engineering) |
| concepts[4].id | https://openalex.org/C180747234 |
| concepts[4].level | 1 |
| concepts[4].score | 0.44926708936691284 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q23373 |
| concepts[4].display_name | Cognitive psychology |
| concepts[5].id | https://openalex.org/C154945302 |
| concepts[5].level | 1 |
| concepts[5].score | 0.29059186577796936 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[5].display_name | Artificial intelligence |
| concepts[6].id | https://openalex.org/C15744967 |
| concepts[6].level | 0 |
| concepts[6].score | 0.23928549885749817 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q9418 |
| concepts[6].display_name | Psychology |
| concepts[7].id | https://openalex.org/C111919701 |
| concepts[7].level | 1 |
| concepts[7].score | 0.0 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q9135 |
| concepts[7].display_name | Operating system |
| concepts[8].id | https://openalex.org/C76155785 |
| concepts[8].level | 1 |
| concepts[8].score | 0.0 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q418 |
| concepts[8].display_name | Telecommunications |
| keywords[0].id | https://openalex.org/keywords/inference |
| keywords[0].score | 0.8016951680183411 |
| keywords[0].display_name | Inference |
| keywords[1].id | https://openalex.org/keywords/computer-science |
| keywords[1].score | 0.6539021730422974 |
| keywords[1].display_name | Computer science |
| keywords[2].id | https://openalex.org/keywords/latency |
| keywords[2].score | 0.5813801288604736 |
| keywords[2].display_name | Latency (audio) |
| keywords[3].id | https://openalex.org/keywords/overhead |
| keywords[3].score | 0.5182164311408997 |
| keywords[3].display_name | Overhead (engineering) |
| keywords[4].id | https://openalex.org/keywords/cognitive-psychology |
| keywords[4].score | 0.44926708936691284 |
| keywords[4].display_name | Cognitive psychology |
| keywords[5].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[5].score | 0.29059186577796936 |
| keywords[5].display_name | Artificial intelligence |
| keywords[6].id | https://openalex.org/keywords/psychology |
| keywords[6].score | 0.23928549885749817 |
| keywords[6].display_name | Psychology |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2310.00867 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2310.00867 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2310.00867 |
| locations[1].id | doi:10.48550/arxiv.2310.00867 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2310.00867 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5037911735 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-8250-870X |
| authorships[0].author.display_name | Duc Hoang |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Hoang, Duc N. M |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5115076654 |
| authorships[1].author.orcid | https://orcid.org/0000-0003-0481-2682 |
| authorships[1].author.display_name | Minsik Cho |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Cho, Minsik |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5053338486 |
| authorships[2].author.orcid | |
| authorships[2].author.display_name | Thomas Merth |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Merth, Thomas |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5056246621 |
| authorships[3].author.orcid | https://orcid.org/0000-0001-9606-3687 |
| authorships[3].author.display_name | Mohammad Rastegari |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Rastegari, Mohammad |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5048522863 |
| authorships[4].author.orcid | https://orcid.org/0000-0002-2050-5693 |
| authorships[4].author.display_name | Zhangyang Wang |
| authorships[4].author_position | last |
| authorships[4].raw_author_name | Wang, Zhangyang |
| authorships[4].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2310.00867 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2023-10-04T00:00:00 |
| display_name | Do Compressed LLMs Forget Knowledge? An Experimental Study with Practical Implications |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10028 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9987999796867371 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1702 |
| primary_topic.subfield.display_name | Artificial Intelligence |
| primary_topic.display_name | Topic Modeling |
| related_works | https://openalex.org/W4391375266, https://openalex.org/W2748952813, https://openalex.org/W2390279801, https://openalex.org/W2358668433, https://openalex.org/W2376932109, https://openalex.org/W2001405890, https://openalex.org/W3008625068, https://openalex.org/W3128807919, https://openalex.org/W3176411177, https://openalex.org/W3035501883 |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2310.00867 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2310.00867 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2310.00867 |
| primary_location.id | pmh:oai:arXiv.org:2310.00867 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2310.00867 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2310.00867 |
| publication_date | 2023-10-02 |
| publication_year | 2023 |
| referenced_works_count | 0 |
| abstract_inverted_index | inverted-index encoding of the abstract (full text shown above) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 5 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/8 |
| sustainable_development_goals[0].score | 0.5299999713897705 |
| sustainable_development_goals[0].display_name | Decent work and economic growth |
| citation_normalized_percentile | |
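For anyone who wants to work with this record programmatically, here is a small sketch assuming the public OpenAlex REST API (`https://api.openalex.org/works/{id}`) is reachable without authentication. It fetches the work shown above and rebuilds the plain-text abstract from the `abstract_inverted_index` field that appears in flattened form in the payload; the helper name `rebuild_abstract` is ours, not part of any library.

```python
# Fetch this OpenAlex work and rebuild its abstract from the inverted index.
# Assumes network access to the public OpenAlex API.
import requests

work = requests.get("https://api.openalex.org/works/W4387323970", timeout=30).json()

def rebuild_abstract(inverted_index):
    """Turn {word: [positions]} back into the abstract in original word order."""
    positions = []
    for word, idxs in inverted_index.items():
        positions.extend((i, word) for i in idxs)
    return " ".join(word for _, word in sorted(positions))

print(work["display_name"])       # work title
print(work["cited_by_count"])     # citation count (0 at the time of this snapshot)
print(rebuild_abstract(work["abstract_inverted_index"]))
```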