On Robustness of Finetuned Transformer-based NLP Models
2023 · Open Access · DOI: https://doi.org/10.48550/arxiv.2305.14453
Transformer-based pretrained models like BERT, GPT-2, and T5 have been finetuned for a large number of natural language processing (NLP) tasks and have been shown to be very effective. However, what changes across layers in these models during finetuning, relative to the pretrained checkpoints, is under-studied. Further, how robust are these models to perturbations in the input text, and does the robustness vary with the NLP task for which the models were finetuned? While some work studies the robustness of BERT finetuned for a few NLP tasks, no rigorous study compares this robustness across encoder-only, decoder-only, and encoder-decoder models. In this paper, we characterize changes between pretrained and finetuned language model representations across layers using two metrics: CKA and STIR. Further, we study the robustness of three language models (BERT, GPT-2, and T5) under eight different text perturbations, on classification tasks from the General Language Understanding Evaluation (GLUE) benchmark and on generation tasks such as summarization, free-form generation, and question generation. GPT-2 representations are more robust than those of BERT and T5 across multiple types of input perturbation. Although the models are broadly robust, dropping nouns, dropping verbs, and changing characters are the most impactful perturbations. Overall, this study provides valuable insights into perturbation-specific weaknesses of popular Transformer-based models, which should be kept in mind when constructing inputs. We make the code and models publicly available at https://github.com/PavanNeerudu/Robustness-of-Transformers-models.
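The abstract compares pretrained and finetuned layer representations using CKA. As a rough illustration only (not the paper's code, which may use a kernel variant and different shapes), a minimal linear-CKA sketch in NumPy; the matrix sizes below are invented:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices of shape (n_examples, n_features). Returns a value in [0, 1];
    1.0 means the two representations are identical up to rotation/scale."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

# Stand-ins for one layer's activations before and after finetuning.
rng = np.random.default_rng(0)
pretrained = rng.normal(size=(100, 64))
finetuned = pretrained + 0.1 * rng.normal(size=(100, 64))

print(linear_cka(pretrained, pretrained))  # 1.0 (up to float error)
print(linear_cka(pretrained, finetuned))   # high, since the change is small
```

Comparing each layer's pretrained activations against its finetuned counterpart in this way yields the per-layer similarity profile the abstract alludes to.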
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2305.14453
- PDF: https://arxiv.org/pdf/2305.14453
- OA Status: green
- Cited By: 3
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4378469154
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4378469154 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2305.14453 (Digital Object Identifier)
- Title: On Robustness of Finetuned Transformer-based NLP Models
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2023
- Publication date: 2023-05-23
- Authors: Pavan Kalyan Reddy Neerudu, Subba Reddy Oota, Mounika Marreddy, Venkateswara Rao Kagita, Manish Gupta (in order)
- Landing page: https://arxiv.org/abs/2305.14453
- PDF URL: https://arxiv.org/pdf/2305.14453
- Open access: yes (a free full text is available)
- OA status: green (per OpenAlex)
- OA URL: https://arxiv.org/pdf/2305.14453
- Concepts: Computer science, Robustness (evolution), Encoder, Transformer, Artificial intelligence, Language model, Natural language processing, Automatic summarization, Machine learning, Quantum mechanics, Physics, Voltage, Biochemistry, Operating system, Chemistry, Gene (top concepts attached by OpenAlex)
- Cited by: 3 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 1, 2024: 2
- Related works (count): 10 (works algorithmically related by OpenAlex)
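The record above can be retrieved directly from the OpenAlex Works API at `https://api.openalex.org/works/{id}`. A small sketch that builds the request URL (the `work_url` helper is my own; the endpoint pattern and the optional `mailto` politeness parameter are documented OpenAlex conventions):

```python
from urllib.parse import urljoin

OPENALEX_API = "https://api.openalex.org/"

def work_url(openalex_id, mailto=None):
    """Build the OpenAlex API URL for a work.
    Accepts a full OpenAlex URL or a bare ID like 'W4378469154'."""
    short_id = openalex_id.rsplit("/", 1)[-1]
    url = urljoin(OPENALEX_API, f"works/{short_id}")
    if mailto:
        # OpenAlex asks clients to identify themselves for the polite pool.
        url += f"?mailto={mailto}"
    return url

print(work_url("https://openalex.org/W4378469154"))
# https://api.openalex.org/works/W4378469154
```

Fetching that URL (e.g. with `urllib.request` or `requests`) returns the JSON payload that the "Full payload" table below flattens.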
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4378469154 |
| doi | https://doi.org/10.48550/arxiv.2305.14453 |
| ids.doi | https://doi.org/10.48550/arxiv.2305.14453 |
| ids.openalex | https://openalex.org/W4378469154 |
| fwci | |
| type | preprint |
| title | On Robustness of Finetuned Transformer-based NLP Models |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10028 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9995999932289124 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1702 |
| topics[0].subfield.display_name | Artificial Intelligence |
| topics[0].display_name | Topic Modeling |
| topics[1].id | https://openalex.org/T10181 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9994000196456909 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1702 |
| topics[1].subfield.display_name | Artificial Intelligence |
| topics[1].display_name | Natural Language Processing Techniques |
| topics[2].id | https://openalex.org/T13629 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9941999912261963 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1702 |
| topics[2].subfield.display_name | Artificial Intelligence |
| topics[2].display_name | Text Readability and Simplification |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.9043638706207275 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C63479239 |
| concepts[1].level | 3 |
| concepts[1].score | 0.6947975158691406 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q7353546 |
| concepts[1].display_name | Robustness (evolution) |
| concepts[2].id | https://openalex.org/C118505674 |
| concepts[2].level | 2 |
| concepts[2].score | 0.6429831981658936 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q42586063 |
| concepts[2].display_name | Encoder |
| concepts[3].id | https://openalex.org/C66322947 |
| concepts[3].level | 3 |
| concepts[3].score | 0.6035289168357849 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q11658 |
| concepts[3].display_name | Transformer |
| concepts[4].id | https://openalex.org/C154945302 |
| concepts[4].level | 1 |
| concepts[4].score | 0.5943291783332825 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[4].display_name | Artificial intelligence |
| concepts[5].id | https://openalex.org/C137293760 |
| concepts[5].level | 2 |
| concepts[5].score | 0.5618637800216675 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q3621696 |
| concepts[5].display_name | Language model |
| concepts[6].id | https://openalex.org/C204321447 |
| concepts[6].level | 1 |
| concepts[6].score | 0.5239135026931763 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q30642 |
| concepts[6].display_name | Natural language processing |
| concepts[7].id | https://openalex.org/C170858558 |
| concepts[7].level | 2 |
| concepts[7].score | 0.4749186933040619 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q1394144 |
| concepts[7].display_name | Automatic summarization |
| concepts[8].id | https://openalex.org/C119857082 |
| concepts[8].level | 1 |
| concepts[8].score | 0.37524378299713135 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q2539 |
| concepts[8].display_name | Machine learning |
| concepts[9].id | https://openalex.org/C62520636 |
| concepts[9].level | 1 |
| concepts[9].score | 0.0 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q944 |
| concepts[9].display_name | Quantum mechanics |
| concepts[10].id | https://openalex.org/C121332964 |
| concepts[10].level | 0 |
| concepts[10].score | 0.0 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q413 |
| concepts[10].display_name | Physics |
| concepts[11].id | https://openalex.org/C165801399 |
| concepts[11].level | 2 |
| concepts[11].score | 0.0 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q25428 |
| concepts[11].display_name | Voltage |
| concepts[12].id | https://openalex.org/C55493867 |
| concepts[12].level | 1 |
| concepts[12].score | 0.0 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q7094 |
| concepts[12].display_name | Biochemistry |
| concepts[13].id | https://openalex.org/C111919701 |
| concepts[13].level | 1 |
| concepts[13].score | 0.0 |
| concepts[13].wikidata | https://www.wikidata.org/wiki/Q9135 |
| concepts[13].display_name | Operating system |
| concepts[14].id | https://openalex.org/C185592680 |
| concepts[14].level | 0 |
| concepts[14].score | 0.0 |
| concepts[14].wikidata | https://www.wikidata.org/wiki/Q2329 |
| concepts[14].display_name | Chemistry |
| concepts[15].id | https://openalex.org/C104317684 |
| concepts[15].level | 2 |
| concepts[15].score | 0.0 |
| concepts[15].wikidata | https://www.wikidata.org/wiki/Q7187 |
| concepts[15].display_name | Gene |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.9043638706207275 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/robustness |
| keywords[1].score | 0.6947975158691406 |
| keywords[1].display_name | Robustness (evolution) |
| keywords[2].id | https://openalex.org/keywords/encoder |
| keywords[2].score | 0.6429831981658936 |
| keywords[2].display_name | Encoder |
| keywords[3].id | https://openalex.org/keywords/transformer |
| keywords[3].score | 0.6035289168357849 |
| keywords[3].display_name | Transformer |
| keywords[4].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[4].score | 0.5943291783332825 |
| keywords[4].display_name | Artificial intelligence |
| keywords[5].id | https://openalex.org/keywords/language-model |
| keywords[5].score | 0.5618637800216675 |
| keywords[5].display_name | Language model |
| keywords[6].id | https://openalex.org/keywords/natural-language-processing |
| keywords[6].score | 0.5239135026931763 |
| keywords[6].display_name | Natural language processing |
| keywords[7].id | https://openalex.org/keywords/automatic-summarization |
| keywords[7].score | 0.4749186933040619 |
| keywords[7].display_name | Automatic summarization |
| keywords[8].id | https://openalex.org/keywords/machine-learning |
| keywords[8].score | 0.37524378299713135 |
| keywords[8].display_name | Machine learning |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2305.14453 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2305.14453 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2305.14453 |
| locations[1].id | doi:10.48550/arxiv.2305.14453 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | cc-by |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | https://openalex.org/licenses/cc-by |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2305.14453 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5092029805 |
| authorships[0].author.orcid | |
| authorships[0].author.display_name | Pavan Kalyan Reddy Neerudu |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Neerudu, Pavan Kalyan Reddy |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5029606497 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-5975-622X |
| authorships[1].author.display_name | Subba Reddy Oota |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Oota, Subba Reddy |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5062052248 |
| authorships[2].author.orcid | https://orcid.org/0000-0003-1184-640X |
| authorships[2].author.display_name | Mounika Marreddy |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Marreddy, Mounika |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5039761970 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-4996-2011 |
| authorships[3].author.display_name | Venkateswara Rao Kagita |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Kagita, Venkateswara Rao |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5101454729 |
| authorships[4].author.orcid | https://orcid.org/0000-0003-0848-6132 |
| authorships[4].author.display_name | Manish Gupta |
| authorships[4].author_position | last |
| authorships[4].raw_author_name | Gupta, Manish |
| authorships[4].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2305.14453 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | On Robustness of Finetuned Transformer-based NLP Models |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10028 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9995999932289124 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1702 |
| primary_topic.subfield.display_name | Artificial Intelligence |
| primary_topic.display_name | Topic Modeling |
| related_works | https://openalex.org/W4317547544, https://openalex.org/W4313395829, https://openalex.org/W4288365749, https://openalex.org/W2936497627, https://openalex.org/W3013624417, https://openalex.org/W4287826556, https://openalex.org/W3098382480, https://openalex.org/W4287598411, https://openalex.org/W3094871513, https://openalex.org/W3100913109 |
| cited_by_count | 3 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 1 |
| counts_by_year[1].year | 2024 |
| counts_by_year[1].cited_by_count | 2 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2305.14453 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2305.14453 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2305.14453 |
| primary_location.id | pmh:oai:arXiv.org:2305.14453 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2305.14453 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2305.14453 |
| publication_date | 2023-05-23 |
| publication_year | 2023 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-to-position map of the abstract; omitted here, the plain-text abstract appears above) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 5 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/4 |
| sustainable_development_goals[0].score | 0.8399999737739563 |
| sustainable_development_goals[0].display_name | Quality Education |
| citation_normalized_percentile | |
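For copyright reasons OpenAlex stores abstracts not as plain text but as an `abstract_inverted_index`: a map from each word to the list of positions where it occurs. A minimal sketch of flattening such an index back into text (the helper name and the toy index are my own; the real field is much larger):

```python
def decode_inverted_index(inv):
    """Rebuild plain text from an OpenAlex-style abstract_inverted_index,
    i.e. {word: [positions]} -> space-joined words in position order."""
    positions = []
    for word, idxs in inv.items():
        for i in idxs:
            positions.append((i, word))
    return " ".join(word for _, word in sorted(positions))

# Toy index in the same shape OpenAlex uses.
toy = {"robustness": [1], "On": [0], "of": [2], "models": [4], "finetuned": [3]}
print(decode_inverted_index(toy))  # → On robustness of finetuned models
```

Applying the same routine to this work's full index reproduces the abstract shown at the top of the page.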