Towards Zero-shot Commonsense Reasoning with Self-supervised Refinement of Language Models
2021 · Open Access

DOI: https://doi.org/10.18653/v1/2021.emnlp-main.688
Can we take existing language models and refine them for zero-shot commonsense reasoning? This paper presents an initial study exploring the feasibility of zero-shot commonsense reasoning for the Winograd Schema Challenge by formulating the task as self-supervised refinement of a pre-trained language model. In contrast to previous studies that rely on fine-tuning on annotated datasets, we seek to boost conceptualization via loss landscape refinement. To this end, we propose a novel self-supervised learning approach that refines the language model utilizing a set of linguistic perturbations of similar concept relationships. Empirical analysis of our conceptually simple framework demonstrates the viability of zero-shot commonsense reasoning on multiple benchmarks.
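The abstract's idea, refining a language model with linguistic perturbations of the same concept relationship, can be illustrated with a toy sketch. Everything below is an assumption for illustration: the paper does not specify its loss, and `lm_score` here is a length-based stand-in for a real pre-trained LM's plausibility score.

```python
# Hedged sketch of a perturbation-based refinement objective in the spirit
# of the abstract. All names (lm_score, PERTURBATIONS, margin) are
# illustrative assumptions, not the paper's actual method.

def lm_score(sentence: str) -> float:
    # Stand-in for a pre-trained LM's plausibility score (e.g. a negative
    # pseudo-log-likelihood). Toy heuristic for demonstration only.
    return -len(sentence) * 0.01

def margin_loss(correct: str, wrong: str, margin: float = 1.0) -> float:
    # Hinge loss pushing the LM to prefer the correct pronoun resolution
    # over the incorrect one by at least `margin`.
    return max(0.0, margin - (lm_score(correct) - lm_score(wrong)))

# A Winograd-style (correct, wrong) resolution pair plus a hand-written
# linguistic perturbation (synonym substitution) that preserves the
# underlying concept relationship "X doesn't fit in Y because X is big".
PERTURBATIONS = [
    ("The trophy didn't fit in the suitcase because the trophy was too big.",
     "The trophy didn't fit in the suitcase because the suitcase was too big."),
    ("The cup didn't fit in the bag because the cup was too large.",
     "The cup didn't fit in the bag because the bag was too large."),
]

# The refinement loss averages the margin loss over all perturbed variants;
# during actual training this would be minimized w.r.t. the LM's parameters.
loss = sum(margin_loss(c, w) for c, w in PERTURBATIONS) / len(PERTURBATIONS)
print(round(loss, 4))
```

The averaging over perturbations is the key design choice the abstract hints at: the model is rewarded for ranking the correct resolution higher consistently across surface variants of the same concept relationship, not just on one sentence.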
- Type: article
- Language: en
- Landing Page: https://doi.org/10.18653/v1/2021.emnlp-main.688
- PDF: https://aclanthology.org/2021.emnlp-main.688.pdf
- OA Status: hybrid
- Cited By: 4
- References: 37
- Related Works: 10
- OpenAlex ID: https://openalex.org/W3199329183