What Makes Pre-trained Language Models Better Zero-shot Learners?
2023 · Open Access · DOI: https://doi.org/10.18653/v1/2023.acl-long.128
Current methods for prompt learning in zero-shot scenarios typically rely on a development set with sufficient human-annotated data to select the best-performing prompt template a posteriori. This is not ideal because in a real-world zero-shot scenario of practical relevance, no labelled data is available. We therefore propose a simple yet effective method for screening reasonable prompt templates in zero-shot text classification: Perplexity Selection (Perplection). We hypothesize that language discrepancy can be used to measure the efficacy of prompt templates, and accordingly develop a substantiated perplexity-based scheme that forecasts the performance of prompt templates in advance. Experiments show that our method leads to improved prediction performance in a realistic zero-shot setting, eliminating the need for any labelled examples.
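The core idea — rank candidate prompt templates by the perplexity a language model assigns to them, and pick the most "natural" one before seeing any labels — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the template strings and per-token log-probabilities are hypothetical placeholders standing in for real LM scores.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean log-probability over the tokens."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical per-token log-probs an LM might assign to two filled-in
# templates (illustrative numbers only, not real model output).
templates = {
    "This text is about <mask>.": [-1.2, -0.8, -0.5, -0.9],
    "Topic sentence category token: <mask>": [-3.1, -2.7, -3.4, -2.9],
}

scored = {t: perplexity(lp) for t, lp in templates.items()}
best = min(scored, key=scored.get)  # lower perplexity = more natural prompt
```

In practice the per-token log-probabilities would come from a pre-trained LM scoring each candidate template filled with the input text; the template with the lowest average perplexity across inputs is selected, with no labelled development set required.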
- Type: article
- Language: en
- Landing Page: https://doi.org/10.18653/v1/2023.acl-long.128
- PDF: https://aclanthology.org/2023.acl-long.128.pdf
- OA Status: gold
- Cited By: 5
- References: 34
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4385571701