Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP
2016 · Open Access · DOI: https://doi.org/10.18653/v1/w16-25
The quality of word representations is frequently assessed using correlation with human judgements of word similarity. Here, we question whether such intrinsic evaluation can predict the merits of the representations for downstream tasks. We study the correlation between results on ten word similarity benchmarks and tagger performance on three standard sequence labeling tasks, using a variety of word vectors induced from an unannotated corpus of 3.8 billion words, and demonstrate that most intrinsic evaluations are poor predictors of downstream performance. We argue that this issue can be traced in part to a failure to distinguish specific similarity from relatedness in intrinsic evaluation datasets. We make our evaluation tools openly available to facilitate further study.
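To make the two levels of correlation in this setup concrete, here is a minimal sketch, not the authors' released tooling: an intrinsic benchmark scores an embedding by the Spearman correlation between its cosine similarities and human judgements, and the question the abstract raises is whether those benchmark scores correlate, across embedding models, with downstream tagger performance. All vectors, word pairs, and scores below are hypothetical placeholders, not the paper's benchmarks or results.

```python
# Hedged sketch of the two-level correlation analysis described in the
# abstract. All data here are toy placeholders.
import numpy as np
from scipy.stats import spearmanr

def intrinsic_score(vectors, pairs):
    """Intrinsic evaluation: Spearman correlation between cosine
    similarity of word vectors and human similarity judgements."""
    cos, human = [], []
    for w1, w2, rating in pairs:
        if w1 in vectors and w2 in vectors:
            v1, v2 = vectors[w1], vectors[w2]
            cos.append(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
            human.append(rating)
    return spearmanr(cos, human).correlation

# Toy word vectors and a toy similarity benchmark (word1, word2, human rating).
toy_vectors = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.2, 0.1]),
    "car": np.array([0.1, 0.9, 0.3]),
    "truck": np.array([0.2, 0.8, 0.4]),
}
toy_pairs = [("cat", "dog", 8.5), ("car", "truck", 8.0),
             ("cat", "car", 2.1), ("dog", "truck", 2.3)]
print(f"toy intrinsic score: {intrinsic_score(toy_vectors, toy_pairs):.2f}")

# Hypothetical intrinsic scores and downstream tagger F1 for five
# embedding models (e.g., different training settings):
intrinsic = [0.61, 0.55, 0.70, 0.48, 0.66]   # e.g., a word similarity benchmark
downstream = [91.2, 91.5, 90.8, 90.9, 91.4]  # e.g., NER tagger F1

# The paper's question: does ranking models by intrinsic score predict
# their ranking by downstream performance?
rho = spearmanr(intrinsic, downstream).correlation
print(f"benchmark-vs-task Spearman rho: {rho:.2f}")
```

In this framing, a benchmark is a "poor predictor" in the abstract's sense when the second correlation is low or negative, even though each model's intrinsic score is itself computed in the standard way.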