Stack-propagation: Improved Representation Learning for Syntax
2016 · Open Access
DOI: https://doi.org/10.18653/v1/p16-1147
OA: W2311095070
Traditional syntax models typically leverage part-of-speech (POS) information by constructing features from hand-tuned templates. We demonstrate that a better approach is to use POS tags as a regularizer of learned representations. We propose a simple method for learning a stacked pipeline of models, which we call "stack-propagation". We apply it to dependency parsing and tagging, where we use the hidden layer of the tagger network as the input token representation for the parser. At test time, our parser does not require predicted POS tags. On 19 languages from the Universal Dependencies, our method is 1.3% (absolute) more accurate than a state-of-the-art graph-based approach and 2.7% more accurate than the most comparable greedy model.
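The core idea can be sketched in a few lines: the tagger's hidden layer is shared, with one head producing POS logits (used only for the tagging loss during training) and the hidden activations themselves feeding the parser. The dimensions and the single-layer architecture below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions chosen for illustration only.
n_tokens, d_emb, d_hidden, n_pos = 5, 8, 16, 4

# Token embeddings for one sentence.
X = rng.normal(size=(n_tokens, d_emb))

# Tagger parameters: one hidden layer plus a POS classification head.
W1 = rng.normal(size=(d_emb, d_hidden))
W_pos = rng.normal(size=(d_hidden, n_pos))

# Shared hidden representation of the input tokens.
H = np.tanh(X @ W1)

# POS head: contributes a tagging loss at training time only.
pos_logits = H @ W_pos

# The parser consumes the tagger's hidden layer directly, so no
# predicted POS tags are needed at test time. During training,
# gradients from both the parsing loss and the tagging loss flow
# back through H ("stack-propagation"), letting the POS supervision
# act as a regularizer on the shared representation.
parser_input = H
```

This contrasts with a conventional pipeline, where the parser would consume discrete predicted tags (the argmax of `pos_logits`) and any tagging errors would propagate without a gradient path back to the tagger.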