Learning Deep Networks from Noisy Labels with Dropout Regularization
2017 · Open Access
DOI: https://doi.org/10.48550/arxiv.1705.03419
Large datasets often have unreliable labels, such as those obtained from Amazon's Mechanical Turk or social media platforms, and classifiers trained on mislabeled datasets often exhibit poor performance. We present a simple, effective technique to account for label noise when training deep neural networks. We augment a standard deep network with a softmax layer that models the label noise statistics. We then train the deep network and noise model jointly via end-to-end stochastic gradient descent on the (possibly mislabeled) dataset. The augmented model is overdetermined, so to encourage learning a non-trivial noise model, we apply dropout regularization to the weights of the noise model during training. Numerical experiments on noisy versions of the CIFAR-10 and MNIST datasets show that the proposed dropout technique outperforms state-of-the-art methods.
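The abstract describes a concrete architecture: a base network whose softmax output feeds a second softmax layer modeling the label-noise statistics, trained jointly by SGD with dropout applied to the noise model's weights. Below is a minimal sketch of that idea, assuming PyTorch (the paper does not prescribe a framework); all class names, hyperparameters, and the toy base network are hypothetical illustrations, not the authors' code.

```python
# Minimal sketch (assumptions: PyTorch; toy base net; synthetic batch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseAdaptationLayer(nn.Module):
    """Softmax layer that models label-noise statistics on top of a base net.

    After a column-wise softmax, entry [i, j] of the weight matrix
    approximates P(noisy label = i | true label = j).
    """
    def __init__(self, num_classes: int, dropout_p: float = 0.5):
        super().__init__()
        # Initialize near the identity: the noise model starts close to
        # "labels are clean". The scale 5.0 is an illustrative choice.
        self.weight = nn.Parameter(torch.eye(num_classes) * 5.0)
        # Dropout on the noise-model weights is the abstract's regularizer:
        # it discourages collapsing to the trivial (identity) noise model.
        self.dropout = nn.Dropout(p=dropout_p)

    def forward(self, clean_probs: torch.Tensor) -> torch.Tensor:
        # Column-stochastic noise matrix T, with dropout on the raw weights.
        noise_matrix = F.softmax(self.dropout(self.weight), dim=0)
        # Marginalize over the unobserved true label:
        # P(noisy = i | x) = sum_j T[i, j] * P(true = j | x).
        return clean_probs @ noise_matrix.t()

# Joint end-to-end training on (possibly mislabeled) data.
base_net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in model
noise_layer = NoiseAdaptationLayer(num_classes=10)
optimizer = torch.optim.SGD(
    list(base_net.parameters()) + list(noise_layer.parameters()), lr=0.01
)

x = torch.randn(32, 1, 28, 28)          # dummy MNIST-shaped batch
noisy_y = torch.randint(0, 10, (32,))   # possibly corrupted labels

optimizer.zero_grad()
clean_probs = F.softmax(base_net(x), dim=1)   # base net's estimate of true labels
noisy_probs = noise_layer(clean_probs)        # predicted noisy-label distribution
loss = F.nll_loss(torch.log(noisy_probs + 1e-8), noisy_y)
loss.backward()
optimizer.step()
```

At evaluation time one would typically discard the noise layer and read predictions from the base network's softmax directly; the dropout on the noise weights is what addresses the overdetermination the abstract mentions, since otherwise the identity noise matrix explains the data equally well.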
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/1705.03419
- PDF: https://arxiv.org/pdf/1705.03419
- OA Status: green
- Cited By: 19
- References: 12
- Related Works: 10
- OpenAlex ID: https://openalex.org/W2583188830