Sergey Ioffe
Sample what you can't compress
For learned image representations, basic autoencoders often produce blurry results. Reconstruction quality can be improved by incorporating additional penalties such as adversarial (GAN) and perceptual losses. Arguably, these approaches la…
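As a rough illustration of the kind of augmented reconstruction objective the abstract mentions, the sketch below adds adversarial and perceptual penalties to a pixel-wise autoencoder loss; the discriminator, feature extractor, and weights are assumptions, not the paper's recipe.

```python
import torch
import torch.nn.functional as F

def autoencoder_loss(x, x_hat, discriminator, feature_extractor,
                     adv_weight=0.1, perc_weight=0.1):
    """Pixel MSE augmented with adversarial and perceptual penalties (illustrative).

    discriminator:     maps images to real/fake logits (trained separately).
    feature_extractor: a frozen network, e.g. VGG features.
    The loss weights are hypothetical, not values from the paper.
    """
    pixel = F.mse_loss(x_hat, x)
    logits = discriminator(x_hat)
    # Non-saturating GAN term: push reconstructions toward the "real" label.
    adversarial = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    perceptual = F.mse_loss(feature_extractor(x_hat), feature_extractor(x))
    return pixel + adv_weight * adversarial + perc_weight * perceptual
```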
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates …
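A minimal sketch of the batch-normalizing transform the paper introduces: inputs are normalized using the mini-batch mean and variance, then scaled and shifted by learned per-feature parameters. Variable names below are illustrative.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch x of shape (N, D) by its own statistics, then scale/shift.

    gamma and beta are learned per-feature parameters; at inference time the
    mini-batch statistics are replaced by population (moving-average) estimates.
    """
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```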
Weighted Ensemble Self-Supervised Learning
Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised learning. Advances in self-supervised learning (SSL) enable leveraging large unlabeled corpora for state-…
Towards a Semantic Perceptual Image Metric
We present a full-reference perceptual image metric based on VGG-16, an artificial neural network trained on object classification. We fit the metric to a new database based on 140k unique images annotated with ground truth by human rater…
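A rough sketch of a full-reference, VGG-16 feature-space distance of the kind described; the chosen layers, normalization, and any weighting fitted to the human-rater data are assumptions here, not the paper's calibration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen VGG-16 feature extractor; the tapped layers are hypothetical.
backbone = vgg16(weights="IMAGENET1K_V1").features.eval()
TAP_LAYERS = {3, 8, 15, 22}  # illustrative conv-block outputs

def perceptual_distance(reference, distorted):
    """Sum of feature-space MSEs between two images of shape (1, 3, H, W)."""
    dist, x, y = 0.0, reference, distorted
    with torch.no_grad():
        for i, layer in enumerate(backbone):
            x, y = layer(x), layer(y)
            if i in TAP_LAYERS:
                dist = dist + F.mse_loss(x, y)
    return dist
```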
No Fuss Distance Metric Learning using Proxies
We address the problem of distance metric learning (DML), defined as learning a distance consistent with a notion of semantic similarity. Traditionally, for this problem supervision is expressed in the form of sets of points that follow an…
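A minimal sketch of a proxy-based DML loss in the spirit of the title: each class gets a learnable proxy vector, and an embedding is pulled toward its own class proxy and pushed away from the others. The exact formulation and any hyperparameters below are assumptions.

```python
import torch
import torch.nn.functional as F

def proxy_loss(embeddings, labels, proxies):
    """Softmax cross-entropy over negative squared distances to per-class proxies.

    embeddings: (N, D) batch embeddings.
    proxies:    (C, D) learnable per-class proxy vectors.
    labels:     (N,)   integer class labels.
    """
    embeddings = F.normalize(embeddings, dim=1)
    proxies = F.normalize(proxies, dim=1)
    # Squared Euclidean distance from each embedding to each proxy.
    dists = torch.cdist(embeddings, proxies).pow(2)
    # Cross-entropy over -distances pulls each point to its class proxy.
    return F.cross_entropy(-dists, labels)
```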
Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low c…
Batch Renormalization: Towards Reducing Minibatch Dependence in Batch-Normalized Models
Batch Normalization is quite effective at accelerating and improving the training of deep models. However, its effectiveness diminishes when the training minibatches are small, or do not consist of independent samples. We hypothesize that …
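The core correction can be sketched as follows: normalize with mini-batch statistics as in Batch Normalization, then apply per-feature factors (r, d) that tie the training-time activations to the moving (population) statistics. The clipping limits below are illustrative defaults.

```python
import numpy as np

def batch_renorm(x, gamma, beta, mu_moving, sigma_moving,
                 r_max=3.0, d_max=5.0, eps=1e-5):
    """Batch Renormalization for a mini-batch x of shape (N, D).

    r and d correct the mini-batch normalization toward the moving statistics;
    during training they are treated as constants (no gradient flows through them).
    """
    mu_b = x.mean(axis=0)
    sigma_b = np.sqrt(x.var(axis=0) + eps)
    r = np.clip(sigma_b / sigma_moving, 1.0 / r_max, r_max)
    d = np.clip((mu_b - mu_moving) / sigma_moving, -d_max, d_max)
    x_hat = (x - mu_b) / sigma_b * r + d
    return gamma * x_hat + beta
```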
Rethinking the Inception Architecture for Computer Vision
Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmar…