Jong-Chyi Su
Application and Role of Hypothesis Testing in Practice
This paper explores the rationale for and practical implications of hypothesis testing as an important tool for decision making in today's data-driven world. Beginning with the seminal work of Ronald A. Fisher in the early 1900s, the paper trace…
AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving
Autonomous vehicle (AV) systems rely on robust perception models as a cornerstone of safety assurance. However, objects encountered on the road exhibit a long-tailed distribution, with rare or unseen categories posing challenges to a deplo…
RoPAWS: Robust Semi-supervised Representation Learning from Uncurated Data
Semi-supervised learning aims to train a model using limited labels. State-of-the-art semi-supervised methods for image classification such as PAWS rely on self-supervised representations learned with large-scale unlabeled but curated data…
Tell Me What Happened: Unifying Text-guided Video Completion via Multimodal Masked Video Generation
Generating a video given the first several static frames is challenging, as it requires anticipating plausible future frames with temporal coherence. Besides video prediction, the ability to rewind from the last frame or to infill between the head a…
Semi-Supervised Learning with Taxonomic Labels
We propose techniques to incorporate coarse taxonomic labels to train image classifiers in fine-grained domains. Such labels can often be obtained with a smaller effort for fine-grained domains such as the natural world where categories ar…
Learning from Limited Labeled Data for Visual Recognition
Recent advances in computer vision are in part due to the widespread use of deep neural networks. However, training deep networks requires enormous amounts of labeled data, which can be a bottleneck. In this thesis, we propose several approa…
The Semi-Supervised iNaturalist Challenge at the FGVC8 Workshop
Semi-iNat is a challenging dataset for semi-supervised classification with a long-tailed distribution of classes, fine-grained categories, and domain shifts between labeled and unlabeled data. This dataset is behind the second iteration of…
A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification
We evaluate the effectiveness of semi-supervised learning (SSL) on a realistic benchmark where data exhibits considerable class imbalance and contains images from novel classes. Our benchmark consists of two fine-grained classification dat…
The Semi-Supervised iNaturalist-Aves Challenge at FGVC7 Workshop
This document describes the details and the motivation behind a new dataset we collected for the semi-supervised recognition challenge at the FGVC7 workshop at CVPR 2020. The dataset contains 1000 species of birds sampled …
Unsupervised Discovery of Object Landmarks via Contrastive Learning
Given a collection of images, humans are able to discover landmarks of the depicted objects by modeling the shared geometric structure across instances. This idea of geometric equivariance has been widely used for unsupervised discovery of…
On Equivariant and Invariant Learning of Object Landmark Representations
Given a collection of images, humans are able to discover landmarks by modeling the shared geometric structure across instances. This idea of geometric equivariance has been widely used for the unsupervised discovery of object landmark rep…
Active Adversarial Domain Adaptation
We propose an active learning approach for transferring representations across domains. Our approach, active adversarial domain adaptation (AADA), explores a duality between two related problems: adversarial domain alignment and importance…
When Does Self-supervision Improve Few-shot Learning?
We investigate the role of self-supervised learning (SSL) in the context of few-shot learning. Although recent research has shown the benefits of SSL on large unlabeled datasets, its utility on small datasets is relatively unexplored. We f…
Boosting Supervision with Self-Supervision for Few-shot Learning
We present a technique to improve the transferability of deep representations learned on small labeled datasets by introducing self-supervised tasks as auxiliary loss functions. While recent approaches for self-supervised learning have sho…
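The auxiliary-loss idea in the abstract above can be sketched as a weighted sum of a supervised classification loss and a self-supervised loss. This is a minimal pure-Python illustration, not the paper's code; the function names, the rotation-prediction framing of the self-supervised task, and the `alpha` weight are assumptions for the example.

```python
import math

def cross_entropy(logits, label):
    """Cross-entropy of one example from raw logits (log-sum-exp for stability)."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[label]

def few_shot_ssl_loss(cls_logits, cls_label, rot_logits, rot_label, alpha=1.0):
    """Supervised classification loss plus an auxiliary self-supervised loss,
    illustrated here as predicting which of four rotations was applied to the
    input image, weighted by alpha."""
    return cross_entropy(cls_logits, cls_label) + alpha * cross_entropy(rot_logits, rot_label)
```

With `alpha=0` the objective reduces to ordinary supervised training; increasing `alpha` trades off label fitting against the self-supervised signal.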
A Deeper Look at 3D Shape Classifiers
We investigate the role of representations and architectures for classifying 3D shapes in terms of their computational efficiency, generalization, and robustness to adversarial transformations. By varying the number of training examples an…
Reasoning about Fine-grained Attribute Phrases using Reference Games
We present a framework for learning to describe fine-grained visual differences between instances using attribute phrases. Attribute phrases capture distinguishing aspects of an object (e.g., "propeller on the nose" or "door near the wing"…
Cross Quality Distillation
We propose a technique for training recognition models when high-quality data is available at training time but not at testing time. Our approach, called Cross Quality Distillation (CQD), first trains a model on the high-quality data and e…
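The distillation step described above, where a student trained on degraded inputs matches a teacher trained on high-quality data, is commonly implemented as a KL divergence between temperature-softened output distributions. The sketch below assumes that standard formulation; the function names and temperature value are illustrative, not taken from the paper.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of raw logits."""
    z = [x / T for x in logits]
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) between softened output distributions: the
    student, fed degraded inputs, learns to match the soft targets of a
    teacher trained on the corresponding high-quality inputs."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
```

The loss is zero when the student reproduces the teacher's distribution exactly and grows as the two diverge; a higher temperature `T` exposes more of the teacher's inter-class similarity structure.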
Adapting Models to Signal Degradation using Distillation
Model compression and knowledge distillation have been successfully applied for cross-architecture and cross-domain transfer learning. However, a key requirement is that training examples are in correspondence across the domains. We show t…