Renzhen Wang
Data-Distill-Net: A Data Distillation Approach Tailored for Replay-based Continual Learning Open
Replay-based continual learning (CL) methods assume that models trained on a small subset can also effectively minimize the empirical risk of the complete dataset. These methods maintain a memory buffer that stores a sampled subset of data…
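For context, the memory buffer these methods rely on can be as simple as reservoir sampling over the data stream; the sketch below shows that generic baseline (the capacity and sampling rule are my assumptions, not the Data-Distill-Net procedure).

```python
import random

class ReplayBuffer:
    """Minimal reservoir-sampling memory buffer for replay-based CL (illustrative sketch)."""

    def __init__(self, capacity=200):
        self.capacity = capacity   # assumed buffer size
        self.data = []             # list of (x, y) pairs from past tasks
        self.n_seen = 0            # total examples observed so far

    def add(self, x, y):
        """Reservoir sampling keeps each seen example with equal probability."""
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, batch_size=32):
        """Draw a replay mini-batch to mix with the current task's batch."""
        k = min(batch_size, len(self.data))
        return random.sample(self.data, k) if k > 0 else []
```

In a typical training loop, each mini-batch from the current task would be concatenated with `buffer.sample()` before the gradient step, so the model keeps minimizing a proxy of the risk on old data.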
Singular Value Fine-tuning for Few-Shot Class-Incremental Learning Open
Class-Incremental Learning (CIL) aims to prevent catastrophic forgetting of previously learned classes while sequentially incorporating new ones. The more challenging Few-shot CIL (FSCIL) setting further complicates this by providing only …
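Going by the title alone, singular value fine-tuning can be illustrated by decomposing a pre-trained weight as W = U diag(s) Vᵀ and training only s while freezing U and V; the sketch below is my own interpretation under that assumption, not the paper's actual parameterization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SVLinear(nn.Module):
    """Linear layer with W = U diag(s) V^T where only s is trainable (illustrative sketch)."""

    def __init__(self, pretrained_weight: torch.Tensor, bias: torch.Tensor = None):
        super().__init__()
        U, S, Vh = torch.linalg.svd(pretrained_weight, full_matrices=False)
        self.register_buffer("U", U)          # frozen left singular vectors
        self.register_buffer("Vh", Vh)        # frozen right singular vectors
        self.s = nn.Parameter(S.clone())      # trainable singular values
        self.bias = nn.Parameter(bias.clone()) if bias is not None else None

    def forward(self, x):
        W = self.U @ torch.diag(self.s) @ self.Vh   # reconstruct the adapted weight
        return F.linear(x, W, self.bias)
```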
SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning Open
Continual Learning (CL) with foundation models has recently emerged as a promising paradigm to exploit abundant knowledge acquired during pre-training for tackling sequential tasks. However, existing prompt-based and Low-Rank Adaptation-ba…
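As a reference point, the generic LoRA building block that such methods start from adds a trainable low-rank update to a frozen pre-trained layer; the rank, scaling, and names below are assumptions for illustration, not SD-LoRA's scalable decoupled design.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (generic LoRA sketch)."""

    def __init__(self, base: nn.Linear, rank=8, alpha=16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pre-trained weights
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())
```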
Dual-CBA: Improving Online Continual Learning via Dual Continual Bias Adaptors from a Bi-level Optimization Perspective Open
In online continual learning (CL), models trained on changing distributions easily forget previously learned knowledge and become biased toward newly received tasks. To address this issue, we present Continual Bias Adaptor (CBA), a bi-level framewo…
CBA: Improving Online Continual Learning via Continual Bias Adaptor Open
Online continual learning (CL) aims to learn new knowledge and consolidate previously learned knowledge from non-stationary data streams. Due to the time-varying training setting, the model learned from a changing distribution easily forge…
Imbalanced Semi-supervised Learning with Bias Adaptive Classifier Open
Pseudo-labeling has proven to be a promising semi-supervised learning (SSL) paradigm. Existing pseudo-labeling methods commonly assume that the class distributions of training data are balanced. However, such an assumption is far from real…
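For background, the vanilla confidence-thresholded pseudo-labeling step that this line of work builds on looks roughly like the sketch below (the threshold value is an assumption, and the proposed bias adaptive classifier is not reproduced here).

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, x_unlabeled, threshold=0.95):
    """Confidence-thresholded pseudo-labeling loss on an unlabeled batch (generic SSL sketch)."""
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = (conf >= threshold).float()       # keep only confident predictions
    logits = model(x_unlabeled)                  # second pass with gradients enabled
    loss = F.cross_entropy(logits, pseudo_y, reduction="none")
    return (mask * loss).mean()
```

Under class imbalance, the confident predictions tend to concentrate on head classes, which is the bias the paper's classifier is designed to counteract.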
Relational Experience Replay: Continual Learning by Adaptively Tuning Task-wise Relationship Open
Continual learning is a promising machine learning paradigm to learn new tasks while retaining previously learned knowledge over streaming training data. To date, rehearsal-based methods, keeping a small part of data from old tasks as a m…
Label Hierarchy Transition: Delving into Class Hierarchies to Enhance Deep Classifiers Open
Hierarchical classification aims to sort objects into a hierarchical structure of categories. For example, a bird can be categorized according to a three-level hierarchy of order, family, and species. Existing methods commonly address h…
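To make the setup concrete, a naive hierarchical classifier attaches one prediction head per level of the hierarchy to a shared backbone, as in the sketch below; this baseline layout is an assumption for illustration, not the label hierarchy transition approach itself.

```python
import torch
import torch.nn as nn

class HierarchicalClassifier(nn.Module):
    """Shared backbone with one classification head per hierarchy level (illustrative sketch)."""

    def __init__(self, backbone: nn.Module, feat_dim: int,
                 n_orders: int, n_families: int, n_species: int):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleList([
            nn.Linear(feat_dim, n_orders),    # coarse level (order)
            nn.Linear(feat_dim, n_families),  # middle level (family)
            nn.Linear(feat_dim, n_species),   # fine level (species)
        ])

    def forward(self, x):
        feat = self.backbone(x)
        return [head(feat) for head in self.heads]   # one logit vector per level
```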
Unsupervised Local Discrimination for Medical Images Open
Contrastive learning, which aims to capture general representations from unlabeled images to initialize medical analysis models, has proven effective in alleviating the high demand for expensive annotations. Current methods mainly …
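For reference, the InfoNCE objective underlying most contrastive pre-training can be written as in the sketch below (a generic two-view formulation with an assumed temperature, not the local discrimination loss proposed here).

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE between two augmented views z1, z2 of the same images (generic sketch)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (N, N) cosine-similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)       # positives lie on the diagonal
```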
Residual Moment Loss for Medical Image Segmentation Open
Location information has been shown to help deep learning models capture the manifold structure of target objects, and accordingly boosts the accuracy of medical image segmentation. However, most existing methods encode the location…
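As a rough illustration of moment-based location supervision, one could compare the centroid (first-order spatial moment) of a predicted soft mask against that of the ground truth, as sketched below; this simplified loss is my own construction, not the paper's residual moment loss.

```python
import torch

def centroid(mask):
    """First-order spatial moment (centroid) of a soft 2D mask of shape (B, H, W)."""
    B, H, W = mask.shape
    ys = torch.arange(H, dtype=mask.dtype, device=mask.device).view(1, H, 1)
    xs = torch.arange(W, dtype=mask.dtype, device=mask.device).view(1, 1, W)
    total = mask.sum(dim=(1, 2)).clamp(min=1e-6)
    cy = (mask * ys).sum(dim=(1, 2)) / total
    cx = (mask * xs).sum(dim=(1, 2)) / total
    return torch.stack([cy, cx], dim=1)           # (B, 2) centroid coordinates

def moment_loss(pred_mask, gt_mask):
    """Penalize the residual between predicted and ground-truth centroids (illustrative sketch)."""
    return torch.mean((centroid(pred_mask) - centroid(gt_mask)) ** 2)
```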
Unsupervised Learning of Local Discriminative Representation for Medical Images Open
Local discriminative representation is needed in many medical image analysis tasks such as identifying sub-types of lesions or segmenting detailed components of anatomical structures. However, the commonly applied supervised representation …
Meta Feature Modulator for Long-tailed Recognition Open
Deep neural networks often degrade significantly when training data suffer from class imbalance problems. Existing approaches, e.g., re-sampling and re-weighting, commonly address this issue by rearranging the label distribution of trainin…
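For context, a standard re-weighting baseline scales the per-class loss by effective-number class weights estimated from training label counts; the sketch below shows that baseline (the weighting scheme is my choice for illustration and is not the meta feature modulator).

```python
import torch
import torch.nn.functional as F

def class_balanced_weights(class_counts, beta=0.999):
    """Effective-number class weights, a common long-tailed baseline (illustrative sketch)."""
    counts = torch.as_tensor(class_counts, dtype=torch.float)
    effective_num = 1.0 - torch.pow(beta, counts)
    weights = (1.0 - beta) / effective_num
    return weights / weights.sum() * len(counts)   # normalize to mean 1

def reweighted_loss(logits, targets, class_counts):
    """Cross-entropy with per-class weights that down-weight frequent classes."""
    weights = class_balanced_weights(class_counts).to(logits.device)
    return F.cross_entropy(logits, targets, weight=weights)
```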
LT-Net: Label Transfer by Learning Reversible Voxel-wise Correspondence for One-shot Medical Image Segmentation Open
We introduce a one-shot segmentation method to alleviate the burden of manual annotation for medical images. The main idea is to treat one-shot segmentation as a classical atlas-based segmentation problem, where voxel-wise correspondence f…
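To ground the atlas-based idea, label transfer can be pictured as warping the atlas segmentation with a predicted dense displacement field via grid sampling, as in the sketch below; the normalized-coordinate convention and function names are assumptions, not LT-Net's reversible correspondence design.

```python
import torch
import torch.nn.functional as F

def warp_labels(atlas_seg, flow):
    """Warp a one-hot atlas segmentation (B, C, H, W) with a dense flow field (B, 2, H, W).

    flow is assumed to hold per-pixel (x, y) displacements in normalized [-1, 1] coordinates.
    """
    B, _, H, W = atlas_seg.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=flow.device),
        torch.linspace(-1, 1, W, device=flow.device),
        indexing="ij",
    )
    base_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(B, H, W, 2)
    sample_grid = base_grid + flow.permute(0, 2, 3, 1)       # add predicted displacements
    return F.grid_sample(atlas_seg, sample_grid, mode="bilinear", align_corners=True)
```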