Multi-task learning
Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics
Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative we…
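The truncated abstract above refers to weighting task losses by learned homoscedastic uncertainty. A minimal sketch of that weighting scheme for regression-style losses (function and variable names are illustrative, not the authors' code):

```python
import math

def uncertainty_weighted_loss(task_losses, log_variances):
    """Combine per-task losses using learned log-variances s_i.

    Each task contributes exp(-s_i) * L_i + s_i: a large learned
    variance down-weights a noisy task's loss, while the +s_i term
    keeps the variance from growing without bound.
    """
    return sum(math.exp(-s) * L + s
               for L, s in zip(task_losses, log_variances))

# With every s_i = 0 the weights are 1 and the losses simply add up.
combined = uncertainty_weighted_loss([1.0, 2.0], [0.0, 0.0])
```

In practice the log-variances are trainable parameters optimized jointly with the network weights, so the balance between tasks is learned rather than hand-tuned.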
Understanding the Impact of Value Selection Heuristics in Scheduling Problems
It has been observed that value selection heuristics have less impact than other heuristic choices when solving hard combinatorial optimization (CO) problems. It is often thought that this is because more time is spent on unsatisfiable sub…
ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing…
An overview of multi-task learning
As a promising area in machine learning, multi-task learning (MTL) aims to improve the performance of multiple related learning tasks by leveraging useful information among them. In this paper, we give an overview of MTL by first giving a …
Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts
Neural-based multi-task learning has been successfully used in many real-world large-scale applications such as recommendation systems. For example, in movie recommendations, beyond providing users movies which they tend to purchase and wa…
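The Multi-gate Mixture-of-Experts idea the abstract sketches, shared experts with one softmax gate per task, can be written in a few lines of NumPy (shapes and names here are illustrative, not the paper's code):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mmoe_forward(x, expert_weights, gate_weights):
    """One MMoE layer: every task mixes the same experts differently.

    expert_weights: list of (d_out, d_in) matrices, one per expert.
    gate_weights:   list of (n_experts, d_in) matrices, one per task.
    Returns one (d_out,) vector per task.
    """
    experts = np.stack([W @ x for W in expert_weights])  # (n_experts, d_out)
    outputs = []
    for G in gate_weights:
        gate = softmax(G @ x)           # task-specific mixture weights
        outputs.append(gate @ experts)  # convex combination of experts
    return outputs
```

With zero gate weights the logits are all equal, so each task falls back to a uniform average of the experts; training the gates lets tasks specialize without duplicating the shared bottom.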
Recurrent Neural Network for Text Classification with Multi-Task Learning
Neural network based methods have obtained great progress on a variety of natural language processing tasks. However, in most previous works, the models are learned based on single-task supervised objectives, which often suffer from insuff…
A comprehensive review on ensemble deep learning: Opportunities and challenges
In machine learning, two approaches outperform traditional algorithms: ensemble learning and deep learning. The former refers to methods that integrate multiple base models in the same framework to obtain a stronger model that outperforms …
Multi-Task Learning for Dense Prediction Tasks: A Survey
With the advent of deep learning, many dense prediction tasks, i.e., tasks that produce pixel-level predictions, have seen significant performance improvements. The typical approach is to learn these tasks in isolation, that is, a separate…
ConViT: improving vision transformers with soft convolutional inductive biases
Convolutional architectures have proven to be extremely successful for vision tasks. Their hard inductive biases enable sample-efficient learning, but come at the cost of a potentially lower performance ceiling. Vision transformers rely on…
A Survey on Multi-Task Learning
Multi-Task Learning (MTL) is a learning paradigm in machine learning and its aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. In this paper, we give …
Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis
Representation Learning is a significant and challenging task in multimodal learning. Effective modality representations should contain two parts of characteristics: the consistency and the difference. Due to the unified multimodal annota-…
Adversarial Multi-task Learning for Text Classification
Neural network models have shown their promising opportunities for multi-task learning, which focus on learning the shared layers to extract the common and task-invariant features. However, in most existing approaches, the extracted shared…
Deep Bayesian Active Learning with Image Data
Even though active learning forms an important pillar of machine learning, deep learning tools are not prevalent within it. Deep learning poses several difficulties when used in an active learning setting. First, active learning (AL) metho…
2D/3D Pose Estimation and Action Recognition Using Multitask Deep Learning
Action recognition and human pose estimation are closely related but both problems are generally handled as distinct tasks in the literature. In this work, we propose a multitask framework for jointly 2D and 3D pose estimation from still i…
Multitask Prompted Training Enables Zero-Shot Task Generalization
Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks (Brown et al., 2020). It has been hypothesized that this is a consequence of implicit multitask learning in language mod…
Three scenarios for continual learning
Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine learning. In recent years, numerous methods have been proposed for continual learn…
Transfer Learning in Natural Language Processing
The classic supervised machine learning paradigm is based on learning in isolation, a single predictive model for a task using a single dataset. This approach requires a large number of training examples and performs best for well-defined …
A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks
Transfer and multi-task learning have traditionally focused on either a single source-target pair or very few, similar tasks. Ideally, the linguistic levels of morphology, syntax and semantics would benefit each other by being trained in a…
Deep multi-task learning with low level tasks supervised at lower layers
In all previous work on deep multi-task learning we are aware of, all task supervisions are on the same (outermost) layer. We present a multi-task learning architecture with deep bi-directional RNNs, where different tasks supervision can h…
GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks
Deep multitask networks, in which one neural network produces multiple predictive outputs, can offer better speed and performance than their single-task counterparts but are challenging to train properly. We present a gradient normalizatio…
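The abstract describes balancing tasks by steering their gradient magnitudes toward a common scale. A toy sketch of one GradNorm-style weight update, with gradient norms and loss ratios supplied externally and with illustrative `alpha`/`lr` values (a simplification, not the paper's implementation):

```python
import numpy as np

def gradnorm_step(weights, grad_norms, loss_ratios, alpha=1.5, lr=0.01):
    """One update of task weights w_i toward balanced gradient norms.

    grad_norms:  norm of each task's loss gradient w.r.t. shared weights.
    loss_ratios: L_i(t) / L_i(0), a proxy for each task's training rate.
    """
    G = weights * grad_norms              # weighted gradient norms
    r = loss_ratios / loss_ratios.mean()  # relative inverse training rate
    target = G.mean() * r ** alpha        # where each G_i should sit
    # subgradient of sum_i |G_i - target_i| w.r.t. w_i (targets held fixed)
    weights = weights - lr * np.sign(G - target) * grad_norms
    # renormalize so the weights still sum to the number of tasks
    return weights * len(weights) / weights.sum()
```

A task whose gradients are larger than average gets its weight nudged down, and one lagging behind gets nudged up, which is the balancing behavior the abstract refers to.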
Deep Bayesian Active Learning with Image Data Open
Even though active learning forms an important pillar of machine learning, deep learning tools are not prevalent within it. Deep learning poses several difficulties when used in an active learning setting. First, active learning (AL) metho…
View article
Multi-Task Learning with Deep Neural Networks: A Survey Open
Multi-task learning (MTL) is a subfield of machine learning in which multiple tasks are simultaneously learned by a shared model. Such approaches offer advantages like improved data efficiency, reduced overfitting through shared representa…
Predicting Materials Properties with Little Data Using Shotgun Transfer Learning
There is a growing demand for the use of machine learning (ML) to derive fast-to-evaluate surrogate models of materials properties. In recent years, a broad array of materials property databases have emerged as part of a digital transforma…
A survey on heterogeneous transfer learning
Transfer learning has been demonstrated to be effective for many real-world applications as it exploits knowledge present in labeled training data from a source domain to enhance a model’s performance in a target domain, which has little o…
Multi-Task Learning as Multi-Objective Optimization
In multi-task learning, multiple tasks are solved jointly, sharing inductive bias between them. Multi-task learning is inherently a multi-objective problem because different tasks may conflict, necessitating a trade-off. A common compromis…
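For two tasks, the multi-objective trade-off the abstract mentions has a closed form: the minimum-norm point in the convex hull of the two task gradients (the basic building block of multiple-gradient-descent methods; variable names are mine):

```python
import numpy as np

def min_norm_combination(g1, g2):
    """Return alpha and the direction alpha*g1 + (1-alpha)*g2 with the
    smallest Euclidean norm; when nonzero, following its negative
    decreases both task losses at once.
    """
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:                      # identical gradients
        return 0.5, g1
    # minimizer of ||a*g1 + (1-a)*g2||^2 over a, clipped to [0, 1]
    alpha = np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return alpha, alpha * g1 + (1.0 - alpha) * g2
```

When the clip is active the combination degenerates to the shorter of the two gradients, i.e. one task already dominates the trade-off.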
Self-Supervised ECG Representation Learning for Emotion Recognition
We exploit a self-supervised deep multi-task learning framework for electrocardiogram (ECG) -based emotion recognition. The proposed solution consists of two stages of learning a) learning ECG representations and b) learning to classify em…
Classification and Visualization of Alzheimer’s Disease using Volumetric Convolutional Neural Network and Transfer Learning
Recently, deep-learning-based approaches have been proposed for the classification of neuroimaging data related to Alzheimer’s disease (AD), and significant progress has been made. However, end-to-end learning that is capable of maximizing…
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms. In this paper, we prov…
Self-Supervised Speech Representation Learning: A Review
Although supervised deep learning has revolutionized speech and audio processing, it has necessitated the building of specialist models for individual tasks and application scenarios. It is likewise difficult to apply this to dialects a…