Sequence learning
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence-to-sequence learning maps an input sequence to a variable-length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to re…
Consolidation alters motor sequence-specific distributed representations
Functional magnetic resonance imaging (fMRI) studies investigating the acquisition of sequential motor skills in humans have revealed learning-related functional reorganizations of the cortico-striatal and cortico-cerebellar motor systems …
Memory Fusion Network for Multi-view Sequential Learning
Multi-view sequential learning is a fundamental problem in machine learning dealing with multi-view sequences. In a multi-view sequence, there exist two forms of interactions between different views: view-specific interactions and cross-v…
Three scenarios for continual learning
Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine learning. In recent years, numerous methods have been proposed for continual learn…
SeqSleepNet: End-to-End Hierarchical Recurrent Neural Network for Sequence-to-Sequence Automatic Sleep Staging
Automatic sleep staging has often been treated as a simple classification problem that aims at determining the label of individual target polysomnography epochs one at a time. In this paper, we tackle the task as a sequence-to-sequence cla…
Sequence-to-Point Learning With Neural Networks for Non-Intrusive Load Monitoring
Energy disaggregation (a.k.a. non-intrusive load monitoring, NILM), a single-channel blind source separation problem, aims to decompose the mains signal, which records the whole-house electricity consumption, into appliance-wise readings. This proble…
TransformerCPI: improving compound–protein interaction prediction by sequence-based deep learning with self-attention mechanism and label reversal experiments
Motivation: Identifying compound–protein interaction (CPI) is a crucial task in drug discovery and chemogenomics studies, and proteins without three-dimensional structure account for a large part of potential biological targets, which requi…
Sequence-to-Sequence Learning as Beam-Search Optimization
Sequence-to-Sequence (seq2seq) modeling has rapidly become an important general-purpose NLP tool that has proven effective for many text-generation and sequence-labeling tasks. Seq2seq builds on deep neural language modeling and inherits its…
Brain age prediction using deep learning uncovers associated sequence variants
Machine learning algorithms can be trained to estimate age from brain structural MRI. The difference between an individual’s predicted and chronological age, predicted age difference (PAD), is a phenotype of relevance to aging and brain di…
Point2Sequence: Learning the Shape Representation of 3D Point Clouds with an Attention-Based Sequence to Sequence Network
Exploring contextual information in the local region is important for shape understanding and analysis. Existing studies often employ hand-crafted or explicit ways to encode contextual information of local regions. However, it is hard to c…
SQLNet: Generating Structured Queries From Natural Language Without Reinforcement Learning
Synthesizing SQL queries from natural language is a long-standing open problem and has been attracting considerable interest recently. Toward solving the problem, the de facto approach is to employ a sequence-to-sequence-style model. Such …
BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer
Modeling users' dynamic and evolving preferences from their historical behaviors is challenging and crucial for recommendation systems. Previous methods employ sequential neural networks (e.g., Recurrent Neural Network) to encode users' hi…
Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks
Selecting optimal parameters for a neural network architecture can often make the difference between mediocre and state-of-the-art performance. However, little has been published on which parameters and design choices should be evaluated or selecte…
Deep Learning with Gated Recurrent Unit Networks for Financial Sequence Predictions
Gated recurrent unit (GRU) networks perform well in sequence learning tasks and overcome the vanishing and exploding gradient problems of traditional recurrent neural networks (RNNs) when learning long-term dependencies. Although th…
Incorporating Copying Mechanism in Sequence-to-Sequence Learning
We address an important problem in sequence-to-sequence (Seq2Seq) learning referred to as copying, in which certain segments in the input sequence are selectively replicated in the output sequence. A similar phenomenon is observable in hum…
Progress in Neural NLP: Modeling, Learning, and Reasoning
Natural language processing (NLP) is a subfield of artificial intelligence that focuses on enabling computers to understand and process human languages. In the last five years, we have witnessed the rapid development of NLP in tasks such a…
Learning to Remember Rare Events
Despite recent advances, memory-augmented deep neural networks are still limited when it comes to life-long and one-shot learning, especially in remembering rare events. We present a large-scale life-long memory module for use in deep lear…
Deep Learning Methods for Vessel Trajectory Prediction Based on Recurrent Neural Networks
Data-driven methods open up unprecedented possibilities for maritime surveillance using automatic identification system (AIS) data. In this work, we explore deep learning strategies using historical AIS observations to address the problem …
Online and Linear-Time Attention by Enforcing Monotonic Alignments
Recurrent neural network models with an attention mechanism have proven to be extremely effective on a wide variety of sequence-to-sequence problems. However, the fact that soft attention mechanisms perform a pass over the entire input seq…
Spatial-Temporal Multi-Cue Network for Continuous Sign Language Recognition
Despite the recent success of deep learning in continuous sign language recognition (CSLR), deep models typically focus on the most discriminative features, ignoring other potentially non-trivial and informative contents. Such characterist…
Deep Learning for Load Forecasting: Sequence to Sequence Recurrent Neural Networks With Attention
The biggest contributor to global warming is energy production and use. Moreover, a push for electric vehicles, along with other economic developments, is expected to further increase energy use. To combat these challenges, electrical load foreca…
Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks
The celebrated Sequence to Sequence learning (Seq2Seq) technique and its numerous variants achieve excellent performance on many tasks. However, many machine learning tasks have inputs naturally represented as graphs; existing Seq2Seq mode…
Long Short-Term Memory Recurrent Neural Network for Automatic Speech Recognition
Automatic speech recognition (ASR) is one of the most demanding tasks in natural language processing owing to its complexity. Recently, deep learning approaches have been deployed for this task and have been proven to outperform traditiona…
Dissociable effects of surprising rewards on learning and memory.
Reward-prediction errors track the extent to which rewards deviate from expectations, and aid in learning. How do such errors in prediction interact with memory for the rewarding episode? Existing findings point to both cooperative and com…
Online Learning: A Comprehensive Survey
Online learning represents an important family of machine learning algorithms, in which a learner attempts to resolve an online prediction (or any type of decision-making) task by learning a model/hypothesis from a sequence of data instanc…
Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers
Recurrent neural networks (RNNs), temporal convolutions, and neural differential equations (NDEs) are popular families of deep learning models for time-series data, each with unique strengths and tradeoffs in modeling power and computat…
Exploring Sequence-to-Sequence Learning in Aspect Term Extraction
Aspect term extraction (ATE) aims at identifying all aspect terms in a sentence and is usually modeled as a sequence labeling problem. However, sequence labeling based methods cannot make full use of the overall meaning of the whole senten…
The procedural learning deficit hypothesis of language learning disorders: we see some problems
Impaired procedural learning has been suggested as a possible cause of developmental dyslexia (DD) and specific language impairment (SLI). This study examined the relationship between measures of verbal and non‐verbal implicit and expl…
Understanding the Neural Bases of Implicit and Statistical Learning
Both implicit learning and statistical learning focus on the ability of learners to pick up on patterns in the environment. It has been suggested that these two lines of research may be combined into a single construct of “implicit statist…
NREM2 and Sleep Spindles Are Instrumental to the Consolidation of Motor Sequence Memories
Although numerous studies have convincingly demonstrated that sleep plays a critical role in motor sequence learning (MSL) consolidation, the specific contribution of the different sleep stages in this type of memory consolidation is still…