Why Propensity Scores Should Not Be Used for Matching
We show that propensity score matching (PSM), an enormously popular method of preprocessing data for causal inference, often accomplishes the opposite of its intended goal—thus increasing imbalance, inefficiency, model dependence, and bias…
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary ex…
Pruning Convolutional Neural Networks for Resource Efficient Inference
We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation - a computationally efficient procedure that m…
Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned
Multi-head self-attention is a key component of the Transformer, a state-of-the-art architecture for neural machine translation. In this work we evaluate the contribution made by individual attention heads to the overall performance of the…
Usability of Symbolic Regression for Hybrid System Identification - System Classes and Parameters (Short Paper)
Hybrid systems, which combine both continuous and discrete behavior, are used in many fields, including robotics, biological systems, and control systems. However, due to their complexity, finding an accurate model is a challenge. This pap…
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
This paper proposes a Soft Filter Pruning (SFP) method to accelerate the inference procedure of deep Convolutional Neural Networks (CNNs). Specifically, the proposed SFP enables the pruned filters to be updated when training the model afte…
Graph Convolution over Pruned Dependency Trees Improves Relation Extraction
Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or…
Sequence-Level Knowledge Distillation
Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However, to reach competitive performance, NMT models need to be exceedingly large. In this paper …
Rethinking the Value of Network Pruning
Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, a…
Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures
State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and …
Pruning Filters for Efficient ConvNets
The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various laye…
Once-for-All: Train One Network and Specialize it for Efficient Deployment
We address the challenging problem of efficient inference across many devices and resource constraints, especially on edge devices. Conventional approaches either manually design or use neural architecture search (NAS) to find a special…
To prune, or not to prune: exploring the efficacy of pruning for model compression
Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model. Recent reports (Han et al., 2015; Narang et al., 2017) prune deep network…
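As a rough illustration of the magnitude-based pruning idea this abstract describes (a minimal hypothetical sketch, not the paper's actual method), one can zero out the smallest-magnitude fraction of a weight matrix:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude `sparsity` fraction of entries."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune 90% of a random 64x64 weight matrix
w = np.random.randn(64, 64)
w_pruned = magnitude_prune(w, 0.9)
print(f"nonzero fraction: {np.count_nonzero(w_pruned) / w.size:.2f}")
```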
Clustering a Chemical Inventory for Safety Assessment of Fragrance Ingredients: Identifying Read-Across Analogs to Address Data Gaps
A valuable approach to chemical safety assessment is the use of read-across chemicals to provide safety data to support the assessment of structurally similar chemicals. An inventory of over 6000 discrete organic chemicals used as fragranc…
Embedding Watermarks into Deep Neural Networks
Deep neural networks have recently achieved significant progress. Sharing trained models of these deep neural networks is very important in the rapid progress of researching or developing deep neural network systems. At the same time, i…
Attention Guided Graph Convolutional Networks for Relation Extraction
Dependency trees convey rich structural information that is proven useful for extracting relations among entities in text. However, how to effectively make use of relevant information while ignoring irrelevant information from the dependen…
The State of Sparsity in Deep Neural Networks
We rigorously evaluate three state-of-the-art techniques for inducing sparsity in deep neural networks on two large-scale learning tasks: Transformer trained on WMT 2014 English-to-German, and ResNet-50 trained on ImageNet. Across thousand…
An Update on the Impact of Climate Change in Viticulture and Potential Adaptations
Climate change will impose increasingly warm and dry conditions on vineyards. Wine quality and yield are strongly influenced by climatic conditions and depend on complex interactions between temperatures, water availability, plant material…
Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision
Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult when it requires executing efficient discrete operations against a large knowledge base. In this work, we introduce …
Iteratively Pruned Deep Learning Ensembles for COVID-19 Detection in Chest X-Rays
We demonstrate use of iteratively pruned deep learning model ensembles for detecting pulmonary manifestations of COVID-19 with chest X-rays. This disease is caused by the novel Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) v…
SNIP: Single-shot Network Pruning based on Connection Sensitivity
Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity. In existing methods, pruning is done within an iterative optimization procedure with either heuristically de…
Channel Pruning for Accelerating Very Deep Neural Networks
In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression bas…
Hello Edge: Keyword Spotting on Microcontrollers
Keyword spotting (KWS) is a critical component for enabling speech based user interactions on smart devices. It requires real-time response and high accuracy for good user experience. Recently, neural networks have become an attractive cho…
Scalpel
As the size of Deep Neural Networks (DNNs) continues to grow to increase accuracy and solve more complex problems, their energy footprint also scales. Weight pruning reduces DNN model size and the computation by removing redundant weights.…
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real-world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, mo…
Multi-Channel Graph Neural Network for Entity Alignment
DOI: 10.18653/v1/P19-1140