Daniel Becking
Neural Network Coding of Difference Updates for Efficient Distributed Learning Communication
Distributed learning requires frequent communication of neural network update data. For this, we present a set of new compression tools, jointly called differential neural network coding (dNNC). dNNC is specifically tailored to efficient…
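For orientation, a minimal NumPy sketch of the underlying idea of coding difference updates (an illustration only, not the dNNC codec itself; the function names are invented for this example):

```python
import numpy as np

# Minimal sketch of a difference update: only the change relative to a shared
# base model is quantized and sent, not the full weight tensor.
def encode_difference_update(updated, base, step=0.01):
    """Map (updated - base) to coarse integer symbols of size `step`."""
    return np.round((updated - base) / step).astype(np.int8)

def decode_difference_update(base, symbols, step=0.01):
    """Reconstruct the updated weights from the received symbols."""
    return base + symbols.astype(np.float32) * step

base = np.random.randn(4, 4).astype(np.float32)
updated = base + 0.05 * np.random.randn(4, 4).astype(np.float32)
symbols = encode_difference_update(updated, base)    # small integers, cheap to entropy-code
reconstructed = decode_difference_update(base, symbols)
```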
Adaptive Differential Filters for Fast and Communication-Efficient Federated Learning
Federated learning (FL) scenarios inherently generate a large communication overhead by frequently transmitting neural network updates between clients and server. To minimize the communication cost, introducing sparsity in conjunction with…
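A plain sketch of how sparsity can shrink an update before transmission (generic magnitude-based top-k selection, assuming NumPy; the paper's adaptive differential filters operate per filter and are not reproduced here):

```python
import numpy as np

# Keep only the largest-magnitude entries of an update before transmission;
# zeroed entries need not be sent, which reduces the communication cost.
def sparsify_update(update, keep_ratio=0.1):
    k = max(1, int(keep_ratio * update.size))
    threshold = np.partition(np.abs(update).ravel(), -k)[-k]
    mask = np.abs(update) >= threshold
    return update * mask, mask

update = np.random.randn(8, 8).astype(np.float32)
sparse_update, mask = sparsify_update(update, keep_ratio=0.1)
```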
ECQˣ: Explainability-Driven Quantization for Low-Bit and Sparse DNNs
The remarkable success of deep neural networks (DNNs) in various applications is accompanied by a significant increase in network parameters and arithmetic operations. Such increases in memory and computational demands make deep learning p…
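A generic sketch of symmetric low-bit quantization of a weight tensor (assuming NumPy; the explainability-driven relevance weighting that distinguishes ECQˣ is omitted here):

```python
import numpy as np

# Symmetric uniform quantization to a low bit width; weights are stored as small
# integers plus a single floating-point scale per tensor.
def quantize_symmetric(weights, bits=4):
    levels = 2 ** (bits - 1) - 1                      # e.g. 7 levels for 4 bit
    scale = np.abs(weights).max() / levels
    q = np.clip(np.round(weights / scale), -levels, levels).astype(np.int8)
    return q, scale

weights = np.random.randn(16, 16).astype(np.float32)
q, scale = quantize_symmetric(weights, bits=4)
dequantized = q.astype(np.float32) * scale            # approximate reconstruction
```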
FantastIC4: A Hardware-Software Co-Design Approach for Efficiently Running 4Bit-Compact Multilayer Perceptrons
With the growing demand for deploying deep learning models to the "edge", it is paramount to develop techniques that allow executing state-of-the-art models within very tight and limited resource constraints. In this work we propose a sof…
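As a rough illustration of why 4-bit-compact weights pay off in storage, a NumPy sketch of nibble packing (illustration only, not the FantastIC4 hardware/software design):

```python
import numpy as np

# Two 4-bit weight indices fit into one byte, halving the storage of the index tensor.
def pack_nibbles(indices):
    """indices: uint8 values in [0, 15]; returns an array half as long."""
    if indices.size % 2:
        indices = np.concatenate([indices, np.zeros(1, dtype=np.uint8)])  # pad to even length
    return (indices[0::2] << 4) | indices[1::2]

idx = np.random.randint(0, 16, size=10, dtype=np.uint8)
packed = pack_nibbles(idx)                             # 5 bytes instead of 10
```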
Learning Sparse & Ternary Neural Networks with Entropy-Constrained Trained Ternarization (EC2T)
Deep neural networks (DNNs) have shown remarkable success in a variety of machine learning applications. The capacity of these models (i.e., the number of parameters) endows them with expressive power and allows them to reach the desired pe…
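A simple sketch of sparse, ternary weights via thresholding (assuming NumPy; the entropy-constrained training procedure of EC2T is not shown):

```python
import numpy as np

# Threshold-based ternarization: small weights become zero (sparsity), the rest are
# snapped to a single positive or negative scale value.
def ternarize(weights, threshold_ratio=0.05):
    t = threshold_ratio * np.abs(weights).max()
    mask = np.abs(weights) > t
    scale = np.abs(weights[mask]).mean() if mask.any() else 0.0
    return scale * np.sign(weights) * mask

weights = np.random.randn(32, 32).astype(np.float32)
ternary = ternarize(weights)                           # entries in {-scale, 0, +scale}
```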