Ganesh Dasika
A Deep Dive Into Understanding The Random Walk-Based Temporal Graph Learning
Machine learning on graph data has gained significant interest because of its applicability to various domains ranging from product recommendations to drug discovery. While there is a rapid growth in the algorithmic community, the compute…
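The method class the title refers to samples walks whose edge timestamps never decrease, so each walk is a temporally valid sequence of interactions. A minimal sketch of such a temporal walk sampler, written in plain Python with illustrative names (not the paper's implementation):

```python
import random
from collections import defaultdict

def temporal_random_walk(edges, start_node, start_time, walk_len, rng=random):
    """Sample a walk whose edge timestamps are non-decreasing.

    edges: list of (u, v, t) tuples for an undirected temporal graph.
    Returns the visited nodes (the walk may stop early at a dead end).
    """
    # Adjacency list: node -> list of (neighbor, edge timestamp).
    adj = defaultdict(list)
    for u, v, t in edges:
        adj[u].append((v, t))
        adj[v].append((u, t))

    walk, node, time = [start_node], start_node, start_time
    for _ in range(walk_len - 1):
        # Only edges at or after the current time keep the walk temporally valid.
        candidates = [(nbr, t) for nbr, t in adj[node] if t >= time]
        if not candidates:
            break
        node, time = rng.choice(candidates)
        walk.append(node)
    return walk

edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 3.0), (1, 3, 0.5)]
print(temporal_random_walk(edges, start_node=0, start_time=0.0, walk_len=4))
```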
Committees
Rank and run-time aware compression of NLP Applications
Sequence model based NLP applications can be large. Yet, many applications that benefit from them run on small devices with very limited compute and storage capabilities, while still having run-time constraints. As a result, there is a nee…
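Rank-aware compression in this vein typically replaces a large weight matrix with a low-rank factorization, shrinking both storage and multiply count. A minimal sketch using truncated SVD; the rank value and function name are illustrative, not necessarily the paper's method:

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W (m x n) as A @ B with A (m x rank) and B (rank x n)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # fold the singular values into A
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
A, B = low_rank_factorize(W, rank=32)
# Storage drops from 256*256 to 2*256*32 parameters (4x smaller).
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative error
```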
Pushing the limits of RNN Compression
Recurrent Neural Networks (RNN) can be difficult to deploy on resource constrained devices due to their size. As a result, there is a need for compression techniques that can significantly compress RNNs without negatively impacting task ac…
Compressing RNNs for IoT devices by 15-38x using Kronecker Products
Recurrent Neural Networks (RNN) can be difficult to deploy on resource constrained devices due to their size. As a result, there is a need for compression techniques that can significantly compress RNNs without negatively impacting task acc…
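The Kronecker-product idea is to constrain a weight matrix W to kron(A, B) for two much smaller factors, so W is never stored explicitly and matrix-vector products use only the factors. A minimal NumPy sketch with illustrative 16x16 factor shapes:

```python
import numpy as np

# Represent a 256x256 weight matrix as kron(A, B) with 16x16 factors:
# 65,536 parameters become 2 * 256 = 512, a ~128x reduction.
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16))
B = rng.standard_normal((16, 16))
W = np.kron(A, B)   # materialized here only to check the identity below

x = rng.standard_normal(256)
# Row-major (NumPy) identity: kron(A, B) @ x == (A @ X @ B.T).reshape(-1)
# with X = x.reshape(A.shape[1], B.shape[1]), so inference never forms W.
X = x.reshape(16, 16)
y_fast = (A @ X @ B.T).reshape(-1)
y_ref = W @ x
print(np.allclose(y_fast, y_ref))  # True
```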
Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs
The Winograd or Cook-Toom class of algorithms helps to reduce the overall compute complexity of many modern deep convolutional neural networks (CNNs). Although there has been a lot of research done on model and algorithmic optimization of C…
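The Winograd/Cook-Toom trade is fewer multiplications at the cost of extra additions; the classic F(2,3) variant produces two outputs of a 3-tap filter with 4 multiplies instead of the naive 6. A minimal 1-D sketch (the 2-D case used in CNNs nests this transform):

```python
import numpy as np

def winograd_f23(d, g):
    """F(2,3): two outputs of a 3-tap correlation from a 4-sample tile,
    using 4 multiplications instead of the naive 6."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2   # filter transforms are precomputable
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
direct = np.array([d[0:3] @ g, d[1:4] @ g])     # sliding-window reference
print(np.allclose(winograd_f23(d, g), direct))  # True
```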
Ternary Hybrid Neural-Tree Networks for Highly Constrained IoT Applications
Machine learning-based applications are increasingly prevalent in IoT devices. The power and storage constraints of these devices make it particularly challenging to run modern neural networks, limiting the number of new applications that …
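Ternarization constrains weights to a scale factor times {-1, 0, +1}, so each weight fits in 2 bits. A minimal sketch in the style of threshold-based ternary weight networks; the 0.7 threshold heuristic is a common choice from the literature, not necessarily this paper's:

```python
import numpy as np

def ternarize(W, threshold_factor=0.7):
    """Quantize weights to alpha * {-1, 0, +1} (TWN-style heuristic)."""
    delta = threshold_factor * np.abs(W).mean()
    T = np.where(np.abs(W) > delta, np.sign(W), 0.0)
    mask = T != 0
    alpha = np.abs(W[mask]).mean() if mask.any() else 0.0  # per-tensor scale
    return alpha, T

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
alpha, T = ternarize(W)
print(T)   # entries in {-1, 0, +1}, storable in 2 bits each
print(np.linalg.norm(W - alpha * T) / np.linalg.norm(W))  # relative error
```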
Measuring scheduling efficiency of RNNs for NLP applications
Recurrent neural networks (RNNs) have shown state of the art results for speech recognition, natural language processing, image captioning and video summarizing applications. Many of these applications run on low-power platforms, so their …
Run-Time Efficient RNN Compression for Inference on Edge Devices
Recurrent neural networks can be large and compute-intensive, yet many applications that benefit from RNNs run on small devices with very limited compute and storage capabilities while still having run-time constraints. As a result, the…
Guest Editors’ Introduction
No abstract available.
Scalpel
As the size of Deep Neural Networks (DNNs) continues to grow to increase accuracy and solve more complex problems, their energy footprint also scales. Weight pruning reduces DNN model size and computation by removing redundant weights.…
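The baseline behind this line of work is simple magnitude pruning: zero the smallest-magnitude weights and keep the rest. Scalpel's contribution (per its title) is customizing the pruning granularity to the underlying hardware parallelism, but a minimal magnitude-pruning sketch shows the starting point:

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude."""
    k = int(sparsity * W.size)
    flat = np.abs(W).ravel()
    idx = np.argpartition(flat, k)[:k]   # indices of the k smallest magnitudes
    Wp = W.copy().ravel()
    Wp[idx] = 0.0
    return Wp.reshape(W.shape)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
Wp = magnitude_prune(W, sparsity=0.75)
print((Wp == 0).mean())   # ~0.75 of the weights removed
```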
BONSEYES
The Bonseyes EU H2020 collaborative project aims to develop a platform consisting of a Data Marketplace, a Deep Learning Toolbox, and Developer Reference Platforms for organizations wanting to adopt Artificial Intelligence. The project wil…