Naresh R. Shanbhag
Energy-Accuracy Trade-Offs in Massive MIMO Signal Detection Using SRAM-Based In-Memory Computing
On the Security Vulnerabilities of MRAM-based In-Memory Computing Architectures against Model Extraction Attacks
Growing Efficient Accurate and Robust Neural Networks on the Edge
The ubiquitous deployment of deep learning systems on resource-constrained Edge devices is hindered by their high computational complexity coupled with their fragility to out-of-distribution (OOD) data, especially to naturally occurring co…
Compute SNDR-Boosted 22-nm MRAM-Based In-Memory Computing Macro Using Statistical Error Compensation
Enhancing the Accuracy of 6T SRAM-Based In-Memory Architecture via Maximum Likelihood Detection
Energy-Accuracy Trade-Offs for Resistive In-Memory Computing Architectures
Resistive in-memory computing (IMC) architectures currently lag behind SRAM IMCs and digital accelerators in both energy efficiency and compute density due to their low compute accuracy. This article proposes the use of signal-to-noise-plu…
On the Robustness of Randomized Ensembles to Adversarial Perturbations
Randomized ensemble classifiers (RECs), where one classifier is randomly selected during inference, have emerged as an attractive alternative to traditional ensembling methods for realizing adversarially robust classifiers with limited com…
Coordinated Science Laboratory 70th Anniversary Symposium: The Future of Computing
In 2021, the Coordinated Science Laboratory (CSL), an interdisciplinary research unit at the University of Illinois Urbana-Champaign, hosted the Future of Computing Symposium to celebrate its 70th anniversary. CSL's research covers the full …
Adversarial Vulnerability of Randomized Ensembles
Despite the tremendous success of deep neural networks across various tasks, their vulnerability to imperceptible adversarial perturbations has hindered their deployment in the real world. Recently, works on randomized ensembles have empir…
Fundamental Limits on the Computational Accuracy of Resistive Crossbar-based In-memory Architectures
In-memory computing (IMC) architectures exhibit an intrinsic trade-off between computational accuracy and energy efficiency. This paper determines the fundamental limits on the compute SNR of MRAM-, ReRAM-, and FeFET-based crossbars by emp…
Benchmarking In-Memory Computing Architectures
In-memory computing (IMC) architectures have emerged as a compelling platform to implement energy-efficient machine learning (ML) systems. However, today, the energy efficiency gains provided by IMC designs seem to be leveling off and it i…
Fundamental Limits on Energy-Delay-Accuracy of In-Memory Architectures in Inference Applications
This paper obtains fundamental limits on the computational precision of in-memory computing architectures (IMCs). An IMC noise model and associated SNR metrics are defined and their interrelationships analyzed to show that the accuracy of …
Generalized Depthwise-Separable Convolutions for Adversarially Robust and Efficient Neural Networks
Despite their tremendous successes, convolutional neural networks (CNNs) incur high computational/storage costs and are vulnerable to adversarial perturbations. Recent works on robust model compression address these challenges by combining…
Robustifying $\ell_\infty$ Adversarial Training to the Union of Perturbation Models
Classical adversarial training (AT) frameworks are designed to achieve high adversarial accuracy against a single attack type, typically $\ell_\infty$ norm-bounded perturbations. Recent extensions in AT have focused on defending against th…
Signal Processing Methods to Enhance the Energy Efficiency of In-Memory Computing Architectures
This paper presents signal processing methods to enhance the energy vs. accuracy trade-off of in-memory computing (IMC) architectures. First, an optimal clipping criterion (OCC) for signal quantization is proposed in order to minimize the …
DBQ: A Differentiable Branch Quantizer for Lightweight Deep Neural Networks
Deep neural networks have achieved state-of-the-art performance on various computer vision tasks. However, their deployment on resource-constrained devices has been hindered due to their high computational and storage complexity. While …
Nanotechnology-inspired Information Processing Systems of the Future
Nanoscale semiconductor technology has been a key enabler of the computing revolution. It has done so via advances in new materials and manufacturing processes that resulted in the size of the basic building block of computing systems - th…
HarDNN: Feature Map Vulnerability Evaluation in CNNs
As Convolutional Neural Networks (CNNs) are increasingly being employed in safety-critical applications, it is important that they behave reliably in the face of hardware errors. Transient hardware errors may percolate undesirable state du…
Error-Resilient Spintronics via the Shannon-Inspired Model of Computation
The energy and delay reductions from CMOS scaling have stagnated, motivating the search for a CMOS replacement. Spintronic devices are one of the promising beyond-CMOS alternatives. However, they exhibit high switching error rates of 1% or…
Binodal, wireless epidermal electronic systems with in-sensor analytics for neonatal intensive care
Neonatal care, particularly for premature babies, is complicated by the infants' fragility and by the need for a large number of tethered sensors to be attached to their tiny bodies. Chung et al. developed a pair of senso…
Boosted Spin Channel Networks for Energy-Efficient Inference
Computational scaling beyond silicon electronics based on Moore's law requires the adoption of alternate state variables such as electronic spin. Multiple research efforts are underway exploring both Boolean and non-Boolean design space us…
Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks
Efforts to reduce the numerical precision of computations in deep learning training have yielded systems that aggressively quantize weights and activations, yet employ wide high-precision accumulators for partial sums in inner-product oper…
Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm
The high computational and parameter complexity of neural networks makes their training very slow and difficult to deploy on energy- and storage-constrained computing systems. Many network complexity reduction techniques have been proposed …
Efficient Local Secret Sharing for Distributed Blockchain Systems
Blockchain systems store transaction data in the form of a distributed ledger where each peer is to maintain an identical copy. Blockchain systems resemble repetition codes, incurring high storage cost. Recently, distributed storage blockc…
Generalized Water-filling for Source-aware Energy-efficient SRAMs
Conventional low-power static random access memories (SRAMs) reduce read energy by decreasing the bit-line voltage swings uniformly across the bit-line columns. This is because the read energy is proportional to the bit-line swings. On the…