Corey Lammie
Tutorial: Hardware-Aware Compilation and Simulation for In-Memory Computing
Assessing the Performance of Analog Training for Transfer Learning
Analog in-memory computing is a next-generation computing paradigm that promises fast, parallel, and energy-efficient deep learning training and transfer learning (TL). However, achieving this promise has remained elusive due to a lack of …
Efficient transformer adaptation for analog in-memory computing via low-rank adapters
Analog In-Memory Computing (AIMC) offers a promising solution to the von Neumann bottleneck. However, deploying transformer models on AIMC remains challenging due to their inherent need for flexibility and adaptability across diverse tasks…
The Inherent Adversarial Robustness of Analog In-Memory Computing
A key challenge for Deep Neural Network (DNN) algorithms is their vulnerability to adversarial attacks. Inherently non-deterministic compute substrates, such as those based on Analog In-Memory Computing (AIMC), have been speculated to prov…
Kernel Approximation using Analog In-Memory Computing
Kernel functions are vital ingredients of several machine learning algorithms, but often incur significant memory and computational costs. We introduce an approach to kernel approximation in machine learning algorithms suitable for mixed-s…
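As a generic illustration of kernel approximation with an explicit feature map (random Fourier features, a standard technique and not necessarily the method of the paper above), the Gaussian kernel can be approximated by inner products of low-dimensional random projections, which reduce to plain matrix-vector operations:

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, D=2000, sigma=1.0, rng=rng):
    # Random Fourier features for the Gaussian (RBF) kernel:
    # z(x) = sqrt(2/D) * cos(W x + b), with W ~ N(0, 1/sigma^2)
    # and b ~ Uniform[0, 2*pi], so that z(x) . z(y) approximates
    # exp(-|x - y|^2 / (2 sigma^2)).
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, D))
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = rng.normal(size=(5, 3))
Z = rff_features(X)
K_approx = Z @ Z.T
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 2.0)
print(np.abs(K_approx - K_exact).max())  # small approximation error
```

The appeal for analog hardware is that the projection `X @ W` is itself an MVM, the operation crossbar arrays execute natively.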
A Precision-Optimized Fixed-Point Near-Memory Digital Processing Unit for Analog In-Memory Computing
Analog In-Memory Computing (AIMC) is an emerging technology for fast and energy-efficient Deep Learning (DL) inference. However, a certain amount of digital post-processing is required to deal with circuit mismatches and non-idealities ass…
Improving the Accuracy of Analog-Based In-Memory Computing Accelerators Post-Training
Analog-Based In-Memory Computing (AIMC) inference accelerators can be used to efficiently execute Deep Neural Network (DNN) inference workloads. However, to mitigate accuracy losses due to circuit and device non-idealities, Hardware-Aware…
LionHeart: A Layer-based Mapping Framework for Heterogeneous Systems with Analog In-Memory Computing Tiles
When arranged in a crossbar configuration, resistive memory devices can be used to execute Matrix-Vector Multiplications (MVMs), the most dominant operation of many Machine Learning (ML) algorithms, in constant time complexity. Nonetheless…
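The constant-time MVM mentioned above can be sketched in an idealized form (linear devices, no noise, parasitics, or peripheral circuitry; all names are illustrative): each weight is stored as a conductance, inputs are applied as voltages, and column currents sum the per-device contributions in parallel.

```python
import numpy as np

def crossbar_mvm(G, v):
    # Idealized crossbar MVM: per-device current is G[i, j] * v[i]
    # (Ohm's law); each column wire sums its currents (Kirchhoff's
    # current law). All columns are read simultaneously, so the MVM
    # completes in constant time regardless of matrix size.
    I = G * v[:, None]          # element-wise Ohm's law
    return I.sum(axis=0)        # Kirchhoff summation per column

G = np.array([[0.1, 0.2],
              [0.3, 0.4]])     # conductances encoding the weights
v = np.array([1.0, 2.0])       # input voltages
print(crossbar_mvm(G, v))      # equivalent to v @ G
```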
Guest Editorial Dynamical Neuro-AI Learning Systems: Devices, Circuits, Architecture and Algorithms
This Special Issue of IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS) is dedicated to demonstrating the latest research progress on dynamical neuro-artificial intelligence (AI) learning systems that bridge the…
Using the IBM analog in-memory hardware acceleration kit for neural network training and inference
Analog In-Memory Computing (AIMC) is a promising approach to reduce the latency and energy consumption of Deep Neural Network (DNN) inference and training. However, the noisy and non-linear device characteristics and the non-ideal peripher…
Exploiting the State Dependency of Conductance Variations in Memristive Devices for Accurate In-Memory Computing
Analog in-memory computing (AIMC) using memristive devices is considered a promising non-von Neumann approach for deep learning (DL) inference tasks. However, inaccuracies in the programming of devices, which are attributed to conductance v…
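State-dependent conductance variation can be illustrated with a minimal sketch (the affine noise model and its coefficients below are made up for illustration, not taken from the paper's device characterization): the programming error's standard deviation depends on the target conductance itself.

```python
import numpy as np

rng = np.random.default_rng(42)

def program_devices(g_target, sigma0=0.02, alpha=0.05, rng=rng):
    # Illustrative state-dependent noise model: the standard deviation
    # of the programming error grows with the target conductance,
    # sigma(g) = sigma0 + alpha * g.
    sigma = sigma0 + alpha * g_target
    g_prog = g_target + rng.normal(0.0, sigma)
    return np.clip(g_prog, 0.0, None)   # conductances cannot be negative

g_target = np.linspace(0.0, 1.0, 8)
print(program_devices(g_target))        # noisy programmed conductances
```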
AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing
The advancement of Deep Learning (DL) is driven by efficient Deep Neural Network (DNN) design and new hardware accelerators. Current DNN design is primarily tailored for general-purpose use and deployment on commercially viable platforms. …
Game changers in science and technology - now and beyond
Spike sorting algorithms and their efficient hardware implementation: a comprehensive survey
Objective. Spike sorting is a set of techniques used to analyze extracellular neural recordings, attributing individual spikes to individual neurons. This field has gained significant interest in neuroscience due to advances in implantabl…
Simulation of memristive crossbar arrays for seizure detection and prediction using parallel Convolutional Neural Networks
To address the computational bottleneck of the von Neumann architecture in epileptic seizure detection and prediction, we develop an in-memory memristive crossbar-based accelerator simulator. The simulator software is composed of a Pytho…
Toward A Formalized Approach for Spike Sorting Algorithms and Hardware Evaluation
Spike sorting algorithms are used to separate extracellular recordings of neuronal populations into single-unit spike activities. The development of customized hardware implementing spike sorting algorithms is burgeoning. However, there is…
Seizure Detection and Prediction by Parallel Memristive Convolutional Neural Networks
During the past two decades, epileptic seizure detection and prediction algorithms have evolved rapidly. However, despite significant performance improvements, their hardware implementation using conventional technologies, such as Compleme…
Synthetic Simulations Of Extracellular Recordings (SSOER) Dataset
This dataset contains synthetic data from simulations (for a total duration of 10 minutes) including the activity of one multi-unit and two single-units for different firing rates and signal-to-noise ratio levels. It is intended to be used…
MemTorch: An Open-source Simulation Framework for Memristive Deep Learning Systems
Navigating Local Minima in Quantized Spiking Neural Networks
Spiking and Quantized Neural Networks (NNs) are becoming exceedingly important for hyper-efficient implementations of Deep Learning (DL) algorithms. However, these networks face challenges when trained using error backpropagation, due to t…
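The training difficulty alluded to above stems from the non-differentiability of the spike and quantization operations. A common generic workaround (a sketch of the standard surrogate-gradient idea, not necessarily this paper's specific method) keeps the hard step in the forward pass but substitutes a smooth surrogate's derivative in the backward pass:

```python
import numpy as np

def heaviside(x):
    # Hard spike in the forward pass: zero gradient almost everywhere,
    # which is what stalls plain error backpropagation.
    return (x > 0).astype(x.dtype)

def surrogate_grad(x, k=10.0):
    # Derivative of a fast-sigmoid surrogate, used in place of the
    # Heaviside's (zero) derivative during the backward pass.
    return k / (1.0 + k * np.abs(x)) ** 2

x = np.linspace(-1, 1, 5)
print(heaviside(x))        # [0. 0. 0. 1. 1.] - hard spikes forward
print(surrogate_grad(x))   # non-zero pseudo-gradient, peaked at x = 0
```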
Design Space Exploration of Dense and Sparse Mapping Schemes for RRAM Architectures
The impact of device and circuit-level effects in mixed-signal Resistive Random Access Memory (RRAM) accelerators typically manifests as performance degradation of Deep Learning (DL) algorithms, but the degree of impact varies based on algo…
Unsupervised Character Recognition with Graphene Memristive Synapses
Memristive devices being applied in neuromorphic computing are envisioned to significantly improve the power consumption and speed of future computing platforms. The materials used to fabricate such devices will play a significant role in …
Modeling and simulating in-memory memristive deep learning systems: An overview of current efforts
Deep Learning (DL) systems have demonstrated unparalleled performance in many challenging engineering applications. As the complexity of these systems inevitably increases, they require increased processing capabilities and consume larger a…
Towards Memristive Deep Learning Systems for Real-Time Mobile Epileptic Seizure Prediction
The unpredictability of seizures continues to distress many people with drug-resistant epilepsy. On account of recent technological advances, considerable efforts have been made using different hardware technologies to realize smart device…