Wonjae Ji
Design Strategies of Capacitor‐Based Synaptic Cell for High‐Efficiency Analog Neural Network Training
Analog in‐memory computing, leveraging resistive switching cross‐point devices known as resistive processing units (RPUs), offers substantial improvements in the performance and energy efficiency of deep neural network (DNN) training. Amon…
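As a rough, illustrative sketch of the RPU idea summarized above (not code from the article), the snippet below treats a cross-point array as a conductance matrix and performs a matrix-vector multiplication in one shot, with an arbitrary additive read-noise term standing in for device non-idealities; the array size and noise scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conductance matrix: each cross-point conductance encodes one weight.
G = rng.uniform(0.0, 1.0, size=(64, 128))   # 64 outputs x 128 inputs (arbitrary sizes)
x = rng.uniform(-1.0, 1.0, size=128)        # input activations applied as row voltages

# Ideal analog MVM: column currents accumulate G[i, j] * x[j] in parallel (Ohm + Kirchhoff).
i_ideal = G @ x

# A crude non-ideality model: additive read noise on the accumulated currents.
read_noise = 0.01                            # assumed noise scale, not a device figure
i_measured = i_ideal + read_noise * rng.standard_normal(i_ideal.shape)

print("max deviation from ideal MVM:", np.max(np.abs(i_measured - i_ideal)))
```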
Device Specifications for Neural Network Training with Analog Resistive Cross‐Point Arrays Using Tiki‐Taka Algorithms
Recently, specialized training algorithms for analog cross‐point array‐based neural network accelerators have been introduced to counteract device non‐idealities such as update asymmetry and cycle‐to‐cycle variation, achieving software‐lev…
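For context on the Tiki‐Taka family referenced above, the following is a minimal, idealized sketch of its two-matrix structure as commonly described (outer-product updates accumulated on an auxiliary array A and periodically transferred to the weight array C); the dimensions, learning rates, transfer period, and reset behavior here are illustrative assumptions, not the specifications derived in the article.

```python
import numpy as np

rng = np.random.default_rng(1)

n_out, n_in = 16, 32                    # arbitrary layer size
A = np.zeros((n_out, n_in))             # auxiliary array: accumulates gradient updates
C = rng.normal(0, 0.1, (n_out, n_in))   # main weight array used for inference

lr_A, lr_transfer = 0.1, 0.05           # assumed learning/transfer rates
transfer_every = 10                     # assumed transfer period (in update steps)

for step in range(100):
    # Stand-in forward/backward signals; a real trainer would compute these from
    # data and the loss, and apply the updates as stochastic-pulse outer products.
    x = rng.normal(size=n_in)           # layer input
    d = rng.normal(size=n_out)          # error signal backpropagated to this layer

    # Rank-one (outer-product) update applied to the auxiliary array A.
    A -= lr_A * np.outer(d, x)

    # Periodically transfer the information accumulated in A into C; this
    # filtering step is what lends tolerance to device update asymmetry.
    if (step + 1) % transfer_every == 0:
        C += lr_transfer * A
        A *= 0.0                        # simplified reset; hardware variants differ

print("weight norm after training loop:", np.linalg.norm(C))
```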
Retention-aware zero-shifting technique for Tiki-Taka algorithm-based analog deep learning accelerator
We present the fabrication of 4K-scale electrochemical random-access memory (ECRAM) cross-point arrays for an analog neural network training accelerator and the electrical characteristics of an 8 × 8 ECRAM array with 100% yield, showing exce…
Recent Advances and Future Prospects for Memristive Materials, Devices, and Systems
Memristive technology has been rapidly emerging as a potential alternative to traditional CMOS technology, which is facing fundamental limitations in its development. Since oxide-based resistive switches were demonstrated as memristors in …
Highly Linear and Symmetric Analog Neuromorphic Synapse Based on Metal Oxide Semiconductor Transistors with Self‐Assembled Monolayer for High‐Precision Neural Network Computation (Adv. Electron. Mater. 3/2023)
Neuromorphic Synapses In article number 2200554, Seyoung Kim, Yoonyoung Chung, and co-workers demonstrate a novel analog synapse device composed of two oxide semiconductor transistors. Precise control of charging and discharging in the sto…
Highly Linear and Symmetric Analog Neuromorphic Synapse Based on Metal Oxide Semiconductor Transistors with Self‐Assembled Monolayer for High‐Precision Neural Network Computation
This work presents an analog neuromorphic synapse device consisting of two oxide semiconductor transistors for high‐precision neural networks. One of the two transistors controls the synaptic weight by charging or discharging the storage n…
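A toy model of the mechanism described above, in which charge on a storage node sets the synaptic weight read out by a second transistor, might look like the following; the capacitance, per-pulse charge, and linear conductance mapping are invented for illustration and are not the device parameters reported in the article.

```python
# Toy model of a charge-storage analog synapse: programming pulses add or remove
# charge on a storage node, and the read transistor's conductance tracks that charge.
# All constants are illustrative assumptions, not measured device values.

C_STORE = 1e-12        # assumed storage-node capacitance (F)
Q_PULSE = 5e-15        # assumed charge moved per programming pulse (C)
V_MAX = 1.0            # assumed storage-node voltage range (V)
G_MAX = 1e-6           # assumed maximum read conductance (S)


def apply_pulses(v_node: float, n_pulses: int) -> float:
    """Charge (n_pulses > 0) or discharge (n_pulses < 0) the storage node."""
    v_node += n_pulses * Q_PULSE / C_STORE
    return min(max(v_node, 0.0), V_MAX)   # node voltage saturates at the rails


def read_conductance(v_node: float) -> float:
    """Idealized linear mapping from storage-node voltage to read conductance."""
    return G_MAX * v_node / V_MAX


v = 0.5 * V_MAX                 # start mid-range
for n in (+10, +10, -5, -20):   # arbitrary potentiation/depression pulse trains
    v = apply_pulses(v, n)
    print(f"{n:+4d} pulses -> G = {read_conductance(v):.3e} S")
```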
Impact of Asymmetric Weight Update on Neural Network Training With Tiki-Taka Algorithm
Recent progress in novel non-volatile memory-based synaptic device technologies and their feasibility for matrix-vector multiplication (MVM) has ignited active research on implementing analog neural network training accelerators with resis…
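To make the notion of asymmetric weight update concrete, the sketch below uses a commonly assumed soft-bounds device model in which the per-pulse step shrinks toward the bounds and the up and down branches differ, so balanced up/down pulsing drifts the weight instead of cancelling; the model form and parameters are illustrative assumptions rather than those analyzed in the paper.

```python
# Soft-bounds style update model (a common idealization of asymmetric devices):
# the step size shrinks as the weight approaches its bound, and the up and down
# branches need not match. Parameters below are illustrative assumptions.
W_MAX, W_MIN = 1.0, -1.0
DW_UP, DW_DOWN = 0.02, 0.03     # unequal step scales -> asymmetric response


def pulse(w: float, up: bool) -> float:
    """Apply one potentiation (up) or depression (down) pulse to weight w."""
    if up:
        return w + DW_UP * (W_MAX - w)
    return w - DW_DOWN * (w - W_MIN)


# Alternate 200 up pulses with 200 down pulses; a symmetric device would end
# where it started, while an asymmetric one drifts toward its own fixed point.
w = 0.0
for _ in range(200):
    w = pulse(w, up=True)
    w = pulse(w, up=False)

print(f"weight after balanced up/down pulsing: {w:+.4f}")
```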