CiMSAT: Exploiting SAT Analysis to Attack Compute-in-Memory Architecture Defenses
CCS '24: Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security. Pages 3436–3450. https://doi.org/10.1145/3658644.3690251
Compute-in-memory (CiM) architecture is an emerging energy-efficient processing paradigm that has attracted widespread attention in AI and Internet of Things (IoT) applications. To protect statically stored sensitive data in CiM, designers have ...
- research-article, September 2024
Cryogenic Operation of Computing-In-Memory based Spiking Neural Network
- Laith A. Shamieh,
- Wei-Chun Wang,
- Shida Zhang,
- Rakshith Saligram,
- Amol D. Gaidhane,
- Yu Cao,
- Arijit Raychowdhury,
- Suman Datta,
- Saibal Mukhopadhyay
ISLPED '24: Proceedings of the 29th ACM/IEEE International Symposium on Low Power Electronics and Design. Pages 1–6. https://doi.org/10.1145/3665314.3670835
This paper introduces a Computing-In-Memory based Spiking Neural Network (SNN) architecture for cryogenic operation of CMOS (Cryo-SNN). The paper demonstrates design strategies to improve energy efficiency of Cryo-SNN by coupling low-voltage operation at ...
- research-article, July 2024
Benchmarking Test-Time DNN Adaptation at Edge with Compute-In-Memory
ACM Journal on Autonomous Transportation Systems (JATS), Volume 1, Issue 3. Article No.: 16, Pages 1–26. https://doi.org/10.1145/3665898
The prediction accuracy of deep neural networks (DNNs) deployed at the edge can deteriorate over time due to shifts in the data distribution. For heightened robustness, it’s crucial for DNNs to continually refine and improve their predictive capabilities. ...
- research-article, November 2024
SHERLOCK: Scheduling Efficient and Reliable Bulk Bitwise Operations in NVMs
- Hamid Farzaneh,
- Joao Paulo De Lima,
- Ali Nezhadi Khelejani,
- Asif Ali Khan,
- Mahta Mayahinia,
- Mehdi Tahoori,
- Jeronimo Castrillon
DAC '24: Proceedings of the 61st ACM/IEEE Design Automation Conference. Article No.: 293, Pages 1–6. https://doi.org/10.1145/3649329.3658485
Bulk bitwise operations are commonplace in application domains such as databases, web search, cryptography, and image processing. The ever-growing volume of data and processing demands of these domains often result in high energy consumption and latency ...
- research-article, November 2024
Cross-Layer Exploration and Chip Demonstration of In-Sensor Computing for Large-Area Applications with Differential-Frame ROM-Based Compute-In-Memory
DAC '24: Proceedings of the 61st ACM/IEEE Design Automation Conference. Article No.: 262, Pages 1–6. https://doi.org/10.1145/3649329.3657324
In-sensor computing has emerged as a promising approach to mitigating huge data transmission costs between sensors and processing units. Recently, the emerging application scenarios have raised more demands of sensory technology for large-area and ...
- research-article, April 2024
BNN-Flip: Enhancing the Fault Tolerance and Security of Compute-in-Memory Enabled Binary Neural Network Accelerators
ASPDAC '24: Proceedings of the 29th Asia and South Pacific Design Automation Conference. Pages 146–152. https://doi.org/10.1109/ASP-DAC58780.2024.10473947
Compute-in-memory based binary neural networks, or CiM-BNNs, offer high energy/area efficiency for the design of edge deep neural network (DNN) accelerators, with only a mild accuracy reduction. However, for successful deployment, the design of CiM-BNNs ...
- research-article, April 2024
A Cross-Layer Framework for Design Space and Variation Analysis of Non-Volatile Ferroelectric Capacitor-Based Compute-in-Memory Accelerators
ASPDAC '24: Proceedings of the 29th Asia and South Pacific Design Automation Conference. Pages 159–164. https://doi.org/10.1109/ASP-DAC58780.2024.10473887
Using non-volatile "capacitive" crossbar arrays for compute-in-memory (CIM) offers higher energy and area efficiency compared to "resistive" crossbar arrays. However, the impact of device-to-device (D2D) variation and temporal noise on the system-level ...
- research-article, April 2024
ZEBRA: A Zero-Bit Robust-Accumulation Compute-in-Memory Approach for Neural Network Acceleration Utilizing Different Bitwise Patterns
- Yiming Chen,
- Guodong Yin,
- Hongtao Zhong,
- Mingyen Lee,
- Huazhong Yang,
- Sumitha George,
- Vijaykrishnan Narayanan,
- Xueqing Li
ASPDAC '24: Proceedings of the 29th Asia and South Pacific Design Automation Conference. Pages 153–158. https://doi.org/10.1109/ASP-DAC58780.2024.10473851
Deploying a lightweight quantized model in compute-in-memory (CIM) might result in significant accuracy degradation due to a reduced signal-to-noise ratio (SNR). To address this issue, this paper presents ZEBRA, a zero-bit robust-accumulation CIM approach, ...
- research-article, June 2023
Scalable Time-Domain Compute-in-Memory BNN Engine with 2.06 POPS/W Energy Efficiency for Edge-AI Devices
GLSVLSI '23: Proceedings of the Great Lakes Symposium on VLSI 2023. Pages 665–670. https://doi.org/10.1145/3583781.3590220
Time-domain (TD) computing has attracted attention for its high computing efficiency and suitability for applications on energy-constrained edge devices. In this paper, we present a time-domain compute-in-memory (TDCIM) macro for binary neural networks (...
- research-article, August 2022
CREAM: computing in ReRAM-assisted energy and area-efficient SRAM for neural network acceleration
DAC '22: Proceedings of the 59th ACM/IEEE Design Automation Conference. Pages 115–120. https://doi.org/10.1145/3489517.3530399
Computing-in-memory has been widely explored to accelerate DNNs. However, most existing CIM designs cannot store all NN weights due to the limited SRAM capacity of edge AI devices, inducing a large amount of off-chip DRAM access. In this paper, a new computing in ReRAM-...
- research-article, May 2022
Towards ADC-less compute-in-memory accelerators for energy efficient deep learning
DATE '22: Proceedings of the 2022 Conference & Exhibition on Design, Automation & Test in Europe. Pages 624–627.
Compute-in-Memory (CiM) hardware has shown great potential in accelerating Deep Neural Networks (DNNs). However, most CiM accelerators for matrix vector multiplication rely on costly analog to digital converters (ADCs) which becomes a bottleneck in ...
- research-article, March 2022
Accuracy and Resiliency of Analog Compute-in-Memory Inference Engines
ACM Journal on Emerging Technologies in Computing Systems (JETC), Volume 18, Issue 2. Article No.: 33, Pages 1–23. https://doi.org/10.1145/3502721
Recently, analog compute-in-memory (CIM) architectures based on emerging analog non-volatile memory (NVM) technologies have been explored for deep neural networks (DNNs) to improve scalability, speed, and energy efficiency. Such architectures, however, ...
- research-article, January 2022
Heterogeneous Memory Architecture Accommodating Processing-in-Memory on SoC for AIoT Applications
ASPDAC '22: Proceedings of the 27th Asia and South Pacific Design Automation Conference. Pages 383–388. https://doi.org/10.1109/ASP-DAC52403.2022.9712544
Processing-in-Memory (PIM) technology is one of the most promising candidates for AIoT applications due to its attractive characteristics, such as low computation latency, large throughput, and high power efficiency. However, how to efficiently utilize PIM ...
- research-article, June 2021
A Runtime Reconfigurable Design of Compute-in-Memory–Based Hardware Accelerator for Deep Learning Inference
ACM Transactions on Design Automation of Electronic Systems (TODAES), Volume 26, Issue 6. Article No.: 45, Pages 1–18. https://doi.org/10.1145/3460436
Compute-in-memory (CIM) is an attractive solution to address the “memory wall” challenges for the extensive computation in deep learning hardware accelerators. For custom ASIC design, a specific chip instance is restricted to a specific network during ...
- research-article, December 2020
XOR-CIM: compute-in-memory SRAM architecture with embedded XOR encryption
ICCAD '20: Proceedings of the 39th International Conference on Computer-Aided Design. Article No.: 77, Pages 1–6. https://doi.org/10.1145/3400302.3415678
Compute-in-memory (CIM) is a promising approach that exploits the analog computation inside the memory array to speed up the vector-matrix multiplication (VMM) for deep neural network (DNN) inference. SRAM has been demonstrated as a mature candidate for ...
- research-article, March 2021
Architectural Design of 3D NAND Flash based Compute-in-Memory for Inference Engine
MEMSYS '20: Proceedings of the International Symposium on Memory Systems. Pages 77–85. https://doi.org/10.1145/3422575.3422779
3D NAND Flash memory has been proposed as an attractive candidate of inference engine for deep neural networks (DNNs) owing to its ultra-high density and commercially matured fabrication technology. However, the peripheral circuits need to be modified ...
- research-article, August 2020
FeFET-based low-power bitwise logic-in-memory with direct write-back and data-adaptive dynamic sensing interface
- Mingyen Lee,
- Wenjun Tang,
- Bowen Xue,
- Juejian Wu,
- Mingyuan Ma,
- Yu Wang,
- Yongpan Liu,
- Deliang Fan,
- Vijaykrishnan Narayanan,
- Huazhong Yang,
- Xueqing Li
ISLPED '20: Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design. Pages 127–132. https://doi.org/10.1145/3370748.3406572
Compute-in-memory (CiM) is a promising method for mitigating the memory wall problem in data-intensive applications. The proposed bitwise logic-in-memory (BLiM) is targeted at data-intensive applications, such as databases and data encryption. This work ...
- short-paper, September 2019
CIMAT: a transpose SRAM-based compute-in-memory architecture for deep neural network on-chip training
MEMSYS '19: Proceedings of the International Symposium on Memory Systems. Pages 490–496. https://doi.org/10.1145/3357526.3357552
Rapid development in deep neural networks (DNNs) is enabling many intelligent applications. However, on-chip training of DNNs is challenging due to the extensive computation and memory bandwidth requirements. To solve the bottleneck of the memory wall ...
- research-article, May 2019
Ferroelectric FET Based In-Memory Computing for Few-Shot Learning
GLSVLSI '19: Proceedings of the 2019 Great Lakes Symposium on VLSI. Pages 373–378. https://doi.org/10.1145/3299874.3319450
As CMOS technology advances, the performance gap between the CPU and main memory has not improved. Furthermore, the hardware deployed for Internet of Things (IoT) applications needs to process ever-growing volumes of data, which can further exacerbate the ...