Literature on SRAM-based Compute-In-Memory

This repo serves as a gateway to the evolving field of SRAM-based Compute-In-Memory (CIM), with a focus on accelerating AI applications. It aggregates a diverse range of resources, including research papers, tools, and surveys. The repo is managed by the Maintainers listed below, and contributions from the community are welcome.

The content is organized into the categories below, offering a structured overview of the topic. This curated list aims to provide insight into the integration of CIM technologies for AI, highlighting the latest advancements and methodologies.


Macro Level

  • [JSSC 2023] MACC-SRAM: A Multistep Accumulation Capacitor-Coupling In-Memory Computing SRAM Macro for Deep Convolutional Neural Networks.

    Bo Zhang, Jyotishman Saikia, Jian Meng, et al. [Paper]

  • [CICC 2023] A 65 nm 1.4-6.7 TOPS/W Adaptive-SNR Sparsity-Aware CIM Core with Load Balancing Support for DL workloads.

    Mustafa Fayez Ali, Indranil Chakraborty, Sakshi Choudhary, et al. [Paper]

  • [CICC 2023] A Double-Mode Sparse Compute-In-Memory Macro with Reconfigurable Single and Dual Layer Computation.

    Yuanzhe Zhao, Minglei Zhang, Pengyu He, et al. [Paper]

  • [CICC 2023] iMCU: A 102-μJ, 61-ms Digital In-Memory Computing-based Microcontroller Unit for Edge TinyML.

    Chuan-Tung Lin, Paul Xuanyuanliang Huang, Jonghyun Oh, et al. [Paper]

  • [ISSCC 7.1 2023] A 22nm 832Kb Hybrid-Domain Floating-Point SRAM In-Memory-Compute Macro with 16.2-70.2TFLOPS/W for High-Accuracy AI-Edge Devices.

    Ping-Chun Wu, Jian-Wei Su, Li-Yang Hong, et al. [Paper]

  • [ISSCC 7.2 2023] A 28nm 64-kb 31.6-TFLOPS/W Digital-Domain Floating-Point-Computing-Unit and Double-Bit 6T-SRAM Computing-in-Memory Macro for Floating-Point CNNs.

    An Guo, Xin Si, Xi Chen, et al. [Paper]

  • [ISSCC 7.3 2023] A 28nm 38-to-102-TOPS/W 8b Multiply-Less Approximate Digital SRAM Compute-In-Memory Macro for Neural-Network Inference.

    Yifan He, Haikang Diao, Chen Tang, et al. [Paper]

  • [ISSCC 7.4 2023] A 4nm 6163-TOPS/W/b 4790-TOPS/mm2/b SRAM Based Digital-Computing-in-Memory Macro Supporting Bit-Width Flexibility and Simultaneous MAC and Weight Update.

    Haruki Mori, Wei-Chang Zhao, Cheng-En Lee, et al. [Paper]

  • [ISSCC 7.5 2023] A 28nm Horizontal-Weight-Shift and Vertical-Feature-Shift-Based Separate-WL 6T-SRAM Computation-in-Memory Unit-Macro for Edge Depthwise Neural-Networks.

    Bo Wang, Chen Xue, Zhongyuan Feng, et al. [Paper]

  • [ISSCC 7.6 2023] A 70.85-86.27TOPS/W PVT-Insensitive 8b Word-Wise ACIM with Post-Processing Relaxation.

    Sung-En Hsieh, Chun-Hao Wei, Cheng-Xin Xue, et al. [Paper]

  • [ISSCC 7.7 2023] CV-CIM: A 28nm XOR-Derived Similarity-Aware Computation-in-Memory for Cost-Volume Construction.

    Zhiheng Yue, Yang Wang, Huizheng Wang, et al. [Paper]

  • [ISSCC 7.8 2023] A 22nm Delta-Sigma Computing-In-Memory (Δ∑CIM) SRAM Macro with Near-Zero-Mean Outputs and LSB-First ADCs Achieving 21.38TOPS/W for 8b-MAC Edge AI Processing.

    Peiyu Chen, Meng Wu, Wentao Zhao, et al. [Paper]

  • [JSSC 2023] A Charge Domain SRAM Compute-in-Memory Macro With C-2C Ladder-Based 8-Bit MAC Unit in 22-nm FinFET Process for Edge Inference.

    Hechen Wang, Renzhi Liu, Richard Dorrance, et al. [Paper]

  • [JSSC 2023] In Situ Storing 8T SRAM-CIM Macro for Full-Array Boolean Logic and Copy Operations.

    Zhiting Lin, Zhongzhen Tong, Fangming Wang, et al. [Paper]

  • [CICC 2022] An area-efficient 6T-SRAM based Compute-In-Memory architecture with reconfigurable SAR ADCs for energy-efficient deep neural networks in edge ML applications.

    Avishek Biswas, Hetul Sanghvi, Mahesh Mehendale, et al. [Paper]

  • [CICC 2022] 5GHz SRAM for High-Performance Compute Platform in 5nm CMOS.

    R. Mathur, M. Kumar, Vivek Asthana, et al. [Paper]

  • [DAC 2022] CP-SRAM: Charge-Pulsation SRAM Macro for Ultra-High Energy-Efficiency Computing-in-Memory.

    He Zhang, Linjun Jiang, Jianxin Wu, et al. [Paper]

  • [DAC 2022] TAIM: ternary activation in-memory computing hardware with 6T SRAM array.

    Nameun Kang, Hyungjun Kim, Hyunmyung Oh, et al. [Paper]

  • [ISSCC 11.6 2022] A 5-nm 254-TOPS/W 221-TOPS/mm2 Fully-Digital Computing-in-Memory Macro Supporting Wide-Range Dynamic-Voltage-Frequency Scaling and Simultaneous MAC and Write Operations.

    Hidehiro Fujiwara, Haruki Mori, Wei-Chang Zhao, et al. [Paper]

  • [ISSCC 11.7 2022] A 1.041-Mb/mm2 27.38-TOPS/W Signed-INT8 Dynamic-Logic-Based ADC-less SRAM Compute-In-Memory Macro in 28nm with Reconfigurable Bitwise Operation for AI and Embedded Applications.

    Bonan Yan, Jeng-Long Hsu, Pang-Cheng Yu, et al. [Paper]

  • [ISSCC 11.8 2022] A 28nm 1Mb Time-Domain Computing-in-Memory 6T-SRAM Macro with a 6.6ns Latency, 1241GOPS and 37.01TOPS/W for 8b-MAC Operations for Edge-AI Devices.

    Ping-Chun Wu, Jian-Wei Su, Yen-Lin Chung, et al. [Paper]

  • [JSSC 2022] Two-Way Transpose Multibit 6T SRAM Computing-in-Memory Macro for Inference-Training AI Edge Chips.

    Jian-Wei Su, Xin Si, Yen-Chi Chou, et al. [Paper]

  • [VLSI 2022] A 32.2 TOPS/W SRAM Compute-in-Memory Macro Employing a Linear 8-bit C-2C Ladder for Charge Domain Computation in 22nm for Edge Inference.

    Hechen Wang, Renzhi Liu, Richard Dorrance, et al. [Paper]

  • [VLSI C02-5 2022] A 12nm 121-TOPS/W 41.6-TOPS/mm2 All Digital Full Precision SRAM-based Compute-in-Memory with Configurable Bit-width For AI Edge Applications.

    Chia-Fu Lee, Cheng-Han Lu, Cheng-En Lee, et al. [Paper]

  • [VLSI C04-2 2022] Neuro-CIM: A 310.4 TOPS/W Neuromorphic Computing-in-Memory Processor with Low WL/BL activity and Digital-Analog Mixed-mode Neuron Firing.

    Sangyeob Kim, Sangjin Kim, Soyeon Um, et al. [Paper]

  • [CICC 2021] A 128x128 SRAM Macro with Embedded Matrix-Vector Multiplication Exploiting Passive Gain via MOS Capacitor for Machine Learning Application.

    Rezwan A. Rasul, Mike Shuo-Wei Chen [Paper]

  • [CICC 2021] An In-Memory-Computing Charge-Domain Ternary CNN Classifier.

    Xiangxing Yang, Keren Zhu, Xiyuan Tang, et al. [Paper]

  • [DAC 2021] A Charge-Sharing based 8T SRAM In-Memory Computing for Edge DNN Acceleration.

    Kyeongho Lee, Sungsoo Cheon, Joongho Jo, et al. [Paper]

  • [ISSCC 16.3 2021] A 28nm 384kb 6T-SRAM Computation-in-Memory Macro with 8b of Precision for AI Edge Chips.

    Jian-Wei Su, Yen-Chi Chou, Ruhui Liu, et al. [Paper]

  • [ISSCC 16.4 2021] An 89TOPS/W and 16.3TOPS/mm2 All-Digital SRAM-Based Full-Precision Compute-In-Memory Macro in 22nm for Machine-Learning Edge Applications.

    Yu-Der Chih, Po-Hao Lee, Hidehiro Fujiwara, et al. [Paper]

  • [ISSCC 16.1 2021] DIMC: 2219TOPS/W 2569F2/b Digital In-Memory Computing Macro in 28nm Based on Approximate Arithmetic Hardware.

    Dewei Wang, Chuan-Tung Lin, Gregory K. Chen, et al. [Paper]

  • [JSSC 2021] Two-Direction In-Memory Computing Based on 10T SRAM With Horizontal and Vertical Decoupled Read Ports.

    Zhiting Lin, Zhiyong Zhu, Honglan Zhan, et al. [Paper]

  • [JSSC 2021] A Local Computing Cell and 6T SRAM-Based Computing-in-Memory Macro With 8-b MAC Operation for Edge AI Chips.

    Xin Si, Yung-Ning Tu, Wei-Hsing Huang, et al. [Paper]

  • [JSSC 2021] A 7-nm Compute-in-Memory SRAM Macro Supporting Multi-Bit Input, Weight and Output and Achieving 351 TOPS/W and 372.4 GOPS.

    Mahmut E. Sinangil, Burak Erbagci, Rawan Naous, et al. [Paper]

  • [JSSC 2021] Colonnade: A Reconfigurable SRAM-Based Digital Bit-Serial Compute-In-Memory Macro for Processing Neural Networks.

    Hyunjoon Kim, Taegeun Yoo, Tony Tae-Hyoung Kim, et al. [Paper]

  • [JSSC 2021] Cascade Current Mirror to Improve Linearity and Consistency in SRAM In-Memory Computing.

    Zhiting Lin, Honglan Zhan, Zhongwei Chen, et al. [Paper]

  • [JSSC 2021] ±CIM SRAM for Signed In-Memory Broad-Purpose Computing From DSP to Neural Processing.

    Saurabh Jain, Longyang Lin, and Massimo Alioto [Paper]

  • [JSSC 2021] CAP-RAM: A Charge-Domain In-Memory Computing 6T-SRAM for Accurate and Precision-Programmable CNN Inference.

    Zhiyu Chen, Zhanghao Yu, Qing Jin, et al. [Paper]

  • [VLSI JFS2-5 2021] HERMES Core – A 14nm CMOS and PCM-based In-Memory Compute Core using an array of 300ps/LSB Linearized CCO-based ADCs and local digital processing.

    Riduan Khaddam-Aljameh, Milos Stanisavljevic, Jordi Fornt Mas, et al. [Paper]

  • [VLSI JFS2-6 2021] A 20x28 Spins Hybrid In-Memory Annealing Computer Featuring Voltage-Mode Analog Spin Operator for Solving Combinatorial Optimization Problems.

    Junjie Mu, Yuqi Su, Bongjin Kim [Paper]

  • [CICC 2020] A 16K Current-Based 8T SRAM Compute-In-Memory Macro with Decoupled Read/Write and 1-5bit Column ADC.

    Chengshuo Yu, Taegeun Yoo, Tony Tae-Hyoung Kim, et al. [Paper]

  • [DAC 2020] Bit Parallel 6T SRAM In-memory Computing with Reconfigurable Bit-Precision.

    Kyeongho Lee, Jinho Jeong, Sungsoo Cheon, et al. [Paper]

  • [DAC 2020] A Two-way SRAM Array based Accelerator for Deep Neural Network On-chip Training.

    Hongwu Jiang, Shanshi Huang, Xiaochen Peng, et al. [Paper]

  • [DATE 2020] Robust and High-Performance 12-T Interlocked SRAM for In-Memory Computing.

    Neelam Surana, Mili Lavania, Abhishek Barma and Joycee Mekie [Paper]

  • [ICCAD 2020] XOR-CIM: Compute-In-Memory SRAM Architecture with Embedded XOR Encryption.

    Shanshi Huang, Hongwu Jiang, Xiaochen Peng, et al. [Paper]

  • [ICCAD 2020] Energy-efficient XNOR-free In-Memory BNN Accelerator with Input Distribution Regularization.

    Hyungjun Kim, Hyunmyung Oh, Jae-Joon Kim [Paper]

  • [ISSCC 15.2 2020] A 28nm 64Kb Inference-Training Two-Way Transpose Multibit 6T SRAM Compute-in-Memory Macro for AI Edge Chips.

    Jian-Wei Su, Xin Si, Yen-Chi Chou, et al. [Paper]

  • [ISSCC 15.3 2020] A 351TOPS/W and 372.4GOPS Compute-in-Memory SRAM Macro in 7nm FinFET CMOS for Machine-Learning Applications.

    Qing Dong, Mahmut E. Sinangil, Burak Erbagci, et al. [Paper]

  • [ISSCC 15.5 2020] A 28nm 64Kb 6T SRAM Computing-in-Memory Macro with 8b MAC Operation for AI Edge Chips.

    Xin Si, Yung-Ning Tu, Wei-Hsing Huang, et al. [Paper]

  • [ISSCC 31.2 2020] CIM-Spin: A 0.5-to-1.2V Scalable Annealing Processor Using Digital Compute-In-Memory Spin Operators and Register-Based Spins for Combinatorial Optimization Problems.

    Yuqi Su, Hyunjoon Kim, Bongjin Kim [Paper]

  • [JSSC 2020] A 4-Kb 1-to-8-bit Configurable 6T SRAM-Based Computation-in-Memory Unit-Macro for CNN-Based AI Edge Processors.

    Yen-Cheng Chiu, Zhixiao Zhang, Jia-Jing Chen, et al. [Paper]

  • [JSSC 2020] C3SRAM: An In-Memory-Computing SRAM Macro Based on Robust Capacitive Coupling Computing Mechanism.

    Zhewei Jiang, Shihui Yin, Jae-Sun Seo, et al. [Paper]

  • [JSSC 2020] A Twin-8T SRAM Computation-in-Memory Unit-Macro for Multibit CNN-Based AI Edge Processors.

    Xin Si, Jia-Jing Chen, Yung-Ning Tu, et al. [Paper]

  • [JSSC 2020] A 28-nm Compute SRAM With Bit-Serial Logic/Arithmetic Operations for Programmable In-Memory Vector Computing.

    Jingcheng Wang, Xiaowei Wang, Charles Eckert, et al. [Paper]

  • [JSSC 2020] XNOR-SRAM: In-Memory Computing SRAM Macro for Binary/Ternary Deep Neural Networks.

    Shihui Yin, Zhewei Jiang, Jae-Sun Seo, et al. [Paper]

  • [VLSI 2020] Z-PIM: An Energy-Efficient Sparsity-Aware Processing-In-Memory Architecture with Fully-Variable Weight Precision.

    Ji-Hoon Kim, Juhyoung Lee, Jinsu Lee, et al. [Paper]

  • [JSSC 2019] CONV-SRAM: An Energy-Efficient SRAM With In-Memory Dot-Product Computation for Low-Power Convolutional Neural Networks.

    Avishek Biswas, and Anantha P. Chandrakasan [Paper]


Architecture Level

  • [DATE 2024] CiMComp: An Energy Efficient Compute-in-Memory Based Comparator for Convolutional Neural Networks.

    Kavitha S, Binsu J Kailath, B. S. Reniwal [Paper]

  • [DATE 2024] H3DFact: Heterogeneous 3D Integrated CIM for Factorization with Holographic Perceptual Representations.

    Zishen Wan, Che-Kai Liu, Mohamed Ibrahim, et al. [Paper]

  • [DATE 2024] AdaP-CIM: Compute-in-Memory Based Neural Network Accelerator Using Adaptive Posit.

    Jingyu He, Fengbin Tu, Kwang-Ting Cheng, et al. [Paper]

  • [DATE 2024] DAISM: Digital Approximate In-SRAM Multiplier-Based Accelerator for DNN Training and Inference.

    L. Sonnino, S. Shresthamali, Y. He, et al. [Paper]

  • [DAC 2024] Towards Efficient SRAM-PIM Architecture Design by Exploiting Unstructured Bit-Level Sparsity.

    Cenlin Duan, Jianlei Yang, Yiou Wang, et al. [Paper]

  • [TCAS-I 2024] DCIM-GCN: Digital Computing-in-Memory Accelerator for Graph Convolutional Network.

    Yufei Ma, Yikan Qiu, Wentao Zhao, et al. [Paper]

  • [DAC 2023] BP-NTT: Fast and Compact in-SRAM Number Theoretic Transform with Bit-Parallel Modular Multiplication.

    Jingyao Zhang, Mohsen Imani, Elaheh Sadredini [Paper]

  • [DAC 2023] Morphable CIM: Improving Operation Intensity and Depthwise Capability for SRAM-CIM Architecture.

    Yun-Chen Lo, Ren-Shuo Liu [Paper]

  • [DATE 2023] Process Variation Resilient Current-Domain Analog In Memory Computing.

    Kailash Prasad, Sai Shubham, Aditya Biswas, et al. [Paper]

  • [DATE 2023] PIC-RAM: Process-Invariant Capacitive Multiplier Based Analog In Memory Computing in 6T SRAM.

    Kailash Prasad, Aditya Biswas, Arpita Kabra, et al. [Paper]

  • [HPCA 2023] EVE: Ephemeral Vector Engines.

    Khalid Al-Hawaj, Tuan Ta, Nick Cebry, et al. [Paper]

  • [HPCA 2023] Dalorex: A Data-Local Program Execution and Architecture for Memory-bound Applications.

    Marcelo Orenes-Vera, Esin Tureci, David Wentzlaff, et al. [Paper]

  • [ISSCC 16.1 2023] MulTCIM: A 28nm 2.24μJ/Token Attention-Token-Bit Hybrid Sparse Digital CIM-Based Accelerator for Multimodal Transformers.

    Fengbin Tu, Zihan Wu, Yiqi Wang, et al. [Paper]

  • [ISSCC 16.2 2023] A 28nm 53.8TOPS/W 8b Sparse Transformer Accelerator with In-Memory Butterfly Zero Skipper for Unstructured-Pruned NN and CIM-Based Local-Attention-Reusable Engine.

    Shiwei Liu, Peizhe Li, Jinshan Zhang, et al. [Paper]

  • [ISSCC 16.3 2023] A 28nm 16.9-300TOPS/W Computing-in-Memory Processor Supporting Floating-Point NN Inference/Training with Intensive-CIM Sparse-Digital Architecture.

    Jinshan Yue, Chaojie He, Zi Wang, et al. [Paper]

  • [ISSCC 16.4 2023] TensorCIM: A 28nm 3.7nJ/Gather and 8.3TFLOPS/W FP32 Digital-CIM Tensor Processor for MCM-CIM-Based Beyond-NN Acceleration.

    Fengbin Tu, Yiqi Wang, Zihan Wu, et al. [Paper]

  • [ISSCC 16.7 2023] A 40-310TOPS/W SRAM-Based All-Digital Up to 4b In-Memory Computing Multi-Tiled NN Accelerator in FD-SOI 18nm for Deep-Learning Edge Applications.

    Giuseppe Desoli, Nitin Chawla, Thomas Boesch, et al. [Paper]

  • [JSSC 2023] ReDCIM: Reconfigurable Digital Computing-In-Memory Processor With Unified FP/INT Pipeline for Cloud AI Acceleration.

    Fengbin Tu, Yiqi Wang, Zihan Wu, et al. [Paper]

  • [JSSC 2023] TT@CIM: A Tensor-Train In-Memory-Computing Processor Using Bit-Level-Sparsity Optimization and Variable Precision Quantization.

    Ruiqi Guo, Zhiheng Yue, Xin Si, et al. [Paper]

  • [JSSC 2023] PIMCA: A Programmable In-Memory Computing Accelerator for Energy-Efficient DNN Inference.

    Bo Zhang, Shihui Yin, Minkyu Kim, et al. [Paper]

  • [JSSC 2023] An In-Memory-Computing Charge-Domain Ternary CNN Classifier.

    Xiangxing Yang, Keren Zhu, Xiyuan Tang, et al. [Paper]

  • [JSSC 2023] TranCIM: Full-Digital Bitline-Transpose CIM-based Sparse Transformer Accelerator With Pipeline/Parallel Reconfigurable Modes.

    Fengbin Tu, Zihan Wu, Yiqi Wang, et al. [Paper]

  • [JSSC 2023] IMPACT: A 1-to-4b 813-TOPS/W 22-nm FD-SOI Compute-in-Memory CNN Accelerator Featuring a 4.2-POPS/W 146-TOPS/mm2 CIM-SRAM With Multi-Bit Analog Batch-Normalization.

    Adrian Kneip, Martin Lefebvre, Julien Verecken, et al. [Paper]

  • [MICRO 2023] MVC: Enabling Fully Coherent Multi-Data-Views through the Memory Hierarchy with Processing in Memory.

    Daichi Fujiki [Paper]

  • [MICRO 2023] MAICC: A Lightweight Many-core Architecture with In-Cache Computing for Multi-DNN Parallel Inference.

    Renhao Fan, Yikai Cui, Qilin Chen, et al. [Paper]

  • [TC 2023] Eidetic: An In-Memory Matrix Multiplication Accelerator for Neural Networks.

    Charles Eckert, Arun Subramaniyan, Xiaowei Wang, et al. [Paper]

  • [TC 2023] An Area-Efficient In-Memory Implementation Method of Arbitrary Boolean Function Based on SRAM Array.

    Sunrui Zhang, Xiaole Cui, Feng Wei, et al. [Paper]

  • [TCAD 2023] SDP: Co-Designing Algorithm, Dataflow, and Architecture for in-SRAM Sparse NN Acceleration.

    Fengbin Tu, Yiqi Wang, Ling Liang, et al. [Paper]

  • [TCAD 2023] Dedicated Instruction Set for Pattern-Based Data Transfers: An Experimental Validation on Systems Containing In-Memory Computing Units.

    Kevin Mambu, Henri-Pierre Charles, and Maha Kooli [Paper]

  • [TCAS-I 2023] SPCIM: Sparsity-Balanced Practical CIM Accelerator With Optimized Spatial-Temporal Multi-Macro Utilization.

    Yiqi Wang, Fengbin Tu, Leibo Liu, et al. [Paper]

  • [TCAS-I 2023] ARBiS: A Hardware-Efficient SRAM CIM CNN Accelerator With Cyclic-Shift Weight Duplication and Parasitic-Capacitance Charge Sharing for AI Edge Application.

    Chenyang Zhao, Jinbei Fang, Jingwen Jiang, et al. [Paper]

  • [TCAS-I 2023] MC-CIM: Compute-in-Memory With Monte-Carlo Dropouts for Bayesian Edge Intelligence.

    Priyesh Shukla, Shamma Nasrin, Nastaran Darabi, et al. [Paper]

  • [TCAS-I 2023] An 82-nW 0.53-pJ/SOP Clock-Free Spiking Neural Network With 40-µs Latency for AIoT Wake-Up Functions Using a Multilevel-Event-Driven Bionic Architecture and Computing-in-Memory Technique.

    Ying Liu, Yufei Ma, Wei He, et al. [Paper]

  • [TCAS-I 2023] TDPRO: Time-Domain-Based Computing-in-Memory Engine for Ultra-Low Power ECG Processor.

    Liang Chang, Siqi Yang, Zhiyuan Chang, et al. [Paper]

  • [TCAS-I 2023] WDVR-RAM: A 0.25–1.2 V, 2.6–76 POPS/W Charge-Domain In-Memory-Computing Binarized CNN Accelerator for Dynamic AIoT Workloads.

    Hongtu Zhang, Yuhao Shu, Qi Deng, et al. [Paper]

  • [CICC 2022] T-PIM: A 2.21-to-161.08TOPS/W Processing-In-Memory Accelerator for End-to-End On-Device Training.

    Jaehoon Heo, Junsoo Kim, Wontak Han, et al. [Paper]

  • [DAC 2022] CREAM: Computing in ReRAM-assisted Energy and Area efficient SRAM for Neural Network Acceleration.

    Liukai Xu, Songyuan Liu, Zhi Li, et al. [Paper]

  • [DAC 2022] Processing-in-SRAM Acceleration for Ultra-Low Power Visual 3D Perception.

    Yuquan He, Songyun Qu, Gangliang Lin, et al. [Paper]

  • [DAC 2022] MC-CIM: A Reconfigurable Computation-In-Memory For Efficient Stereo Matching Cost Computation.

    Zhiheng Yue, Yabing Wang, Leibo Liu, et al. [Paper]

  • [DATE 2022] HyperX: A Hybrid RRAM-SRAM partitioned system for error recovery in memristive Xbars.

    Adarsh Kosta, Efstathia Soufleri, Indranil Chakraborty, et al. [Paper]

  • [DATE 2022] AID: Accuracy Improvement of Analog Discharge-Based in-SRAM Multiplication Accelerator.

    Saeed Seyedfaraji, Baset Mesgari, Semeen Rehman [Paper]

  • [ISCA 2022] Gearbox: a case for supporting accumulation dispatching and hybrid partitioning in PIM-based accelerators.

    Marzieh Lenjani, Alif Ahmed, Mircea Stan, et al. [Paper]

  • [ISSCC 15.4 2022] A 40nm 60.64TOPS/W ECC-Capable Compute-in-Memory/Digital 2.25MB/768KB RRAM/SRAM System with Embedded Cortex M3 Microprocessor for Edge Recommendation Systems.

    Muya Chang, Samuel D. Spetalnick, Brian Crafton, et al. [Paper]

  • [ISSCC 15.3 2022] COMB-MCM: Computing-on-Memory-Boundary NN Processor with Bipolar Bitwise Sparsity Optimization for Scalable Multi-Chiplet-Module Edge Machine Learning.

    Haozhe Zhu, Bo Jiao, Jinshan Zhang, et al. [Paper]

  • [ISSCC 15.5 2022] A 28nm 29.2TFLOPS/W BF16 and 36.5TOPS/W INT8 Reconfigurable Digital CIM Processor with Unified FP/INT Pipeline and Bitwise In-Memory Booth Multiplication for Cloud Deep Learning Acceleration.

    Fengbin Tu, Yiqi Wang, Zihan Wu, et al. [Paper]

  • [ISSCC 29.3 2022] A 28nm 15.59μJ/Token Full-Digital Bitline-Transpose CIM-Based Sparse Transformer Accelerator with Pipeline/Parallel Reconfigurable Modes.

    Fengbin Tu, Zihan Wu, Yiqi Wang, et al. [Paper]

  • [ISSCC 2022] DIANA: An End-to-End Energy-Efficient Digital and ANAlog Hybrid Neural Network SoC.

    Kodai Ueyoshi, Ioannis A. Papistas, Pouya Houshmand, et al. [Paper]

  • [JETC 2022] Towards a Truly Integrated Vector Processing Unit for Memory-bound Applications Based on a Cost-competitive Computational SRAM Design Solution.

    Maha Kooli, Antoine Heraud, Henri-Pierre Charles, et al. [Paper]

  • [JSSC 2022] Scalable and Programmable Neural Network Inference Accelerator Based on In-Memory Computing.

    Hongyang Jia, Murat Ozatay, Yinqi Tang, et al. [Paper]

  • [TCAD 2022] MARS: Multimacro Architecture SRAM CIM-Based Accelerator With Co-Designed Compressed Neural Networks.

    Syuan-Hao Sie, Jye-Luen Lee, Yi-Ren Chen, et al. [Paper]

  • [TCAS-I 2022] BR-CIM: An Efficient Binary Representation Computation-In-Memory Design.

    Zhiheng Yue, Yabing Wang, Yubin Qin, et al. [Paper]

  • [DATE 2021] Compute-in-Memory Upside Down: A Learning Operator Co-Design Perspective for Scalability.

    Shamma Nasrin, Priyesh Shukla, Shruthi Jaisimha, et al. [Paper]

  • [DATE 2021] Running Efficiently CNNs on the Edge Thanks to Hybrid SRAM-RRAM In-Memory Computing.

    Marco Rios, Flavio Ponzina, Giovanni Ansaloni, et al. [Paper]

  • [ISSCC 15.1 2021] A Programmable Neural-Network Inference Accelerator Based on Scalable In-Memory Computing.

    Hongyang Jia, Murat Ozatay, Yinqi Tang, et al. [Paper]

  • [ISSCC 15.2 2021] A 2.75-to-75.9TOPS/W Computing-in-Memory NN Processor Supporting Set-Associate Block-Wise Zero Skipping and Ping-Pong CIM with Simultaneous Computation and Weight Updating.

    Jinshan Yue, Xiaoyu Feng, Yifan He, et al. [Paper]

  • [ISSCC 15.4 2021] A 5.99-to-691.1TOPS/W Tensor-Train In-Memory-Computing Processor Using Bit-Level-Sparsity-Based Optimization and Variable-Precision Quantization.

    Ruiqi Guo, Zhiheng Yue, Xin Si, et al. [Paper]

  • [JSSC 2021] A 0.44-μJ/dec, 39.9-μs/dec, Recurrent Attention In-Memory Processor for Keyword Spotting.

    Hassan Dbouk, Sujan K. Gonugondla, Charbel Sakr, et al. [Paper]

  • [JSSC 2021] Z-PIM: A Sparsity-Aware Processing-in-Memory Architecture With Fully Variable Weight Bit-Precision for Energy-Efficient Deep Neural Networks.

    Ji-Hoon Kim, Juhyoung Lee, Jinsu Lee, et al. [Paper]

  • [VLSI JFS2-3 2021] A 6.54-to-26.03 TOPS/W Computing-In-Memory RNN Processor using Input Similarity Optimization and Attention-based Context-breaking with Output Speculation.

    Ruiqi Guo, Hao Li, Ruhui Liu, et al. [Paper]

  • [CICC 2020] KeyRAM: A 0.34 μJ/decision 18 k decisions/s Recurrent Attention In-memory Processor for Keyword Spotting.

    Hassan Dbouk, Sujan K. Gonugondla, Charbel Sakr, et al. [Paper]

  • [DAC 2020] Q-PIM: A Genetic Algorithm based Flexible DNN Quantization Method and Application to Processing-In-Memory Platform.

    Yun Long, Edward Lee, Daehyun Kim, et al. [Paper]

  • [DATE 2020] A Fast and Energy Efficient Computing-in-Memory Architecture for Few-Shot Learning Applications.

    Dayane Reis, Ann Franchesca Laguna, Michael Niemier, et al. [Paper]

  • [ISSCC 14.3 2020] A 65nm Computing-in-Memory-Based CNN Processor with 2.9-to-35.8TOPS/W System Energy Efficiency Using Dynamic-Sparsity Performance-Scaling Architecture and Energy-Efficient Inter/Intra-Macro Data Reuse.

    Jinshan Yue, Zhe Yuan, Xiaoyu Feng, et al. [Paper]

  • [JSSC 2020] A Programmable Heterogeneous Microprocessor Based on Bit-Scalable In-Memory Computing.

    Hongyang Jia, Hossein Valavi, Yinqi Tang, et al. [Paper]

  • [JSSC 2020] A 2× 30k-Spin Multi-Chip Scalable CMOS Annealing Processor Based on a Processing-in-Memory Approach for Solving Large-Scale Combinatorial Optimization Problems.

    Takashi Takemoto, et al. [Paper]

  • [JSSC 2020] A 28-nm Compute SRAM With Bit-Serial Logic/Arithmetic Operations for Programmable In-Memory Vector Computing.

    Jingcheng Wang, Xiaowei Wang, Charles Eckert, et al. [Paper]

  • [MICRO 2020] CATCAM: Constant-time Alteration Ternary CAM with Scalable In-Memory Architecture.

    Dibei Chen, Zhaoshi Li, Tianzhu Xiong, et al. [Paper]

  • [TC 2020] CIMAT: A Compute-In-Memory Architecture for On-chip Training Based on Transpose SRAM Arrays.

    Hongwu Jiang, Xiaochen Peng, Shanshi Huang, et al. [Paper]

  • [DAC 2018] SNrram: an efficient sparse neural network computation architecture based on resistive random-access memory.

    Peiqi Wang, Yu Ji, Chi Hong, et al. [Paper]


Simulation Tools

  • [DATE 2024] X-PIM: Fast Modeling and Validation Framework for Mixed-Signal Processing-in-Memory Using Compressed Equivalent Model in System Verilog.

    I. Jeong, J.-E. Park [Paper]

  • [TCAS-I 2024] NeuroSim V1.4: Extending Technology Support for Digital Compute-in-Memory Toward 1nm Node.

    Junmo Lee, Anni Lu, Wantong Li, et al. [Paper]

  • [TCAD 2023] MNSIM 2.0: A Behavior-Level Modeling Tool for Processing-In-Memory Architectures.

    Zhenhua Zhu, Hanbo Sun, Tongxin Xie, et al. [Paper] [GitHub]

  • [ASPDAC 2021] DP-Sim: A Full-stack Simulation Infrastructure for Digital Processing In-Memory Architectures.

    Minxuan Zhou, Mohsen Imani, Yeseong Kim [Paper]

  • [FAI 2021] NeuroSim Simulator for Compute-in-Memory Hardware Accelerator: Validation and Benchmark.

    Anni Lu, Xiaochen Peng, Wantong Li, et al. [Paper] [GitHub]

  • [TCAD 2021] DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-Chip Training.

    Xiaochen Peng, Shanshi Huang, Hongwu Jiang, et al. [Paper] [GitHub]

  • [TCAD 2020] Eva-CiM: A System-Level Performance and Energy Evaluation Framework for Computing-in-Memory Architectures.

    Di Gao, Dayane Reis, Xiaobo Sharon Hu, et al. [Paper] [GitHub]


Software Stack

  • [DATE 2024] A Novel March Test Algorithm for Testing 8T SRAM-Based IMC Architectures.

    L. Ammoura, M.-L. Flottes, P. Girard, et al. [Paper]

  • [DATE 2024] PIMLC: Logic Compiler for Bit-Serial Based PIM.

    C. Tang, C. Nie, W. Qian, et al. [Paper]

  • [ASPLOS 2024] CIM-MLC: A Multi-level Compilation Stack for Computing-In-Memory Accelerators.

    Songyun Qu, Shixin Zhao, Bing Li, et al. [Paper]

  • [ASPLOS 2023] Infinity Stream: Portable and Programmer-Friendly In-/Near-Memory Fusion.

    Zhengrong Wang, Christopher Liu, Aman Arora, et al. [Paper]

  • [DAC 2023] AutoDCIM: An Automated Digital CIM Compiler.

    Jia Chen, Fengbin Tu, Kunming Shao, et al. [Paper]

  • [DAC 2023] PIM-HLS: An Automatic Hardware Generation Tool for Heterogeneous Processing-In-Memory-based Neural Network Accelerators.

    Yu Zhu, Zhenhua Zhu, Guohao Dai, et al. [Paper]

  • [ASPDAC 2022] Optimal Data Allocation for Graph Processing in Processing-in-Memory Systems.

    Zerun Li, Xiaoming Chen, Yinhe Han [Paper]

  • [MICRO 2022] Multi-Layer In-Memory Processing.

    Daichi Fujiki, Alireza Khadem, S. Mahlke, et al. [Paper]

  • [TCAS-I 2022] Neural Network Training on In-Memory-Computing Hardware With Radix-4 Gradients.

    Christopher Grimm, and Naveen Verma [Paper]

  • [ASPDAC 2021] Providing Plug N’ Play for Processing-in-Memory Accelerators.

    Paulo C. Santos, Bruno E. Forlin, Luigi Carro [Paper]

  • [DAC 2021] Leveraging Noise and Aggressive Quantization of In-Memory Computing for Robust DNN Hardware Against Adversarial Input and Weight Attacks.

    Sai Kiran Cherupally, Adnan Siraj Rakin, Shihui Yin, et al. [Paper]

  • [DAC 2021] Invited: Accelerating Fully Homomorphic Encryption with Processing in Memory.

    Saransh Gupta, Tajana Simunic Rosing [Paper]


Surveys and Analysis

  • [DAC 2023] Unified Agile Accuracy Assessment in Computing-in-Memory Neural Accelerators by Layerwise Dynamical Isometry.

    Xuan-Jun Chen, Cynthia Kuan, Chia-Lin Yang [Paper]

  • [DAC 2023] Advances and Trends on On-Chip Compute-in-Memory Macros and Accelerators.

    Jae-sun Seo [Paper]

  • [TCAS-I 2022] A Practical Design-Space Analysis of Compute-in-Memory With SRAM.

    Samuel Spetalnick, and Arijit Raychowdhury [Paper]

  • [DATE 2021] Perspectives on Emerging Computation-in-Memory Paradigms.

    Shubham Rai, Mengyun Liu, Anteneh Gebregiorgis, et al. [Paper]

  • [DATE 2021] Modeling and Optimization of SRAM-based In-Memory Computing Hardware Design.

    Jyotishman Saikia, Shihui Yin, Sai Kiran Cherupally, et al. [Paper]

  • [TCAS-I 2021] Challenges and Trends of SRAM-Based Computing-In-Memory for AI Edge Devices.

    Chuan-Jia Jhang, Cheng-Xin Xue, Je-Min Hung, et al. [Paper]

  • [TCAS-I 2021] Impact of Analog Non-Idealities on the Design Space of 6T-SRAM Current-Domain Dot-Product Operators for In-Memory Computing.

    Adrian Kneip, and David Bol [Paper]

  • [TCAS-I 2021] Analysis and Optimization Strategies Toward Reliable and High-Speed 6T Compute SRAM.

    Jian Chen, Wenfeng Zhao, Yuqi Wang, et al. [Paper]

  • [ICCAD 2020] Fundamental Limits on the Precision of In-memory Architectures.

    Sujan K. Gonugondla, Charbel Sakr, Hassan Dbouk, et al. [Paper]


Maintainers

  • Yingjie Qi, Beihang University. [GitHub]
  • Cenlin Duan, Beihang University. [GitHub]
  • Xiaolin He, Beihang University. [GitHub]
  • Yiou Wang, Beihang University. [GitHub]
  • Yikun Wang, Beihang University. [GitHub]
  • Rubing Yang, Beihang University. [GitHub]