FinFET 6T-SRAM All-Digital Compute-in-Memory for Artificial Intelligence Applications: An Overview and Analysis


Artificial intelligence (AI) has revolutionized present-day life through automation and independent decision-making capabilities. For AI hardware implementations, the 6T-SRAM cell is a suitable candidate due to its performance edge over its counterparts. However, modern AI hardware such as neural networks (NNs) accesses off-chip data quite often, degrading overall system performance. Compute-in-memory (CIM) reduces these off-chip data-access transactions. One CIM approach operates in the mixed-signal domain, but it suffers from limited bit precision and signal-margin issues.

An alternative emerging approach uses the all-digital signal domain, which provides better signal margins and bit precision, albeit at the expense of hardware overhead. We have analyzed silicon-verified all-digital 6T-SRAM CIM solutions, classifying them as SRAM-based accelerators, i.e., near-memory computing (NMC), and custom SRAM-based CIM, i.e., in-memory computing (IMC). We have focused on multiply and accumulate (MAC) as the most frequent operation in convolutional neural networks (CNNs) and compared state-of-the-art implementations. Neural networks with low weight precision are particularly well suited to such all-digital CIM implementations. We have also analyzed high-density (HD) and high-current (HC) FinFET 6T-SRAM cell configurations under variations in threshold voltage (Vth), supply voltage (VDD), and process and environmental conditions.
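To make the MAC workload concrete, the following sketch shows the dot-product a CIM macro accelerates, alongside a bit-serial formulation of the kind commonly used in all-digital SRAM CIM designs. Function names, bit widths, and the example values are illustrative assumptions, not taken from the paper.

```python
def mac_reference(activations, weights):
    """Plain integer dot product: the operation a CIM macro accelerates."""
    return sum(a * w for a, w in zip(activations, weights))

def mac_bit_serial(activations, weights, act_bits=4):
    """Bit-serial MAC: stream one activation bit per cycle, form 1-bit x
    multi-bit products (AND gates in hardware), sum them with a digital
    adder tree, and shift-accumulate the partial sums.
    Activations and weights are assumed unsigned here."""
    acc = 0
    for bit in range(act_bits):                      # one cycle per bit
        partial = (sum(((a >> bit) & 1) * w          # 1-bit x multi-bit products
                       for a, w in zip(activations, weights)))
        acc += partial << bit                        # shift-and-accumulate
    return acc

acts = [3, 1, 0, 7, 2, 5]     # 4-bit activations
wts  = [1, 2, 4, 0, 3, 1]     # weights stored in the SRAM array
assert mac_bit_serial(acts, wts) == mac_reference(acts, wts)
```

The bit-serial form trades latency (one cycle per activation bit) for a fully digital datapath, which is where the improved signal margin and arbitrary bit precision of all-digital CIM come from.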

The HD FinFET 6T-SRAM cell shows 32% lower read access time and 1.09 times lower leakage power compared with the HC cell configuration. The minimum achievable supply voltage is 600 mV without the use of any read- or write-assist scheme for all cell configurations, while temperature variations cause noise-margin deviations of up to 22% from the nominal values.
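As a back-of-the-envelope illustration of what those ratios mean in absolute terms, the snippet below applies them to assumed HC baseline numbers. Only the ratios (32% faster read, 1.09x leakage, up to 22% noise-margin deviation) come from the text; the baseline values are placeholders.

```python
# HC baseline numbers below are assumed placeholders, not from the paper.
hc_read_access_ps = 100.0                            # assumed HC read access time
hd_read_access_ps = hc_read_access_ps * (1 - 0.32)   # 32% lower -> 68.0 ps

hc_leakage_nw = 10.0                                 # assumed HC leakage power
hd_leakage_nw = hc_leakage_nw / 1.09                 # 1.09x better leakage

snm_nominal_mv = 200.0                               # assumed nominal noise margin
snm_worst_mv = snm_nominal_mv * (1 - 0.22)           # worst-case over temperature

print(hd_read_access_ps, round(hd_leakage_nw, 2), snm_worst_mv)
```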
