Published in PC Hardware

TSMC's SRAM scaling slows

19 December 2022


Chip price hikes expected

TSMC's SRAM scaling has slowed dramatically, and while new fabrication nodes are expected to increase performance, cut power consumption, and boost transistor density, prices are likely to rise.

According to WikiChip, logic circuits have been scaling well with recent process technologies, but SRAM cells have been lagging behind and ran out of steam when TSMC brought in its 3nm-class production nodes.

When TSMC formally introduced its N3 fabrication technologies earlier this year, it said that the new nodes would provide 1.6x and 1.7x improvements in logic density when compared to its N5 (5nm-class) process.

However, SRAM cells on the new technologies barely scale compared to N5. A TSMC paper published at the International Electron Devices Meeting (IEDM) says that TSMC's N3 features an SRAM bitcell size of 0.0199 µm², which is only ~5 per cent smaller than N5's 0.021 µm² SRAM bitcell.

This gets worse with the revamped N3E, which comes with a 0.021 µm² SRAM bitcell (roughly translating to 31.8 Mib/mm²), meaning no scaling compared to N5 at all.
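The bitcell-to-density conversion can be sanity-checked with a few lines of arithmetic. The ~31.8 Mib/mm² figure follows from the 0.021 µm² bitcell if you assume an array efficiency of roughly 70 per cent; that efficiency figure is an assumption for illustration here, not something TSMC has stated:

```python
# Sanity-check: convert an SRAM bitcell area into an array density figure.
# The 70% array efficiency is an assumed value (row/column decoders, sense
# amplifiers, and other periphery eat area), not an official TSMC number.

def sram_density_mib_per_mm2(bitcell_um2: float, array_efficiency: float = 0.70) -> float:
    bits_per_mm2 = (1.0 / bitcell_um2) * 1e6      # 1 mm^2 = 1e6 um^2
    return bits_per_mm2 * array_efficiency / 2**20  # bits -> Mib

print(round(sram_density_mib_per_mm2(0.021), 1))   # N5/N3E bitcell -> 31.8
print(round(sram_density_mib_per_mm2(0.0199), 1))  # N3 bitcell -> 33.5
```

The ~5 per cent bitcell shrink on N3 buys less than 2 Mib/mm² under these assumptions, which is why the node-to-node SRAM gain is described as negligible.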

Intel's Intel 4 (7nm EUV) reduces SRAM bitcell size to 0.024 µm² from 0.0312 µm², while Intel 7 (formerly known as 10nm Enhanced SuperFin) manages 27.8 Mib/mm², a bit behind TSMC's high-density (HD) SRAM density.

An Imec presentation showed SRAM densities of around 60 Mib/mm² on a 'beyond 2nm node' with forksheet transistors. Such process technology is years away, and until then chip designers will have to build processors with the SRAM densities advertised by Intel and TSMC.

Modern CPUs, GPUs, and SoCs use loads of SRAM for various caches as they process data, because fetching data from off-chip memory is incredibly inefficient, especially for artificial intelligence (AI) and machine learning (ML) workloads.

Even general-purpose processors, graphics chips, and smartphone application processors carry huge caches. AMD's Ryzen 9 7950X packs 81MB of cache, while Nvidia's AD102 uses at least 123MB of SRAM across the caches Nvidia has publicly disclosed.

The need for caches and SRAM will only increase, but with N3 (which is set to be used for only a few products) and N3E there will be no way to shrink the die area occupied by SRAM and so mitigate the higher costs of the new node compared to N5. This means the die sizes and prices of high-performance processors will increase.

SRAM cells are prone to defects, and while chip designers may be able to offset larger SRAM cells with N3's FinFlex innovations, which let them mix different fin configurations within a block, it is unclear how much this will help.

TSMC thinks its density-optimised N3S process technology will shrink SRAM bitcell size compared to N5, but even if it manages this, it is not expected to arrive until 2024, and it may not deliver enough logic performance for AMD, Apple, Nvidia, and Qualcomm.

One way of getting around slowing SRAM area scaling is to do what AMD did with its 3D V-Cache: use a multi-chiplet design and disaggregate larger caches into separate dies made on a cheaper node.


Last modified on 19 December 2022