CITIC SEC: Advanced storage restrictions tightened; localization of the high-end storage industry chain is expected to accelerate.
19/01/2025
GMT Eight
CITIC SEC released a research report stating that on the evening of January 15, the U.S. Bureau of Industry and Security (BIS) revised the Export Administration Regulations (EAR), modifying the definition of advanced DRAM. The process-node criterion remains 18nm, while the memory cell area and storage density thresholds move from the 1y-nm-class values in the December 2024 rule to 1x-nm-class values, and a new cap on the number of TSVs is added, further tightening restrictions on HBM and advanced DRAM and driving faster localization of the industry chain. The restrictions target manufacturers and the supply chain; design houses are currently conducting business as usual. The bank remains optimistic about the high-end custom storage business and continues to recommend it. It also believes that, with cooperation from local high-end packaging/testing houses and equipment makers, domestic DRAM manufacturers are expected to achieve breakthroughs in HBM, and companies along the related links stand to benefit significantly. The bank is positive on domestic substitution across the high-end storage industry chain.
Key points from CITIC SEC:
The U.S. has once again tightened restrictions on advanced storage exports to China. This revision modifies the definition of advanced DRAM: the process-node criterion remains 18nm, the memory cell area and storage density thresholds move from the 1y-nm-class values in the December 2024 rule to 1x-nm-class values, and a cap on the number of TSVs is added.
Specifically, the memory cell area threshold corresponding to an 18-nanometer half-pitch changes from 0.0019 μm² in the December 2024 rule to 0.0026 μm², and the storage density threshold changes from 0.288 Gb/mm² to 0.2 Gb/mm² (according to the Semiconductor Digest website, the cell areas of Samsung 1x (18nm) 8Gb DDR4 and Micron 1x-nm DRAM are 0.0026/0.0025 μm², and the storage densities of Samsung 1y-nm 8Gb LPDDR4X, Micron 1y-nm 8Gb DDR4, and Micron 1x LPDDR4 are 0.237/0.205/0.191 Gb/mm² respectively; see the appendix for detailed parameter comparisons). In addition, the number of TSVs per die may not exceed 3,000. HBM's bandwidth gains come mainly from more IO interfaces and more channels rather than from higher per-pin transfer rates, and the IO count is strongly positively correlated with the number of TSVs in the 3D stack. Capping the TSV count per die may therefore constrain the technology route of raising bandwidth through more TSVs.
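As a rough illustration of the mechanics above, the sketch below checks the example parts cited from Semiconductor Digest against the revised density threshold, and shows why a wide IO interface (and hence a large TSV count) is what drives HBM bandwidth. The per-pin rates used are nominal JEDEC-generation values, not figures from the report, and the full rule also involves the cell-area and node criteria, so this is a simplified single-criterion check.

```python
# Simplified sketch (not the official rule text): density-only screen of the
# example parts cited above, plus per-stack HBM bandwidth arithmetic.
DENSITY_THRESHOLD_GB_PER_MM2 = 0.2  # revised EAR threshold (was 0.288)

# Densities in Gb/mm2, as cited from Semiconductor Digest in the text above.
parts = {
    "Samsung 1y 8Gb LPDDR4X": 0.237,
    "Micron 1y 8Gb DDR4":     0.205,
    "Micron 1x LPDDR4":       0.191,
}

for name, density in parts.items():
    covered = density >= DENSITY_THRESHOLD_GB_PER_MM2
    print(f"{name}: {density} Gb/mm2 -> "
          f"{'covered' if covered else 'below threshold'}")

def hbm_bandwidth_gb_s(io_width_bits: int, pin_rate_gbps: float) -> float:
    """Per-stack bandwidth in GB/s: IO width (bits) x per-pin rate (Gb/s) / 8."""
    return io_width_bits * pin_rate_gbps / 8

# Every HBM generation keeps a very wide 1024-bit interface per stack;
# that width is what requires thousands of TSVs in the 3D stack.
print(hbm_bandwidth_gb_s(1024, 2.0))  # HBM2 nominal rate -> 256.0 GB/s
print(hbm_bandwidth_gb_s(1024, 3.6))  # HBM2E nominal rate -> 460.8 GB/s
```

Note that at a fixed 1024-bit width, bandwidth can only grow with per-pin rate; adding channels or IO (the main HBM scaling route) directly adds TSVs, which is the route the 3,000-TSV cap touches.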
Current configuration of domestic AI chips: mainly HBM2/2E, more than two generations behind the latest foreign products.
HBM, introduced in 2014, is a type of memory designed to meet the high-bandwidth requirements of AI chips. With computing demand growing rapidly, the bank estimates that by 2025 global HBM capacity demand will approach 1.7 billion GB, accounting for over 10% of total DRAM shipments and over 30% of DRAM market value. Domestic training AI chips largely use HBM2 or HBM2E, while the latest foreign products, such as NVIDIA's H200, B100, and B200, carry the more advanced HBM3E. Some AI inference cards, such as the NVIDIA L series, still use GDDR for cost reasons; in addition, some lightweight AI inference chips may be paired with LPDDR, such as Samsung's planned Mach-1 inference chip. The bank believes this latest U.S. restriction will affect domestic manufacturers' procurement of foreign HBM2/2E (primarily from Samsung Electronics and SK Hynix). Going forward, domestic manufacturers may introduce domestically produced HBM to replace foreign products on one hand, while on the other hand there may be a short-term downgrade of AI-chip memory from HBM to GDDR or other special DRAM.
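To give a sense of the bandwidth cost of the HBM-to-GDDR fallback mentioned above, here is a back-of-the-envelope comparison under assumed but typical configurations: a 384-bit GDDR6 board at 16 Gb/s per pin versus four 1024-bit HBM2E stacks at 3.6 Gb/s per pin. These configurations and rates are illustrative assumptions, not figures from the report.

```python
def mem_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Aggregate memory bandwidth in GB/s: bus width (bits) x pin rate (Gb/s) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

# Assumed typical configurations (illustrative only):
gddr6 = mem_bandwidth_gb_s(384, 16.0)      # 384-bit GDDR6 bus at 16 Gb/s
hbm2e = mem_bandwidth_gb_s(4 * 1024, 3.6)  # four 1024-bit HBM2E stacks

print(f"GDDR6 board: {gddr6} GB/s")  # 768.0 GB/s
print(f"4x HBM2E:    {hbm2e} GB/s")  # 1843.2 GB/s
```

Even with GDDR6 running each pin more than four times faster, the much wider aggregate HBM interface delivers over twice the bandwidth in this sketch, which is why a GDDR substitution is framed as a short-term downgrade.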
Risk Factors:
-Development of domestic AI chips lags behind expectations
-Production capacity of domestic AI chips falls short of expectations
-Iteration speed of large models falls short of expectations
-Development of domestic semiconductor equipment lags behind expectations
-Expansion of domestic storage chip customers falls short of expectations
-Geopolitical risks, etc.