GF Securities: Inference-driven AI storage is growing rapidly; focus on related targets in the storage industry chain

15:52 16/12/2025
GMT Eight
AI-driven storage prices continue to rise, with markedly improved gross margins for the original memory manufacturers.
GF Securities (GF SEC) released a research report stating that amid the AI wave, model innovation and CAPEX lay the foundation for the coordinated development of the AI industry chain, and the AI-driven storage cycle continues to rise, with capacity expansion and technology upgrades working in tandem. AI-driven storage prices continue to rise, with significantly improved gross margins for the original memory manufacturers; DRAM and NAND architecture upgrades are creating new demand for equipment; the storage OEM model is bringing opportunities for industry transformation; and interface chips for MRDIMM and VPD are opening up new market space. The report recommends paying attention to related targets in the storage industry chain.

The main points of GF SEC are as follows:

Storage is the yardstick of tokens: inference-driven AI storage is growing rapidly

Storage in AI servers mainly includes HBM, DRAM, and SSD, which form a hierarchy in which performance decreases, capacity increases, and cost per bit decreases step by step. Inference-driven AI storage is growing rapidly. (1) Memory benefits from ultra-long-context and multimodal inference needs: when processing large volumes of sequential data or multimodal information, high-bandwidth, large-capacity memory reduces latency and improves parallel efficiency (see the back-of-envelope sketch at the end of this note). (2) SSD and HDD are likewise the yardstick of tokens: with the rapid growth of AI inference demand, lightweight model deployment is driving a rapid increase in storage capacity demand, and overall demand is expected to surge to the hundreds-of-EB level in the future.

AI inference drives storage demand growth, with vast industry chain space

(1) The eSSD market in AI and storage servers is vast. With the rapid growth of long-context inference, RAG databases, and token volumes, AI workloads' demand for high-bandwidth, large-capacity eSSDs will continue to strengthen, further expanding the market for eSSDs in AI servers and storage servers. (2) MRDIMM is expected to be applied in large-model inference. MRDIMM delivers deterministic gains in the KV cache scenario of large-model inference: higher concurrency, longer contexts, lower end-to-end latency, and significantly improved CPU-GPU memory layout and resource utilization. (3) The SPD and VPD chip market is vast. As DDR5 penetration continues to expand, SPD chips carry higher technical specifications and unit prices than in the DDR4 generation, driving rapid growth in the DDR5 SPD market. SSD upgrades are expected to bring VPD growth opportunities: improvements in SSD performance will drive upgrades in VPD EEPROM technical specifications, and the value of VPD EEPROM is expected to increase further with performance upgrades. (4) CXL memory pooling boosts AI inference. CXL enables memory pooling, significantly improving computing efficiency and creating a clear TCO advantage in KV-cache-intensive inference. NVIDIA has invested in Intel and acquired Enfabrica, laying out CXL capabilities, while Alibaba Cloud has launched CXL memory-pooling servers to improve inference throughput. As the CXL protocol continues to penetrate the AI field and drive chip demand, CXL interconnect chips, the core carrier of CXL technology, are expected to play a greater role in AI.

Risk warning: AI industry development and demand fall short of expectations; AI server shipments fall short of expectations; domestic manufacturers' technology and product progress falls short of expectations.
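As a back-of-envelope illustration of the long-context memory point above, the sketch below estimates KV cache size for a hypothetical 70B-class decoder model with grouped-query attention. Every model shape and count in it is an assumption chosen for illustration, not a figure from the GF SEC report.

```python
# Back-of-envelope KV cache sizing. All model parameters below are
# illustrative assumptions (roughly a 70B-class decoder with grouped-query
# attention), not figures from the GF SEC report.

def kv_cache_bytes_per_token(n_layers, n_kv_heads, head_dim, dtype_bytes=2):
    """Bytes of KV cache appended per token.

    The factor of 2 covers the K and V tensors; dtype_bytes=2 assumes
    FP16/BF16 activations.
    """
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes

# Assumed model shape: 80 layers, 8 KV heads (GQA), head_dim 128, FP16.
per_token = kv_cache_bytes_per_token(n_layers=80, n_kv_heads=8, head_dim=128)
print(f"KV cache per token: {per_token / 1024:.0f} KiB")  # ~320 KiB

# One 128K-token context for a single session:
ctx = 128_000
print(f"KV cache per 128K context: {per_token * ctx / 2**30:.1f} GiB")  # ~39 GiB

# 1,000 concurrent long-context sessions:
print(f"1,000 sessions: {per_token * ctx * 1000 / 2**40:.1f} TiB")  # ~38 TiB
```

Under these assumed shapes, a single 128K-token session consumes on the order of 39 GiB of KV cache, and a thousand concurrent sessions reach tens of TiB, far beyond HBM capacity on any single accelerator. This is the arithmetic behind the report's case for tiering KV cache across large DRAM (MRDIMM), CXL-pooled memory, and eSSD.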