Guotai Haitong: Huawei announces the latest Ascend AI chip roadmap, domestic AI computing power is expected to continue to improve

Date: 20/09/2025
GMT Eight
Guotai Haitong released a research report stating that domestic AI computing power is expected to keep improving, and it recommends paying attention to domestic AI computing power names. Huawei has announced its latest Ascend AI chip roadmap, with the new Ascend 950PR chip due in 2026Q1 and the Ascend 970 chip in 2028Q4. In addition, Huawei has fully opened up its super node technology: it has opened the LingQu protocol and the super node reference architecture so that the industry can develop related products or components against the technical specifications, and it has opened the basic super node hardware, including NPU modules, air-cooled blade servers, liquid-cooled blade servers, AI cards, CPU motherboards, and cascade cards, making it easier for customers and partners to develop and design products based on LingQu.

Key points from Guotai Haitong:

Huawei has announced its latest Ascend AI chip roadmap: the new Ascend 950PR chip launches in 2026Q1 and the Ascend 970 chip in 2028Q4.
1. The roadmap shows that Huawei already released the Ascend 910C in 2025Q1. It will launch the new Ascend 950PR chip in 2026Q1 and the Ascend 950DT in 2026Q4, followed by the Ascend 960 in 2027Q4 and the Ascend 970 in 2028Q4.
2. Starting with the Ascend 950PR, Huawei's Ascend AI chips will use self-developed HBM: the Ascend 950PR will be equipped with the self-developed HiBL 1.0, while the Ascend 950DT will be upgraded to HiZQ 2.0.
3. The Ascend 970 delivers 4 PFLOPS of FP8 compute (8 PFLOPS at FP4) with 4 TB/s of interconnect bandwidth, 288 GB of HBM capacity, and 14.4 TB/s of HBM bandwidth. For comparison, NVIDIA's Blackwell Ultra GB300 offers 15 PFLOPS at FP4, with 288 GB of HBM3e and 8 TB/s of bandwidth (see the illustrative sketch below).

The world's most powerful super node, Atlas 950 SuperPoD, is expected to launch in 2025Q4.
1. As of September 18, 2025, more than 300 CloudMatrix 384 super node systems have been deployed, serving more than 20 customers.
2. Huawei will launch the Atlas 950 SuperPoD, which it bills as the world's most powerful super node, in 2025Q4; the next-generation Atlas 960 SuperPoD is expected in 2027Q4.
3. Huawei's latest super node products, the Atlas 950 SuperPoD and the Atlas 960 SuperPoD, support 8,192 and 15,488 Ascend cards respectively, leading in card count, total computing power, memory capacity, interconnect bandwidth, and other metrics, and are positioned to remain the world's most powerful super nodes for years to come.
4. Built on these super nodes, Huawei has also released what it describes as the world's most powerful super node clusters, the Atlas 950 SuperCluster and the Atlas 960 SuperCluster, with compute scales exceeding 500,000 cards and reaching one million cards respectively.
5. Huawei has also released the TaiShan 950 SuperPoD, the world's first general-purpose super node, developed on the Kunpeng 950 and planned for launch in 2026Q1.
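To make the quoted figures easier to compare, here is a minimal Python sketch that recomputes a few ratios directly from the numbers cited in the report. The chip and product names and all spec values come from the text above; the assumption that each SuperCluster is assembled purely from the corresponding SuperPoD is made only for illustration, and the two vendors' bandwidth and compute figures are not necessarily measured on a like-for-like basis.

```python
# Back-of-envelope comparison using only the figures quoted in the report.
# All spec values (peak PFLOPS, HBM capacity/bandwidth, card counts) are taken
# from the text above; the derived ratios are illustrative rather than an
# apples-to-apples benchmark, since the two vendors may not measure these
# metrics on the same basis.
from dataclasses import dataclass


@dataclass
class ChipSpec:
    name: str
    pflops_fp4: float         # peak FP4 compute, PFLOPS
    hbm_capacity_gb: int      # HBM capacity, GB
    hbm_bandwidth_tbs: float  # HBM bandwidth, TB/s


# Ascend 970 per the report; no FP8 figure is quoted for the GB300, so only
# FP4 compute is compared here.
ascend_970 = ChipSpec("Ascend 970", pflops_fp4=8.0,
                      hbm_capacity_gb=288, hbm_bandwidth_tbs=14.4)
gb300 = ChipSpec("NVIDIA Blackwell Ultra GB300", pflops_fp4=15.0,
                 hbm_capacity_gb=288, hbm_bandwidth_tbs=8.0)

print(f"HBM capacity: {ascend_970.hbm_capacity_gb} GB vs {gb300.hbm_capacity_gb} GB")
print(f"FP4 compute ratio (Ascend 970 / GB300): "
      f"{ascend_970.pflops_fp4 / gb300.pflops_fp4:.2f}")
print(f"HBM bandwidth ratio (Ascend 970 / GB300): "
      f"{ascend_970.hbm_bandwidth_tbs / gb300.hbm_bandwidth_tbs:.2f}")

# Cluster-scale arithmetic from the quoted card counts. The assumption that
# each SuperCluster is built purely from the corresponding SuperPoD is an
# illustration of roughly how many super nodes the cluster sizes imply.
superpod_cards = {"Atlas 950 SuperPoD": 8192, "Atlas 960 SuperPoD": 15488}
cluster_cards = {"Atlas 950 SuperCluster": 500_000,    # "exceeding 500,000 cards"
                 "Atlas 960 SuperCluster": 1_000_000}  # "reaching one million cards"

for pod, cluster in zip(superpod_cards, cluster_cards):
    pods_needed = cluster_cards[cluster] / superpod_cards[pod]
    print(f"{cluster}: ~{pods_needed:.0f} x {pod} ({superpod_cards[pod]} cards each)")
```

The resulting ratios (FP4 compute about 0.53x, HBM bandwidth about 1.8x) simply restate the report's comparison in numeric form.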
Hardware openness and software open source build a comprehensive computing power foundation for all scenarios.
1. Huawei has fully opened its super node technology. First, it has opened the LingQu protocol and the super node reference architecture, allowing the industry to develop related products or components based on the technical specifications. Second, it has fully opened the basic super node hardware, including NPU modules, air-cooled blade servers, liquid-cooled blade servers, AI cards, CPU motherboards, and cascade cards, making it easier for customers and partners to develop and design a wide range of products based on LingQu.
2. The LingQu operating system components will also be fully open-sourced, with their code gradually contributed to upstream open source communities such as openEuler. Users can integrate part or all of the source code into their existing operating systems as needed and iterate and maintain their own versions, or adopt the components in their entirety.

Risk Warning: technological iteration may fall short of expectations; ecosystem development may be slow; downstream demand may fall short of expectations.