Haitong: AI ASICs expected to see explosive growth; focus on investment opportunities in the industry chain.
22/12/2024
GMT Eight
Haitong released a research report stating that Amazon has launched its own ASIC-based Trainium2 server and that Broadcom is optimistic about the future market size of AI ASICs and related networking, indicating that ASIC chips have become a clear trend. With growing demand for compute in inference scenarios, combined with increasingly fixed AI inference algorithms, ASIC chips are expected to see explosive growth. The report recommends paying attention to domestic PCB companies that can participate in the North American AI ASIC supply chain.
The main points of Haitong are as follows:
ASIC (Application Specific Integrated Circuit) is an integrated circuit designed for specific applications.
Because an ASIC is optimized for a specific application, it achieves higher efficiency and lower power consumption on its target tasks, pushing performance and efficiency toward their limits. Growing compute demand in inference scenarios, combined with increasingly fixed AI algorithms, is expected to drive explosive growth in ASIC chips.
Amazon Web Services launched the new Trainium2 server.
According to Amazon Web Services' official account, each new Trainium2 server is equipped with 16 Trainium2 accelerators along with dedicated Nitro cards. A single Trainium2 server delivers 200 trillion floating-point operations per second, 1.25 times the compute of the largest AI server AWS currently offers, and carries 1.5 TB of high-speed HBM memory, 2.5 times that of the largest currently available AI server, with a memory bandwidth of 46 TB/s. Through the new NeuronLink interconnect, chips can directly access one another's memory; a multi-server cluster linked by NeuronLink is referred to as an "UltraServer".
Amazon also unveiled the Trainium2 UltraServer, which combines 64 Trainium2 accelerators to deliver 832 trillion floating-point operations per second, 6 TB of high-speed HBM memory, and 185 TB/s of memory bandwidth. Compared with the AI training instances AWS currently offers, the UltraServer provides five times the compute, and ten times the memory capacity, of the largest existing instance.
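As a quick sanity check on the figures above, the UltraServer's stated specs scale almost exactly 4x from the 16-accelerator server, consistent with its 64-vs-16 chip count. A minimal sketch (the dictionary field names are illustrative shorthand, not AWS terminology):

```python
# Stated specs from the report: 16-chip Trainium2 server vs. 64-chip UltraServer.
# tflops = trillions of floating-point ops/sec, hbm_tb = HBM capacity in TB,
# bw_tbs = memory bandwidth in TB/s.
server = {"chips": 16, "tflops": 200, "hbm_tb": 1.5, "bw_tbs": 46}
ultra = {"chips": 64, "tflops": 832, "hbm_tb": 6.0, "bw_tbs": 185}

# Each metric should scale roughly with the 4x increase in chip count.
for key in server:
    ratio = ultra[key] / server[key]
    print(f"{key}: {ratio:.2f}x")
```

The ratios all land near 4x, so the UltraServer figures are internally consistent with simple linear scaling across four 16-chip servers.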
Broadcom, a core AI ASIC company, is optimistic about the future of AI ASICs.
On Broadcom's fiscal 2024 fourth-quarter earnings call, the company said it currently has three large-scale customers, each of which has developed its own multi-generation AI XPU roadmap and plans to deploy at a different pace over the next three years; by 2027, each plans to deploy clusters of 1 million XPUs on a single architecture. Broadcom expects the serviceable addressable market (SAM) for AI XPUs and networking to reach $60 billion to $90 billion by 2027, with the company positioned to capture a leading share.
Risk warning: global macroeconomic growth falls short of expectations; end demand falls short of expectations; industry capacity clearing proceeds more slowly than expected.