Latest News
Guotai Haitong released a research report stating that NVIDIA's next-generation Rubin CPX splits the AI inference workload at the hardware level, upgrades memory, and delivers faster data transfer. As computing speeds rise, the average DRAM and NAND Flash capacity in AI-enabled applications such as smartphones, servers, and laptops has grown, with servers showing the fastest growth: average Server DRAM capacity increased 17.3% annually through 2024. As demand for AI servers continues to rise, the release and ramp-up of high-end AI chips such as NVIDIA's next-generation Rubin, along with self-developed ASICs from Cloud Service Providers (CSPs), should lift both the volume and pricing of high-speed computing DRAM products. The report recommends paying attention to storage modules.