Not Just GPUs: NVIDIA Corporation (NVDA.US) Unveils New LPU and CPU Products at GTC 2026, Covering Every Corner of the AI Data Center

11:50 17/03/2026
GMT Eight
On Monday, Pacific Time, NVIDIA Corporation (NVDA.US) officially kicked off its GTC conference in San Jose, California, releasing multiple new chips and platforms in one go, from the new-generation Nvidia Groq 3 Language Processing Units (LPUs) to the Vera Central Processing Unit (CPU) server cabinet designed to compete with Intel Corporation (INTC.US) and AMD (AMD.US). NVIDIA is reported to have launched a total of five large server cabinets, each targeting a different AI data center scenario.

The most anticipated release is the Nvidia Groq 3 chip. In December of last year, NVIDIA acquired Groq's technology rights for $20 billion and brought founder Jonathan Ross, President Sunny Madra, and the core team under its umbrella. The Groq processor specializes in AI inference, the core process of running AI models: when users enter a prompt in ChatGPT, Claude, or Gemini and receive a response, inference is at work. Unlike NVIDIA's general-purpose GPUs, which can both train and run models, Groq 3 gives the company a dedicated inference chip, meeting urgent demand as the AI market shifts from model training to model application.

Ian Buck, NVIDIA's Vice President of Hyperscale and High-Performance Computing, said that while GPUs support larger memory capacities, the Groq 3 LPU's memory offers faster access speeds. Combining the strengths of both produced the new Groq 3 LPX platform, which integrates 128 independent Groq 3 LPUs in a single server cabinet and, working in conjunction with the Vera Rubin NVL72 rack, can increase throughput per megawatt by 35 times, creating a 10x revenue potential.

"The LPX architecture, optimized for trillion-parameter models and million-token contexts, pairs perfectly with Vera Rubin to maximize efficiency across power consumption, memory, and computing power.
This breakthrough in throughput per watt and per token will power ultra-high-end trillion-parameter inference services, opening new growth opportunities for all AI service providers," NVIDIA emphasized in its official statement. The launch of the LPX server cabinet directly addresses market concerns that NVIDIA could lose its edge to emerging inference-chip startups.

The independently deployed Vera CPU rack also drew attention. The cluster system, built from 256 liquid-cooled Vera chips, marks the first time NVIDIA has decoupled the Vera CPU from the "Vera Rubin superchip" (one Vera CPU plus two Rubin GPUs). With the rise of AI agents, the strategic value of the CPU is increasingly apparent: when agents execute tasks such as browsing web pages or extracting table data, CPU performance directly affects execution efficiency, and in scenarios where the CPU prepares context for the GPUs, such as data mining and personalized recommendation, it plays an irreplaceable role.

"Vera is the ultimate CPU tailored for agentic AI workloads," Buck explained. "We have redefined the CPU architecture: the Olympus cores NVIDIA designed specifically for AI execution deliver faster responses under extreme conditions, a perfect fit for reinforcement learning scenarios."

This is not NVIDIA's first foray into CPUs. An agreement reached last month with Meta (META.US) will deploy the largest cluster to date of previous-generation Grace CPUs. The standalone release of Vera marks NVIDIA's formal adoption of a "GPU + CPU" dual-drive strategy aimed at the data center market dominated by Intel Corporation and AMD.
Beyond these products, NVIDIA also showcased the Bluefield-4 STX storage server cabinet system, which it says delivers a performance leap over traditional solutions, and the Spectrum-6 SPX network server cabinet. With demand for AI platforms continuing to grow, the new product line is expected to further lift data center revenue: in fiscal 2026 that segment reached $193.5 billion, up sharply from $116.2 billion in fiscal 2025. Of the $650 billion in AI capital expenditures planned this year by giants including Amazon.com, Inc. (AMZN.US), Alphabet Inc. (GOOGL.US), Meta, and Microsoft Corporation (MSFT.US), NVIDIA will undoubtedly capture a considerable share.