AI ASIC and SSD demand soars! Marvell Technology, Inc. (MRVL.US) reaps the "AI inference dividend" as profit surges 72%.
Customized silicon chip design for ultra-large scale data center customers is no longer a peripheral business, but one of the core growth engines for global chip companies.
Marvell Technology, Inc. (MRVL.US), which focuses on custom AI chips (AI ASICs) for large AI data centers and is one of the key partners behind Amazon.com, Inc.'s AWS Trainium series of AI ASICs, announced across-the-board results and a forward outlook that exceeded Wall Street's expectations after the U.S. market close on March 6, Beijing time. Marvell's strong results and guidance, coming a day after explosive growth figures from Broadcom Inc. (AVGO.US), the leader in the larger AI ASIC market, underscore several trends of the emerging AI inference era: surging demand for cost-effective AI ASIC computing systems, persistently strong demand for storage chips, rapidly growing cloud AI inference compute, and a push to embed large AI models into enterprise operations through "micro-training." Together, these trends pose a serious challenge to NVIDIA Corporation's near-90% share of the AI chip market.
Financial data shows that Marvell Technology, Inc. posted record revenue of approximately $2.22 billion in the fourth quarter of fiscal 2026, which ended January 31, up more than 20% year over year and slightly above the Wall Street consensus of roughly $2.21 billion. Adjusted earnings per share (non-GAAP EPS) came in at $0.80, beating the consensus of about $0.79 and up from $0.60 in the same period last year. GAAP operating profit reached approximately $404.4 million, a sharp 72% increase year over year and above the Wall Street consensus; net profit attributable to common shareholders was approximately $396.1 million, a year-over-year jump of roughly 97.9%.
Marvell Technology, Inc.'s data center business, closely tied to AI training/inference supersystems, contributed approximately $1.65 billion in revenue, about 74% of the total, growing roughly 21% year over year and 9% over the previous quarter's already-strong base. The company emphasized in its earnings release that order volume for the data center business is growing at a "record pace." After the report was released, Marvell's stock surged more than 15% in after-hours trading.
On the outlook side, Marvell's CEO expects revenue growth to accelerate further this fiscal year. The midpoint of Marvell's revenue guidance for the first quarter of fiscal 2027 is approximately $2.4 billion, well above the analyst consensus of roughly $2.27 billion. Notably, that consensus had already been raised repeatedly since late January, as major U.S. tech giants such as Alphabet Inc., Amazon.com, Inc., and NVIDIA Corporation reported strong results. Even against those rising expectations, Marvell's official outlook is stronger still, underscoring the growing challenge custom silicon poses to NVIDIA Corporation's AI chip dominance.
Marvell Technology, Inc.'s strong performance is driven by a broad-based surge in demand for data center semiconductor infrastructure, especially custom AI ASICs, high-performance networking and control chips, and data-center-grade eSSD storage controllers. Marvell's revenue growth in recent years has come largely from its data center business, particularly products tailored to cloud service providers and supercomputing platforms: custom AI ASICs, high-bandwidth networking chips, interconnect solutions, and SSD storage controllers, all closely intertwined with AI training/inference platforms. Data center revenue's share of the total has risen steadily and is growing significantly faster than the company overall.
Marvell Technology, Inc. has long focused on rapid iteration of custom AI ASICs, network processors (DPUs/NPUs), SSD controllers, and high-bandwidth interconnect products. As global AI computing demand expands, these products are growing rapidly on the back of exponential demand from large-scale AI model training, inference workloads, and the processing of massive data flows.
Marvell Technology, Inc.'s strong results also highlight how much it is benefiting from the same "storage supercycle" that has lifted the earnings of the three major memory makers: Samsung, SK Hynix, and Micron. High-performance storage controllers and SSD controllers remain essential in the data center ecosystem: in large-scale training/inference systems, I/O bandwidth, persistent-storage access efficiency, and interconnect efficiency are critical constraints on overall training cost and performance. Marvell's SSD controllers, NVMe/CXL cache controllers, and high-bandwidth storage interconnect product lines sit squarely in this wave of demand growth. These highly specialized control ASICs, while less prominent than the exponentially expanding AI ASIC business, are vital for moving the data flows of large-parameter AI models, directly improving efficiency and service quality at the data center level.
Viewed from the intersection of semiconductor and AI data center infrastructure analysis, SSD storage chips are well positioned to ride the AI supercycle: they serve both training and inference expansion, acting as a universal toll booth across platforms, architectures, and ecosystems. As the AI era shifts from training dominance to scaled inference, agents' long contexts, and retrieval augmentation, demand for capacity, bandwidth, power efficiency, and an efficient data-persistence layer will only grow stronger. The storage systems AI data centers rely on comprise not only HBM but also NVMe eSSDs tailored to the hot tier of enterprise storage, which are seeing unprecedented structural growth.
With strong demand driving the global AI inference wave, prices of DRAM and NAND storage are expected to keep rising. BNP Paribas recently published a research report forecasting a 90% increase in DRAM contract prices in the first quarter of 2026, with NAND prices expected to rise 55%. This view is not an outlier: TrendForce recently raised its first-quarter 2026 forecasts, projecting DRAM contract prices to rise 90% to 95% and NAND Flash contract prices to rise 55% to 60%. North American cloud computing companies are driving enterprise SSD demand, pushing first-quarter prices up further, with potential increases of 53% to 58%.
Amid overwhelming demand from AI data centers, Marvell Technology, Inc. CEO Matt Murphy said in the earnings release that the company set a record for custom chip orders in fiscal 2026, a trend he expects to continue. Murphy said overall revenue is expected to accelerate further this fiscal year on "continued strong growth of the data center business," adding that data center order volume is accelerating at a "record pace."
On the AI training side, where NVIDIA Corporation's GPUs dominate, what matters most is versatile AI computing clusters and rapid iteration across the entire computing system; on the AI inference side, the focus shifts to per-token cost, latency, and energy efficiency. Alphabet Inc., for example, explicitly positions Ironwood as a TPU generation designed for the "inference era," highlighting cluster performance, efficiency, cost-effectiveness, and scalability. Meanwhile, Amazon.com, Inc.'s recent moves show that AI ASICs also have potential in training large models.
In the medium to long term, AI ASIC computing systems will steadily erode NVIDIA Corporation's monopoly premium and market share rather than linearly replacing GPU systems. The fundamental reason is that the core competition of the inference era is no longer "peak computing power" alone but per-token cost, power consumption, memory bandwidth utilization, interconnect efficiency, and total cost of ownership after hardware/software co-design. On these metrics, ASICs tailored to specific workloads, data flows, compilers, and interconnects are naturally more cost-effective than general-purpose GPUs. The likely future for AI data centers: frontier training and general-purpose cloud compute remain GPU-led, while large-scale internal inference, agent workflows, and fixed high-frequency workloads accelerate their shift to ASICs, ushering in a true era of heterogeneous computing.
In the frontier-training era, what mattered most in AI were versatility, software maturity, and rapid adaptation to new model architectures, giving GPUs a natural advantage. But as the industry moves from training-constrained scarcity toward scaled inference, agentic workloads, long contexts, and low latency, the core KPIs shift from "peak computing power" to cost per token, throughput per watt, and system-level TCO. This is the fundamental reason hyperscalers (the cloud computing supergiants) are collectively accelerating their ASIC programs.
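The metrics above are simple ratios, and the GPU-vs-ASIC trade-off can be illustrated with back-of-the-envelope arithmetic. The sketch below compares two accelerators on cost per million tokens and tokens per joule; all throughput, power, and cost numbers are hypothetical placeholders for illustration, not vendor specifications.

```python
from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    tokens_per_sec: float   # sustained inference throughput
    power_watts: float      # board power under load
    hourly_cost_usd: float  # amortized hardware + hosting cost per hour

    def cost_per_million_tokens(self) -> float:
        # Hourly cost divided by tokens generated per hour, scaled to 1M tokens
        tokens_per_hour = self.tokens_per_sec * 3600
        return self.hourly_cost_usd / tokens_per_hour * 1_000_000

    def tokens_per_joule(self) -> float:
        # Throughput divided by power: the "throughput per watt" KPI
        return self.tokens_per_sec / self.power_watts

# Hypothetical numbers for illustration only -- not real product figures.
gpu  = Accelerator("general-purpose GPU", tokens_per_sec=12_000,
                   power_watts=700, hourly_cost_usd=4.0)
asic = Accelerator("inference ASIC", tokens_per_sec=10_000,
                   power_watts=350, hourly_cost_usd=1.8)

for a in (gpu, asic):
    print(f"{a.name}: ${a.cost_per_million_tokens():.3f}/M tokens, "
          f"{a.tokens_per_joule():.1f} tokens/J")
```

Under these assumed numbers the ASIC delivers lower raw throughput yet wins on both cost per token and energy efficiency, which is exactly the trade hyperscalers are making for fixed, high-volume inference workloads.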
For example, Alphabet Inc. defines the Ironwood TPU as its best compute node for the "inference era," scalable to clusters of 9,216 chips; Microsoft Corporation positions its new Maia 200 AI ASIC squarely as a cloud inference accelerator and claims roughly 30% better performance per dollar than its latest deployed hardware; AWS defines Trainium3 as a chip in pursuit of the "best token economics," touting a more than 4x efficiency improvement. Together, these moves signal an "AI computing cost revolution" by the cloud giants, and with it, growing market concern about NVIDIA Corporation's growth prospects.