Earnings Preview | Cloud Giants Launch an "AI Cost Revolution", Marking the Arrival of the ASIC Era; Marvell Technology, Inc. (MRVL.US) Is Poised for Strong Results

13:01 02/03/2026
GMT Eight
On the AI training side, which demands ever more powerful computing clusters and rapid iteration of the entire computing stack, NVIDIA's AI GPUs hold a near monopoly; the AI inference side, by contrast, places far more emphasis on unit token cost, latency, and energy efficiency as cutting-edge AI is deployed at scale.
Marvell Technology, Inc. (MRVL.US), which focuses on custom AI chips for large AI data centers (i.e. AI ASICs) and is one of the largest partners behind Amazon.com, Inc.'s AWS Trainium series of AI ASICs, will report earnings after the US market close on March 5. Wall Street analysts broadly expect that, amid the wave of AI inference and the trend of "micro-training" that embeds large AI models in enterprise operations, cost-effective AI ASICs will mount a strong challenge to NVIDIA Corporation's roughly 90% share of the AI chip market. Analysts therefore expect AI ASIC leaders such as Marvell, along with the larger-market-share ASIC leader Broadcom Inc. (AVGO.US), to post strong revenue growth, and management is expected to issue strong guidance ranges.

In its recently released third-quarter report for fiscal year 2026 (the period ended November 1, 2025), Marvell posted net revenue of approximately $2.075 billion, up about 37% year on year and slightly above market expectations; adjusted earnings per share also beat Wall Street forecasts. The strong quarter reflects the explosive expansion of custom AI ASIC demand driven by cloud computing leaders' frenzy of new AI data center construction and expansion.

According to data compiled by Zacks Investment Research, Wall Street analysts expect Marvell's fourth-quarter adjusted earnings per share to come in around $0.79, implying a potential increase of 31.7% from the same period last year. Fourth-quarter revenue is expected to be approximately $2.21 billion, implying year-on-year growth of 21% on top of a strong prior-year base. For the full fiscal year, analysts generally expect earnings per share of $2.84, an increase of 80.9% over the previous year.
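As a quick sanity check on the consensus figures above, the prior-year values implied by each estimate and its stated growth rate can be back-calculated (a sketch; the implied prior-year numbers are derived here, not taken from any filing):

```python
# Back-calculate implied prior-year figures from consensus estimates
# and stated year-on-year growth rates (illustrative arithmetic only).

q4_eps_est = 0.79       # consensus Q4 adjusted EPS
q4_eps_growth = 0.317   # stated 31.7% YoY increase
q4_rev_est = 2.21e9     # consensus Q4 revenue, USD
q4_rev_growth = 0.21    # stated 21% YoY increase
fy_eps_est = 2.84       # consensus full-year EPS
fy_eps_growth = 0.809   # stated 80.9% YoY increase

def implied_prior(current, growth):
    """Prior-year value implied by the current estimate and growth rate."""
    return current / (1 + growth)

print(f"Implied prior-year Q4 EPS: ${implied_prior(q4_eps_est, q4_eps_growth):.2f}")
print(f"Implied prior-year Q4 rev: ${implied_prior(q4_rev_est, q4_rev_growth) / 1e9:.2f}B")
print(f"Implied prior-year FY EPS: ${implied_prior(fy_eps_est, fy_eps_growth):.2f}")
```

The implied prior-year Q4 EPS of about $0.60 and full-year EPS of about $1.57 are internally consistent with the growth rates the article cites.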
Analysts expect Marvell's revenue for this fiscal year and the next to reach $8.18 billion and $10 billion respectively, implying potential year-on-year growth of 41.8% and 22.3%. In addition, Marvell's completed acquisition of a company focused on optical interconnect technology will further strengthen its capabilities in high-bandwidth, low-latency AI data center infrastructure; the deal is expected to contribute gradually to revenue growth over the coming years and help the company expand its share of the AI ecosystem.

In the previous earnings report, alongside the strong Q3 results and the strong expansion outlook for the current quarter, the chip company also disclosed that it will acquire Celestial AI, a startup focused on optical interconnect I/O chips, for $3.25 billion to strengthen its networking product portfolio. Marvell CEO Matt Murphy said on the earnings call that Celestial's technology will be used in Marvell's next-generation silicon photonics hardware, and that these products could open an additional "blue ocean" market potentially worth $10 billion for the company. Murphy and other senior executives also said the Celestial AI business is expected to begin generating substantial revenue in the second half of fiscal year 2028, reach an annualized revenue run rate of approximately $500 million by the fourth quarter of fiscal year 2028, and double that to $1 billion by the fourth quarter of fiscal year 2029.
Concerns about the future of NVIDIA Corporation are legitimate

The global wave of generative AI has pushed cloud computing and chip giants to accelerate AI chip development as they race to design the fastest, most efficient AI computing infrastructure clusters for advanced large AI data centers. Marvell and its largest competitor, Broadcom Inc., leverage their deep advantages in high-speed interconnect and chip IP to work with cloud computing giants such as Amazon.com, Inc., Alphabet Inc. Class C, and Microsoft Corporation on AI ASIC computing clusters tailored to each customer's specific data center needs. This ASIC business has grown into a very important part of both companies, with Broadcom's collaboration with Alphabet on TPU AI computing clusters a textbook example of the AI ASIC route.

Peter DeSantis, the new head of artificial intelligence infrastructure at Amazon.com, Inc., said in a media interview last Friday: "If we can build models on our own AI chips, we can build these models at a fraction of the cost of pure AI model providers." DeSantis added: "Building super-sized AI data centers does pose some cost issues. If we ultimately want AI to change everything, the cost must be different."

It is widely believed that NVIDIA Corporation, the "AI chip superpower," still dominates the vast majority of core AI computing infrastructure, particularly the AI chip market.
The chip giant led by Jensen Huang just reported quarterly results and next-quarter guidance that far exceeded expectations, yet its stock fell 5% on Thursday, mainly on growing market concern that hyperscalers' recently announced string of proprietary AI ASIC initiatives casts more doubt on NVIDIA Corporation's long-term absolute dominance of AI chips, the core of global AI infrastructure. With Anthropic, known as an "OpenAI rival," investing billions of dollars to purchase one million TPU chips; Meta, the parent company of Facebook, considering spending billions of dollars in late 2026 or 2027 on Alphabet Inc. Class C TPU AI computing infrastructure, including for Meta's vast AI data center build-out; and Amazon.com, Inc. announcing that it will use Trainium and Inferentia to develop large AI models, cloud giants are clearly embarking on an "AI computing cost revolution" to scale up their proprietary AI ASICs. Concerns about NVIDIA's future are indeed legitimate.

The wave of AI inference is coming, and NVIDIA Corporation's "monopoly share" faces a fierce challenge

Without a doubt, hard constraints on cost and power have pushed Microsoft Corporation, Amazon.com, Inc., Alphabet Inc. Class C, and Meta to develop their own chips along the AI ASIC route within their cloud computing systems. The core purpose is to make their AI computing clusters more cost-effective and energy-efficient. Super-sized AI data centers on the scale of "Stargate" are expensive to build, so tech giants increasingly demand that AI computing systems become more economical.
Under power constraints, tech giants are pushing "cost per token, output per watt" to the limit, ushering in a flourishing era for the AI ASIC route. Moreover, with NVIDIA Corporation's Blackwell-architecture AI GPU clusters facing long-running supply shortages, high costs, and supply chain and delivery bottlenecks, self-developed AI ASICs provide a "second capacity curve" and give the cloud giants more leverage in procurement negotiations, product pricing, and cloud service gross margins. Cloud giants such as Alphabet Inc. Class C and Microsoft Corporation can also design integrated "chip-interconnect-system-compiler/runtime-scheduling-observability/reliability" stacks to improve infrastructure utilization and reduce TCO.

NVIDIA's AI GPUs, which nearly monopolize the AI training side, serve a market that needs more powerful general-purpose AI computing clusters and rapid iteration of the entire computing system, while the AI inference side values unit token cost, latency, and energy efficiency once cutting-edge AI is deployed at scale. Alphabet Inc. Class C, for example, has positioned Ironwood as a TPU generation "born for the AI inference era," emphasizing performance, efficiency, cluster cost-effectiveness, and scalability; Amazon.com, Inc.'s latest moves, however, show that AI ASICs also have strong potential for training large models.

In the medium to long term, AI ASIC systems will steadily erode NVIDIA's monopoly premium and market share rather than linearly replace the GPU system. The fundamental reason is that the core competition of the inference era is no longer just "peak computing power," but unit token cost, power consumption, memory bandwidth utilization, interconnect efficiency, and total cost after software-hardware co-design.
On these metrics, ASICs tailored to specific workloads, with data flows, compilers, and interconnects co-designed, naturally achieve better cost-effectiveness than general-purpose GPUs. For NVIDIA Corporation and AMD, the marginal pressure is real, but it is more likely to show up as weaker bargaining power, eroded share, and compressed valuation premiums than as an outright collapse in demand. Under the super wave of AI inference, AI ASICs will keep chipping away at NVIDIA's GPU dominance, but the impact is more about reshaping industry profit pools and customer procurement structures than about rendering the logic of GPU expansion obsolete. AWS officially positions Trainium/Inferentia as dedicated accelerators for generative AI training and inference, with Trainium2 offering about 30%-40% better price performance than comparable AI GPU cloud instances; and Alphabet Inc. Class C recently publicized that Gemini 2.0's training and inference run 100% on TPUs. This indicates that hyperscale cloud companies running core model training and inference on self-developed ASICs is no longer a proof of concept but is entering a replicable, industrialized stage.
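The "unit token cost" metric at the center of this argument can be made concrete with a toy calculation. All numbers below are hypothetical placeholders, not vendor figures; the 30% gap is chosen only to mirror the rough scale of the price-performance claims discussed above:

```python
# Toy unit-token-cost model with hypothetical accelerator figures,
# illustrating why inference economics reward cost and energy
# efficiency rather than peak compute.

def cost_per_million_tokens(hourly_cost_usd, tokens_per_second):
    """USD per 1M generated tokens at full utilization."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_cost_usd / tokens_per_hour * 1_000_000

# Hypothetical general-purpose GPU instance vs. hypothetical ASIC
# instance delivering the same throughput at a lower hourly price.
gpu_cost = cost_per_million_tokens(hourly_cost_usd=4.00, tokens_per_second=5000)
asic_cost = cost_per_million_tokens(hourly_cost_usd=2.80, tokens_per_second=5000)

print(f"GPU:  ${gpu_cost:.3f} per 1M tokens")
print(f"ASIC: ${asic_cost:.3f} per 1M tokens")
print(f"ASIC saving: {1 - asic_cost / gpu_cost:.0%}")
```

At data center scale, even a saving of a few cents per million tokens compounds across trillions of served tokens, which is why the article frames the ASIC shift as a cost revolution rather than a raw-performance race.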