The golden age of the AI ASIC is dawning! As the wave of AI inference sweeps across the globe, Broadcom Inc. (AVGO.US) has unveiled a multi-billion-dollar blueprint that strikes directly at NVIDIA Corporation's territory.
Broadcom reported better-than-expected quarterly results and guidance, and announced a stock buyback plan of up to $10 billion. The company expects sales of its artificial intelligence chips to exceed $100 billion next year.
Broadcom Inc. (AVGO.US), one of the biggest winners of the global AI boom, reported results for its first quarter of fiscal 2026, ended February 1, and issued guidance for the second quarter. Both the latest results and management's outlook for the coming fiscal quarter exceeded Wall Street analysts' expectations. In particular, the prospect of AI chip revenue reaching $100 billion further validates the view that the AI boom is still in the early, supply-constrained stage of the computing-infrastructure buildout. It also underscores surging demand for cloud-based AI inference computing power as the inference era approaches, along with the trend of "micro training" focused on embedding large AI models into enterprise operations, posing a stronger challenge to NVIDIA Corporation's near-90% share of the AI chip market.
Broadcom Inc. is a core chip supplier to Apple Inc. and other major technology companies, as well as a key supplier of the high-performance Ethernet switch chips used in large-scale AI data centers worldwide. It is also the leading supplier of custom AI ASICs, chips critical to AI training and inference, for the cloud computing giants.
After the strong results and outlook were announced, Broadcom Inc.'s shares surged more than 5% in after-hours US trading, lifting AI chip supply-chain names such as Taiwan Semiconductor Manufacturing Co., Ltd. (ADR) and Micron. The report single-handedly revived the market's recently lukewarm "AI faith," showing that technology giants such as Alphabet Inc. and Meta, along with AI leaders such as OpenAI and Anthropic, are sustaining strong AI infrastructure spending, and that computing demand for top global AI platforms such as Gemini, Claude, and ChatGPT continues to explode. In addition, management announced a stock buyback plan of up to $10 billion, to run through the end of the year, signaling that its push to seize the unprecedented opportunity in AI computing spending is paying off.
The most important takeaway from the report is the CEO's announcement that "AI chip"-related revenue will exceed $100 billion next year, marking a major market-share and technology advance in a sector dominated by NVIDIA Corporation, the "AI chip superpower."
"We have a very clear line of sight to reaching this milestone in 2027," CEO Hock Tan said on a conference call with Wall Street analysts. "We have also secured the chip supply chain needed to achieve this goal."
The company expects AI-related revenue of $10.7 billion in the current quarter, which implies that reaching a $100 billion annual run rate will require global AI computing demand to climb substantially further. Under Hock Tan's leadership, Broadcom has increasingly tied its fate to the unprecedented AI infrastructure frenzy, positioning its custom semiconductor business as a cost-effective, energy-efficient alternative source of AI computing power. The $100 billion AI revenue target covers both custom "AI ASIC computing clusters," which compete fiercely against NVIDIA Corporation's dominant AI GPUs, and AI networking chips, namely high-performance Ethernet switch silicon.
On the latest headline numbers: for the first quarter, ended February 1, Broadcom's total revenue rose 29% year over year to $19.3 billion, and adjusted earnings per share, excluding certain items, were $2.05. Both beat analysts' average estimates of about $19.2 billion in revenue and about $2.03 in earnings per share.
Broadcom said AI-related revenue doubled in the period to $8.4 billion, well ahead of the company's prior forecast. In a statement, Hock Tan said the growth "is being driven by strong demand for custom AI ASIC accelerators and high-performance AI networking equipment." First-quarter semiconductor solutions revenue, which includes AI ASICs and smartphone RF chips, reached $12.515 billion, up 52% year over year.
On the call, Hock Tan said he expects OpenAI to begin large-scale shipments of the AI ASICs it is co-developing with Broadcom next year, with computing capacity expected to exceed 1 gigawatt. He added that demand for Alphabet's TPU is very strong and will accelerate further in 2027. Broadcom also plans to ship AI ASICs co-developed with Anthropic; the AI application leader, which currently runs on Alphabet's TPUs, is targeting 1 gigawatt of computing capacity this year and more than 3 gigawatts next year.
As for the most anticipated outlook, the company expects total revenue of approximately $22 billion for the second quarter, ending May 3, implying year-over-year growth of about 47% and coming in well above Wall Street's average estimate of roughly $20.5 billion, though some analysts had forecast more than $22 billion.
This year the market has been deeply skeptical about AI computing leaders such as Broadcom and NVIDIA, worrying that the enormous, sustained spending on AI computing power may not be sustainable. As of the close, Broadcom's shares had fallen 8.3% year to date. Investors are increasingly wary of an unprecedented AI spending bubble; even NVIDIA's blowout earnings report last month failed to lift sentiment, and that stock dropped sharply after the report. The key questions are whether this wave of AI enthusiasm can run for another decade or two, and whether global AI computing spending that could reach trillions of dollars before 2030 will generate revenue that outstrips the outlay.
TPU at full power! The golden age of AI ASIC is here
In recent years, huge custom AI ASIC orders from AI leaders such as Alphabet, OpenAI, and Anthropic PBC have driven Broadcom's market value past $1.5 trillion. Growing interest from enterprises worldwide in building AI computing clusters around Alphabet's TPU (Tensor Processing Unit) has also benefited Broadcom, which has long partnered with the tech giant on the TPU's core silicon. Meanwhile, Broadcom has just shipped the first batch of its new generation of computing processors, and about six other large customers are expected to adopt this generation of ASIC products this year.
Beyond its custom AI ASIC business, the company continues to upgrade its high-performance networking equipment to better connect the massive computing resources needed to run AI models. Hock Tan has also built a large software business that benefits from the wave of cloud-based AI training and inference.
Broadcom's strong report provides "earnings-level evidence" for the unprecedented growth of the AI ASIC. The global generative AI boom has pushed cloud and chip giants to race to design the fastest, most energy-efficient computing clusters for advanced AI data centers. Broadcom and its main competitor Marvell focus on leveraging their commanding lead in high-speed interconnects and chip IP to build custom AI ASIC computing clusters with cloud giants such as Amazon.com, Inc., Alphabet Inc., and Microsoft Corporation, tailored to each customer's specific data-center needs. This ASIC business has become very important to both companies, and Broadcom's partnership with Alphabet on the TPU computing cluster is the archetypal example of the ASIC route.
Economic and energy constraints are unquestionably pushing Microsoft, Amazon, Alphabet, and Facebook parent Meta to develop in-house ASICs for their internal cloud systems to address cost and power. Under hard power limits, the tech giants are optimizing for cost per token and output per watt, and the flourishing era of AI ASIC technology has undoubtedly arrived.
Moreover, given the scarcity and high cost of NVIDIA's Blackwell-generation AI GPU clusters, self-developed AI ASICs undoubtedly provide "second-curve capacity" and give the giants more leverage in procurement negotiations, product pricing, and cloud-service margins. The vertically integrated design practiced by cloud giants such as Alphabet and Microsoft, spanning chips, interconnects, systems, compilers and runtimes, scheduling, and observability and reliability, further raises infrastructure utilization and cuts total cost of ownership (TCO).
NVIDIA's AI GPUs, which dominate AI training, serve workloads that demand versatility and rapid iteration across the entire computing system. On the inference side, however, once frontier AI models scale out, the emphasis shifts to per-token cost, latency, and energy efficiency. Alphabet, for example, explicitly positions Ironwood as a TPU generation "born for the AI inference era," emphasizing performance, energy efficiency, cluster cost-effectiveness, and scalability. Amazon's recent moves, meanwhile, show that AI ASICs also have strong potential for training large models.
Over the medium to long term, AI ASIC systems will continue to erode NVIDIA's monopoly premium and some of its market share, rather than linearly replacing GPU systems. The fundamental reason is that competition in the inference era is no longer just about peak compute; it is about per-token cost, power consumption, memory-bandwidth utilization, interconnect efficiency, and total cost of ownership after hardware-software co-design. ASICs tailored to specific workloads are naturally more cost-effective than general-purpose GPUs. AI data centers are therefore likely to see GPUs keep dominating frontier training and general-purpose cloud compute, while large-scale internal inference, agent workflows, and fixed high-frequency workloads accelerate the shift to ASICs. Data centers will enter a true era of heterogeneous computing power.
Broadcom Inc. will lead the AI ASIC revolution! Wall Street is bullish on Broadcom Inc.'s stock hitting new highs
Amazon's AWS officially positions its AI ASIC computing clusters, Trainium and Inferentia, as dedicated accelerators for generative AI training and inference, with Trainium2 offering roughly 30%-40% better price-performance than its AI GPU cloud instances. Alphabet recently disclosed that Gemini 2.0's training and inference run 100% on TPUs. These signals indicate that hyperscale clouds running core model training and inference on commercial ASICs are no longer just a concept but are entering a reproducible, industrialized stage.
In the era of frontier training, what the AI field needed most was generality, software maturity, and rapid adaptation to new model architectures, which is why GPUs naturally held the upper hand. But as the industry moves from "scarce training" to "scalable inference, agent-driven workloads, long context, and low latency," the core KPIs shift from peak compute to per-token cost, per-watt throughput, and system-level TCO. That is the fundamental reason hyperscalers are collectively accelerating their ASICs: Alphabet defines the Ironwood TPU as the best cluster for the "inference era," scalable to 9,216 chips; Microsoft positions its new Maia 200 AI ASIC squarely as a cloud inference accelerator, claiming 30% better performance per dollar than its current latest-generation hardware; and AWS pitches Trainium3 as a chip chasing the "best token economics," boasting more than a 4x efficiency gain. All of this suggests the market's concern about NVIDIA's growth prospects is justified, as the cloud giants launch an "AI computing cost revolution" to push AI ASIC penetration at larger scale.
According to a research report from Counterpoint Research, Broadcom Inc. will continue to maintain its absolute leading position in AI data center server ASIC design partnerships by 2027, with a market share of 60%. Counterpoint also predicts that by 2028, the shipment volume of AI server ASICs will exceed 15 million units, surpassing the overall shipment volume of data center AI GPUs.
Counterpoint expects that as Alphabet, Amazon, Apple, Microsoft, ByteDance, and OpenAI accelerate deployment of massive AI server clusters for training and inference workloads, ASIC shipments will more than double by 2027. Counterpoint attributes this rapid growth to demand for Alphabet's TPU infrastructure (supporting the Gemini project), the continued expansion of Amazon's Trainium clusters, and added capacity from Meta's MTIA and Microsoft's Maia ASICs as those internal product lines scale.
Wall Street analysts are extremely optimistic about the revenue and profit growth prospects of Broadcom Inc.'s AI-related business, with target stock prices ranging from $450 to $535 over the next 12 months. In comparison, Broadcom Inc.'s stock closed at $317.53 on Wednesday. Among the 55 Wall Street analysts tracking the stock, 96% have given a "buy" rating, with an average target price of around $454.
Wall Street's long-term bull case for Broadcom rests on three pillars: explosive growth in the AI computing business, as a key technology partner for Google's TPU clusters that benefits directly from the expanding AI capital expenditures of cloud giants such as Alphabet, Meta, and OpenAI; an increasingly large order backlog; and the stability of its infrastructure software business, where the successful acquisition and integration of VMware Cloud Foundation (VCF) provides strong cash flow and a software growth engine closely tied to cloud-based AI training and inference.