Broadcom Inc. single-handedly rejuvenates the "AI faith"! With assistance from DeepSeek, ASIC demand is poised for explosive growth.

07/03/2025 | GMT Eight
Broadcom Inc. (AVGO.US), one of the biggest winners of the global AI boom, announced results for the first quarter of fiscal 2025, ended February 2nd, on the morning of March 7th Beijing time. Broadcom Inc. is a core chip supplier to Apple Inc. (AAPL.US) and other major tech companies, as well as a key supplier of the Ethernet switch chips used in large AI data centers and of the customized AI chips (ASICs) crucial for AI training and inference. Broadcom Inc.'s stock surged nearly 20% in after-hours US trading, driven mainly by strong results and an upbeat outlook that proved to investors that tech giants such as Alphabet Inc. Class C and Meta, along with leading AI companies like OpenAI, are still investing heavily in AI computing power.

Broadcom Inc.'s strong report and guidance amount to a revival of AI faith among US tech investors, lifting chip stocks collectively in after-hours trading - a feat that even the "AI chip dominator" NVIDIA Corporation (NVDA.US) had failed to accomplish days earlier. Mixed results from Salesforce, Inc. (CRM.US), Marvell Technology, Inc. (MRVL.US), and NVIDIA Corporation had led some cautious investors, skeptical of the "monetization and commercialization path of AI," to sell popular tech stocks on the view that the AI investment frenzy had inflated a tech-stock bubble. Moreover, market expectations that the US economy could fall into stagflation under the Trump administration's "tariff storm" have added to the continuous decline of US tech stocks since the end of February. However, Broadcom Inc., a powerhouse in the AI ASIC field, has shown investors with its strong performance and optimistic outlook that demand for AI computing power is still growing explosively under the DeepSeek-led paradigm of "ultra-low-cost AI large-model computing power."
Combined with the "efficiency revolution" in AI training and inference, which is steering future AI large-model development toward the twin goals of "low cost" and "high performance," this points to a stronger demand-expansion trajectory than the AI boom of 2023-2024. Against this backdrop of rapidly rising demand for cloud AI inference computing power, clients such as Alphabet Inc. Class C, OpenAI, and Meta are likely to keep investing heavily with Broadcom Inc. in developing AI ASIC chips.

In guidance released Thursday US Eastern time, Broadcom Inc. said it expects sales of around $14.9 billion for the three months ending May 4th, surpassing analysts' average expectation of $14.6 billion - a figure that had been raised repeatedly since the beginning of the year - and highlighting ever-increasing demand for Broadcom Inc.'s Ethernet switch chips and AI ASIC chips from major enterprises, especially tech giants like Alphabet Inc. Class C and Meta. The guidance did, however, fall slightly short of the highest analyst estimate of $15.1 billion. Both first-quarter revenue and EPS exceeded analysts' average expectations, which had likewise been raised since the beginning of this year.

Overall, Broadcom Inc.'s outlook demonstrates that the historic global wave of AI computing deployment and AI spending is still under way, with its massive data center clients continuing to invest in new infrastructure and Broadcom Inc. a core beneficiary of this unprecedented wave. Notably, the AI boom had once propelled the chip company's market value above $1 trillion; in 2025, with US tech investors turned cautious by the pressure of Trump's tariffs and concerns about excess computing power, the market has been actively seeking evidence that the AI frenzy continues - and Broadcom Inc.'s results at this moment indeed provide it.
Given the continued explosive growth in demand for Broadcom Inc.'s Ethernet switch chips and AI ASIC chips, Wall Street is broadly bullish on the stock. J.P. Morgan analyst Harlan Sur stated in a recent report that, driven by demand for AI computing power and energy efficiency, tech giants such as Alphabet Inc. Class C, Microsoft Corporation, Meta, and Amazon.com, Inc. are adopting AI ASICs at large scale, making Broadcom Inc., with its immense business exposure, a likely key beneficiary. He gave Broadcom Inc. an "overweight" rating with a 12-month target price of $250, against Thursday's close near $179. The day before Broadcom Inc. released its results, another well-known Wall Street investment firm, Evercore ISI, raised its target price on the stock from $250 to $267.

Demand from "hyperscale customers" for AI ASICs keeps strengthening. Broadcom Inc.'s stock surged nearly 20% in after-hours US trading, having closed regular trading at $179.45, down a cumulative 23% in 2025. CEO Hock Tan said on the earnings call that AI-related spending was the key driver of growth in the first fiscal quarter ended February 2nd, and revealed that AI-related sales are expected to reach $4.4 billion in the current quarter.
Results show that Broadcom Inc.'s AI-related revenue grew 77% year-on-year in the first quarter to $4.1 billion, driven mainly by rising adoption of the company's customized AI accelerators - AI ASIC chips.

Before this report, Broadcom Inc.'s biggest competitor in the AI ASIC field, Marvell Technology, Inc., had announced results on Wednesday that failed to win market recognition. Although Marvell's revenue grew 27% in its most recent quarter and it forecast accelerating growth this quarter, investors judged the growth below expectations, and the stock plunged 20% on Thursday.

For the first quarter ended February 2nd, Broadcom Inc.'s earnings per share excluding special items were $1.60, with revenue up 25% year-on-year to $14.92 billion. According to data compiled by Bloomberg, analysts had estimated earnings per share of $1.50 and revenue of $14.6 billion; Broadcom Inc. exceeded both. Among other indicators, first-quarter semiconductor business revenue reached $8.2 billion, up 11% year-on-year and above analysts' expectation of $8.1 billion; first-quarter operating profit reached $6.26 billion, up 200% year-on-year. On a non-GAAP basis, adjusted EBITDA was approximately $10.083 billion, up 41%, and non-GAAP net profit was approximately $7.823 billion, up 49% year-on-year.

Although Broadcom Inc. produces a wide range of chips, including core connectivity components for iPhones and networking equipment, investors have lately been most interested in its custom chip business - the AI ASIC business. This division helps large data center clients such as Alphabet Inc.
Class C, Meta, and OpenAI develop dedicated chips for building and running artificial intelligence applications. In addition, after completing its acquisition of VMware, Broadcom Inc. has become an important supplier of enterprise management and networking software.

On the analyst call, CEO Hock Tan said Broadcom Inc. is accelerating delivery of AI ASIC chips to "hyperscale customers" such as Meta, Alphabet Inc. Class C, OpenAI, and tech giants like Apple Inc. He highlighted that in certain AI application scenarios, Broadcom Inc.'s custom semiconductors hold performance advantages over NVIDIA Corporation's general-purpose Blackwell- or Hopper-architecture AI GPUs. Hock Tan also revealed that the company is actively expanding its roster of hyperscale customers: there are currently three, with four more in the pipeline, two of which are expected to become significant revenue generators. "Our hyperscale partners are still actively investing," he emphasized, adding that he expects to complete custom processor (XPU) designs for two hyperscale customers this year.

Notably, revenue from these potential new hyperscale customers is not reflected in the company's current forecast for the AI field (expected to reach $60-90 billion by 2027), suggesting Broadcom Inc.'s market opportunity may exceed current market expectations. At a previous earnings meeting, Broadcom Inc.'s management projected that by fiscal 2027, the potential market for AI components (Ethernet chips plus AI ASICs) designed for global data center operators could reach as high as $60-90 billion. "In the next three years, the opportunities related to AI chips are incredibly vast," Hock Tan emphasized on the call.
"The company is collaborating with large cloud computing customers to develop customized AI chips. We currently have three hyperscale cloud customers, each of whom has developed its own multi-generation 'AI XPU' roadmap and plans to deploy at different speeds over the next three years. We believe that by 2027, each of them plans to deploy clusters of 1 million XPUs on a single architecture." Here, XPU refers to custom AI accelerator hardware - chiefly AI ASICs, FPGAs, and other accelerators beyond NVIDIA Corporation's AI GPUs.

Broadcom Inc.'s Ethernet switch chips are used mainly in data centers and server clusters, where they process and transmit data flows efficiently and quickly. They are essential to AI hardware infrastructure because they ensure fast data transfer among GPU processors, storage systems, and the network - crucial for applications that must ingest massive data volumes with real-time responsiveness, from ChatGPT to text-to-image models like DALL-E and video-generation models like Sora.

Furthermore, with its technological leadership in chip-to-chip communication and high-speed inter-chip data transfer, Broadcom Inc. has in recent years become a key player in the custom AI ASIC market. Google's TPU AI accelerator is the prime example: Broadcom Inc.'s and Google's teams jointly develop the TPU, and beyond chip design, Broadcom Inc. supplies Google with essential chip-to-chip communication intellectual property and is responsible for manufacturing, testing, and packaging the new chips - a crucial part of Google's AI chip development process as Alphabet Inc. Class C builds out new AI data centers.

Through a series of high-profile acquisitions, Hock Tan has built one of the most valuable companies in the chip industry.
The software division he reorganized after acquiring VMware is expected to rival the semiconductor business in scale. This broad footprint makes Broadcom Inc.'s guidance a barometer of demand for the entire technology industry. The report shows quarterly semiconductor revenue of $8.21 billion, up 11% year-on-year, and software revenue of $6.7 billion, both beating expectations.

Alphabet Inc. Class C and Meta are leading the "AI ASIC super wave." The extraordinarily strong demand for the Ethernet switch chips and AI ASIC chips at the heart of AI training/inference systems shows clearly in Broadcom Inc.'s consistently strong revenue at the start of fiscal 2024 and fiscal 2025, with customized AI ASIC chips becoming an ever more important revenue source. With its unique chip-to-chip communication technology and numerous data transmission patents, Broadcom Inc. is currently the most important player in the AI ASIC chip market. Not only does Alphabet Inc. Class C continue to choose Broadcom Inc. as its partner for designing and developing customized AI ASIC chips, but giants like Apple Inc. and Meta, along with more data center service operators, are expected to work with Broadcom Inc. long-term to build high-performance AI ASICs. With DeepSeek driving a sharp drop in AI training costs and a sudden fall in per-token inference costs, AI agents and generative AI software are expected to accelerate their penetration into every industry.
Judging from the responses of Western tech giants like Microsoft Corporation, Meta, and ASML Holding NV ADR, they are impressed by DeepSeek's innovation but undeterred in their determination to invest massively in AI. They believe the new technology route led by DeepSeek can drive a general decrease in AI costs, and that for the far larger market of AI applications at the edge, an even larger demand for cloud-based AI inference computing power is inevitable. Microsoft Corporation CEO Satya Nadella has invoked the "Jevons paradox": when technological innovation dramatically improves efficiency, resource consumption does not fall but rises. Transplanted to AI computing power, the massive expansion in the scale of AI model applications will bring unprecedented demand for AI inference computing. For example, after DeepSeek was integrated into WeChat, its deep-thinking mode has often been unable to keep up with user demand - evidence that current AI computing infrastructure falls far short of requirements.

Morgan Stanley recently pointed out in a research report that the huge capital expenditures expected from US tech giants in 2025, plus the "Stargate" project's plan to invest $500 billion over the next four years ($100 billion of it to be deployed soon), indicate that demand remains very strong both for NVIDIA Corporation's AI GPUs and for the AI ASICs produced by ASIC makers. In addition, the massive penetration of DeepSeek's large models across industries in China will ignite a new surge of demand along the AI chip industry chain.
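The Jevons-paradox argument above can be made concrete with a toy calculation. The figures below are purely illustrative assumptions, not numbers from this report: suppose a DeepSeek-style efficiency gain cuts the compute cost per token by 10x, but cheaper inference expands token demand by 25x - total compute consumed still rises.

```python
# Toy illustration of the Jevons paradox applied to AI inference.
# All numbers are hypothetical assumptions chosen for illustration only.

def total_compute(cost_per_token: float, tokens_demanded: float) -> float:
    """Total compute consumed = unit cost x volume."""
    return cost_per_token * tokens_demanded

# Baseline: before the efficiency breakthrough (normalized units).
base_cost = 1.0          # compute cost per token
base_demand = 100.0      # tokens demanded

# After: efficiency cuts unit cost 10x, but cheaper inference
# unlocks far more applications, expanding demand 25x.
new_cost = base_cost / 10
new_demand = base_demand * 25

before = total_compute(base_cost, base_demand)   # 100.0
after = total_compute(new_cost, new_demand)      # 250.0

# Despite a 10x efficiency gain, total compute consumed rises 2.5x.
print(f"before={before}, after={after}, ratio={after / before}")
```

Under these assumed elasticities, a 10x efficiency gain yields a 2.5x increase in total compute consumption - the qualitative pattern Nadella's remark describes.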
Morgan Stanley has raised its 2025 spending expectations for North American cloud computing giants, lifting the expected year-on-year growth rate from 29% to 32%, which implies capital expenditure by the top ten North American cloud service giants of $350 billion in 2025, based mainly on the significant expansion of cloud-based AI inference demand. As US tech giants keep investing heavily in artificial intelligence, the biggest beneficiaries are likely to be the AI ASIC giants: Broadcom Inc., Marvell Technology, Inc., and Taiwan's MediaTek. Microsoft Corporation, Amazon.com, Inc., Alphabet Inc. Class C, and Meta, as well as generative AI leader OpenAI, all without exception collaborate with Broadcom Inc. or other ASIC giants to develop and deploy AI ASIC chips for mass-scale AI inference. The market share of AI ASICs is therefore expected to expand much faster than that of AI GPUs, eventually reaching rough parity rather than today's situation in which AI GPUs command up to 90% of the AI chip market.

This transition will not be immediate, however. With AGI still under research and development, the flexibility and versatility of AI GPUs remain capabilities on which AI training heavily relies. Large models such as the GPT family and the open-source Llama family still demand operator flexibility and variable network structures during their "research exploration" and "rapid iteration" stages - a major reason general-purpose GPUs retain their advantage. On their earnings calls, Alphabet Inc. Class C's Pichai and Meta's Zuckerberg both expressed the intention to deepen collaboration with chip maker Broadcom Inc. to launch self-developed AI ASICs.
Both tech giants' AI ASIC efforts depend on strategic partners that lead in custom chip design, Broadcom Inc. chief among them. The TPU (Tensor Processing Unit) that Alphabet Inc. Class C developed in collaboration with Broadcom Inc. is the archetypal AI ASIC. Meta previously worked with Broadcom Inc. on the design of its first- and second-generation AI training/inference accelerators, and the two are expected to accelerate development of Meta's next-generation AI chip, MTIA 3, in 2025. OpenAI, backed by a large investment from and deep cooperation with Microsoft Corporation, announced last October that it will work with Broadcom Inc. to develop its first AI ASIC chip.

As large-model architectures gradually converge on a few mature paradigms (such as standardized Transformer decoders and Diffusion-model pipelines), ASICs can more easily absorb mainstream inference workloads. Some cloud providers and industry giants will deeply integrate their software stacks, making ASICs compatible with common network operators and providing excellent developer tools, which will accelerate the adoption of ASIC inference in standardized, massive-scale scenarios. Looking ahead, NVIDIA Corporation's AI GPUs may focus more on ultra-large-scale frontier exploratory training, fast-changing multimodal or novel-architecture experiments, and general-purpose computing for HPC, graphics rendering, and visual analytics, while AI ASICs focus on extreme optimization of deep-learning-specific operators and data flows, specializing in inference over stable structures, high-throughput batching, and high energy efficiency.
For example, if a cloud platform's AI workload leans heavily on the common operators of CNNs and Transformers (matrix multiplication, convolution, LayerNorm, Attention, and so on), most AI ASICs can be deeply customized for exactly those operators. Once model structures are fixed - image recognition (the ResNet series, ViT), Transformer-based automatic speech recognition, Transformer decoder-only models, and some multimodal pipelines - all can be optimized to the extreme on ASICs. As Morgan Stanley predicts in its research report, the two will coexist in the long run, with AI ASICs' market share expanding significantly in the medium term: NVIDIA Corporation's general-purpose GPUs will focus on complex, fast-changing scenarios, while ASICs take on high-frequency, stable, large-scale AI inference workloads and a portion of mature, fixed training pipelines.
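The fixed operator pipeline described above can be sketched in a few lines of NumPy. This is an illustrative reference implementation of a pre-norm attention block (LayerNorm, QKV matrix multiplications, scaled dot-product attention), not Broadcom Inc.'s or any vendor's actual kernel code; it simply shows how a handful of dense, shape-stable kernels dominate every Transformer forward pass, which is what makes them amenable to ASIC specialization.

```python
import numpy as np

# The small set of operators that dominates Transformer inference;
# AI ASICs harden exactly these dense, shape-stable kernels.

def layer_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize each token vector to zero mean, unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: two matmuls around a softmax."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores) @ v

def attention_block(x, wq, wk, wv):
    """One pre-norm attention block: LayerNorm -> QKV matmuls -> attention,
    plus a residual connection. Shapes are fixed at deployment time."""
    h = layer_norm(x)
    return x + attention(h @ wq, h @ wk, h @ wv)

# Illustrative run with random weights (hypothetical sizes).
rng = np.random.default_rng(0)
seq, dim = 4, 8
x = rng.standard_normal((seq, dim))
wq, wk, wv = (rng.standard_normal((dim, dim)) for _ in range(3))
y = attention_block(x, wq, wk, wv)
print(y.shape)  # (4, 8)
```

Because the shapes, operators, and dataflow above never change between requests once a model is deployed, an ASIC can dedicate silicon to these few kernels - the specialization-versus-flexibility trade-off the paragraph describes.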

Contact: contact@gmteight.com