From "Stargate" to the AWS computing blockbuster: OpenAI is signing AI computing contracts at a furious pace, and NVIDIA Corporation (NVDA.US) and the storage giants are the big winners.

10:20 04/11/2025
GMT Eight
OpenAI is sweeping up AI computing infrastructure worldwide, signing AI computing supply agreements with Oracle, SoftBank, Amazon, and others in quick succession, with NVIDIA widely seen as the "market maker." HBM and data center storage vendors are the biggest winners after NVIDIA itself.
OpenAI, the global leader in AI applications with a valuation as high as $500 billion, has over the past month entered into AI computing resource supply agreements with a string of tech giants. The combined scale has expanded to hundreds of billions of dollars, approaching the $1.4 trillion AI computing infrastructure expenditure plan envisioned by OpenAI CEO Sam Altman. Beyond "AI chip leader" NVIDIA Corporation (NVDA.US), the three dominant suppliers of HBM storage systems, SK Hynix, Samsung, and Micron, along with enterprise data center storage giants such as Western Digital Corporation, SanDisk, and Seagate, are the biggest beneficiaries of OpenAI's recent wave of large-scale AI computing infrastructure contracts, from the $500 billion "Stargate" AI superstructure project to the latest Amazon.com, Inc. AWS computing super contract.

Across the massive AI computing infrastructure supply agreements OpenAI has signed, NVIDIA's highly sought-after and increasingly powerful AI GPU computing clusters are the most critical beneficiary of these orders, followed by the giants focused on high-performance storage products for data centers. Other AI chip leaders and cloud computing service providers are also sharing in the "super huge cake" brought by OpenAI.

The latest deal, announced Monday morning Eastern Time, is with Amazon.com, Inc.'s global cloud computing arm AWS (Amazon Web Services). Under this seven-year, $38 billion AI computing agreement, OpenAI will gain access through AWS to hundreds of thousands of NVIDIA AI GPUs, including the GB200 and GB300, running in clusters with the ability to scale to "tens of millions" of CPUs, in order to rapidly expand its generative AI and agentic reasoning workloads.
Shortly before this, OpenAI updated its cloud-based AI computing supply agreement with one of its major backers, Microsoft Corporation (MSFT.US), adding the purchase of up to $250 billion in Azure cloud computing services. The agreement includes an important condition: Microsoft will no longer have priority access to OpenAI's high-performance computing resources.

Moving the timeline back slightly, last month OpenAI disclosed a long-term computing supply partnership with AI ASIC leader Broadcom Inc. (AVGO.US) to jointly develop and deploy a custom AI ASIC computing cluster drawing up to 10 gigawatts of power. According to reports, chip design leader Arm (ARM.US) is also part of the computing infrastructure agreement between Broadcom and OpenAI, helping OpenAI build a server-grade central processing unit (an Arm-architecture data center server CPU) to work alongside the AI ASIC computing clusters being jointly designed by OpenAI and Broadcom in support of AI training/inference workloads.

Shortly before the Broadcom announcement, OpenAI had signed an innovative "equity-for-AI-computing bet" with AMD (AMD.US), NVIDIA's longest-running and strongest rival in the data center and PC markets. Under the agreement, OpenAI will deploy approximately 6 gigawatts of AMD AI GPU computing clusters over the next few years, with the initial 1-gigawatt deployment of AMD Instinct MI450 GPU clusters expected to begin in the second half of 2026. As part of the deal, AMD granted OpenAI a warrant to purchase up to 160 million shares of AMD common stock: once OpenAI completes the GPU cluster deployments and AMD's stock price reaches certain milestones, OpenAI will be able to acquire these shares at virtually no cost, potentially giving it a roughly 10% stake in the $400 billion chip giant.
Alongside its insatiable demand for AI computing infrastructure, OpenAI has rapidly built up numerous partnerships. These include an agreement reached last week with digital payment giant PayPal (PYPL.US) to embed its digital wallet in ChatGPT, following similar e-commerce ecosystem agreements that Sam Altman, the AI unicorn's founder, struck with Shopify (SHOP.US), Etsy (ETSY.US), and Walmart Inc. (WMT.US). OpenAI is also working with CRM cloud software giant Salesforce (CRM.US) to integrate ChatGPT directly into Slack, allowing CRM teams to quickly distill insights, draft content, and summarize complex conversations. Thermo Fisher Scientific Inc., a leader in life sciences and medical diagnostic testing, announced last month that it would integrate OpenAI's API into key business and operational processes such as product development, service delivery, and customer interaction.

All of these contracts come in quick succession after the $500 billion "Stargate Project" announced earlier this year, in which OpenAI and Oracle Corporation are deeply involved (the Stargate AI superstructure project has been likened to a modern Manhattan Project). The project's core partners include Oracle Corporation (ORCL.US), SoftBank, Arm, Microsoft Corporation, and NVIDIA Corporation. OpenAI has signed an AI computing infrastructure supply agreement with Oracle worth hundreds of billions of dollars, mainly for "Stargate"-related AI infrastructure. OpenAI is also involved in Stargate branch projects around the world, such as the "Stargate Global Branch" projects in the United Arab Emirates, Argentina, and Norway, cooperating closely with local data center operators; this AI computing infrastructure tends to be concentrated on NVIDIA AI GPU computing clusters. The series of major agreements appears to be paving the way for OpenAI's upcoming initial public offering (IPO).
According to reports, the IPO may take place at the end of 2026 or the beginning of 2027, by which time Wall Street could push the valuation of the creator of ChatGPT up to $1 trillion. Based on historical data, that would be the second-highest IPO market capitalization in global stock market history, after Saudi energy giant Saudi Aramco's $1.7 trillion IPO in December 2019, which raised about $25.6 billion. Such a starting scale would far exceed the $81.3 billion market capitalization attributed to Meta Platforms (then known as "Facebook") when it went public in 2012. OpenAI's latest valuation is around $500 billion.

NVIDIA AI GPUs and high-performance data center storage can be found in every AI infrastructure project

The nearly $1 trillion in cumulative AI computing infrastructure agreements OpenAI has already signed shows that these super AI infrastructure projects rely heavily on NVIDIA AI GPU computing clusters as well as enterprise-grade high-performance storage products for data centers (core products include HBM storage systems, enterprise SSDs/HDDs, server-grade DDR5, and the like). In this unprecedented AI investment cycle centered on AI model updates and the expansion and construction of AI data centers, core AI component manufacturers like NVIDIA are undoubtedly the biggest winners; close behind are the HBM suppliers (SK Hynix, Samsung, Micron) and the enterprise high-performance storage manufacturers serving AI data centers (nearline HDDs and data center SSDs). These two segments are driving a dual-engine "AI computing plus storage" investment cycle: HBM storage systems form the first tier of storage products riding closely behind AI GPU/AI ASIC computing clusters, while enterprise HDD/SSD products are another major beneficiary of the AI infrastructure construction frenzy, absorbing the flood of AI data that must be stored.
The "Stargate" project led by OpenAI is expected to consume up to 40% of global DRAM output, and agreements for up to 900,000 DRAM wafers per month, focused on DDR5 and HBM, have been signed with Samsung, SK Hynix, and other major players. AI demand is driving record profits at HBM leader SK Hynix, which posted a record operating profit of 11.4 trillion Korean won (about $8 billion) and revealed that next year's orders across its full range of storage chips, including HBM and enterprise NAND, are sold out. Its stock price has tripled this year, while Micron and Samsung have also posted triple-digit gains amid the unprecedented bull-market narrative of a storage supercycle.

Wall Street giant Goldman Sachs Group, Inc. has issued a research report stating that extremely strong enterprise demand for generative AI has driven higher AI server shipments and higher HBM density per AI GPU, significantly raising its total estimates for the HBM market. It expects the HBM market to grow at a compound annual growth rate (CAGR) of roughly 100%, from $23 billion in 2023 to $302 billion in 2026, and predicts that the HBM market's undersupply will persist in the coming years, benefiting major players such as SK Hynix, Samsung, and Micron. In the unprecedented "AI computing race" tied to the global acceleration of AI training/inference infrastructure, Morgan Stanley and other Wall Street giants have declared that the "storage supercycle" has arrived: skyrocketing demand for enterprise storage drives has pushed data storage giants Seagate (STX.US), SanDisk (SNDK.US), and Western Digital Corporation (WDC.US) to triple-digit stock price gains this year, significantly outperforming the US and even global stock markets.
Morgan Stanley stated in a research report that, amid this unprecedented frenzy of AI infrastructure construction, core storage chip demand from large enterprises and government agencies investing heavily in AI has remained extremely strong, driving significant revenue growth in data center storage businesses, including HBM storage systems, server-grade DDR5, and enterprise SSDs. Samsung has reportedly already suspended DDR5 DRAM contract quotations for October, prompting SK Hynix, Micron, and other storage OEMs to follow suit, creating a "cut-off" in the supply chain, with the resumption of quotations not expected until mid-November. Industry insiders point out that in the fourth quarter, upstream OEMs are quoting only to tech leaders and first-tier cloud giants, leaving almost no DDR5 capacity for other general customers; storage products have now fully entered a seller's market.

The unstoppable flood of AI computing power: is a $5 trillion market value far from NVIDIA's limit?

With Jensen Huang unveiling a series of positive catalysts at the GTC conference, and with Microsoft Corporation, Alphabet, and Facebook parent Meta signaling continued heavy investment in AI computing infrastructure and large-scale AI data centers in their latest earnings calls, the global AI chip industry chain has plunged into a long-term "bullish frenzy," with "AI chip superpower" NVIDIA Corporation (NVDA.US) breaking through and holding above the $5 trillion mark, becoming the first company in the world to reach a $5 trillion market value.
Recently, global prices for high-performance DRAM and NAND storage products have continued to rise. Meanwhile, OpenAI, one of the world's highest-valued AI startups, has secured over $1 trillion in AI computing infrastructure deals, and "chip foundry king" Taiwan Semiconductor Manufacturing Co., Ltd. (TSMC) and storage giants Samsung and SK Hynix have posted extremely strong earnings and raised revenue growth expectations for 2025 and 2026. Together these developments have significantly strengthened the "long-term bullish narrative" for AI GPUs, ASICs, HBM, data center SSD storage systems, liquid cooling systems, core power equipment, and other AI computing infrastructure sectors.

The AI computing demand generated by generative AI applications and AI agents now dominating the inference side amounts to a vast ocean of need that is expected to drive exponential growth in the AI computing infrastructure market, with AI inference systems also set to become the largest future revenue source for Jensen Huang's NVIDIA. The continued explosive expansion of global AI computing demand, the increasingly significant AI infrastructure investments led by the US government, and tech giants' continued heavy spending on large data centers together mean that, for investors long partial to NVIDIA and the AI computing industry chain, the "AI faith" sweeping the globe will remain a "super catalyst" for the stock prices of AI computing leaders such as NVIDIA, TSMC, Micron, SK Hynix, Seagate, and Western Digital Corporation, driving these companies to extend their "bullish curve."
The most exciting news for Wall Street analysts recently is undoubtedly Jensen Huang's projection of $500 billion in data center revenue visibility for 2025 to 2026, the cumulative data center revenue over the next five quarters from the Blackwell and next-generation Rubin architecture AI GPU product lines. Notably, this roughly $500 billion figure covers only Blackwell and Rubin: it excludes NVIDIA's high-performance networking, automotive chip, and HPC businesses, and includes no expectations for the Chinese market. Sell-side and buy-side institutions on Wall Street have already begun incorporating this astonishing projection into their models, which is why Loop Capital has set the Street-high target price of $350, implying a market value of $8.5 trillion for NVIDIA, about 70% above its latest closing price of $206.88. Not long before that, the highest target on Wall Street was HSBC's $320, a bold counter to the recent "AI bubble" narrative. In the view of top Wall Street institutions such as Loop Capital, Cantor Fitzgerald, HSBC, Goldman Sachs Group, Inc., and Morgan Stanley, NVIDIA will remain the core beneficiary of the trillion-dollar AI spending wave. In their eyes, the relentless rise in NVIDIA's stock price is far from over: Wall Street analysts keep raising their 12-month price targets, and more and more of them are setting their sights on the milestone of $300.