OpenAI is considering renting chips from NVIDIA Corporation (NVDA.US) to cut its costs by 10%-15%.

24/09/2025
GMT Eight
OpenAI is in talks with NVIDIA about leasing artificial intelligence chips, aiming to ease cost pressure in the two companies' large-scale new data center partnership.
According to reports, OpenAI is discussing with NVIDIA Corporation (NVDA.US) the feasibility of renting artificial intelligence chips rather than buying them, in order to reduce cost pressure in the two companies' large-scale new data center partnership. People familiar with the matter said that by renting server chips instead of purchasing them outright, OpenAI could cut the related costs by 10% to 15%. Under a GPU rental model, OpenAI would not need to raise additional funds for chip purchases and could receive the chips more quickly.

Currently, the ChatGPT developer led by Sam Altman rents NVIDIA Corporation chips from cloud service providers such as Microsoft Corporation (MSFT.US) and Oracle Corporation (ORCL.US). The new rental agreement with NVIDIA Corporation may run for a five-year term, similar to OpenAI's existing rental agreements with Oracle Corporation.

The backdrop to this rental arrangement is NVIDIA Corporation's recent strategic plan: it intends to invest up to $100 billion in the Microsoft Corporation-backed OpenAI to build and deploy at least 10 gigawatts of AI data centers running on NVIDIA Corporation's systems. The first phase of the project is expected to start in the second half of 2026, built on NVIDIA Corporation's Vera Rubin platform.

Regarding this massive project, Altman explained its significance in a blog post. He emphasized that with 10 gigawatts of computing power, artificial intelligence might find a cure for cancer or provide personalized tutoring to every student in the world. "If we are limited by computing power and forced to choose among many important applications, no one wants to see that happen, so we must break through our limits." He added that in the future, the capacity to build an additional 1 gigawatt of AI infrastructure per week may be created to keep expanding computing power.

Altman also said that more details of the plans and the partnership will be disclosed in the coming months. He specifically noted, "Later this year, we will share specific financing plans; given the direct link between computing power expansion and revenue growth, we have some interesting new ideas."