OpenAI reportedly "dislikes" NVIDIA Corporation (NVDA.US) AI chips; Sam Altman personally responds: the wild claims are baseless.

20:15 03/02/2026
GMT Eight
According to sources, OpenAI is dissatisfied with the performance of several of NVIDIA Corporation's latest AI chips and has been seeking alternative solutions since last year. The report points out that OpenAI's strategic shift is driven primarily by the company's growing focus on chips purpose-built for AI inference. While NVIDIA Corporation still dominates large-scale training chips, AI inference chips have emerged as a new battleground for industry competition. AI inference refers to the process in which a trained AI model applies what it has learned to new, unlabelled data in order to make predictions, reach decisions, or generate output.

In response to the reports, OpenAI CEO Sam Altman stated on the X platform, "We are very willing to cooperate with NVIDIA Corporation, as they have built the world's top AI chips. We hope to be NVIDIA Corporation's core major customer for a long time to come. I really can't understand where these crazy claims are coming from." A spokesperson for NVIDIA Corporation also said in an email, "Customers continue to choose NVIDIA Corporation's inference chips because we deliver the best performance and total cost of ownership when deployed at scale."

The report emphasizes that OpenAI's search for alternatives in the inference chip market challenges NVIDIA Corporation's dominant position in AI chips, and that this is unfolding while the two companies are in the midst of investment negotiations. In September 2025, NVIDIA Corporation announced plans to invest up to $100 billion in the Microsoft Corporation-backed OpenAI to build and deploy at least 10 gigawatts of AI data centers based on NVIDIA Corporation's systems. However, news last Friday indicated that the investment plan had stalled amid internal questioning at NVIDIA Corporation. NVIDIA Corporation CEO Jensen Huang privately told business partners that the initial agreement was not binding and had not been finalized.

Last weekend, Jensen Huang further stated that the proposed $100 billion investment in OpenAI had "never been a formal commitment" and that the AI chip giant would "evaluate each round of OpenAI's financing one by one." He also said that the investment in OpenAI's current fundraising round would not reach $100 billion, but added, "We will invest a huge amount of money, which could be the largest investment in the company's history." When asked about the reports of OpenAI's dissatisfaction, Jensen Huang bluntly stated last Saturday, "This is complete nonsense."

The report also mentions that OpenAI has partnered with companies such as AMD to purchase GPUs that compete with NVIDIA Corporation's. The adjustment to OpenAI's product roadmap not only changes the types of computing resources the company needs but also complicates its negotiations with NVIDIA Corporation. An OpenAI spokesperson stated that the majority of the company's inference computing cluster is still powered by NVIDIA Corporation chips, and that the per-unit performance of NVIDIA Corporation's inference chips remains the industry's best. Sachin Katti, OpenAI's head of compute infrastructure, also stated on the X platform, "Our partnership with NVIDIA Corporation is fundamental. Whether it is model training or inference, NVIDIA Corporation is our most important partner, and the entire computing cluster of the company runs on NVIDIA Corporation GPUs. This is not a simple vendor partnership but a deep and continuous design collaboration."
He further added, "As a result, we regard NVIDIA Corporation as the core of our training and inference computing system and plan to expand the surrounding ecosystem through partnerships with Cerebras, AMD, Broadcom Inc., and other companies. This strategy allows us to achieve faster technological iteration and broader deployment while supporting the explosive growth of AI application demand in real-world scenarios without sacrificing performance and reliability."

Sources revealed that in specific scenarios, such as software development and integrating AI with other software systems, NVIDIA Corporation's hardware falls short of OpenAI's requirements for generating answers for ChatGPT users. One source disclosed that the new hardware OpenAI is seeking would cover roughly 10% of the company's inference computing needs. OpenAI is reported to have held discussions with startups such as Cerebras and Groq to acquire high-performance inference chips. However, the report indicates that a $20 billion technology licensing agreement between NVIDIA Corporation and Groq directly led to the termination of the discussions between OpenAI and Groq.

Additionally, the report mentions that since last year, when the ChatGPT developer began looking for GPU alternatives, it has focused on a group of companies developing chips that integrate storage and compute - designs that place large amounts of static random-access memory (SRAM) alongside the other chip components on the same die. The report explains that packing as much of this costly SRAM as possible onto a single chip gives AI systems such as ChatGPT a significant speed advantage in serving requests from millions of users. The report adds that AI inference requires significantly more memory than training, and the time spent fetching data from memory far exceeds the time spent on the mathematical calculations themselves. Traditional GPU designs from NVIDIA Corporation and AMD rely on external memory, which lengthens data access times and thereby slows the interaction between users and ChatGPT (an illustrative back-of-envelope sketch of this effect appears below).

The report also notes that OpenAI's internal dissatisfaction with NVIDIA Corporation's hardware is particularly evident with Codex, one of the core products OpenAI is currently pushing; internal employees attribute some of Codex's performance shortcomings to NVIDIA Corporation's GPU hardware. At a press conference on January 30th, Altman said that customers using OpenAI's coding model have "extremely high demands on code generation speed." He revealed that one way OpenAI meets this demand is through its recent partnership with Cerebras, while response speed matters comparatively less to regular ChatGPT users.

Notably, after OpenAI voiced doubts about NVIDIA Corporation's technology, NVIDIA Corporation approached companies specializing in high-capacity SRAM chips, such as Cerebras and Groq, to explore potential acquisitions. Cerebras rejected NVIDIA Corporation's acquisition proposal and reached a commercial cooperation agreement with OpenAI last month. Groq, which had previously discussed supplying compute to OpenAI, has also attracted investor interest and is reportedly aiming to complete a new fundraising round at a valuation of around $14 billion. As of the time of writing, Groq has not responded to requests for comment.
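To illustrate why inference speed is often limited by memory rather than arithmetic, here is a rough back-of-envelope sketch. It is not based on OpenAI's, NVIDIA Corporation's, Cerebras's, or Groq's actual systems; the model size and bandwidth figures are hypothetical placeholders chosen only to show how on-chip SRAM bandwidth versus off-chip memory bandwidth changes a per-token latency estimate.

```python
# Back-of-envelope estimate of decode speed for a memory-bound transformer.
# All numbers are hypothetical placeholders, not figures for any real system.

def tokens_per_second(param_count: float, bytes_per_param: float,
                      mem_bandwidth_bytes_per_s: float) -> float:
    """Upper bound on decode speed when generating each token requires
    streaming all model weights from memory (the memory-bound regime
    described in the report)."""
    bytes_per_token = param_count * bytes_per_param
    seconds_per_token = bytes_per_token / mem_bandwidth_bytes_per_s
    return 1.0 / seconds_per_token

PARAMS = 70e9            # assumed 70-billion-parameter model
BYTES_PER_PARAM = 2      # assumed 16-bit weights

# Hypothetical bandwidths: off-chip high-bandwidth memory vs. on-chip SRAM.
HBM_BW = 3.0e12          # ~3 TB/s, rough order of magnitude for off-chip HBM
SRAM_BW = 100e12         # ~100 TB/s, rough order of magnitude for on-chip SRAM

print(f"Off-chip HBM: ~{tokens_per_second(PARAMS, BYTES_PER_PARAM, HBM_BW):.0f} tokens/s per user")
print(f"On-chip SRAM: ~{tokens_per_second(PARAMS, BYTES_PER_PARAM, SRAM_BW):.0f} tokens/s per user")
```

Under these assumed numbers, the same model generates tokens roughly 30 times faster when its weights are read from on-chip SRAM, which is the kind of speed advantage the report attributes to the storage-compute integrated designs OpenAI has reportedly been evaluating.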
The report shows that in December 2025, NVIDIA Corporation reached a non-exclusive, all-cash technology licensing agreement with Groq. Although the non-exclusive terms allow other companies to license Groq's technology, NVIDIA Corporation took on Groq's core chip design team as part of the deal, and Groq's business focus has since shifted toward selling cloud-native software.