From inference to training: Meta (META.US) expands its in-house chip strategy, with its CFO calling custom silicon a "core pillar."

08:22 05/03/2026
GMT Eight
Despite recent major deals with leading chip manufacturers, Meta Platforms Inc. (META.US) Chief Financial Officer Susan Li said clearly on Wednesday that the company remains committed to expanding the reach of its custom chips. Because some of Meta's workloads are highly specialized, she noted, self-developed chips can be tailored more closely to the company's internal algorithms. Meta has already deployed custom chips at scale in its core ranking and recommendation systems, and its strategic focus is now on gradually extending that capability into AI model training.

Though not a traditional cloud provider, Meta is one of the world's largest data center operators for training and running AI models. In recent weeks the company has signed multiple significant agreements with industry leaders NVIDIA Corporation (NVDA.US) and rival AMD (AMD.US) to purchase chips and equipment for AI workloads, while continuing to develop its own AI processors in-house.

Li emphasized that Meta buys different types of chips to match diverse task requirements. "Based on current knowledge and actual needs, we are systematically evaluating the most suitable chip solution for each application scenario," she said, "and custom chips have always been a core pillar of this strategy."

The statement signals that Meta's in-house chip program, MTIA, has entered a critical new phase. Since MTIA was publicly announced in 2023, Meta's development has focused primarily on inference, aiming to improve the efficiency of the Facebook and Instagram recommendation systems and to reduce dependence on NVIDIA's general-purpose GPUs.
With the eruption of the generative AI wave, Meta's demand for computing power is growing exponentially, and inference alone cannot support its large-model strategy. Li's latest remarks send a clear signal to the market: despite industry doubts about the difficulty of building top-tier AI training chips, Meta still firmly regards self-developed training chips as the ultimate goal of its infrastructure transformation.

The road to compute independence is not easy, however. Recent reports suggest that Meta has hit technical bottlenecks in developing cutting-edge training chips, and some high-performance projects are rumored to face schedule adjustments. To bridge the near-term gap in high-performance compute while pursuing its long-term in-house goal, Meta is adopting a flexible, diversified supply strategy. On one hand, the company has reportedly reached an agreement with Alphabet Inc. to lease its TPU resources and accelerate large-model development in the current stage; on the other, it maintains a deep procurement relationship with NVIDIA. Li's emphasis on "gradually expanding over time" implies a steady transition: first achieving breakthroughs in specific customized tasks, then tackling the harder problem of general large-model training.

From an industry perspective, Meta's chip effort reflects a logic common to hyperscalers in the AI era: full-stack self-development. By coupling chip architecture tightly with proprietary models such as Llama, Meta stands not only to cut hardware procurement and energy costs substantially over the long run, but also to insulate itself from supply-chain fluctuations.
Although moving from recommendation-system inference to training complex models poses significant architectural challenges, Meta, with its vast application base and ample cash flow, is seeking to redefine the balance of power between internet giants and hardware suppliers.