The AI agent wave is here, and the CPU is having its "Renaissance moment"! Intel, AMD, and Arm share prices soared in tandem.
The distribution of value along the AI computing infrastructure chain is also beginning to shift. The next round of excess alpha returns will no longer accrue only to the leaders in AI GPUs and AI ASICs; it will diffuse systematically to CPUs, storage, PCBs, liquid cooling systems, ABF substrates, wafer foundries, and the rest of the full-stack AI computing infrastructure layers.
In early US trading on Friday, the two x86 CPU giants, Intel Corporation (INTC.US) and AMD (AMD.US), hit all-time highs together, a new milestone for the x86 architecture. Intel surged more than 27% on results that far exceeded expectations, while AMD, the other dominant player in high-performance x86 server CPUs for AI data centers, jumped over 14% to a record high. Arm Holdings Plc (ARM.US), owner of the ARM instruction set architecture, also soared to an all-time high, underscoring growing investor interest in the architecture's energy-efficiency and low-power advantages.
With the launch of Anthropic's Claude Cowork and other autonomous task-executing AI agent tools such as OpenClaw, an AI agent wave erupted in 2026 and swept the globe. The bottleneck in AI computing architecture is shifting from GPU matrix-multiplication throughput to data center CPUs, which handle control flow, task scheduling, and memory/IO coordination. High-performance CPUs for hyperscale AI data centers now face a severe supply shortage.
For the past two years the AI narrative has been dominated by GPUs, with CPUs playing a supporting role in the AI arms race. But with the rise of open-source agentic AI workflows such as OpenClaw, the market has recognized how crucial CPUs are to keeping GPU clusters running efficiently. The shift marks the return of the CPU from underestimated infrastructure component to the center stage of AI data centers, a revival reminiscent of the Renaissance.
As the AI agent era begins, computing systems are moving from a GPU-centric model to a more complex heterogeneous paradigm in which CPUs handle large-scale scheduling, data movement, memory management, model invocation, toolchain orchestration, inference request distribution, database retrieval, network communication, and security isolation. In other words, CPUs are no longer mere "background components" of AI data centers; they have become the central nervous system and scheduling brain of the AI factory. The transformation matches the core imagery of a "Renaissance": a traditional computing architecture, long undervalued by the market and overshadowed by GPUs, regaining relevance and pricing power in the capital markets.
The rapid build-out of AI data centers has pushed Intel's data center CPUs into shortage. Lead times for high-performance server CPUs have stretched to as long as six months, prompting an across-the-board price increase of roughly 10% for data center CPUs. That is why Intel's stock, which had languished for a year and a half, has surged more than 120% this year to a new all-time high.
The Middle East conflict has failed to dampen the "AI bull market" narrative! GPUs no longer monopolize computing power, as the agent wave drives a surge in CPU demand.
Morgan Stanley, Stifel, DA Davidson, and other Wall Street financial giants believe that the two major PC and data center CPU giants - Intel Corporation (INTC.US) and AMD (AMD.US) are in the most advantageous core position to benefit from the record-breaking surge in demand for data center CPUs. Additionally, top Wall Street analysts believe that storage chip giants will also benefit from the exponential expansion of CPU demand, with Morgan Stanley seeing major US storage vendors Micron Technology Inc (MU.US) and SanDisk (SNDK.US) in a prime position.
With South Korea's benchmark KOSPI, dominated by heavyweights Samsung and SK Hynix, hitting a new high despite worsening geopolitical pressure; with Taiwan Semiconductor Manufacturing Co., the "foundry king," driving Taiwan's stock market to a record; and with the Philadelphia Semiconductor Index posting a record 17 consecutive gains, investors are increasingly convinced that the "AI computing investment theme" will drown out all market noise.
At the same time, the weight distribution in the AI computing infrastructure value chain is shifting toward CPUs, storage, PCBs, liquid cooling systems, ABF substrates, and wafer foundry services, a systemic expansion across all AI computing infrastructure layers. In this transition, Wall Street firms such as Morgan Stanley see data center CPUs and DRAM/NAND storage chips as the most direct beneficiaries among AI computing subcategories.
In the agent era, a significant share of the workload is consumed not only by token generation on GPUs but by CPU-dominated tasks: Python interpretation, web crawling, database retrieval, RAG index access, tokenization, task queue scheduling, RPC/IPC communication, KV state updates, and more. User experience therefore hinges not just on the peak compute of a single GPU but on whether the CPU has enough cores, concurrent threads, cache, memory bandwidth, and PCIe/CXL interconnect scheduling capacity to sustain high-frequency tool calls and high-density task switching. If CPU cores, the memory subsystem, or I/O scheduling fall short, GPU utilization collapses under data preparation, task coordination, and system wait times, no matter how much nominal compute is on hand.
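The utilization argument above can be made concrete with a back-of-the-envelope queueing sketch. The timings and worker counts below are illustrative assumptions, not measured figures from any real deployment:

```python
def gpu_utilization(cpu_prep_ms: float, gpu_step_ms: float, cpu_workers: int) -> float:
    """Fraction of time the GPU stays busy when CPU-side preparation
    (tokenization, tool calls, scheduling) gates each request.

    The CPU pool feeds requests at cpu_workers / cpu_prep_ms per ms;
    the GPU consumes them at 1 / gpu_step_ms per ms. When the feed
    rate falls below the consume rate, the GPU idles waiting for work.
    """
    feed_rate = cpu_workers / cpu_prep_ms
    consume_rate = 1.0 / gpu_step_ms
    return min(1.0, feed_rate / consume_rate)

# Hypothetical numbers: 40 ms of CPU prep per request vs. a 10 ms GPU step.
print(gpu_utilization(40, 10, cpu_workers=2))  # 0.5 - GPU idles half the time
print(gpu_utilization(40, 10, cpu_workers=8))  # 1.0 - enough CPU to saturate
```

The point of the sketch is the asymmetry: doubling GPU speed here does nothing, while adding CPU workers lifts utilization linearly until the GPU saturates.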
It is undeniable that the bottleneck in AI computing architecture is shifting from GPU-centered matrix multiplication throughput to data center CPUs focusing on control flow, task scheduling, and memory/IO coordination. This transformation is driven by a fundamental shift in the paradigm of workload patterns. CPUs are no longer just general-purpose computing chips but have become the control plane processor, system orchestrator, and resource scheduler of the intelligent agent era. The notion that the "underestimated CPU is becoming a new bottleneck in AI" is not an emotional judgment but a logical outcome of AI workloads evolving from "computational inference problems" to "complex system engineering problems."
In the early stage of large-model inference, the focus was "single request, single generation," and CPUs mainly handled data movement, request routing, and basic scheduling as auxiliary control functions. In the era of agents and reinforcement learning, however, the workload has evolved from simple forward inference into a complex closed loop of task planning, tool calling, sub-agent coordination, environmental interaction, state management, and result validation. The orchestration layer at the heart of these tasks is inherently CPU-bound: control flow, branch decisions, system calls, and memory access are work GPUs cannot efficiently take over. CPUs are thus moving from the background to become the new bottleneck that determines system throughput, latency, and resource utilization.
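The closed loop described above — plan, call a tool, validate, repeat — is exactly the kind of branch-heavy control flow that runs on the CPU. A minimal sketch, where the planner and the tool are hypothetical stand-ins rather than any real agent framework:

```python
def plan(state):
    # Branchy CPU-side decision: if nothing has been tried yet, search;
    # otherwise treat the last result as validated and finish.
    if not state["history"]:
        return {"action": "search", "arg": state["task"]}
    return {"action": "done", "result": state["history"][-1][1]}

def run_agent(task, tools, max_steps=5):
    state = {"task": task, "history": []}
    for _ in range(max_steps):
        step = plan(state)                            # control flow / branching
        if step["action"] == "done":
            return step["result"]
        result = tools[step["action"]](step["arg"])   # tool call: CPU/IO bound
        state["history"].append((step["action"], result))
    return None

# Toy tool; a real agent would hit a search API, database, or interpreter here.
print(run_agent("cpu demand", {"search": lambda q: f"3 hits for {q!r}"}))
```

Note that nothing in this loop is a matrix multiplication: every step is dictionary lookups, branching, and function calls, which is why the orchestration layer scales with CPU cores rather than GPU FLOPS.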
Morgan Stanley's latest forecast holds that the agent explosion marks a structural shift from computation to orchestration, creating an estimated $32.5 billion to $60 billion of incremental market space for CPUs by 2030 and expanding the total addressable market (TAM) for server-class CPUs to $82.5 billion to $110 billion. A TrendForce report predicts that in the agent era the CPU:GPU ratio in AI data centers may be substantially revalued, shifting from the traditional 1:4 to 1:8 toward 1:1 to 1:2.
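TrendForce's ratio shift implies a straightforward multiplier on CPU sockets for a fixed GPU fleet. A quick calculation using the report's own endpoints (the fleet size is an illustrative assumption):

```python
def cpus_for_fleet(gpus: int, cpu_to_gpu: float) -> float:
    """CPU sockets needed for a GPU fleet at a given CPU:GPU ratio,
    e.g. cpu_to_gpu = 1/8 for the traditional 1:8 deployment."""
    return gpus * cpu_to_gpu

old = cpus_for_fleet(100_000, 1 / 8)  # 12,500 CPU sockets at 1:8
new = cpus_for_fleet(100_000, 1 / 2)  # 50,000 CPU sockets at 1:2
print(new / old)                      # 4.0 - four times the CPU demand per fleet
```

Even the conservative end of the shift (1:4 to 1:2) doubles CPU sockets per fleet, which is the mechanical basis for the TAM expansion cited above.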
On Wall Street, analysts are bullish on AMD and ARM, believing that their upward momentum is far from over. At the time of writing, Intel Corporation's stock price hovers around $85, having surged by more than 27% intraday, exceeding the optimistic target prices of most analysts on Wall Street. However, AMD and ARM still have some distance to go to reach the highest target prices set by Wall Street.
A recent investor note led by Morgan Stanley analyst Joseph Moore stated: "The evident beneficiaries of the CPU strength, Intel and AMD, have somewhat complicated strategic setups, but the exponential expansion in server CPU demand is crucial to both companies' profit outlooks."
"Between the two, we favor AMD; moreover, at this juncture we believe storage chip makers offer a significantly better risk-reward, since storage is one of the direct beneficiaries of expanding CPU demand," the Morgan Stanley analysts led by Joseph Moore stated.
Following Intel's strong earnings report, the DA Davidson team led by senior analyst Gil Luria upgraded AMD (AMD.US) ahead of Friday's premarket and raised its 12-month target price to $375, the highest on Wall Street. At the time of writing, AMD's stock had soared 14% to around $348.
"We have upgraded AMD from neutral to buy and raised our target price from $220 to $375. The basis is structural growth in CPU demand and AMD's increasingly visible role in this great wave of data center construction. Given Intel's far-better-than-expected results, there is significant upside to AMD's estimates, which should begin to show in the March-quarter results to be announced on May 5th," the DA Davidson team led by Gil Luria stated.
"We believe Intel's results are a precursor to a major leap in AMD's CPU business, and that the structural shift toward agentic AI workloads is creating unprecedented demand for server CPUs. Given our assessment that demand will exceed supply for the foreseeable future, AMD is well positioned to raise prices significantly across its entire product portfolio to support and expand margins," the team added.
As for Wall Street's bullish thesis on ARM, the core logic has shifted from "smartphone IP licensing company" to core beneficiary of AI data center CPUs and the Agentic AI infrastructure wave. Notably, Guggenheim recently raised its ARM target price to $240, the highest on Wall Street, viewing ARM as transitioning from a traditional smartphone IP licensor into a direct participant in AI data center silicon and supercomputing platforms.
Recent announcements revealed that Amazon.com, Inc. (AMZN.US) and Facebook parent Meta Platforms Inc. (META.US) have struck a multi-billion-dollar long-term agreement under which the social media giant will lease hundreds of thousands of Amazon's in-house ARM-based general-purpose data center server CPUs for the large-scale AI data centers it is building to carry the massive AI inference workload from Facebook and Instagram users.
Graviton is the ARM architecture general-purpose server CPU developed in-house by Amazon.com, Inc.'s AWS cloud computing division, primarily responsible for general-purpose computing, scheduling, data preprocessing/post-processing, service orchestration, and some AI inference-related scheduling and coordination tasks in AI data centers.
For a company like Meta, which handles massive daily volumes of AI agent, recommendation, advertising, content generation, and query-response tasks, much of the work does not need expensive GPUs end to end. Using high-density ARM CPUs such as Graviton instead of Intel's x86 CPUs for bulk peripheral inference-serving loads can cut per-request cost, free GPUs for higher-value training and inference work, and lower the cluster's overall total cost of ownership. Arm stresses that AI data center expansion is turning CPU-intensive orchestration, data processing, and system control into a critical bottleneck that its low-power, high-efficiency CPUs are suited to address; AWS's fifth-generation Graviton, with 192 cores, reflects the rising demand for CPU density.
ARM stands out as one of the biggest winners of the global AI boom: NVIDIA's in-house Grace CPU is ARM-based, as are Amazon's in-house Graviton data center processors, the Google Axion processors built on Arm Neoverse (Alphabet's first in-house ARM data center CPU), and Microsoft's in-house Azure Cobalt 100. The ARM architecture is evolving from "king of smartphones" into one of the foundational pillars of AI-era cloud computing.
ARM's RISC-based design gives server CPUs built on it significant energy-efficiency and low-power advantages over Intel's x86 architecture, particularly when running AI inference and training workloads. That makes ARM well suited to data center servers, where it can efficiently complement AI GPUs to meet ever-growing demand for AI compute.