Nvidia Shift Set to Double Server-Memory Prices by 2026

22:49 19/11/2025 | GMT Eight
Nvidia’s shift to smartphone-style LPDDR chips in its AI servers is expected to strain supply and cause server-memory prices to roughly double by late 2026.

A new report from Counterpoint Research on Wednesday warned that Nvidia’s decision to switch to smartphone-style memory chips in its artificial intelligence servers could cause server-memory prices to double by late 2026. Over the past two months, global electronics supply chains have already experienced shortages of legacy memory components as manufacturers redirected production toward higher-end chips suited to AI workloads. Counterpoint, however, highlighted an emerging challenge tied to Nvidia’s recent strategy to lower AI server power consumption by shifting from DDR5 memory, traditionally used in servers, to LPDDR, a low-power chip commonly used in smartphones and tablets.

Because AI servers require significantly more memory capacity than consumer handsets, this transition is expected to generate a surge in demand that current production systems are not yet prepared to support. Major memory suppliers such as Samsung Electronics, SK Hynix and Micron are already navigating limited supplies of older dynamic random-access memory products after cutting output to prioritize high-bandwidth memory, which is essential for advanced accelerators driving the global AI expansion.

Counterpoint noted that shortages in lower-end products could spread further up the market as chipmakers evaluate whether to reallocate additional factory capacity to LPDDR production to meet Nvidia’s requirements. The firm stressed that Nvidia’s move effectively makes it a buyer on the scale of a major smartphone manufacturer, representing a significant disruption for the supply chain, which is not positioned to adjust immediately to demand of this magnitude.

According to the report, server-memory prices are projected to roughly double by the end of 2026. Rising memory costs would increase operational expenses for cloud service providers and AI developers, adding pressure to data center budgets already strained by heavy investment in GPUs and power infrastructure upgrades.